
Category Archives: Quantum Physics

UMN-led team receives $1.4M Keck Foundation grant to study possible breakthrough in quantum computing – UMN News

Posted: July 13, 2022 at 8:32 am

A University of Minnesota Twin Cities-led team received a $1.4 million award from the W. M. Keck Foundation to study a new process that combines quantum physics and biochemistry. If successful, the research could lead to a major breakthrough in the quantum computing field.

The project is one of two proposals the University of Minnesota submits each year to the Keck Foundation and is the first grant of its kind the University has received in 20 years.

Quantum computers have the potential to solve very complex problems at an unprecedentedly fast rate. They have applications in fields like cryptography, information security, and supply chain optimization, and could one day assist in the discovery of new materials and drugs.

One of the biggest challenges for scientists is that the information stored in quantum bits (the building blocks of quantum computers) is often short-lived. Early-stage prototype quantum computers do exist, but they lose the information they store so quickly that solving big problems of practical relevance is currently unachievable.

One approach researchers have studied to make quantum devices more stable is to combine semiconductors and superconductors to obtain robust states called Majorana modes, but this approach has been challenging and so far inconclusive since it requires very high-purity semiconductors. U of M School of Physics and Astronomy Associate Professor Vlad Pribiag, who is leading the project, has come up with a new idea that could yield stable Majorana quantum structures.

Pribiag's proposed method leverages recent advances in DNA nanoassembly, combined with magnetic nanoparticles and superconductors, to detect Majoranas, which are theoretical particles that could be a key element for protecting quantum information and creating stable quantum devices.

"This is a radically new way to think about quantum devices," Pribiag said. "When I heard about this technique of DNA nanoassembly, I thought it fit right into this problem I had been working on about Majoranas and quantum devices. It's really a paradigm shift in the field and it has tremendous potential for finding a way to protect quantum information so that we can build more advanced quantum machines to do these complex operations."

The project, entitled "Topological Quantum Architectures Through DNA Programmable Molecular Lithography," will span three years. Pribiag is collaborating with Columbia University Professor Oleg Gang, whose lab will handle the DNA nanoassembly part of the work.

About the W. M. Keck Foundation
Based in Los Angeles, the W. M. Keck Foundation was established in 1954 by the late W. M. Keck, founder of the Superior Oil Company. The Foundation's grantmaking is focused primarily on pioneering efforts in the areas of medical research and science and engineering. The Foundation also supports undergraduate education and maintains a Southern California Grant Program that provides support for the Los Angeles community, with a special emphasis on children and youth. For more information, visit the Keck Foundation website.

About the College of Science and Engineering
The University of Minnesota College of Science and Engineering brings together the University's programs in engineering, physical sciences, mathematics and computer science into one college. The college is ranked among the top academic programs in the country and includes 12 academic departments offering a wide range of degree programs at the baccalaureate, master's, and doctoral levels. Learn more at cse.umn.edu.


Lee Smolin: the laws of the universe are changing – IAI

Posted: at 8:32 am

We tend to think of the laws of nature as fixed. They came into existence along with the universe and have been the same ever since. But once you start asking why the laws of the universe are what they are, their invariance also comes into question. Lee Smolin is the type of theoretical physicist who likes asking such why questions. His inquiries have led him to believe that the laws of the universe have evolved from earlier forms, along the lines of natural selection. In this in-depth interview he offers an account of how he came to this view of the evolving universe and explains why physics needs to change its view of time.

Lee Smolin is a rare breed of theoretical physicist. Whereas most physicists see themselves in the business of discovering what the laws of the universe are, Lee Smolin goes a step further: he wants to know why the laws of the universe are what they are.

"I believe in an aspirational form of Leibniz's Principle of Sufficient Reason. When seeking knowledge, we should act on the assumption that the principle of sufficient reason is true, otherwise we are likely to give up too soon."

The Principle of Sufficient Reason is the idea that there is a reason why things are the way they are; Leibniz was a 17th-century rationalist philosopher. Lee Smolin is not like other physicists in another way: he draws inspiration from many different fields, including philosophy.

___

Smolin admits that it might be the case that at some point our explanations simply run out and there are no further why questions we can ask.

___

It's perhaps hard to appreciate how unconventional this way of thinking about physics is. Leibniz was a key figure of early modern rationalist philosophy, which held necessity to be the key concept that would unlock the mysteries of the universe: things are the way they are because they had to be this way, and reason could explain why that was. Modern science, on the other hand, has for the most part given up on this idea that the world is governed by rational necessity. Instead, contingency rules: the way things are is the way things are, and we can't really know why. For many scientists the question doesn't even make sense. Smolin admits that it might be the case that at some point our explanations simply run out and there are no further why questions we can ask.

"Of course this might be the case, and it might not be. The only way to find out is to try to see how far we can go."

And Smolin is prepared to go a lot further in his questioning than most. Pushing the boundaries of explanation has led him to put forward some extraordinary theories, including the idea that the laws of the universe are not invariable across space and time, but are evolving. When asked to give an account of how he arrived at this theory, he offers a kind of intellectual autobiography and explains why he sees the issue of time as crucial to how we think about laws of nature.

Smolin came of age during the era when the main puzzle of theoretical physics was how to make Einstein's General Relativity consistent with Quantum Mechanics. Time according to General Relativity was seen as a relational property, not as something absolute or external to the universe, as Newton had thought. This means that time becomes secondary, as Smolin says, a merely relational property between events in the universe, not something fundamental. Quantum mechanics, on the other hand, still seemed to depend on an absolute framework of time that wasn't relational. This was one of the key contradictions at the heart of physics at the time Smolin was still a physics student, and laws of nature were seen as invariant, as time-independent.

___

Smolin wasn't satisfied with having just a description of how particles interact; he also wanted to know why. Why is the neutron slightly heavier than the proton?

___

The second big issue playing on Smolin's mind when he was a graduate student at Harvard came from particle physics. The Standard Model had just come together, and it seemed to be an immensely powerful tool for explaining the interactions of fundamental particles. But Smolin wasn't satisfied with having just a description, even if it was a very good description, of how particles interact; he also wanted to know why. Why is the neutron slightly heavier than the proton? And why is the mass of the electron 1800 times smaller than that of the neutron?

"These near coincidences are very important for how the world turned out to be," Smolin adds.

There was a group of cosmologists at the time who were also asking this question of why the universe seemed to be so perfectly tuned to allow for matter to be formed: all these constants, including the Cosmological Constant, had just the right values to allow life to eventually develop. Was this mere accident? Or was there a reason for it? Cosmologists like Martin Rees of Cambridge developed the idea of the Anthropic Principle, which postulated the existence of many different universes in which these constants all had different values, leading to completely different outcomes. Life was possible in our universe because we got lucky; in other universes not only is life not possible, there are no atoms to begin with.

Smolin admits this is "a pretty cool idea" but he doesn't think it's really a scientific theory since it doesn't make any predictions. But the puzzle it tried to tackle was a real one, and Smolin had a better idea for how to solve it. He thought to himself: where else do we find systems that are fine-tuned for the emergence of complexity? Biology was to him the obvious answer. "I'm pretty good at stealing ideas from other fields. Everybody has a trick, and that's mine," he says jokingly.

___

This seemingly paradoxical balance of the cosmos is not a mere accident: there was a process behind it, akin to natural selection, that gave rise to it.

___

That's how Smolin came up with the idea of applying the principles of evolutionary biology to the universe as a whole. In the same way that in biology Darwinian evolution was able to explain the existence of perfectly developed organisms, with organs that work just the right way to keep them alive and functioning, the idea that the universe as a whole has been undergoing a process of evolution can explain the existence of this fine tuning of cosmological constants. This seemingly paradoxical balance of the cosmos is not a mere accident: there was a process behind it, akin to natural selection, that gave rise to it. It's an idea that he was surprised to find the American pragmatist philosopher Charles Peirce had also hinted at in the early 20th century.

Putting forward this theory of the dynamically evolving universe led to the other central idea in Smolin's work: a reassessment of the centrality of time.

Smolin is always talking about his collaborators, many of them unconventional thinkers and eccentric in their own way, and how they've contributed to his work. Roberto Mangabeira Unger is one of them: a professor at Harvard's Law School, a Brazilian politician, and a philosopher. Smolin credits Mangabeira with forcing him to come to terms with the contradiction he was seemingly committed to. On the one hand, the quantum gravity Smolin was working on saw laws of nature as fundamental, and time as secondary, as emergent. But applying natural selection to cosmology we get the opposite: time becomes fundamental, and laws of nature evolve, are emergent. This led to a collaboration between the two thinkers, and the publication of their book The Singular Universe and the Reality of Time. Smolin ended up espousing the view that time is fundamental, not secondary as General Relativity would have it, and space an emergent property of it. This was a view that Fotini Markopoulou, another collaborator of Smolin's, also arrived at independently, and a view that most theoretical physicists, including Carlo Rovelli, oppose (although Smolin thinks Rovelli is coming around to it in his recent publications).

Both these theories, that the universe and its laws are changing, and that time is a fundamental property of the universe whereas space is derivative, pose several questions, questions that Smolin sees as invitations for further elaboration and investigation rather than as objections.

One of the questions I was curious to find out more about was how Smolin thought of the evolution of the universe. What is the mechanism here, exactly?

Smolin has three possible answers to this question, all of them hypotheses, as he stresses to me, given that they aren't capable of making predictions: "I'm not Darwin!" he says.

The most prominent hypothesis is that the universe gives birth to other universes through black holes. This, in itself, was not a new idea. Theoretical physicists John Wheeler and Bryce DeWitt had put forward this hypothesis before, but Smolin tweaked it to fit his view of a universe that evolves, almost along the lines of natural selection. Whereas Wheeler and DeWitt thought the new universe produced each time has random values of the cosmological constant and other key parameters, Smolin took a more Darwinian approach, proposing that each universe embodies very small changes to those cosmological values, allowing for a cumulative change and fine tuning, until we arrive at the universe we have today.

___

How can the universe learn anything, and how does the universe remember what has happened in the past, and use it as a precedent to decide what will happen in the future?

___

The question I immediately raise is whether this picture of an evolving set of universes, in which the laws of nature are not fixed but are ever changing, requires us to postulate a kind of meta-law, a law that would dictate the way that this evolution can take place. So are we not back to where we started, the cosmos being dictated by some fixed meta-laws? Smolin is not happy with this solution: "you can't solve this by just accepting that there are fixed laws, they're just meta-laws," he says. But he doesn't really have a definitive answer either. It's a question he takes seriously, however, and he has spent much of his book with Roberto Mangabeira Unger tackling this issue.

Smolin has two other hypotheses for how the universe might be changing. One he calls The Autodidactic Universe, the self-learning universe; the other, The Principle of Precedence. Borrowing a concept from jurisprudence when thinking about laws seems quite clever, and in line with Smolin's trick of stealing ideas from other disciplines. They each come with their own conceptual challenges: how can the universe learn anything, and how does the universe remember what has happened in the past, and use it as a precedent to decide what will happen in the future? Thinking of the universe in these terms seems to bend our concepts to breaking point, although admittedly things like machine learning, a technology that is very much real, do the same. If machines can learn from a trial-and-error process, why not the universe as a whole? In fact, Smolin has collaborated with Microsoft computer scientist Jaron Lanier to model how the universe might be understood as a giant machine learning process.

The other major challenge to Smolin's theory is directed at his view that time is more fundamental than space. How is that even possible, I asked him. If time is some measure of change, how can there be change without space? Where is the change taking place?

Here Smolin brings up another collaborator, Julian Barbour, whom he acknowledges as his mentor when it comes to the philosophy of fundamental physics. In work they did together they showed that it is indeed possible to do dynamics, the study of evolving quantities, without space. In order to do that, Smolin tells me, you need to think of time as playing a causal role itself, as creating new events from past ones. If we think of time this way, all we have to do is look back at time coming at you from your past, and at the causes that have made you, to see change.

___

I don't claim to have complete ideas, but I believe I have done enough to show that these are things worth thinking about. I haven't built a new paradigm yet, but I'm having a lot of fun in the process.

___

These are fascinating ideas that really capture the imagination, which goes some way to explain why Smolin, a theoretical physicist who is mostly in the business of publishing highly technical papers, impenetrable to the uninitiated, has acquired something of a cult status beyond the world of academia. But even though his theories are these beautiful mosaics of ideas from physics, philosophy, biology and computer science, the question is: do they ultimately offer us answers to the puzzles they set out to tackle? Smolin offers a humble self-diagnosis that captures both the joy of research and the hope of an enduring legacy:

"I don't claim to have complete ideas, but I believe I have done enough to show that these are things worth thinking about. I haven't built a new paradigm yet, but I'm having a lot of fun in the process."


Quantum Computing Will Breach Your Data Security – BRINK

Posted: at 8:32 am

Researchers talk to each other next to the IBM Q System One quantum computer at IBM's research facility in Yorktown Heights, New York. Quantum computing's speed and efficacy represent one of the biggest threats to data security in the future.

Photo: Misha Friedman/Getty Images

Quantum computing (QC) represents the biggest threat to data security in the medium term, since it can make attacks against cryptography much more efficient. With quantum computing capabilities having advanced from the realm of academic exploration to tangible commercial opportunities, now is the time to take steps to secure everything from power grids and IoT infrastructures to the burgeoning cloud-based information-sharing platforms that we are all increasingly dependent upon.

Despite encrypted data appearing random, encryption algorithms follow logical rules and can be vulnerable to some kinds of attacks. All algorithms are inherently vulnerable to brute-force attacks, in which all possible combinations of the encryption key are tried.

According to Verizon's 2021 Data Breach report, 85% of breaches caused by hacking involve brute force or the use of credentials that have been lost or stolen. Moreover, cybercrime costs the U.S. economy $100 billion a year and costs the global economy $450 billion annually.

Traditionally, a 128-bit encryption key has been regarded as establishing a secure theoretical limit against brute-force attacks, and it is the bare-minimum key length for the Advanced Encryption Standard (AES), currently the default symmetric encryption cipher used for public and commercial purposes.

Such keys are considered computationally infeasible to crack, and most experts consider today's 128-bit and 256-bit encryption keys to be generally secure. However, within the next 20 years, sufficiently large quantum computers will be able to break essentially all public-key schemes currently in use in a matter of seconds.
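For a sense of scale, here is a minimal back-of-the-envelope sketch in Python. The guess rate of one trillion keys per second is an assumed figure chosen purely for illustration (it is not from the article), but the conclusion is insensitive to it.

```python
# Rough illustration of why classical brute force against long symmetric keys is infeasible.
# The 1e12 guesses-per-second rate is an assumption for illustration only.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def brute_force_years(key_bits: int, guesses_per_second: float = 1e12) -> float:
    """Expected years to search half of the key space at the given guess rate."""
    return (2 ** key_bits / 2) / guesses_per_second / SECONDS_PER_YEAR

for bits in (64, 128, 256):
    print(f"{bits}-bit key: about {brute_force_years(bits):.2e} years on average")
```

Even at that optimistic rate, a 128-bit key works out to roughly 5 x 10^18 years of searching, which is why the practical worry is not brute force but the quantum algorithms discussed below.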

Quantum computing speeds up prime-number factorization, so a quantum computer could break cryptographic keys by quickly calculating or exhaustively searching for secret keys. A task thought to be computationally infeasible on conventional computer architectures becomes easy, compromising existing cryptographic algorithms and shortening the time needed to break public-key cryptography from years to hours.
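The reason factoring matters is easy to see in miniature. The sketch below uses toy-sized primes (purely hypothetical values, not anything from the article) to show the step that a factoring algorithm such as Shor's would hand an attacker: once the public modulus n is split into p and q, deriving the RSA private exponent takes only a few lines of arithmetic.

```python
# Toy RSA example: once n is factored (the step a quantum computer would accelerate),
# recovering the private key is straightforward modular arithmetic.
p, q = 61, 53              # toy primes; real RSA primes are hundreds of digits long
n = p * q                  # public modulus
e = 17                     # public exponent
phi = (p - 1) * (q - 1)    # Euler's totient, computable only if you know p and q
d = pow(e, -1, phi)        # private exponent via modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)      # encrypt with the public key
recovered = pow(ciphertext, d, n)    # decrypt with the derived private key
assert recovered == message
print(f"n = {n}, recovered private exponent d = {d}, decrypted message = {recovered}")
```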

Quantum computers outperform conventional computers for specific problems by leveraging complex phenomena such as quantum entanglement and the probabilities associated with superpositions (when quantum bits [qubits] exist in several states at the same time) to perform a series of operations in such a way that favorable probabilities are enhanced. When a quantum algorithm is applied, the probability of measuring the correct answer is maximized.

Algorithms such as RSA, AES, and Blowfish remain worldwide standards in cybersecurity. The cryptographic keys of public-key algorithms such as RSA are based mainly on two mathematical problems, the integer factorization problem and the discrete logarithm problem, which make it difficult to crack the key, preserving the system's security.

Two algorithms for quantum computers challenge current cryptography systems. Shor's algorithm greatly reduces the time required for solving the integer factorization problem. Grover's quantum search algorithm, while not as fast, still significantly speeds up the search for decryption keys that, with traditional computing technologies, would take on the order of quintillions of years to find.
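To make Grover's speedup concrete, here is a small classical simulation of amplitude amplification on an eight-item search space, a sketch for intuition only (it tracks the statevector in NumPy rather than running on quantum hardware, and the "marked" item standing in for a secret key is arbitrary). The marked item's measurement probability approaches 1 after roughly (pi/4)*sqrt(N) iterations.

```python
import numpy as np

n_qubits = 3
N = 2 ** n_qubits        # size of the search space
target = 5               # arbitrary "marked" item standing in for the secret key

state = np.full(N, 1 / np.sqrt(N))                # uniform superposition over all items
iterations = int(round(np.pi / 4 * np.sqrt(N)))   # near-optimal number of Grover iterations

for _ in range(iterations):
    state[target] *= -1                  # oracle: flip the sign of the marked amplitude
    state = 2 * state.mean() - state     # diffusion: inversion about the mean

print(f"P(measure the marked item) after {iterations} iterations: {state[target] ** 2:.3f}")
```

Scaled up, this square-root speedup means searching a 128-bit key space takes on the order of 2^64 Grover iterations rather than 2^128 classical guesses, which is the usual argument for moving symmetric keys to 256 bits.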

All widely used public-key cryptographic algorithms are theoretically vulnerable to attacks based on Shor's algorithm, but the algorithm depends on operations that can only be achieved by a large-scale quantum computer (>7000 qubits). Quantum computers are thus likely to make encryption systems based on RSA and discrete logarithm assumptions (DSA, ECDSA) obsolete. Companies like D-Wave Systems promise to deliver a 7000+ qubit solution by 2023-2024.

Quantum technologies are expected to bring about disruption in multiple sectors. Cybersecurity will be one of the main industries to feel this disruption; and although there are already several players preparing for and developing novel approaches to cybersecurity in a post-quantum world, it is vital for corporations, governments, and cybersecurity supply-chain stakeholders to understand the impact of quantum adoption and learn about some of the key players working on overcoming the challenges that this adoption brings about.

Businesses can implement quantum-safe cybersecurity solutions that range from developing risk management plans to harnessing quantum mechanics itself to fight the threats QC poses.


The replacement of encryption algorithms generally requires replacing cryptographic libraries, implementing validation tools, deploying hardware required by the algorithm, updating dependent operating systems and communications devices, and replacing security standards and protocols. Hence, post-quantum cryptography needs to be prepared for eventual threats as many years in advance as is practical, despite quantum algorithms not currently being available to cyberattackers.

Quantum computing has the potential for both disrupting and augmenting cybersecurity. There are techniques that leverage quantum physics to protect from quantum-computing related threats, and industries that adopt these technologies will find themselves significantly ahead of the curve as the gap between quantum-secure and quantum-vulnerable systems grows.


Research Fellow in Quantum Computing job with UNIVERSITY OF SURREY | 300383 – Times Higher Education

Posted: at 8:32 am

Physics

Location: Guildford | Salary: £32,344 to £33,309 per annum | Fixed Term | Post Type: Full Time | Closing Date: 23.59 hours BST on Friday 05 August 2022 | Reference: 045522

Applications are invited for a Postdoctoral Research Associate (PDRA) position in the theoretical nuclear physics group at the University of Surrey to work on a research project developing and applying quantum computing algorithms to tackle problems in nuclear structure as part of an STFC-funded Developing Quantum Technologies for Fundamental Physics programme. The work will involve developing new algorithms and/or applying existing algorithms to solve nuclear models such as the shell model and mean field model, and to look at dynamical processes such as nuclear decay.

The successful applicant will join a group working on nuclear quantum algorithms led by Dr Paul Stevenson, and will collaborate with the PhD students in the group, along with other staff members in the nuclear theory group and quantum foundations centre. Outside the university we collaborate with a mix of industry and academic partners. The quantum algorithms group is part of the theoretical nuclear physics group, which sits in the Physics Department along with groups in experimental nuclear physics, astrophysics, radiation and medical physics, soft matter, and photonics. A virtual quantum foundations group links beyond the Physics Department to those working on open quantum systems and other foundational aspects of quantum mechanics research in the University.

The University supports development of research skills as well as generic transferrable skills such as leadership, communication and project management. The Department is diverse and inclusive, and we welcome applications from candidates of any gender, ethnicity or background.

Candidates must hold (or be close to completion of) a PhD in physics, computer science, applied mathematics or a closely related discipline, with a track record commensurate with the ability to work in the stated research area.

Candidates should apply online and provide a CV with publication list, and a 1-2 page covering letter with statement summarising past research.

The position runs for up to two years with a start date ideally as soon as possible. Candidates are encouraged to email Dr Paul Stevenson (p.stevenson@surrey.ac.uk) if they have any questions.

Interviews to take place Wednesday 31 August.

Further details: Job Description

Please note, it is University Policy to offer a starting salary equivalent to Level 3.6 (£32,344) to successful applicants who have been awarded, but are yet to receive, their PhD certificate. Once the original PhD certificate has been submitted to the local HR Department, the salary will be increased to Level 4.1 (£33,309).


D-Wave's 500-Qubit Machine Hits the Cloud – IEEE Spectrum

Posted: at 8:32 am

While machine learning has been around a long time, deep learning has taken on a life of its own lately. The reason for that has mostly to do with the increasing amounts of computing power that have become widely availablealong with the burgeoning quantities of data that can be easily harvested and used to train neural networks.

The amount of computing power at people's fingertips started growing in leaps and bounds at the turn of the millennium, when graphical processing units (GPUs) began to be harnessed for nongraphical calculations, a trend that has become increasingly pervasive over the past decade. But the computing demands of deep learning have been rising even faster. This dynamic has spurred engineers to develop electronic hardware accelerators specifically targeted to deep learning, Google's Tensor Processing Unit (TPU) being a prime example.

Here, I will describe a very different approach to this problemusing optical processors to carry out neural-network calculations with photons instead of electrons. To understand how optics can serve here, you need to know a little bit about how computers currently carry out neural-network calculations. So bear with me as I outline what goes on under the hood.

Almost invariably, artificial neurons are constructed using special software running on digital electronic computers of some sort. That software provides a given neuron with multiple inputs and one output. The state of each neuron depends on the weighted sum of its inputs, to which a nonlinear function, called an activation function, is applied. The result, the output of this neuron, then becomes an input for various other neurons.
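As a minimal sketch of that description (with arbitrary example inputs, weights, and a ReLU activation chosen for illustration, none of them from the article), a single artificial neuron can be written in a few lines of Python:

```python
def neuron(inputs, weights, bias, activation=lambda s: max(0.0, s)):
    """One artificial neuron: a weighted sum of its inputs passed through a nonlinearity (ReLU here)."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activation(weighted_sum)

# Example: three inputs feeding one neuron with arbitrary weights and bias.
output = neuron(inputs=[0.5, -1.2, 3.0], weights=[0.8, 0.1, 0.4], bias=0.2)
print(output)   # 1.68; this value would in turn become an input to neurons in the next layer
```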

Reducing the energy needs of neural networks might require computing with light

For computational efficiency, these neurons are grouped into layers, with neurons connected only to neurons in adjacent layers. The benefit of arranging things that way, as opposed to allowing connections between any two neurons, is that it allows certain mathematical tricks of linear algebra to be used to speed the calculations.

While they are not the whole story, these linear-algebra calculations are the most computationally demanding part of deep learning, particularly as the size of the network grows. This is true for both training (the process of determining what weights to apply to the inputs for each neuron) and for inference (when the neural network is providing the desired results).

What are these mysterious linear-algebra calculations? They aren't so complicated really. They involve operations on matrices, which are just rectangular arrays of numbers: spreadsheets, if you will, minus the descriptive column headers you might find in a typical Excel file.

This is great news because modern computer hardware has been very well optimized for matrix operations, which were the bread and butter of high-performance computing long before deep learning became popular. The relevant matrix calculations for deep learning boil down to a large number of multiply-and-accumulate operations, whereby pairs of numbers are multiplied together and their products are added up.
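A short sketch may help make "multiply-and-accumulate" concrete. The explicit triple loop below and the single NumPy call compute the same matrix product; counting executions of the innermost line shows where the bulk of deep learning's arithmetic goes (the matrix sizes are arbitrary examples).

```python
import numpy as np

def matmul_with_mac_count(A, B):
    """Matrix multiplication written as explicit multiply-and-accumulate (MAC) operations."""
    rows, inner, cols = A.shape[0], A.shape[1], B.shape[1]
    C = np.zeros((rows, cols))
    macs = 0
    for i in range(rows):
        for j in range(cols):
            for k in range(inner):
                C[i, j] += A[i, k] * B[k, j]   # one multiply-and-accumulate
                macs += 1
    return C, macs

A = np.random.rand(4, 3)   # e.g., a batch of 4 inputs with 3 features each
B = np.random.rand(3, 5)   # e.g., the weights of a layer with 5 neurons
C, macs = matmul_with_mac_count(A, B)
assert np.allclose(C, A @ B)    # matches the optimized library routine
print(f"{macs} multiply-and-accumulate operations")   # 4 * 5 * 3 = 60
```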

Over the years, deep learning has required an ever-growing number of these multiply-and-accumulate operations. Consider LeNet, a pioneering deep neural network designed to do image classification. In 1998 it was shown to outperform other machine-learning techniques for recognizing handwritten letters and numerals. But by 2012 AlexNet, a neural network that crunched through about 1,600 times as many multiply-and-accumulate operations as LeNet, was able to recognize thousands of different types of objects in images.

Advancing from LeNet's initial success to AlexNet required almost 11 doublings of computing performance. During the 14 years that took, Moore's law provided much of that increase. The challenge has been to keep this trend going now that Moore's law is running out of steam. The usual solution is simply to throw more computing resources, along with time, money, and energy, at the problem.

As a result, training today's large neural networks often has a significant environmental footprint. One 2019 study found, for example, that training a certain deep neural network for natural-language processing produced five times the CO2 emissions typically associated with driving an automobile over its lifetime.

Improvements in digital electronic computers allowed deep learning to blossom, to be sure. But that doesn't mean that the only way to carry out neural-network calculations is with such machines. Decades ago, when digital computers were still relatively primitive, some engineers tackled difficult calculations using analog computers instead. As digital electronics improved, those analog computers fell by the wayside. But it may be time to pursue that strategy once again, in particular when the analog computations can be done optically.

It has long been known that optical fibers can support much higher data rates than electrical wires. That's why all long-haul communication lines went optical, starting in the late 1970s. Since then, optical data links have replaced copper wires for shorter and shorter spans, all the way down to rack-to-rack communication in data centers. Optical data communication is faster and uses less power. Optical computing promises the same advantages.

But there is a big difference between communicating data and computing with it. And this is where analog optical approaches hit a roadblock. Conventional computers are based on transistors, which are highly nonlinear circuit elements, meaning that their outputs aren't just proportional to their inputs, at least when used for computing. Nonlinearity is what lets transistors switch on and off, allowing them to be fashioned into logic gates. This switching is easy to accomplish with electronics, for which nonlinearities are a dime a dozen. But photons follow Maxwell's equations, which are annoyingly linear, meaning that the output of an optical device is typically proportional to its inputs.

The trick is to use the linearity of optical devices to do the one thing that deep learning relies on most: linear algebra.

To illustrate how that can be done, I'll describe here a photonic device that, when coupled to some simple analog electronics, can multiply two matrices together. Such multiplication combines the rows of one matrix with the columns of the other. More precisely, it multiplies pairs of numbers from these rows and columns and adds their products together: the multiply-and-accumulate operations I described earlier. My MIT colleagues and I published a paper about how this could be done in 2019. We're working now to build such an optical matrix multiplier.

Optical data communication is faster and uses less power. Optical computing promises the same advantages.

The basic computing unit in this device is an optical element called a beam splitter. Although its makeup is in fact more complicated, you can think of it as a half-silvered mirror set at a 45-degree angle. If you send a beam of light into it from the side, the beam splitter will allow half that light to pass straight through it, while the other half is reflected from the angled mirror, causing it to bounce off at 90 degrees from the incoming beam.

Now shine a second beam of light, perpendicular to the first, into this beam splitter so that it impinges on the other side of the angled mirror. Half of this second beam will similarly be transmitted and half reflected at 90 degrees. The two output beams will combine with the two outputs from the first beam. So this beam splitter has two inputs and two outputs.

To use this device for matrix multiplication, you generate two light beams with electric-field intensities that are proportional to the two numbers you want to multiply. Let's call these field intensities x and y. Shine those two beams into the beam splitter, which will combine these two beams. This particular beam splitter does that in a way that will produce two outputs whose electric fields have values of (x + y)/√2 and (x − y)/√2.

In addition to the beam splitter, this analog multiplier requires two simple electronic components, photodetectors, to measure the two output beams. They don't measure the electric field intensity of those beams, though. They measure the power of a beam, which is proportional to the square of its electric-field intensity.

Why is that relation important? To understand that requires some algebra, but nothing beyond what you learned in high school. Recall that when you square (x + y)/√2 you get (x² + 2xy + y²)/2. And when you square (x − y)/√2, you get (x² − 2xy + y²)/2. Subtracting the latter from the former gives 2xy.

Pause now to contemplate the significance of this simple bit of math. It means that if you encode a number as a beam of light of a certain intensity and another number as a beam of another intensity, send them through such a beam splitter, measure the two outputs with photodetectors, and negate one of the resulting electrical signals before summing them together, you will have a signal proportional to the product of your two numbers.
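That arithmetic is easy to check numerically. The following sketch is a plain NumPy model of idealized components (not the authors' actual device): it encodes two numbers as field amplitudes, applies the 50:50 beam-splitter relations, squares the outputs as ideal photodetectors would, and takes the difference.

```python
import numpy as np

def optical_multiply(x, y):
    """Model an ideal 50:50 beam splitter followed by two photodetectors."""
    out1 = (x + y) / np.sqrt(2)      # field at the first output port
    out2 = (x - y) / np.sqrt(2)      # field at the second output port
    p1, p2 = out1 ** 2, out2 ** 2    # photodetectors measure power ~ field squared
    return (p1 - p2) / 2             # p1 - p2 equals 2xy, so halving it recovers x * y

for x, y in [(3.0, 4.0), (-1.5, 2.0), (0.7, 0.7)]:
    print(x, y, optical_multiply(x, y))   # each line prints x, y, and (approximately) their product
```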

Simulations of the integrated Mach-Zehnder interferometer found in Lightmatter's neural-network accelerator show three different conditions whereby light traveling in the two branches of the interferometer undergoes different relative phase shifts (0 degrees in a, 45 degrees in b, and 90 degrees in c). Lightmatter

My description has made it sound as though each of these light beams must be held steady. In fact, you can briefly pulse the light in the two input beams and measure the output pulse. Better yet, you can feed the output signal into a capacitor, which will then accumulate charge for as long as the pulse lasts. Then you can pulse the inputs again for the same duration, this time encoding two new numbers to be multiplied together. Their product adds some more charge to the capacitor. You can repeat this process as many times as you like, each time carrying out another multiply-and-accumulate operation.

Using pulsed light in this way allows you to perform many such operations in rapid-fire sequence. The most energy-intensive part of all this is reading the voltage on that capacitor, which requires an analog-to-digital converter. But you don't have to do that after each pulse; you can wait until the end of a sequence of, say, N pulses. That means that the device can perform N multiply-and-accumulate operations using the same amount of energy to read the answer whether N is small or large. Here, N corresponds to the number of neurons per layer in your neural network, which can easily number in the thousands. So this strategy uses very little energy.
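Continuing the same toy model (again, an illustrative simulation with made-up numbers rather than a description of real hardware), the sketch below accumulates the product from each of N pulses before a single read-out, which is the optical analogue of computing a whole dot product with only one analog-to-digital conversion at the end.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                            # number of pulses accumulated before one read-out
weights = rng.normal(size=N)        # numbers encoded on one input beam, pulse by pulse
activations = rng.normal(size=N)    # numbers encoded on the other input beam

charge = 0.0                        # stands in for the charge accumulating on the capacitor
for w, a in zip(weights, activations):
    p1 = ((w + a) / np.sqrt(2)) ** 2    # photodetector 1 for this pulse
    p2 = ((w - a) / np.sqrt(2)) ** 2    # photodetector 2 (its signal is negated)
    charge += (p1 - p2) / 2             # each pulse adds one product to the running total

print(charge, np.dot(weights, activations))   # one read-out recovers the entire dot product
```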

Sometimes you can save energy on the input side of things, too. That's because the same value is often used as an input to multiple neurons. Rather than that number being converted into light multiple times, consuming energy each time, it can be transformed just once, and the light beam that is created can be split into many channels. In this way, the energy cost of input conversion is amortized over many operations.

Splitting one beam into many channels requires nothing more complicated than a lens, but lenses can be tricky to put onto a chip. So the device we are developing to perform neural-network calculations optically may well end up being a hybrid that combines highly integrated photonic chips with separate optical elements.

I've outlined here the strategy my colleagues and I have been pursuing, but there are other ways to skin an optical cat. Another promising scheme is based on something called a Mach-Zehnder interferometer, which combines two beam splitters and two fully reflecting mirrors. It, too, can be used to carry out matrix multiplication optically. Two MIT-based startups, Lightmatter and Lightelligence, are developing optical neural-network accelerators based on this approach. Lightmatter has already built a prototype that uses an optical chip it has fabricated. And the company expects to begin selling an optical accelerator board that uses that chip later this year.

Another startup using optics for computing is Optalysys, which hopes to revive a rather old concept. One of the first uses of optical computing back in the 1960s was for the processing of synthetic-aperture radar data. A key part of the challenge was to apply to the measured data a mathematical operation called the Fourier transform. Digital computers of the time struggled with such things. Even now, applying the Fourier transform to large amounts of data can be computationally intensive. But a Fourier transform can be carried out optically with nothing more complicated than a lens, which for some years was how engineers processed synthetic-aperture data. Optalysys hopes to bring this approach up to date and apply it more widely.

Theoretically, photonics has the potential to accelerate deep learning by several orders of magnitude.

There is also a company called Luminous, spun out of Princeton University, which is working to create spiking neural networks based on something it calls a laser neuron. Spiking neural networks more closely mimic how biological neural networks work and, like our own brains, are able to compute using very little energy. Luminous's hardware is still in the early phase of development, but the promise of combining two energy-saving approachesspiking and opticsis quite exciting.

There are, of course, still many technical challenges to be overcome. One is to improve the accuracy and dynamic range of the analog optical calculations, which are nowhere near as good as what can be achieved with digital electronics. That's because these optical processors suffer from various sources of noise and because the digital-to-analog and analog-to-digital converters used to get the data in and out are of limited accuracy. Indeed, it's difficult to imagine an optical neural network operating with more than 8 to 10 bits of precision. While 8-bit electronic deep-learning hardware exists (the Google TPU is a good example), this industry demands higher precision, especially for neural-network training.

There is also the difficulty of integrating optical components onto a chip. Because those components are tens of micrometers in size, they can't be packed nearly as tightly as transistors, so the required chip area adds up quickly. A 2017 demonstration of this approach by MIT researchers involved a chip that was 1.5 millimeters on a side. Even the biggest chips are no larger than several square centimeters, which places limits on the sizes of matrices that can be processed in parallel this way.

There are many additional questions on the computer-architecture side that photonics researchers tend to sweep under the rug. What's clear though is that, at least theoretically, photonics has the potential to accelerate deep learning by several orders of magnitude.

Based on the technology that's currently available for the various components (optical modulators, detectors, amplifiers, analog-to-digital converters), it's reasonable to think that the energy efficiency of neural-network calculations could be made 1,000 times better than today's electronic processors. Making more aggressive assumptions about emerging optical technology, that factor might be as large as a million. And because electronic processors are power-limited, these improvements in energy efficiency will likely translate into corresponding improvements in speed.

Many of the concepts in analog optical computing are decades old. Some even predate silicon computers. Schemes for optical matrix multiplication, and even for optical neural networks, were first demonstrated in the 1970s. But this approach didn't catch on. Will this time be different? Possibly, for three reasons.

First, deep learning is genuinely useful now, not just an academic curiosity. Second, we can't rely on Moore's Law alone to continue improving electronics. And finally, we have a new technology that was not available to earlier generations: integrated photonics. These factors suggest that optical neural networks will arrive for real this timeand the future of such computations may indeed be photonic.


Physicists see electron whirlpools for the first time – MIT News

Posted: at 8:32 am

Though they are discrete particles, water molecules flow collectively as liquids, producing streams, waves, whirlpools, and other classic fluid phenomena.

Not so with electricity. While an electric current is also a construct of distinct particles (in this case, electrons), the particles are so small that any collective behavior among them is drowned out by larger influences as electrons pass through ordinary metals. But, in certain materials and under specific conditions, such effects fade away, and electrons can directly influence each other. In these instances, electrons can flow collectively like a fluid.

Now, physicists at MIT and the Weizmann Institute of Science have observed electrons flowing in vortices, or whirlpools, a hallmark of fluid flow that theorists predicted electrons should exhibit but that has never been seen until now.

"Electron vortices are expected in theory, but there's been no direct proof, and seeing is believing," says Leonid Levitov, professor of physics at MIT. "Now we've seen it, and it's a clear signature of being in this new regime, where electrons behave as a fluid, not as individual particles."

The observations, reported today in the journal Nature, could inform the design of more efficient electronics.

"We know when electrons go in a fluid state, [energy] dissipation drops, and that's of interest in trying to design low-power electronics," Levitov says. "This new observation is another step in that direction."

Levitov is a co-author of the new paper, along with Eli Zeldov and others at the Weizmann Institute of Science in Israel and the University of Colorado at Denver.

A collective squeeze

When electricity runs through most ordinary metals and semiconductors, the momenta and trajectories of electrons in the current are influenced by impurities in the material and vibrations among the materials atoms. These processes dominate electron behavior in ordinary materials.

But theorists have predicted that in the absence of such ordinary, classical processes, quantum effects should take over. Namely, electrons should pick up on each other's delicate quantum behavior and move collectively, as a viscous, honey-like electron fluid. This liquid-like behavior should emerge in ultraclean materials and at near-zero temperatures.

In 2017, Levitov and colleagues at the University of Manchester reported signatures of such fluid-like electron behavior in graphene, an atom-thin sheet of carbon onto which they etched a thin channel with several pinch points. They observed that a current sent through the channel could flow through the constrictions with little resistance. This suggested that the electrons in the current were able to squeeze through the pinch points collectively, much like a fluid, rather than clogging, like individual grains of sand.

This first indication prompted Levitov to explore other electron fluid phenomena. In the new study, he and colleagues at the Weizmann Institute of Science looked to visualize electron vortices. As they write in their paper, "the most striking and ubiquitous feature in the flow of regular fluids, the formation of vortices and turbulence, has not yet been observed in electron fluids despite numerous theoretical predictions."

Channeling flow

To visualize electron vortices, the team looked to tungsten ditelluride (WTe2), an ultraclean metallic compound that has been found to exhibit exotic electronic properties when isolated in single-atom-thin, two-dimensional form.

"Tungsten ditelluride is one of the new quantum materials where electrons are strongly interacting and behave as quantum waves rather than particles," Levitov says. "In addition, the material is very clean, which makes the fluid-like behavior directly accessible."

The researchers synthesized pure single crystals of tungsten ditelluride, and exfoliated thin flakes of the material. They then used e-beam lithography and plasma etching techniques to pattern each flake into a center channel connected to a circular chamber on either side. They etched the same pattern into thin flakes of gold, a standard metal with ordinary, classical electronic properties.

They then ran a current through each patterned sample at ultralow temperatures of 4.5 kelvins (about -450 degrees Fahrenheit) and measured the current flow at specific points throughout each sample, using a nanoscale scanning superconducting quantum interference device (SQUID) on a tip. This device was developed in Zeldov's lab and measures magnetic fields with extremely high precision. Using the device to scan each sample, the team was able to observe in detail how electrons flowed through the patterned channels in each material.

The researchers observed that electrons flowing through patterned channels in gold flakes did so without reversing direction, even when some of the current passed through each side chamber before joining back up with the main current. In contrast, electrons flowing through tungsten ditelluride flowed through the channel and swirled into each side chamber, much as water would do when emptying into a bowl. The electrons created small whirlpools in each chamber before flowing back out into the main channel.

"We observed a change in the flow direction in the chambers, where the flow direction reversed the direction as compared to that in the central strip," Levitov says. "That is a very striking thing, and it is the same physics as that in ordinary fluids, but happening with electrons on the nanoscale. That's a clear signature of electrons being in a fluid-like regime."

The group's observations are the first direct visualization of swirling vortices in an electric current. The findings represent an experimental confirmation of a fundamental property in electron behavior. They may also offer clues to how engineers might design low-power devices that conduct electricity in a more fluid, less resistive manner.

"Signatures of viscous electron flow have been reported in a number of experiments on different materials," says Klaus Ensslin, professor of physics at ETH Zurich in Switzerland, who was not involved in the study. "The theoretical expectation of vortex-like current flow has now been confirmed experimentally, which adds an important milestone in the investigation of this novel transport regime."

This research was supported, in part, by the European Research Council, the German-Israeli Foundation for Scientific Research and Development, and by the Israel Science Foundation.


Will These Algorithms Save You From Quantum Threats? – WIRED

Posted: at 8:32 am

"The first thing organizations need to do is understand where they are using crypto, how, and why," says El Kaafarani. "Start assessing which parts of your system need to switch, and build a transition to post-quantum cryptography from the most vulnerable pieces."

There is still a great degree of uncertainty around quantum computers. No one knows what they'll be capable of or if it'll even be possible to build them at scale. Quantum computers being built by the likes of Google and IBM are starting to outperform classical devices at specially designed tasks, but scaling them up is a difficult technological challenge, and it will be many years before a quantum computer exists that can run Shor's algorithm in any meaningful way. "The biggest problem is that we have to make an educated guess about the future capabilities of both classical and quantum computers," says Young. "There's no guarantee of security here."

The complexity of these new algorithms makes it difficult to assess how well they'll actually work in practice. "Assessing security is usually a cat-and-mouse game," says Artur Ekert, a quantum physics professor at the University of Oxford and one of the pioneers of quantum computing. "Lattice-based cryptography is very elegant from a mathematical perspective, but assessing its security is really hard."

The researchers who developed these NIST-backed algorithms say they can effectively simulate how long it will take a quantum computer to solve a problem. "You don't need a quantum computer to write a quantum program and know what its running time will be," argues Vadim Lyubashevsky, an IBM researcher who contributed to the CRYSTALS-Dilithium algorithm. But no one knows what new quantum algorithms might be cooked up by researchers in the future.

Indeed, one of the shortlisted NIST finalists, a multivariate signature scheme called Rainbow, was knocked out of the running when IBM researcher Ward Beullens published a paper entitled "Breaking Rainbow Takes a Weekend on a Laptop." NIST's announcements will focus the attention of code breakers on structured lattices, which could undermine the whole project, Young argues.

There is also, Ekert says, a careful balance between security and efficiency: In basic terms, if you make your encryption key longer, it will be more difficult to break, but it will also require more computing power. If post-quantum cryptography is rolled out as widely as RSA, that could mean a significant environmental impact.

Young accuses NIST of slightly naive thinking, while Ekert believes a more detailed security analysis is needed. There are only a handful of people in the world with the combined quantum and cryptography expertise required to conduct that analysis.

Over the next two years, NIST will publish draft standards, invite comments, and finalize the new forms of quantum-proof encryption, which it hopes will be adopted across the world. After that, based on previous implementations, Moody thinks it could be 10 to 15 years before companies implement them widely, but their data may be vulnerable now. "We have to start now," says El Kaafarani. "That's the only option we have if we want to protect our medical records, our intellectual property, or our personal information."


VC hosts first science of the future webinar – University of Cape Town News

Posted: at 8:32 am

Kicking off a four-part series themed "Using the science of the future to shape your present" on Sunday, 11 July, University of Cape Town (UCT) Vice-Chancellor (VC) Professor Mamokgethi Phakeng facilitated a discussion around the quantum revolution and advanced artificial intelligence (AI) with Dr Divine Fuh and Professor Francesco Petruccione.

The online science series, which takes place over the course of four weeks in July, is hosted in conjunction with the Switzerland-based think and do tank Geneva Science and Diplomacy Anticipator (GESDA). The partnership is aimed at creating a participation initiative through critical and thought-provoking conversations to help drive UCT's vision of producing future leaders who are able to tackle social injustice.

GESDA looks to anticipate, accelerate and translate the use of emerging science-driven topics. Using, among other tools, its Scientific Breakthrough Radar, the body aims to ensure that predictive talent advances can be harnessed to improve well-being and promote inclusive development.

Oftentimes, the voices of the African youth [are] forgotten.

With the Sunday sessions, both UCT and GESDA seek to bridge the understanding of how science might shape the future, as well as how these predicted futures can be used to shape the present while ensuring that decisions and discussions include voices of the African youth.

"Oftentimes, the voices of the African youth [are] forgotten," noted Professor Phakeng. "We have to grab this moment and invite the African youth to come on board to shape the future to ensure that we start working now to mitigate both emerging and longstanding indignities and inequalities."

"By bringing diverse voices and ideas, we can ensure we get the best thinking from every part of our society and corner of the world. We need your perspectives in these dialogues and debates. It is a matter of intergenerational justice: young people will inherit the future and they must therefore be involved in shaping how science should be used to affect it."

The quantum revolution and advanced AI

As the information revolution has transformed the ways in which we live and work, our lives and our understanding of our shared environment have become intricately intertwined with the flow of data. With advanced AI and quantum computing, however, future impacts will be even more profound.

Professor Petruccione, who is a global expert on quantum technology, a contributor to GESDA and the founder of the largest quantum technology research group in South Africa, elucidated exactly how these technologies are changing our present and shaping our future.

There are many examples where quantum technology will impact our existence substantially.

"Quantum computing is a completely different paradigm of computing that is based on the laws of quantum physics. It uses all of these crazy, some call them spooky, properties as a resource to speed up calculations of certain problems," he said.

"There are many examples where quantum technology will impact our existence substantially. Specifically, at the intersection between machine learning and quantum technology is quantum machine learning. This brings together the two worlds of artificial intelligence and quantum technology."

One area in which this could have a massive impact is energy production, he noted, for example, using a model similar to photosynthesis to extract solar energy. "We are facing big energy challenges in South Africa and we know that one possible solution, and probably the best, is the use of renewable energies."

"We know that plants can do this very well, and there is strong evidence that plants use quantum effects to convert solar energy efficiently into the energy they need to grow. We can learn from these effects to build better artificial photosynthesis and generate power," he explained.

Interdisciplinary technological advancement

Dr Fuh, who is a social anthropologist and the director of the Institute for Humanities in Africa at UCT, spoke to the various ethical and people-centric challenges that these leaps in technology present. In this vein, he highlighted the importance of interdisciplinary work and collaboration to ensure the best outcomes.

"We cannot produce technology without looking at the ways in which that technology will locate itself in the lives of people. We have seen that, despite the best intentions to create only positives in society, over time technologies can create all sorts of horrible consequences," he said.

In addition to mapping the potential positive and negative effects of the technologies that are produced, Fuh pointed out that it is important to focus on who is producing the technology and the spaces in which it is being produced.

"We invest a lot in the humanities in trying to understand who these people are and the kinds of ideas that shape the work they do and, in turn, the kinds of technology being produced. This helps to ensure that when these technologies are put into practice, they are put to good use and they are ethical," he added.

Inviting the youth to come on board and explore these issues through interrogating the technology, Fuh noted, provides an opportunity for Africa to take advantage of the quantum revolution to solve the problems we are facing as a continent.

"We need to invest in asking and explaining, and we need lots of young people to do that. I think that's what's going to change the key infrastructure for the future: that we have young people who are asking questions that make our experiences intelligible," he said.

This is especially pertinent in industries and sectors where machines and artificial intelligence are predicted to replace human employees, healthcare among them.

"What becomes of care? What becomes of the human aspects? Going to the hospital is not just about being treated, but about having human touch to help with your healing. So, what happens when there is just a machine treating you and you cannot get a hug?"

"These are core questions we need to ask, and they are why we are finding that there needs to be deep collaboration between the humanities, the natural sciences, and technology," he explained.

The next Sunday session is scheduled for 17 July, with a discussion on human augmentation. Phakeng encourages all youth to watch the upcoming webinars and to think about how they can use the future technologies discussed in the series to shape the present.

Young people joining the sessions stand a chance to win an all-expenses-paid trip to attend the 2022 Geneva Science and Diplomacy Anticipation Summit from 12 to 14 October. Submission requirements, deadlines and other details will be announced on 1 August 2022, after the fourth and final session.

See more here:

VC hosts first science of the future webinar - University of Cape Town News

Posted in Quantum Physics | Comments Off on VC hosts first science of the future webinar – University of Cape Town News

Following Graphene Along the Industrial Supply Chain – AZoNano

Posted: at 8:32 am

Graphene has been hailed as a revolutionary new nanomaterial with significant potential applications in high-value manufacturing contexts. However, the fate of the unique two-dimensional carbon material rests on the ongoing maturation of its supply chain.

To fulfill its potential, the production and distribution of graphene materials and associated technology throughout global supply chains must be effective, efficient, and viable.

Industrial producers are beginning to learn how to manufacture graphene of high quality at commercially viable costs and timeframes. The industry as a whole is beginning to understand the kinds of materials and products that will be sought after for mass-market applications.

Within the supply chain, there are also numerous companies springing up to functionalize graphene, disperse it in material matrices, and design products and devices that capitalize on its unique advantages.

But there remain information gaps throughout the industry. Potential end-users and designers of consumer products are often unaware of the many properties of graphene, and consumers are not yet convinced of its applications.

In addition, designing new products requires a significant investment in expertise, equipment, and supply chain relationships just to get a working prototype together. Bringing that prototype forward to mass manufacturing generally requires significant upfront costs for manufacturing and supply chain technology, an investment that may not see a return simply due to low market awareness.

Still, things are improving. The graphene supply chain is currently maturing, with more and more intermediary businesses offering microservices and forming a healthy supply ecosystem.

This ecosystem should contain equipment manufacturers for production as well as research and development, 2D materials producers, specialists in functionalization and matrix dispersal, product manufacturers, and distributors in business as well as consumer markets.

The good news is that the dial is continually moving up: graphene's industrial supply chain is becoming progressively more robust, resilient, and geographically dispersed every year. If the current direction of travel is maintained, graphene products will reach consumer shelves and enterprise catalogs in just a few short years.

The majority of graphene that has been produced to date has been for research purposes. As such, production techniques have tended to favor quality and consistency over scalability and viability.

Graphene, an allotrope of the element carbon, was first isolated by scientists working at the University of Manchester, UK, in 2004. It was quickly recognized as one of the most promising nanomaterials (materials with a dimension measuring less than 100 nm) yet discovered.

As a two-dimensional material, graphene exhibits remarkable electrical, thermal, and optical properties that arise from the complex and non-intuitive laws of quantum physics, which dominate at extremely small length scales.

Graphene was initially produced from processed graphite in a subtractive process, resulting in a high-quality, pure material that was well suited for research purposes.

However, the field is currently moving beyond lab-based production toward industrial, scalable methods and the accompanying industrial supply chain that will make mass graphene production viable.

At present, Samsung, the global electronics company, invests more in graphene-based patents than any other company. This is not surprising: nanoelectronics is probably the largest future application area for this 2D material.

In addition to subtractive graphite processing, graphene has traditionally been produced with chemical vapor deposition techniques. The latter is a scalable method; however, it can only produce high-quality monolayer graphene films, which are suited to applications as semiconductor materials.

As well as scaling up chemical vapor deposition technologies to meet industrial demand for monolayer graphene semiconductors, the industry is also working on improving bulk production methods.

For this to work, the industry needs to develop a robust industrial supply chain, including equipment manufacturers, producers, suppliers, and distributors. Such an innovation backdrop is essential for realizing graphene's many and diverse potential applications in high-value manufacturing.

Industrial production techniques include exfoliation, sonication, and plasma treatment. These methods break graphite up into controlled flakes of two-dimensional graphene.

Exfoliation, for example, produces extremely high-quality flakes of graphene, but the method is not scalable and is therefore unviable for commercial applications.

Plasma treatment and sonication, however, are capable of producing large amounts of graphene oxide and nanoplatelets, which are used as additives in plastics. These products can be integrated into glass-reinforced plastics as well as into concrete, imparting strength and thermal conductivity to the final composite material.

Graphene-based materials like these are also suitable for applications as coating and printing materials.

Deposition methods that grow large amounts of graphene on foil substrates are also being developed, together with tiling technology for transferring high-quality layers of graphene over a large substrate area.

References and Further Reading

Backes, C., et al. (2020). Production and processing of graphene and related materials. 2D Materials. doi.org/10.1088/2053-1583/ab1e0a

Johnson, D. (2016). The Graphene Supply Chain Is Maturing, But It Still Needs Some Guidance. [Online] Graphene Council. Available at: https://www.thegraphenecouncil.org/blogpost/1501180/255576/The-Graphene-Supply-Chain-is-Maturing-But-It-Still-Needs-Some-Guidance

Taking graphene mass production to the next era. (2019) [Online] Cordis. Available at: https://cordis.europa.eu/article/id/124618-taking-graphene-mass-production-to-the-next-era

View original post here:

Following Graphene Along the Industrial Supply Chain - AZoNano

Posted in Quantum Physics | Comments Off on Following Graphene Along the Industrial Supply Chain – AZoNano

Notable Thermal and Mechanical Properties of New Hybrid Nanostructures – AZoM

Posted: July 7, 2022 at 9:04 am

Carbon-based nanomaterials such as carbon nanotubes (CNTs), fullerenes, and graphene receive a great deal of attention today due to their unique physical properties. A new study explores the potential of hybrid nanostructures and introduces a new porous graphene-CNT hybrid structure with remarkable thermal and mechanical properties.

The study shows how the remarkable characteristics of novel graphene-CNT hybrid structures can be modified by slightly changing the geometric arrangement of the CNTs and graphene, as well as by introducing various filler agents.

The ability to accurately control thermal conductivity and mechanical strength in graphene-CNT hybrid structures makes them potentially suitable candidates for various application areas, especially in advanced aerospace manufacturing, where weight and strength are critical.

Carbon nanostructures and hybrids of multiple carbon nanostructures have been examined recently as potential candidates for numerous sensing, photovoltaic, antibacterial, energy storage, fuel cell, and environmental improvement applications.

The most prominent carbon-based nanostructures in the research literature appear to be CNTs, graphene, and fullerenes. These structures exhibit unique thermal, mechanical, electronic, and biological properties due to their extremely small size.

Structures that measure in the sub-nanometer range behave according to the peculiar laws of quantum physics, and so they can be used to exploit nonintuitive phenomena such as quantum tunneling, quantum superposition, and quantum entanglement.
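As a rough illustration of one of these phenomena, the short sketch below (an illustrative toy example, not taken from the study; it assumes only Python and NumPy) constructs a maximally entangled two-qubit Bell state with plain state-vector arithmetic: a Hadamard gate creates a superposition on the first qubit, and a CNOT gate then correlates the second qubit with it.

```python
# Illustrative two-qubit entanglement sketch using plain state vectors (assumes NumPy).
import numpy as np

state = np.zeros(4)
state[0] = 1.0                               # start in |00>

H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)     # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],               # control = first qubit,
                 [0, 1, 0, 0],               # target = second qubit
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

state = CNOT @ np.kron(H, I) @ state         # Hadamard on qubit 1, then CNOT
print(np.round(state, 3))                    # [0.707 0. 0. 0.707] = (|00> + |11>)/sqrt(2)
```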

CNTs are tubes of carbon that measure only a few nanometers in diameter. CNTs display notable electrical conductivity, and some are semiconductor materials.

CNTs also have great tensile strength and thermal conductivity thanks to their nanostructure and the strength of the covalent bonds formed between carbon atoms.

CNTs are potentially valuable materials for electronics, optics, and composites, where they may replace carbon fibers in the next few years. They are also widely used in nanotechnology and materials science research.

Graphene is a carbon allotrope consisting of a single layer of carbon atoms arranged in a two-dimensional hexagonal lattice. It was first isolated in a series of groundbreaking experiments by University of Manchester, UK, scientists Andre Geim and Konstantin Novoselov in 2004, earning them the Nobel Prize in Physics in 2010.

In the nearly two decades since then, graphene has become a useful nanomaterial with exceptionally high tensile strength, transparency, and electrical conductivity, leading to numerous and varied applications in electronics, sensing, and other advanced technologies.

A fullerene is another carbon allotrope that has been known for some time. Its molecule consists of carbon atoms connected by single and double bonds to form a mesh, which can be closed or partially closed and is made up of fused rings of five, six, or seven atoms.

Fullerene molecules can be hollow spheres, ellipsoids, tubes, or a number of other shapes and sizes. Graphene could be considered an extreme member of the fullerene family, although it is generally treated as a material class of its own.

As well as investing a great deal of research into understanding and characterizing these carbon nanostructures in isolation, scientists are also exploring the properties of hybrid nanostructures that combine two or more nanostructure elements into one material.

For example, foam materials have adjustable properties that make them suitable for practical applications such as sandwich-structure design, biocompatibility design, and high-strength, low-weight structural design.

Carbon-based nanofoams have also been utilized in medicine, both in the study of bone injuries and as the base for replacement bone tissue.

Carbon-based cellular structures are produced both with chemical vapor deposition (CVD) and with solution processing. Spark plasma sintering (SPS) methods are also used to prepare graphene for biological and medical applications.

As a result, scientists have been looking at ways to make three-dimensional carbon foams structurally stable. Research suggests that stable junctions between the different types of structures (CNTs, fullerenes, and graphene) need to be formed for these materials to be stable enough for extensive application.

New research from mechanical engineers at Turkey's Istanbul Technical University introduces a new hybrid nanostructure formed through chemical bonding.

The porous graphene-CNT structures were made by arranging graphene nanoribbons around CNTs. Different geometrical arrangements of the graphene nanoribbon layers around the CNTs (square, hexagonal, and diamond patterns) led to different physical properties in the material, suggesting that this geometry could be used to fine-tune the new structure.

The study was published in the journal Physica E: Low-dimensional Systems and Nanostructures in 2022.

Researchers found that the structures with fullerenes inserted, for example, exhibited significant compressive stability and strength without sacrificing tensile strength. The geometric arrangement of carbon nanostructures also had a significant effect on their thermal properties.

Researchers said that these new hybrid nanostructures present important advantages, especially for the aerospace industry. Nanoarchitectures with these hybrid structures may also be utilized in hydrogen storage and nanoelectronics.

References and Further Reading

Belkin, A., A. Hubler, and A. Bezryadin (2015). Self-Assembled Wiggling Nano-Structures and the Principle of Maximum Entropy Production. Scientific Reports. doi.org/10.1038/srep08323

Degirmenci, U., and M. Kirca (2022). Carbon-based nano lattice hybrid structures: Mechanical and thermal properties. Physica E: Low-dimensional Systems and Nanostructures. doi.org/10.1016/j.physe.2022.115392

Geim, A.K. (2009). Graphene: Status and Prospects. Science. doi.org/10.1126/science.1158877

Geim, A.K., and K.S. Novoselov (2007). The rise of graphene. Nature Materials. doi.org/10.1038/nmat1849

Monthioux, M., and V.L. Kuznetsov (2006). Who should be given the credit for the discovery of carbon nanotubes? Carbon. doi.org/10.1016/j.carbon.2006.03.019

View post:

Notable Thermal and Mechanical Properties of New Hybrid Nanostructures - AZoM

Posted in Quantum Physics | Comments Off on Notable Thermal and Mechanical Properties of New Hybrid Nanostructures – AZoM
