Quantum computing gets real. In the race between man and machine | by Feed Forward | Dec, 2023 – Medium

In the race between man and machine, quantum computing takes a huge leap forward

On September 15th, 2021, the realm of technological innovation took a seismic leap forward as numerous pioneers reported significant progress in the field of quantum computing. Groundbreaking strides have been achieved in this sphere, marking a significant shift in how we perceive and understand both information processing and computational power. This advancement, momentous as it is, means that computational tasks conventionally viewed as impossible or prohibitively lengthy are now entering the realm of the tangible; what's more, these quantum machines appear to surpass traditional binary supercomputers in several areas. Now that you know the dry facts, let's dip our toes into the effervescent sea of commentary and get the real scoop on why this techno-event is sending shock waves through the silicon and opening up a new world, not of magic, but of hubba-bubba bubble quantum realities.

Jumpstart those neurons and buckle up! We're about to delve into the fantastic, befuddling, and downright science-fiction-esque world of quantum computing. If you thought your computer was a nifty piece of tech, brace yourself. Quantum computing, quite simply, is like The Matrix met Tron on steroids!

Cracking the code of quantum computing involves diving straight into the depths of the extraordinary quantum realm. In layman's terms, it's computing tech that's based on the principles of quantum theory. Remember Schrödinger's famed cat? That poor creature that's simultaneously alive and dead until we decide to peek. Well, imagine those cats being your computer bits, in superpositions of both 0s and 1s. Yep, welcome to the future.
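To make the superposition idea a little more concrete, here is a minimal, purely classical sketch (plain Python, no quantum hardware or library involved, and not taken from any of the articles below) of how a single qubit's two amplitudes turn into measurement probabilities. The amplitudes and sample counts are illustrative assumptions.

```python
import random

# Toy illustration: a single qubit is described by two complex amplitudes
# (alpha, beta) with |alpha|^2 + |beta|^2 = 1. "Measuring" it collapses the
# superposition to 0 or 1 with probabilities given by the squared magnitudes.

def measure(alpha: complex, beta: complex) -> int:
    p0 = abs(alpha) ** 2
    p1 = abs(beta) ** 2
    assert abs(p0 + p1 - 1.0) < 1e-9, "amplitudes must be normalized"
    return 0 if random.random() < p0 else 1

# An equal superposition, the "alive-and-dead cat" state.
alpha = beta = 1 / 2 ** 0.5

counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure(alpha, beta)] += 1
print(counts)  # roughly half 0s and half 1s
```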

None of this came out of thin air, not by a long shot. It's because of mega brains like mathematician Peter Shor and physicist David Deutsch that such esoteric notions became the foundation stones of this tech revolution.

Oh, the progress we've seen over the years! It's gone from an abstract theory to multiple working models. And the size difference? We're talking the Hulk's magnificent transformation, except in reverse. The bulky early machines have made way for the streamlined, chic versions we see showcased today. Notable achievements? Oh, how about Google's landmark quantum supremacy claim?

As we stand at the precipice of quantum reality, today's applications of quantum computers can give sci-fi scenarios a run for their money. From creating rich, complex models of real-world systems to uncrackable codes, quantum computing is making waves. As for industries, we're talking revolution in sectors like pharmaceuticals, weather prediction, finance, and more. If you're skeptical, remember: it's all in the Matrix!

Of course, every venture has its share of thorny patches. As I always say: "Hold on to your hats, it's not all quantum rainbows and tech butterflies." Admittedly, quantum computing is not immune to challenges, and there are controversies surrounding error rates and operational difficulties. But hey, no pain, no gain, right?

Peering into the quantum future might just feel like staring into a time vortex. Are we moving towards a quantum invasion? Maybe, maybe not. But as a famous scientist aptly put it, "Prediction is very difficult, especially if it's about the future." Ain't that the truth!

So, from our existential friend Schrödinger's controversial pet to the Quantum Avengers, the quantum leap is indeed real. The question is, what part will you play in this quantum saga? Think it over while I sign off with: may the qubits be ever in your favour! Now, keep calm and compute quantumly!

So, you've reached the end of this riveting quantum computing journey and you're thirsty for more? Don't fret, we've got you covered, faster than you can say "Schrödinger's Cat"! Here are some additional resources to keep you quantum-leaping forward in your understanding of this mind-boggling field:

1. Quantum Computing for the very curious

https://www.quantum.country/qcvc

A super engaging, interactive introduction to quantum computing. Great for beginners, but fascinating for experts too!

2. The Nature of Quantum Computing

https://www.nature.com/subjects/quantum-computing

An in-depth resource for those eager to dive into the rabbit hole of research articles and scientific papers.

3. 10 Things To Know About Quantum Computing

https://www.forbes.com/sites/bernardmarr/2018/09/06/10-things-to-know-about-quantum-computing/

Just like it sounds, this Forbes article provides a quick rundown of 10 key facts. Who doesn't love a good ol' listicle?

4. Quantum Computing Explained

https://www.ibm.com/cloud/learn/quantum-computing

IBM's page offers an easy-to-grasp breakdown of quantum computing. Don't get me wrong, this still isn't kindergarten stuff!

Scour through these resources, and you'll be talking qubits, superpositions, and quantum entanglement like a bona fide quantum physicist (or at least like you belong in a Star Trek episode). Remember, in the words of Douglas Adams, "I may not have gone where I intended to go, but I think I have ended up where I needed to be." Good luck on your quantum quest!

And now, dear esteemed cybernauts, for the part you've all been steadfastly scrolling for: the flamboyant flourish finale, the cherry on the cake of tech wisdom, the disclaimer! Brace yourselves for a twist so outrageous, you might mistake it for a trendsetting sci-fi movie plot.

Prepare to be as stunned as if you've accidentally mixed up your VR goggles with your 3D movie glasses: portions of this tasty tech-blog morsel were tastefully composed with the help of Artificial Intelligence. Yes, you heard that right: the same kind of tech that's so hot right now, it makes quantum physics seem like a rubber duck in a science sink!

Why, you ask? Well, because AI is cooler than a polar bear's toenails and it's very much here to stay. Besides, let's face it, these machine learning maestros are way better at writing than us, humble humans, who still rely on pulsating grey blobs ensconced within our craniums to cobble together clunky sentences.

So there you have it, my dear digital denizens, our blog's virtual secret sauce. Remember, "Resistance is futile." (Anyone else catch that cheeky Star Trek reference?) But don't worry, a robot rebellion isn't on the cards just yet. Only high-class tech content and the odd laugh here and there!

Remember: today's science fiction is tomorrow's science fact. Long live AI, the guardian angel of this blog post and, soon enough, a whole lot more!

Read more:
Quantum computing gets real. In the race between man and machine | by Feed Forward | Dec, 2023 - Medium

Anticipating the Next Technological Revolution: Trends and Insights – Medium


In the ever-evolving landscape of technology, anticipating the next revolution is both a challenge and an exciting prospect. As we navigate the currents of innovation, identifying emerging trends provides valuable insights into the transformative technologies that will shape our future. This article explores the trends and insights that herald the arrival of the next technological revolution.

Problem:

The pace of technological change can be overwhelming, and industries must adapt to stay relevant. Disruptions caused by unforeseen technological shifts can catch businesses off guard, leading to obsolescence. The challenge lies in deciphering the signals of change and understanding how these trends will impact various sectors.

Solution:

AI and ML continue to dominate the technological landscape, promising transformative changes across industries. From autonomous vehicles to personalized healthcare, the integration of AI is reshaping how we live and work. Insights derived from massive datasets enable more informed decision-making and open new frontiers for innovation.

The rollout of 5G technology represents a quantum leap in connectivity. With faster speeds and lower latency, 5G is set to revolutionize communication, enabling the Internet of Things (IoT), augmented reality, and immersive experiences. Industries, from healthcare to manufacturing, will benefit from the unprecedented connectivity that 5G brings.

Quantum computing is on the cusp of a breakthrough that will redefine computational power. With the ability to process complex calculations at speeds unimaginable with classical computers, quantum computing holds promise for solving previously unsolvable problems in fields like cryptography, drug discovery, and optimization.

More here:
Anticipating the Next Technological Revolution: Trends and Insights - Medium

Fujitsu and Consortium Develop Advanced 64-Qubit Quantum Computer at Osaka University – HPCwire

TOKYO and OSAKA, Japan, Dec. 20, 2023 -- A consortium of joint research partners including the Center for Quantum Information and Quantum Biology at Osaka University, RIKEN, the Advanced Semiconductor Research Center at the National Institute of Advanced Industrial Science and Technology (AIST), the Superconducting ICT Laboratory at the National Institute of Information and Communications Technology (NICT), Amazon Web Services, e-trees.Japan, Inc., Fujitsu Limited, NTT Corporation (NTT), QuEL, Inc., QunaSys Inc., and Systems Engineering Consultants Co.,LTD. (SEC) today announced the successful development of Japan's third superconducting quantum computer, installed at Osaka University.

Starting December 22, 2023, the partners will provide users in Japan access to the newly developed computer via the cloud, enabling researchers to execute quantum algorithms, improve and verify the operation of software, and explore use cases remotely.

The newly developed superconducting quantum computer uses a 64-qubit chip provided by RIKEN, which leverages the same design as the chip in RIKEN's first superconducting quantum computer, which was unveiled to users in Japan as a cloud service for non-commercial use on March 27, 2023.

For the new quantum computer, the research team sourced more domestically manufactured components (excluding the refrigerator). The research team confirmed that the new quantum computer, including its components, provides sufficient performance and will utilize the computer as a test bed for components made in Japan.

Moving forward, the research group will operate the new computer while improving its software and other systems for usage including the processing of heavy workloads on the cloud. The research team anticipates that the computer will drive further progress in the fields of machine learning and the development of practical quantum algorithms, enable the exploration of new use cases in material development and drug discovery, and contribute to the solution of optimization problems to mitigate environmental impact.

The joint research group is comprised of: Dr. Masahiro Kitagawa (Professor, Graduate School of Engineering Science, Director of the Center for Quantum Information and Quantum Biology at Osaka University), Dr. Makoto Negoro (Associate Professor, Vice Director of the Center for Quantum Information and Quantum Biology at Osaka University), Dr. Yasunobu Nakamura (Director of the RIKEN Center for Quantum Computing (RQC)), Dr. Katsuya Kikuchi (Group Leader of the 3D Integration System Group of the Device Technology Research Institute at AIST), Dr. Hirotaka Terai (Executive Researcher at the Superconductive ICT Device Laboratory at the Kobe Frontier Research Center of the Advanced ICT Research Institute of NICT), Dr. Yoshitaka Haribara (Senior Startup Machine Learning and Quantum Solutions Architect, Amazon Web Services), Dr. Takefumi Miyoshi (Director of e-trees.Japan, Inc., Specially Appointed Associate Professor, Center for Quantum Information and Quantum Biology at Osaka University, CTO of QuEL, Inc.), Dr. Shintaro Sato (Head of Quantum Laboratory, Fujitsu Research, Fujitsu Limited), Dr. Yuuki Tokunaga (Distinguished Researcher at NTT Computer & Data Science Laboratories), Yosuke Ito (CEO of QuEL, Inc.), Keita Kanno (CTO of QunaSys Inc.), and Ryo Uchida (Chief Technologist of Systems Engineering Consultants Co.,LTD. (SEC)).

About Center for Quantum Information and Quantum Biology at Osaka University

The Center for Quantum Information and Quantum Biology consists of six research groups: Quantum Computing, Quantum Information Fusion, Quantum Information Devices, Quantum Communications and Security, Quantum Measurement and Sensing, and Quantum Biology, promoting research in each field and transdisciplinary research among these fields as well as other academic fields. The center, as an international research hub for quantum innovations, promotes international academic exchanges and takes a key role in human resources development toward social implementation. For more information, visit https://qiqb.osaka-u.ac.jp/en.

About Fujitsu

Fujitsu's purpose is to make the world more sustainable by building trust in society through innovation. As the digital transformation partner of choice for customers in over 100 countries, our 124,000 employees work to resolve some of the greatest challenges facing humanity. Our range of services and solutions draw on five key technologies: Computing, Networks, AI, Data & Security, and Converging Technologies, which we bring together to deliver sustainability transformation. Fujitsu Limited (TSE:6702) reported consolidated revenues of 3.7 trillion yen (US$28 billion) for the fiscal year ended March 31, 2023 and remains the top digital services company in Japan by market share. Find out more: http://www.fujitsu.com.

Source: Fujitsu

Continued here:
Fujitsu and Consortium Develop Advanced 64-Qubit Quantum Computer at Osaka University - HPCwire

The Biggest Discoveries in Computer Science in 2023 – Quanta Magazine

In 2023, artificial intelligence dominated popular culture showing up in everything from internet memes to Senate hearings. Large language models such as those behind ChatGPT fueled a lot of this excitement, even as researchers still struggled to pry open the black box that describes their inner workings. Image generation systems also routinely impressed and unsettled us with their artistic abilities, yet these were explicitly founded on concepts borrowed from physics.

The year brought many other advances in computer science. Researchers made subtle but important progress on one of the oldest problems in the field, a question about the nature of hard problems referred to as P versus NP. In August, my colleague Ben Brubaker explored this seminal problem and the attempts of computational complexity theorists to answer the question: Why is it hard (in a precise, quantitative sense) to understand what makes hard problems hard? "It hasn't been an easy journey; the path is littered with false turns and roadblocks, and it loops back on itself again and again," Brubaker wrote. "Yet for meta-complexity researchers, that journey into an uncharted landscape is its own reward."

The year was also full of more discrete but still important pieces of individual progress. Shor's algorithm, the long-promised killer app of quantum computing, got its first significant upgrade after nearly 30 years. Researchers finally learned how to find the shortest route through a general type of network nearly as fast as theoretically possible. And cryptographers, forging an unexpected connection to AI, showed how machine learning models and machine-generated content must also contend with hidden vulnerabilities and messages.
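As a refresher on what Shor's algorithm actually does, here is a hedged, purely classical sketch of the number-theoretic wrapper around it: once a period r of a^x mod N is known, the factors of N fall out of a greatest-common-divisor calculation. The brute-force period finding below stands in for the quantum step and only works for toy numbers; none of this is code from the article, just a textbook-style illustration.

```python
from math import gcd

def find_period(a: int, n: int) -> int:
    # Smallest r > 0 with a^r = 1 (mod n). Done here by brute force;
    # this is the step a quantum computer running Shor's algorithm speeds up.
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def factor_via_period(n: int, a: int):
    # Classical pre- and post-processing around period finding.
    g = gcd(a, n)
    if g != 1:
        return g, n // g          # lucky guess: a already shares a factor with n
    r = find_period(a, n)
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None               # unlucky choice of a; pick another and retry
    p = gcd(pow(a, r // 2) - 1, n)
    q = gcd(pow(a, r // 2) + 1, n)
    return (p, q) if p * q == n else None

print(factor_via_period(15, 7))   # (3, 5)
print(factor_via_period(21, 2))   # (7, 3)
```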

Some problems, it seems, are still beyond our ability to solve, for now.

Read this article:
The Biggest Discoveries in Computer Science in 2023 - Quanta Magazine

Year of covers: Tech and sport, quantum advances and Gen AI – Technology Magazine

From groundbreaking breakthroughs in AI and quantum computing to the continued evolution of augmented and virtual reality, 2023 has witnessed a surge of innovation that is poised to revolutionise our world.

AI continues to evolve at an astonishing pace, with advancements in natural language processing (NLP) enabling more natural and intuitive human-computer interactions. Computer vision, another key AI domain, has made strides in image and video analysis, leading to improved object detection, facial recognition, and medical imaging capabilities. AI is also making significant contributions in drug discovery, medical diagnosis, and self-driving car development, further demonstrating its transformative potential.

The immersive worlds of augmented reality (AR) and virtual reality (VR) have taken significant steps forward, blurring the lines between the physical and digital realms. AR applications are becoming increasingly prevalent in gaming, education, and training, enhancing real-world experiences with digital overlays. VR, meanwhile, is gaining momentum in entertainment, healthcare, and remote collaboration, offering users immersive and interactive experiences.

Quantum computing, still in its early stages, holds immense promise for solving problems that are intractable for classical computers. Researchers are making progress in building and optimizing quantum computers, paving the way for breakthroughs in fields like materials science, drug discovery and AI.

All of these topics and more have featured in our magazine over the past 12 months, and the trends we have witnessed are likely to accelerate in the years to come. As 2023 comes to a close, join us for a review of Technology Magazine's covers from 2023.

Here is the original post:
Year of covers: Tech and sport, quantum advances and Gen AI - Technology Magazine

AiThority Interview with Dr. Alan Baratz, CEO at D-Wave – AiThority

Hi, welcome to the AiThority Interview Series. Please tell us a bit about yourself and what D-Wave is.

I am Dr. Alan Baratz, President and CEO of D-Wave (NYSE: QBTS).

D-Wave is a leader in quantum computing technology and the world's first commercial supplier of quantum computers. Our technology has been used by some of the world's most advanced organizations, including Volkswagen, Mastercard, Deloitte, Siemens Healthineers, Pattison Food Group Ltd, DENSO, Lockheed Martin, the University of Southern California, and Los Alamos National Laboratory.

The global quantum computing market is rapidly growing, and some market analysts project it will reach upwards of 6 billion+ by the end of this decade. As 2023 closes, it will be interesting to see how quantum computing influences 2024. The future of quantum computing will largely relate to rapid government adoption, the future of work, and quantum supremacy.

With economists projecting a shallow recession in 2024, organizations will seek new technologies, such as quantum computing, to navigate adversity and bolster business resilience. Quantum technologies can accelerate problem-solving and decision-making for a wide range of common organizational processes, such as supply chain management, manufacturing efficiency, logistical planning, and employee scheduling. Amidst a challenging economic environment, quantum's ability to fuel operational efficiencies is critical.

The industry will achieve a proven, defensible quantum supremacy result in 2024. Ongoing scientific and technical advancements indicate that we are not far from achieving quantum supremacy. 2024 will be the year when quantum definitively outperforms classical, full stop. There will be clear evidence of quantum's ability to solve a complex computational problem previously unsolvable by classical computing, and quantum will solve it faster, better, and with less power consumption.

The breakthrough we've all been pursuing is coming.

The US government's usage of annealing quantum computing will explode given the anticipated passing of legislation including the National Quantum Initiative and the National Defense Authorization Act. 2024 will see a rapid uptick in quantum sandbox and test bed programs with directives to use all types of quantum technology, including annealing, hybrid, and gate models. These programs will focus on near-term application development to solve real-world public sector problems, from public vehicle routing to electric grid resilience.

The global quantum race will continue to heat up, as the U.S. and its allies aggressively push for near-term application development. While the U.S. is now starting to accelerate near-term applications, other governments like Australia, Japan, the U.K., and the E.U. have been making expedited moves to bring quantum in to solve public sector challenges. This effort will greatly expand in 2024.

Top public sector areas of focus will likely be sustainability, transportation and logistics, supply chain, and health care.

Quantum computing will show proven value and utility in daily business operations through in-production applications.

As we close 2023, companies are beginning to go into production with quantum-hybrid applications, so it's no stretch of the imagination to see corporations using quantum solutions daily for ubiquitous business challenges such as employee scheduling, vehicle routing, and supply chain optimization. In time, it will become a part of every modern IT infrastructure, starting with the integration of annealing quantum computing.
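For a concrete sense of the kind of problem annealing-style systems are pitched at, here is a minimal, purely classical sketch of a QUBO (quadratic unconstrained binary optimization), the general formulation that quantum annealers and hybrid solvers accept. The toy "pick the two most valuable jobs" instance, its job values, and the penalty weight are all invented for illustration; this is not D-Wave's API, and a real workload would be handed to an annealer or hybrid solver rather than brute-forced.

```python
from itertools import product

# Toy scheduling problem expressed as an energy function over binary variables,
# the shape of problem an annealer samples. We brute-force it classically here.

values = [3, 1, 4, 2]    # illustrative benefit of running each of 4 jobs
PENALTY = 10             # weight enforcing the "choose exactly 2 jobs" constraint

def energy(x):
    # Minimize: -(total value of chosen jobs) + PENALTY * (number chosen - 2)^2
    value_term = -sum(v * xi for v, xi in zip(values, x))
    constraint_term = PENALTY * (sum(x) - 2) ** 2
    return value_term + constraint_term

best = min(product([0, 1], repeat=len(values)), key=energy)
print("best assignment:", best, "energy:", energy(best))
# -> (1, 0, 1, 0): jobs 0 and 2, the highest-value pair satisfying the constraint
```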

Originally posted here:
AiThority Interview with Dr. Alan Baratz, CEO at D-Wave - AiThority

Quantum Encryption: Revolutionizing Cybersecurity in the Quantum Age | by Ashish Wilson | Dec, 2023 – Medium


In a world increasingly dependent on digital communication and data exchange, the need for robust cybersecurity measures has never been more critical. Traditional encryption methods, while effective, face growing challenges from the rapid advancements in quantum computing. Enter quantum encryption, a cutting-edge technology poised to revolutionize cybersecurity as we know it.

Quantum encryption leverages the principles of quantum mechanics to secure communication channels against potential threats posed by quantum computers. Unlike classical encryption methods that rely on complex mathematical algorithms, quantum encryption uses the unique properties of quantum particles to ensure unparalleled security.

1. Quantum Key Distribution (QKD):

At the heart of quantum encryption is Quantum Key Distribution (QKD), a game-changing technique that enables the secure exchange of cryptographic keys between parties. Unlike classical key distribution methods, QKD employs the quantum properties of particles such as photons to detect any unauthorized interception instantly (a toy simulation of this idea follows this list).

2. Unbreakable Security:

One of the most significant advantages of quantum encryption is its resistance to brute-force attacks, a vulnerability that classical encryption methods currently face. Quantum encryption promises unbreakable security by exploiting the fundamental principles of quantum mechanics, making it practically impossible for hackers to decipher encoded information.
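As promised above, here is a minimal, hedged simulation of the sifting step in a BB84-style QKD exchange. It is plain Python with random bits standing in for photon polarizations, written only to show why an intercept-resend eavesdropper leaves a detectable error rate (around 25%) in the sifted key; the function name, round counts, and the simplified eavesdropper model are assumptions for illustration, not a description of any deployed QKD system.

```python
import random

def bb84_sift(n_rounds: int, eavesdropper: bool) -> float:
    """Return the observed error rate in the sifted key for a toy BB84 run."""
    errors, kept = 0, 0
    for _ in range(n_rounds):
        bit = random.randint(0, 1)          # Alice's secret bit
        alice_basis = random.randint(0, 1)  # 0 = rectilinear, 1 = diagonal
        bob_basis = random.randint(0, 1)

        sent = bit
        if eavesdropper:
            eve_basis = random.randint(0, 1)
            # Measuring in the wrong basis destroys the information Eve re-sends.
            sent = bit if eve_basis == alice_basis else random.randint(0, 1)

        # Bob recovers the bit reliably only when his basis matches Alice's.
        received = sent if bob_basis == alice_basis else random.randint(0, 1)

        if bob_basis == alice_basis:        # sifting: keep matching-basis rounds
            kept += 1
            errors += received != bit
    return errors / kept if kept else 0.0

print("error rate without spy:", bb84_sift(20_000, eavesdropper=False))  # ~0.00
print("error rate with spy   :", bb84_sift(20_000, eavesdropper=True))   # ~0.25
```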

As the cyber threat landscape continues to evolve, the integration of quantum encryption brings about several positive impacts on cybersecurity:

1. Future-Proofing Against Quantum Computing Threats:

Quantum computers, with their immense processing power, pose a potential threat to traditional encryption algorithms. Quantum encryption, however, is designed to withstand the computational capabilities of quantum computers, future-proofing sensitive data against emerging threats.

2. Enhanced Data Integrity:

Quantum encryption not only secures data transmission but also ensures data integrity. The quantum properties of particles used in encryption make it possible to detect any attempts at tampering with the transmitted information, providing an additional layer of protection.

3. Global Secure Communication Networks:

The implementation of quantum encryption paves the way for the establishment of global secure communication networks. Governments, enterprises, and individuals can exchange information with unprecedented confidence, knowing that their data is shielded by the impenetrable cloak of quantum security.

As we stand on the brink of the quantum era, the integration of quantum encryption marks a pivotal moment in the evolution of cybersecurity. The unbreakable security offered by quantum encryption, coupled with its ability to future-proof against quantum computing threats, positions it as the guardian of our digital future. Embracing this revolutionary technology will undoubtedly reshape the landscape of cybersecurity, ensuring a more secure and resilient digital world for generations to come.


Continue reading here:
Quantum Encryption: Revolutionizing Cybersecurity in the Quantum Age | by Ashish Wilson | Dec, 2023 - Medium

Quantum AI Brings the Power of Quantum Computing to the Public – GlobeNewswire

Luton, Dec. 20, 2023 (GLOBE NEWSWIRE) -- Quantum AI is set to bring the power of quantum computing to the public and has already reached a stunning quantum volume (QV) score of 14,082 in a year since its inception.

Quantum AI Ltd. was conceived by Finlay and Qaiser Sajjad during their time as students at MIT. They were inspired by the exclusive use of new-age technology by the elites on Wall Street. Recognising the transformative power of this technology, they were determined to make its potential accessible to all. Thus, the platform was born, and it has evolved and flourished in just a short time.

Quantum AI

Often, everyday traders have limited access to such advanced tools.

"We are fueled by the belief that the power of quantum computing should not be confined to the financial giants but should be available to empower amateur traders as well," asserted the founders of the platform. Since its launch in 2022, they have worked to achieve this vision and have become a significant force in the industry.

The platform combines the power of the technology with the strength of artificial intelligence. By using these latest technologies, including machine learning, algorithms that are more than just lines of code have been created. They harness the potential of quantum mechanics and deep learning to analyse live data in unique ways.

"Our quantum system leverages quantum superposition and coherence, providing a quantum advantage through sophisticated simulation and annealing techniques," added the founders.

Quantum AI has shown exceptional results in a brief period. It has received overwhelmingly positive reviews from customers, highlighting the enhanced speed and accuracy of trading. The transformative and groundbreaking impact the platform has had on trading is evident in its growth to 330,000 active members. Notably, it has nearly 898 million lines of code and an impressive quantum volume score of 14,082. The performance on this benchmark, which IBM established, is a massive testament to the impact Quantum AI has had in a short span of time.

According to the founders, they have bigger plans on the horizon to take the power of the technology to the public. Quantum AI is growing its team of experts and expanding its operations in Australia and Canada. Its goal of democratising the power of technology is well on its way to being realised. With trading being the first thing they cracked to pay the bills, the main focus has turned to aviation, haulage and even e-commerce. The power of

To learn more about the platform and understand the transformative power of the technology for traders, one can visit https://quantumai.co/.

About Quantum AI

With the aim of democratising the power and potential of quantum computing, the company was founded by Finlay and Qaiser Sajjad during their time at MIT. Since its establishment, it has grown to over 330,000 active members and 18 full-time employees, alongside winning the trust of its customers.

###

Media Contact

Quantum AI PR Manager: Nadia El-Masri
Email: nadia.el.masri@quantumai.co
Address: Quantum AI Ltd, 35 John Street, Luton, United Kingdom, LU1 2JE
Phone: +442035970878
URL: https://quantumai.co/

Continue reading here:
Quantum AI Brings the Power of Quantum Computing to the Public - GlobeNewswire

Pioneering New Treatments in Deep Brain Stimulation for Parkinson’s Disease – Research Blog – Duke University

Note: Each year, we partner with Dr. Amy Sheck's students at the North Carolina School of Science and Math to profile some unsung heroes of the Duke research community. This is the second of eight posts.

Meet a star in the realm of academic medicine: Dr. Kyle Todd Mitchell!

A man who wears many hats (a neurologist with a passion for clinical care, an adventurous researcher, and an Assistant Professor of Neurology at Duke), Mitchell finds satisfaction in the variety of work, which keeps him driven and up to date in all the different areas.

Dr. Mitchell's educational journey is marked by excellence, including a fellowship at the University of California San Francisco School of Medicine, a Neurology Residency at Washington University School of Medicine, and an M.D. from the Medical College of Georgia. Beyond his professional accolades, he leads an active life, enjoying running, hiking, and family travels for rejuvenation.

Dr. Mitchell's fascination with neurology ignited during his exposure to the field in medical school and residency. It was a transformative moment when he witnessed a patient struggling with symptoms experience a sudden and remarkable improvement through deep brain stimulation. This therapy involves the implantation of a small electrode in the brain, offering targeted stimulation to control symptoms and bringing relief to individuals grappling with the challenges of Parkinson's disease.

"You don't see that often in medicine; almost like a light switch, things get better, and that really hooked me," he said. The mystery and complexity of the brain further captivated him. "Everything comes in as a bit of a mystery. I liked the challenge of how the brain is so complex that you can never master it."

Dr. Mitchell's research is on improving deep brain stimulation to alleviate the symptoms of Parkinson's disease, the second most prevalent neurodegenerative disorder, which entails a progressive cognitive decline with no cure. Current medications exhibit fluctuations, leading to tremors and stiffness as they wear off. Deep brain stimulation (DBS), FDA-approved for over 20 years, provides a promising alternative.

Dr. Mitchell's work involves creating adaptive algorithms that allow the device to activate when needed and deactivate when it is not, almost like a thermostat. He envisions a future where biomarkers recorded from stimulators could predict specific neural patterns associated with Parkinson's symptoms, triggering the device accordingly. Dr. Mitchell is optimistic, describing the technology as "very investigational but very promising."
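To picture the thermostat analogy in the abstract, here is a minimal, purely hypothetical sketch of a closed-loop controller: a biomarker reading is compared against thresholds, and stimulation switches on or off with some hysteresis. The thresholds, the biomarker stream, and the update rule are all invented for illustration; this is not Dr. Mitchell's algorithm or any clinical device's logic.

```python
# Conceptual "thermostat" loop for adaptive stimulation. Purely illustrative;
# all numbers and names are invented and do not describe a real medical device.

ON_THRESHOLD = 0.8    # biomarker level above which stimulation switches on
OFF_THRESHOLD = 0.5   # lower level at which it switches back off (hysteresis)

def update_stimulation(biomarker_level: float, currently_on: bool) -> bool:
    """Return the new stimulation state given the latest biomarker reading."""
    if not currently_on and biomarker_level >= ON_THRESHOLD:
        return True           # symptom-linked signal is high: turn stimulation on
    if currently_on and biomarker_level <= OFF_THRESHOLD:
        return False          # signal has settled: turn it off
    return currently_on       # otherwise hold the current state

# Example run over a made-up stream of biomarker readings.
state = False
for reading in [0.3, 0.6, 0.9, 0.85, 0.7, 0.4, 0.2]:
    state = update_stimulation(reading, state)
    print(f"biomarker={reading:.2f} -> stimulation {'ON' if state else 'OFF'}")
```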

A key aspect of Dr. Mitchell's work is its interdisciplinary nature, involving engineers, neurosurgeons, and fellow neurologists. Each member of the team brings a unique expertise to the table, contributing to the collaborative effort required for success. Dr. Mitchell emphasizes, "None of us can do this on our own."

Acknowledging the challenges they face, especially when dealing with human subjects, Dr. Mitchell underscores the importance of ensuring research has a high potential for success. However, the most rewarding aspect, according to him, is being able to improve the quality of life for patients and their families affected by debilitating diseases.

Dr. Mitchell has a mindset of constant improvement, emphasizing the improvement of current technologies and pushing the boundaries of innovation.

"It's never just one clinical trial; we are always thinking how we can do this better," he says.

The pursuit of excellence is not without its challenges, particularly when attempting to improve on already effective technologies. Dr. Mitchell juggles his hats of educator, caregiver, and researcher daily. So let us tip our own hats and be inspired by Dr. Mitchell's unwavering dedication to positively impacting the lives of those affected by neurological disorders.

Guest post by Amy Lei, North Carolina School of Science and Math, Class of 2025.

Original post:
Pioneering New Treatments in Deep Brain Stimulation for Parkinson's Disease - Research Blog - Duke University

Is It Time to See a Neurologist for Your Headaches? – Everyday Health

The average headache doesn't require a call to a neurologist or even your family doctor. But if you're experiencing frequent headaches and using medication for them regularly, that's a different story.

If you have a history of headaches that come once or twice a month and go away when you take an over-the-counter (OTC) medication such as acetaminophen (Tylenol) or ibuprofen (Advil), you may not need to seek further treatment, says Sandhya Kumar, MD, a neurologist and headache specialist at Wake Forest Baptist in Winston-Salem, North Carolina.

"If you're having headaches more than four times a month, especially if they are debilitating and keeping you home from work, you should see a provider for diagnosis and medication," says Dr. Kumar.

As a general rule, for nonsevere headaches, your family doctor is a great person to start with. Approximately 7 out of 10 people talk to their primary care doctor first, according to the American Headache Society.

If the recommended treatments are not working well or you have unusual symptoms, your doctor may refer you to a neurologist, who specializes in disorders of the brain and nervous system.

Possible signs that you may need to see a specialist for your headaches include:

According to headache expert Peter Goadsby, MD, PhD, a professor of neurology at the UCLA Goldberg Migraine Program in Los Angeles, a valuable tool in diagnosis is your headache history.

A thorough history, aided by your detailed notes, can pinpoint causes, triggers, and even potential solutions. Make careful notes about your headache experiences before you go to the doctor. Include the following:

Dr. Goadsby recommends using a monthly calendar so that the pattern of headache days is clearly visible to you and your doctor.

If you are having severe or disabling headaches, don't wait a full month to call for an appointment; make notes about what you recall or are experiencing, and see a doctor as soon as you can.

The tests your doctor orders will depend in part on what they suspect could be causing your headaches and whether it's a primary headache, such as a migraine or tension headache, or a secondary headache, which means that it's a symptom of another health concern.

Although primary headaches can be painful and debilitating, they arent life-threatening.

Secondary headaches are much rarer and can be the sign of a serious health issue, sometimes even one that requires urgent medical attention.

The process of diagnosis may include the following:

Medical History: Your doctor will want to know about any other health conditions you have as well as any medications, supplements, or herbal treatments you take.

Family History: Be prepared to provide details about any family members who have headaches or migraine, at what age their headaches started, and any other health diagnoses they may have. As Goadsby notes, "Very often, family members won't know they've got migraine, but they will know they are prone to headaches." Since migraine has a strong genetic component, a family history of migraine-like symptoms is an indicator that your headaches are also being caused by migraine.

Physical Exam: Your doctor will examine you, paying close attention to your head, neck, and shoulders, which can all contribute to headache pain in various ways.

Neurological Exam: A neurological exam may include tests of your vision, hearing, short-term memory, reflexes, sensation, balance, and coordination.

Blood Tests: Blood tests may be ordered to rule out infection and other health conditions that have headache as a symptom.

Spinal Fluid Test: This may be necessary if your doctor suspects that your headaches are caused by certain types of infection or by bleeding in your brain.

Urinalysis: A urine sample may be ordered to help rule out infection and other health conditions.

Imaging Tests: Computed tomography (CT) or magnetic resonance imaging (MRI) scans may be ordered. These imaging tests can show structures in your head, neck, or elsewhere in the body that may be causing your headaches.

Neuroimaging Tests: These may be done during a headache episode to get a clearer picture of what is going on during an actual headache.

Electroencephalogram (EEG): This test can show your doctor whether there are changes in brain wave activity. It can help diagnose brain tumors, seizures, head injury, and swelling in the brain.

Working closely with your family practitioner and a neurologist, if needed, will bring you closer to headache relief.

Warning signs that you need immediate medical attention for your headache or migraine include:

RELATED: When Should You Worry About Your Headache and Seek Immediate Help?

Additional reporting by Becky Upham.

See the original post:
Is It Time to See a Neurologist for Your Headaches? - Everyday Health

Brain Plaques Point to Who’ll Need Alzheimer’s Treatment Most – HealthDay

TUESDAY, Dec. 26, 2023 (HealthDay News) -- Are you necessarily at higher risk of Alzheimer's disease just because you're 80, and not 75? New research shows it's more complex than that.

The findings suggest that it's the pace of buildup in the brain of Alzheimer's-linked amyloid protein plaques that matters most, not age.

"Our findings are consistent with studies showing that the amyloid accumulation in the brain takes decades to develop," said study lead author Dr. Oscar Lopez, a professor of neurology at the University of Pittsburgh.

His team's findings were published Dec. 22 in the journal Neurology.

Neuroscientists have long known that the slow but steady accumulation of amyloid-beta protein plaques within brain tissue is a hallmark of Alzheimer's disease, although whether it actually causes the illness is still debated.

Rates of dementia do rise with advancing age, but is age alone the key factor?

To find out, Lopez' team examined amyloid buildup in the brains of 94 people who were 85 at the time they enrolled in the study. All were tracked for 11 years or until they died, and all received two PET scans of their brains during that time.

The researchers compared levels of amyloid buildup seen in those scans to those seen in scans from a younger group of patients (in their 60s) observed in a prior Australian trial.

As expected, amyloid plaque buildup rose over time, regardless of how much of the protein had infiltrated a participant's brain at the time they joined the Pittsburgh study.

Plaques seemed to accumulate faster among people in their 80s, however, compared to people in their late 60s, Lopez' team reported.

None of the elderly people in Lopez' trial who developed dementia were without some plaque buildup in their brains, confirming its key role in the disease.

Most importantly, when brain plaque buildup began seemed key to how soon dementia set in.

For example, people who were already displaying amyloid buildup in their PET scans at age 80 (when they enrolled in the study) developed dementia two years earlier than folks without such early buildup, the Pittsburgh team found.

Finally, the long-term link between amyloid beta buildup and other brain health indicators was more strongly associated with dementia than just the short-term growth of plaque on its own, Lopez' group added.

That's consistent with other studies, which found that amyloid buildup "takes decades to develop, and occurs in the context of other brain pathologies," Lopez said in a university news release.

Lopez, who also directs Pitt's Alzheimer's Disease Research Center, said that "understanding of the timing of the presence of these pathologies will be critical for the implementation of future primary prevention therapies."

More information

Find out more about Alzheimer's disease and the brain at the Alzheimer's Association.

SOURCE: University of Pittsburgh, news release, Dec. 22, 2023

Read more from the original source:
Brain Plaques Point to Who'll Need Alzheimer's Treatment Most - HealthDay

The key to brain-based adjustments: traditional chiropractic + functional neurology – Chiropractic Economics

Joseph Schneider December 22, 2023

I consider myself a patient as well as a healer. I remember lying in my hospital bed after I suffered a stroke in May 2017. I could not move the right side of my body, and I thought all my hopes and dreams had gone away, including my hopes of ever practicing as a functional neurologist and DC again or being able to work with my patients. Prior to that, I suffered three concussions from various sports injuries and motor vehicle accidents. My brain went through a lot.

I am grateful to my colleagues in chiropractic functional neurology and for the knowledge I gained from extensive experience with my patients. Through it all, I have learned a lot about brain recovery from overcoming my stroke, and now I am committed to helping my patients achieve optimal health without medications and/or surgeries. Now I have a vision to help other DCs learn about the future innovations in technology, and the research and breakthroughs happening today, which will enable chiropractic functional neurologists to lead the field in brain regeneration.

At my clinic, I work to heal hurt brains; all the treatment plans incorporate chiropractic adjustments, functional neurology and the most cutting-edge modalities available. Those modalities include the latest technology to accelerate patient outcomes, such as oxygen therapy, multi-axis rotating chair therapy, neurofeedback, interactive noninvasive imaging studies, vibration therapy and photobiomodulation (low-level light therapy).

The most important aspect of any brain-specific program is to have an impact that changes the patient's life in four critical areas: 1. work relationships, 2. recreation, 3. household chores and 4. sleep. The goal is to make life more dynamic and vibrant for the individual.

Many patients come to my clinic after they have exhausted all other methods. My website is full of video testimonials from patients who regained their health thanks to the latest technologies the center uses for brain improvement. Brain-specific rehabilitation is a combination of functional medicine and functional neurology. All the pieces have to be put in place for maximum improvement and outcome. In addition, there is a different and very specific order for each patient. Each usually has a collection of symptoms, and they all need to be traced back to the systems in the body and treated holistically and synergistically.

Some patients experienced asymptomatic concussions. Perhaps the injury occurred 20 years ago, and they thought they were OK, but lately have noticed signs of dementia, brain fog and emotional challenges. These symptoms, we know, are related to the degeneration of the brain over time as a result of past trauma. The good news is we can help people at any point. We treat many patients who have struggled for years since their brain injury.

I started studying functional neurology in 1989 with Ted Carrick, DC, PhD, MS-HPEd. Carrick has been my friend, my mentor and my doctor after my stroke. In the 1990s, at the beginning of my career, I used to go to seminars with Carrick before I got board-certified in neurology. After a weekend of learning, I would go back to my practice and start to look at eye movements, balance, and finger-to-nose tests for metric movements and dysmetria of the cerebellum. I looked carefully at my patients because Carrick's big lesson was to "know normal."

The brain is the master control system of the body, so as chiropractors we use spinal manipulation as a way of improving function throughout the body. Our rightful place is to continue to have our examination skills at a level at which we can look at the function of the brain and the brain's interaction with the body. And looking at most of the contemporary diseases today, such as movement disorders, dystonia, visual issues, visual dysfunction, vertigo, dizziness and balance issues, chiropractors can improve their skills and understanding of the issues by observing patients and absorbing abnormal findings. Traditional chiropractic methods combined with brain-based adjustments can improve patient outcomes.

You may wonder if DCs can use their technique to change brain function. The answer is yes! It's called brain-based adjustments. There are ways of adjusting the spine that have a brain effect. If you want to, you can take your practice to the point where you start evaluating brain function through examination techniques and diagnostic technologies. By evaluating in this way, you can rate function and create a baseline for the patient. Then you can use different technologies and exercise systems to actually improve the pathways in the brain through connectivity and also take neurons from stem cells and create new neurons in the cortical areas.

Once I understood the brain was the master control system of the body, a light went off in my brain, and I knew I wanted to be a DC. I was an engineer, so people I told would ask me, "Why do you want to be a chiropractor? Why not be a medical doctor?" But I said, "No, I want to change the master control system of the body." And that's what I did. I left my engineering career and went off to New York Chiropractic. When I look back, I realize my love of chiropractic prompted my love of changing brains. Now every single patient who comes to my office gets adjusted.

JOSEPH SCHNEIDER, DC, is a Board Certified Chiropractic Neurologist. He graduated from New York Chiropractic College in 1987. Schneider is in clinical practice at Hope Pain Relief in Chadds Ford, Penn.

Here is the original post:
The key to brain-based adjustments: traditional chiropractic + functional neurology - Chiropractic Economics

NeurologyLive Year in Review 2023: Top Stories in Movement Disorders – Neurology Live

The NeurologyLive staff was hard at work in 2023, covering clinical news and data readouts from all over the United States and the world, across a number of key neurology subspecialty areas. From major study data and FDA decisions to medical society conference sessions and expert conversations, the team spent all year bringing the latest news and updates to the website's front page.

Among our key focus areas are movement disorders, which include a number of complex diseases that have benefitted greatly from recent advances in medical care and therapeutic development. Although major news items, such as first-time approvals or new guidelines, often appear among the top pieces our team produces, sometimes smaller stories reach those heights for other reasons, such as clinical impact and interest, or concerns about other facets of care, for example. Whatever the reason for the attention these stories got, their place here helps provide an understanding of the themes in this field.

Here, we'll highlight some of the most-read content on NeurologyLive this year. Click the buttons to read further into these stories.

Exploratory findings from a phase 3 randomized, controlled trial (NCT03329508) assessing P2B001 (Pharma Two B), a low dose combination of extended-release pramipexole and rasagiline, in Parkinson disease (PD) showed efficacy that was comparable to extended-release pramipexole (Prami-ER) alone, but with reduced sleep-related and dopaminergic adverse events (AEs). Pharma Two B planned to submit a new drug application for P2B001 to the FDA in 2023.

Newly announced findings from a triple-blinded, randomized controlled trial showed that treatment with the SYMBYX Neuro infrared light therapy helmet significantly improved symptoms of Parkinson disease (PD) in areas of facial expression, upper and lower limb coordination and movement, walking gait, and tremor. Using the standardized Movement Disorder Society-Unified Parkinson's Disease Rating Scale-III (UPDRS-III), those on the light therapy improved 24% to 58% over baseline across all 5 areas tested, unlike the placebo group, which demonstrated statistically valid improvement in lower limb coordination and movement only.

After showing positive results in a phase 3 clinical program, the FDA accepted Revance Therapeutics' supplemental biologics license application (sBLA) for daxibotulinumtoxinA injection (Daxxify) as a new treatment for adults with cervical dystonia. The agency ultimately approved the therapy on August 15, 2023. DaxibotulinumtoxinA is an acetylcholine release inhibitor and neuromuscular blocking agent indicated for the temporary improvement of moderate to severe glabellar lines associated with corrugator and/or procerus muscle activity in adults. To date, the therapy has shown promising results in 2 phase 3 studies of cervical dystonia, ASPEN-1 (NCT03608397) and ASPEN-OLS (NCT03617367).

Despite years of use of gold-standard therapy levodopa, therapeutic development in Parkinson disease has advanced rapidly and expanded to numerous novel pathways and targets. MedStar Georgetown's team of Katelynn Getchell; Gonul Ozay, MD; Brian Nagle, MD; Irma Zhang, MD; Luke Lovelace; Emma Waldon, RN; Yasar Torres-Yaghi, MD; and Fernando L. Pagán, MD explore this in depth.

Topline data from the Synuclein-One Study of CND Life Sciences' Syn-One Test, an α-synuclein skin biopsy test used for the detection of the pathology in Parkinson disease (PD), dementia with Lewy bodies (DLB), multiple system atrophy (MSA), and pure autonomic failure (PAF), suggest that the test is sensitive and specific in said detection of phosphorylated α-synuclein. As misdiagnosis remains a consistent challenge in neurodegenerative disorders (some estimates suggest a misdiagnosis rate of 30%), this represents an opportunity to address this clinical obstacle.

Using prospective cohort studies of community-dwelling elders followed up to 20 years, findings published in Neurology identified specific cognitive and functional declines in patients who developed incident Parkinson disease (PD). There were important sex differences as well, as men with incident PD had a steeper decline in executive function compared with women, but only women with incident PD exhibited detectably faster prediagnostic decline in global cognition.

BIAL R&D announced the dosing of the first patient in its phase 2 clinical trial, ACTIVATE (NCT05819359), to investigate BIA 28-6156, an allosteric activator of the enzyme beta-glucocerebrosidase (GCase), as a treatment for patients with genetically mutated Parkinson disease (PD). The trial, which includes those with a mutation in the glucocerebrosidase 1 (GBA1) gene (GBA-PD), the most common genetic risk factor for the disease, is screening patients across sites in North America, with a Europe-based trial planned to initiate in the third quarter of 2023.

Biogen and Denali have announced that they are discontinuing a portion of the clinical development program for BIIB122 (also known as DNL151), an investigational small molecule inhibitor of LRRK2 in development for the treatment of Parkinson disease (PD). As a result of this decision, the phase 3 LIGHTHOUSE study (NCT05418673), which was initiated in September 2022, will be terminated.

New data from a first-in-human phase 1 study (NCT04802733) assessing bemdaneprocel (BlueRock Therapeutics/Bayer), an investigational cell therapy, showed that the agent met its primary objective of safety, with encouraging results on other measures of motor and nonmotor outcomes. Based on these results, the companies are planning for a phase 2 trial that is expected to begin enrolling patients in the first half of 2024.

The FDA has issued a complete response letter (CRL) to Amneal Pharmaceuticals for IPX203, an oral formulation of carbidopa/levodopa (CD/LD) extended-release capsules designed for the treatment of Parkinson disease. The reasons behind the decision were not based on efficacy or manufacturing for the agent, but rather established safety for an ingredient of the therapy. Amneal plans to work closely with the FDA to address the comments and align on the best path forward.

Read more from the original source:
NeurologyLive Year in Review 2023: Top Stories in Movement Disorders - Neurology Live

NeurologyLive Year in Review 2023: Most Watched Interviews in Sleep Disorders – Neurology Live

In 2023, the NeurologyLive team spoke with hundreds of people and posted hundreds of hours of interview clips. The staff spoke with neurologists, investigators, advanced practice providers, physical therapists, advocates, patients, pharmacists, and industry experts: anyone involved in the process of delivering clinical care.

These conversations were had with individuals from all over the world, both virtually and in person. The team attended more than a dozen annual meetings of medical societies, each time sitting down with experts on-site to learn more about the conversations driving care and the challenges being overcome.

From those in the field of sleep medicine, we learned much this year: recent updates to restless legs syndrome care; the challenges in managing narcolepsy's secondary symptoms; CPAP's effects in other neurologic disorders; heart health associations with sleep; and more.

Here, we'll highlight the most-viewed expert interviews on NeurologyLive this year. Click the buttons to watch more of our conversations with these experts.

The chief of the Sleep Disorders Clinical Research Program at Massachusetts General Hospital provided insight on new updates to the management of restless legs syndrome, including removing dopamine agonists as first line treatments.

WATCH TIME: 8 minutes

"Dopamine agonists are not first line treatments. The reason for that is theres substantial evidence that dopamine agonists when used for restless legs syndrome are associated with an augmentation of symptoms, a worsening of the underlying disorder."

The pediatric neurologist and sleep medicine specialist at Geisinger Medical Center provided commentary on the current unmet needs for patients with narcolepsy, including improvements in treatment options.

WATCH TIME: 3 minutes

"The reality is the disease is characterized, at a minimum, by a pentad of symptoms, which is excessive daytime sleepiness, cataplexy, sleep-related hallucinations, sleep paralysis, and disturbed nocturnal sleep.

The associate professor in the department of neurology and neurosurgery at McGill University discussed results from a study on the long-term use of continuous positive airway pressure treatment among patients with multiple sclerosis and sleep apnea presented at MSMilan 2023.

WATCH TIME: 5 minutes

"Our study indicates that CPAP treatment in patients with MS and sleep apnea is associated with a reduction in fatigue and an improvement in physical quality of life, offering potential benefits for long-term symptom management. Clinicians should consider exploring sleep apnea as a factor contributing to fatigue and poor sleep quality in patients with MS, as adequate treatment may lead to noticeable symptom improvement."

The medical director of SleepMed in South Carolina discussed the need for more overall awareness of poor sleep and the risk factors associated with worsened heart health.

WATCH TIME: 3 minutes

"Its critical to get the word out. We need to understand whats happening biologically, in terms of sleep homeostasis, sleep wake processor, and how thats controlled. What are the set points of heart rate and blood pressure? How are they modified? [We need to] Get the message out."

The cofounder and chief product development officer of Zevra Therapeutics talked about the phase 1 clinical trial of KP1077 for narcolepsy and potentially using it to treat other conditions.

WATCH TIME: 5 minutes

"The biggest thing with this study is it will help us inform study designs for future research. We are looking for the appropriate dosing regimen [of KP1077] and what will work best for patients with narcolepsy."

The senior vice president of medical and clinical affairs for Avadel Pharmaceuticals provided commentary on recently published research supporting once-nightly sodium oxybate (Lumryz) in narcolepsy regardless of the subtype.

WATCH TIME: 5 minutes

"Ensuring that clinicians are having conversations with patients with narcolepsy routinely, and asking about the more subtle presentation of cataplexy, is important. Many patients have their diagnosis changed from NT2 to NT1."

The associate professor, department of medicine, division of neurology, Institute of Medical Science, University of Toronto, talked about the importance of establishing normal values for sleep studies, particularly the multiple sleep latency test, to help with effectively diagnosing sleep disorders.

WATCH TIME: 5 minutes

"The purpose of this study was to perform a larger and pretty comprehensive meta-analysis on the mean sleep latency derived from the MSLT (Multiple Sleep Latency Test). We also wanted to look at the impact of things like age, sex, body mass index, and other sleep metrics. In addition, we wanted to investigate different methodological variables, such as sleep onset definitions and sleep study features, as well as other markers preceding the sleep study, and see if that affected the mean sleep latency on the MSLT that was performed."

The duo from Indiana University School of Medicine discussed the ongoing research initiatives to better understand sleep disorders in pediatric patients, and ways to improve approaches like cognitive behavioral therapy.

WATCH TIME: 4 minutes

"There is a tendency in medicine and in society to focus on the need to eliminate screens and/or to assume that screens are causing the insomnia. This can be very minimizing to kids with insomnia because it implies that theres a simple solution and that a lifestyle factor is causing the insomnia."

The professor of neurology at UMass Chan School of Medicine discussed the various impacts Daylight Saving Time has on sleep quality and overall health in children and adolescents.

WATCH TIME: 3 minutes

"But what a lot of people don't realize is the long-term effects during the 8-month period on Daylight Saving Time. We may blame it on other things, but what we know is that those hour later sunrises and sunsets are associated with about a 10% increase rate of cancer, at least a 10% increase risk of obesity, and increased risk of heart disease."

The sleep epidemiologist and assistant professor at the Rollins School of Public Health at Emory University discussed the multi-level effort needed to improve sleep issues seen in individuals most impacted by social determinants of health.

WATCH TIME: 3 minutes

"The other piece is, how can we modify some of our recommendations to fit disadvantaged communities. For example, we say to sleep in a dark, quiet room, but we know that everyone cant do that because of safety issues. Adjusting the recommendation to say, put a light on in a hallway or somewhere else in the house."

Visit link:
NeurologyLive Year in Review 2023: Most Watched Interviews in Sleep Disorders - Neurology Live

NeurologyLive Year in Review 2023: Top Stories in Headache and Migraine – Neurology Live

In 2023, the NeurologyLive staff was kept on its toes while covering clinical news and data readouts from around the world across a number of key neurology subspecialty areas. Between major study publications and FDA decisions, and from societal conference sessions and expert interviews, the team spent all year bringing the latest updates to the website's front page.

Among our key focus areas is headache and migraine, two of the most common neurological diseases worldwide. Treatments for headaches have advanced over the years; however, providing lasting and effective treatment for all headache types has proven to be difficult for all practitioners. The field has been advanced significantly by the introduction of calcitonin gene-related peptide (CGRP)-targeting inhibitors, a class of highly effective and safe agents. Whatever the reason for the attention these stories got, their place here helps provide an understanding of the themes in this field over the course of 2023.

Here, we'll highlight some of the most-read content on NeurologyLive this year. Click the buttons to read further into these stories.

Earlier this year, the FDA approved IntelGenx/Gensco's rizatriptan benzoate (Rizafilm VersaFilm) oral thin film for acute migraine treatment through the 505(b)(2) new drug application (NDA) pathway. This newly approved treatment is an orally disintegrating film formulation of the 5-HT1 receptor agonist designed to be bioequivalent to Merck's Maxalt-MLT, an orally disintegrating rizatriptan treatment.

The FDA has expanded the indication for atogepant (Qulipta; AbbVie) to include the prevention of chronic migraine in adults, adding to its existing indication for episodic migraine, according to an announcement from AbbVie.1 The approval was granted based on data from the phase 3 PROGRESS trial (NCT03855137) that showed that 60-mg atogepant resulted in a significant reduction in mean monthly migraine days (MMDs) compared with placebo across 12 weeks of treatment.

Using a cohort of medically insured individuals in Arizona, findings from a recently conducted analysis showed specific factors associated with receiving a migraine diagnosis vs a headache diagnosis. Those in the migraine cohort tended to be middle-aged, female, of White race and non-Hispanic ethnicity, and to have English as their primary language. All told, issues within the social determinants of health categories of family unit dynamics (OR, 1.1; 95% CI, 1.06-1.14) and income and social protection (OR, 1.13; 95% CI, 1.08-1.18) were associated with higher odds of being in the migraine vs headache cohort.

Findings from a pilot study (NCT04437199) assessing up to 60 g/day of tricaprilin (Cerecin), an investigational ketogenic compound, suggested potential benefit in treating patients with migraine. At the end of the 3-month treatment period, some patients opted to enter the Compassionate Access Program, which provides continued access to the therapy for up to 1 year after completion of the clinical study.

According to a new update from WL Gore & Associates, also known as Gore, patients in the RELIEF clinical study (NCT04100135) have completed their multi-month enhanced screening process and have begun entering the final randomization phase. The trial, initiated in November 2022, assesses whether closing the patent foramen ovale (PFO) using the GORE CARDIOFORM Septal Occluder may provide relief for patients with migraine. The GORE CARDIOFORM Septal Occluder is a permanently implanted device indicated for the percutaneous, transcatheter closure of ostium secundum atrial septal defects (ASDs) and PFO, intended to reduce the risk of recurrent ischemic stroke.

At the 2023 International Headache Congress, held September 14-17, in Seoul, Korea, new data from the phase 2 HOPE trial (NCT05133323) highlighted the effects of Lu AG09222 (Lundbeck), a pituitary adenylate cyclase-activating polypeptide (PACAP)-targeting agent, as a potential preventive for migraine. All told, the trial met its primary end point, with significant between-group differences observed in the high dose group of treated patients over a 12-week double-blind period.

The European Commission approved AbbVie's atogepant (Aquipta) for the prophylaxis of migraine in adults who have 4 or more migraine days per month, making it the first and only calcitonin gene-related peptide (CGRP) agent indicated for prevention of both episodic and chronic migraine. The approval was based on data from 2 phase 3 studies, PROGRESS (NCT03855137) and ADVANCE (NCT02848326), in which atogepant-treated patients showed greater reductions in monthly migraine days (MMDs) than those on placebo.

Investigators published full findings of the phase 3 PRODROME study (NCT04492020) in The Lancet, demonstrating ubrogepant's (Ubrelvy; AbbVie) positive impact on migraine during the prodrome phase. At the conclusion of the trial, absence of moderate or severe headache within 24 hours after initiating treatment occurred in 46% (190 of 418) of qualifying prodrome events that had been treated with ubrogepant compared with 29% (121 of 423) of events treated with placebo (OR, 2.09; 95% CI, 1.63-2.69; P <.0001).

Newly announced topline findings from the CHALLENGE-MIG trial (NCT05127486), the first head-to-head clinical study comparing 2 medications targeting calcitonin gene-related peptide (CGRP), revealed similar efficacy between galcanezumab (Emgality; Eli Lilly) and rimegepant (Nurtec ODT; Biohaven Pharmaceuticals). Despite this, galcanezumab outperformed rimegepant on secondary end points at the end of the 3-month trial.

The FDA has accepted Satsuma Pharmaceuticals' 505(b)(2) new drug application (NDA) for its novel, investigational dihydroergotamine (DHE) nasal powder product, STS101, for the acute treatment of migraine. The agency is expected to make a decision on the therapy by January 2024. STS101 is designed to provide significant benefits vs existing acute treatments for migraine, including the combination of quick and convenient self-administration and other clinical advantages that current DHE liquid nasal spray products and injectable dosage forms lack.

See the rest here:
NeurologyLive Year in Review 2023: Top Stories in Headache and Migraine - Neurology Live

NeurologyLive Year in Review 2023: Top Stories in Epilepsy and Seizure Disorders – Neurology Live

In 2023, the NeurologyLive staff was a busy bunch, covering clinical news and data readouts from around the world across a number of key neurology subspecialty areas. From major study publications and FDA decisions to societal conference sessions and expert interviews, the team spent all year bringing the latest information to the website's front page.

Among our key focus areas is epilepsy and related seizure disorders, a field that features complex diseases that are often medically refractory and difficult to manage. Although major news items often appear among the top pieces our team produces, sometimes smaller stories reach those heights for other reasons: clinical impact and interest, or concerns about the small- or big-picture parts of care, for example. Whatever the reason for the attention these stories got, their place here helps provide an understanding of the themes in this field in 2023.

Here, we'll highlight some of the most-read content on NeurologyLive this year. Click the buttons to read further into these stories.

Findings from a comparative effectiveness research study showed that levetiracetam and lamotrigine used as first-line treatments have similar efficacy in idiopathic generalized epilepsy (IGE) syndromes in females; however, levetiracetam was more effective in treating juvenile myoclonic epilepsy. Further studies are still needed to identify the most effective antiseizure medication alternative in other IGE syndromes.

The FDA has issued a warning for the use of the antiseizure medicines levetiracetam (Keppra, Keppra XR, Elepsia XR, Spritam) and clobazam (Onfi, Sympazan), which can cause drug reaction with eosinophilia and systemic symptoms, known as DRESS, a rare but serious adverse effect. The reaction may start as a rash but can quickly progress, resulting in injury to internal organs, the need for hospitalization, and even death. As a result, the FDA is requiring new warnings about this risk to be added to the prescribing information and patient medication guides for these medicines.

According to an announcement from Cumulus Neuroscience, the FDA has granted clearance to its novel, dry-sensor EEG headset, a user-friendly device that enables self-directed use and generates clinical-grade data for remote physician review. The Cumulus EEG device, designed for both adult and adolescent patients, is available in 4 sizes, and is easily self-applied with guidance from the Cumulus mobile app. The platform combines clinical-grade, at-home data with machine learning analytics and a large real-world database of annotated, longitudinal, matched data.

Using a large-scale cohort of electronic health records, recently published findings identified robust and clinically meaningful independent associations of both epilepsy and enzyme-inducing antiseizure medication use with incident osteoporosis. These data highlight the need for enhanced vigilance and consideration of prophylaxis for all patients with epilepsy.

At the 35th International Epilepsy Congress, held September 2-6, 2023, in Dublin, Ireland, UCB Pharma presented several posters showcasing the clinical benefits of fenfluramine (Fintepla) across multiple forms of epilepsy, including rare epileptic disorders such as Dravet syndrome (DS), Lennox-Gastaut syndrome (LGS), and CDKL5 deficiency disorder. The first presentation was a review of 13 studies assessing the impact of the therapy on generalized tonic-clonic or tonic-clonic seizures in a cohort of rare epilepsy syndromes; another abstract assessed the safety and efficacy of adult patients with DS who did not participate in the phase 3 clinical trials but enrolled in the open-label extension study de novo; and a comparative analysis of clinical trial data further highlighted fenfluramine's impact on drop seizure frequency (DSF) in dose-capped patients with LGS.

Cornelia Drees, MD, senior associate consultant in the Department of Neurology at Mayo Clinic, provided insight on an early feasibility study on the clinical impact of microburst vagus nerve stimulation in patients with drug-resistant epilepsy, presented at the 2023 American Academy of Neurology (AAN) Annual Meeting, held April 22-27, in Boston, Massachusetts.

A post hoc analysis newly published in Epilepsy & Behavior on the phase 3 open-label extension (OLE) study (NCT01529034) assessing midazolam (Nayzilam; UCB), an FDA-approved nasal spray, showed that 90 minutes was the estimated median time to return to full baseline functionality (RTFBF) regardless of treatment with 1 or 2 doses among patients who experienced seizure clusters (SCs). These findings suggest that the dose of midazolam did not influence the time to RTFBF in SC episodes and further support the favorable profile of repeated intermittent use of midazolam in patients with SCs.

Using data from spontaneous and solicited reports, findings from a new analysis showed that lacosamide (Vimpat; UCB Pharma), an antiseizure medication, was safe to use during pregnancy, with most exposed pregnancies resulting in live births. Lacosamide, listed as a Pregnancy Category C medication, had no new safety concerns associated with its use in data presented at the 2023 American Epilepsy Society (AES) annual meeting, held December 1-5, in Orlando, Florida.

Data from a published retrospective analysis of adolescents and children presenting with seizures showed that midazolam is not an effective first-line therapy in prehospital settings, indicated by the nearly 40% of patients who required rescue therapy. Published in JAMA Network, the study featured 1172 children with a mean age of 5.7 years for whom a mobile intensive care unit was dispatched for an active seizure.

New post hoc data from a recently completed phase 3 trial (NCT02721069) assessing diazepam nasal spray (Valtoco; Neurelis), an FDA-approved antiseizure medication (ASM), indicated that faster time to administration was associated with shorter time to seizure cluster cessation and overall shorter seizure duration. Over 12 months, investigators also noticed a statistically significant change in SEIzure interVAL, or the time between seizure clusters, that was independent of age or changes in concomitant ASMs.

Read more here:
NeurologyLive Year in Review 2023: Top Stories in Epilepsy and Seizure Disorders - Neurology Live

Will superintelligent AI sneak up on us? New study offers reassurance – Nature.com

Some researchers think that AI could eventually achieve general intelligence, matching and even exceeding humans on most tasks. Credit: Charles Taylor/Alamy

Will an artificial intelligence (AI) superintelligence appear suddenly, or will scientists see it coming, and have a chance to warn the world? That's a question that has received a lot of attention recently, with the rise of large language models, such as ChatGPT, which have achieved vast new abilities as their size has grown. Some findings point to emergence, a phenomenon in which AI models gain intelligence in a sharp and unpredictable way. But a recent study calls these cases mirages, artefacts arising from how the systems are tested, and suggests that innovative abilities instead build more gradually.

"I think they did a good job of saying nothing magical has happened," says Deborah Raji, a computer scientist at the Mozilla Foundation who studies the auditing of artificial intelligence. "It's a really good, solid, measurement-based critique."

The work was presented last week at the NeurIPS machine-learning conference in New Orleans.

Large language models are typically trained using huge amounts of text, or other information, which they use to generate realistic answers by predicting what comes next. Even without explicit training, they manage to translate language, solve mathematical problems and write poetry or computer code. The bigger the model (some have more than a hundred billion tunable parameters), the better it performs. Some researchers suspect that these tools will eventually achieve artificial general intelligence (AGI), matching and even exceeding humans on most tasks.

The new research tested claims of emergence in several ways. In one approach, the scientists compared the abilities of four sizes of OpenAI's GPT-3 model to add up four-digit numbers. Looking at absolute accuracy, performance jumped between the third and fourth sizes of model, from nearly 0% to nearly 100%. But this trend is less extreme if the number of correctly predicted digits in the answer is considered instead. The researchers also found that they could dampen the curve by giving the models many more test questions; in this case, the smaller models answer correctly some of the time.
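
To see why the scoring metric matters, consider a toy sketch (hypothetical numbers, not the study's data): if per-digit accuracy improves smoothly with scale, an all-or-nothing exact-match score can still look like a sudden leap, because every digit must be right at once.

```python
# Toy illustration of metric-induced "emergence" (hypothetical figures, not the study's data).
import numpy as np

rng = np.random.default_rng(0)
model_sizes_b = [0.1, 0.4, 1.3, 175]        # assumed model sizes, in billions of parameters
per_digit_p = [0.20, 0.40, 0.60, 0.95]      # assumed smooth improvement in per-digit accuracy

for size, p in zip(model_sizes_b, per_digit_p):
    digits_ok = rng.random((10_000, 4)) < p             # 10,000 simulated four-digit answers
    exact_match = digits_ok.all(axis=1).mean()           # discrete metric: all four digits correct
    per_digit = digits_ok.mean()                          # continuous metric: partial credit per digit
    print(f"{size:>6}B params  exact-match {exact_match:.2f}  per-digit {per_digit:.2f}")
```

The per-digit column climbs steadily, while the exact-match column sits near zero and then shoots up: the same underlying skill, read through two different rulers.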

Next, the researchers looked at the performance of Google's LaMDA language model on several tasks. The ones for which it showed a sudden jump in apparent intelligence, such as detecting irony or translating proverbs, were often multiple-choice tasks, with answers scored discretely as right or wrong. When, instead, the researchers examined the probabilities that the models placed on each answer (a continuous metric), signs of emergence disappeared.

Finally, the researchers turned to computer vision, a field in which there are fewer claims of emergence. They trained models to compress and then reconstruct images. By merely setting a strict threshold for correctness, they could induce apparent emergence. "They were creative in the way that they designed their investigation," says Yejin Choi, a computer scientist at the University of Washington in Seattle who studies AI and common sense.
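
The computer-vision result follows the same logic. Here is a minimal sketch (with made-up reconstruction errors) of how a smoothly shrinking error looks like a sudden new ability once you only count reconstructions under a strict cutoff.

```python
# Hypothetical reconstruction errors that shrink gradually as models get bigger.
errors_by_model = {
    "small":  [0.30, 0.22, 0.28, 0.35, 0.24],
    "medium": [0.18, 0.15, 0.21, 0.16, 0.19],
    "large":  [0.09, 0.12, 0.08, 0.11, 0.07],
}
STRICT_CUTOFF = 0.10   # an image only "counts" if its error falls below this threshold

for name, errs in errors_by_model.items():
    mean_error = sum(errs) / len(errs)                               # continuous view: gradual gains
    pass_rate = sum(e < STRICT_CUTOFF for e in errs) / len(errs)     # thresholded view: abrupt jump
    print(f"{name:>6}: mean error {mean_error:.2f}, pass rate {pass_rate:.0%}")
```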

Study co-author Sanmi Koyejo, a computer scientist at Stanford University in Palo Alto, California, says that it wasn't unreasonable for people to accept the idea of emergence, given that some systems exhibit abrupt phase changes. He also notes that the study can't completely rule it out in large language models, let alone in future systems, but adds that "scientific study to date strongly suggests most aspects of language models are indeed predictable".

Raji is happy to see the community pay more attention to benchmarking, rather than to developing neural-network architectures. She'd like researchers to go even further and ask how well the tasks relate to real-world deployment. For example, does acing the LSAT exam for aspiring lawyers, as GPT-4 has done, mean that a model can act as a paralegal?

The work also has implications for AI safety and policy. "The AGI crowd has been leveraging the emerging-capabilities claim," Raji says. Unwarranted fear could lead to stifling regulations or divert attention from more pressing risks. "The models are making improvements, and those improvements are useful," she says. "But they're not approaching consciousness yet."

Originally posted here:

Will superintelligent AI sneak up on us? New study offers reassurance - Nature.com

AI consciousness: scientists say we urgently need answers – Nature.com

A standard method to assess whether machines are conscious has not yet been devised. Credit: Peter Parks/AFP via Getty

Could artificial intelligence (AI) systems become conscious? A trio of consciousness scientists says that, at the moment, no one knows, and they are expressing concern about the lack of inquiry into the question.

In comments to the United Nations, three leaders of the Association for Mathematical Consciousness Science (AMCS) call for more funding to support research on consciousness and AI. They say that scientific investigations of the boundaries between conscious and unconscious systems are urgently needed, and they cite ethical, legal and safety issues that make it crucial to understand AI consciousness. For example, if AI develops consciousness, should people be allowed to simply switch it off after use?

Such concerns have been mostly absent from recent discussions about AI safety, such as the high-profile AI Safety Summit in the United Kingdom, says AMCS board member Jonathan Mason, a mathematician based in Oxford, UK, and one of the authors of the comments. Nor did US President Joe Biden's executive order seeking responsible development of AI technology address issues raised by conscious AI systems, Mason notes.

"With everything that's going on in AI, inevitably there's going to be other adjacent areas of science which are going to need to catch up," Mason says. "Consciousness is one of them."

The other authors of the comments were AMCS president Lenore Blum, a theoretical computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, and board chair Johannes Kleiner, a mathematician studying consciousness at the Ludwig Maximilian University of Munich in Germany.

It is unknown to science whether there are, or will ever be, conscious AI systems. Even knowing whether one has been developed would be a challenge, because researchers have yet to create scientifically validated methods to assess consciousness in machines, Mason says. Our uncertainty about AI consciousness is one of many things about AI that should worry us, given the pace of progress, says Robert Long, a philosopher at the Center for AI Safety, a non-profit research organization in San Francisco, California.

Such concerns are no longer just science fiction. Companies such as OpenAI, the firm that created the chatbot ChatGPT, are aiming to develop artificial general intelligence, a deep-learning system that's trained to perform a wide range of intellectual tasks similar to those humans can do. Some researchers predict that this will be possible in 5-20 years. Even so, the field of consciousness research is very undersupported, says Mason. He notes that to his knowledge, there has not been a single grant offer in 2023 to study the topic.

The resulting information gap is outlined in the AMCS leaders' submission to the UN High-Level Advisory Body on Artificial Intelligence, which launched in October and is scheduled to release a report in mid-2024 on how the world should govern AI technology. The AMCS leaders' submission has not been publicly released, but the body confirmed to the authors that the group's comments will be part of its foundational material: documents that inform its recommendations about global oversight of AI systems.

Understanding what could make AI conscious, the AMCS researchers say, is necessary to evaluate the implications of conscious AI systems to society, including their possible dangers. Humans would need to assess whether such systems share human values and interests; if not, they could pose a risk to people.

But humans should also consider the possible needs of conscious AI systems, the researchers say. Could such systems suffer? If we don't recognize that an AI system has become conscious, we might inflict pain on a conscious entity, Long says: "We don't really have a great track record of extending moral consideration to entities that don't look and act like us." Wrongly attributing consciousness would also be problematic, he says, because humans should not spend resources to protect systems that don't need protection.

Some of the questions raised by the AMCS comments to highlight the importance of the consciousness issue are legal: should a conscious AI system be held accountable for a deliberate act of wrongdoing? And should it be granted the same rights as people? The answers might require changes to regulations and laws, the coalition writes.

And then there is the need for scientists to educate others. As companies devise ever-more capable AI systems, the public will wonder whether such systems are conscious, and scientists need to know enough to offer guidance, Mason says.

Other consciousness researchers echo this concern. Philosopher Susan Schneider, the director of the Center for the Future Mind at Florida Atlantic University in Boca Raton, says that chatbots such as ChatGPT seem so human-like in their behaviour that people are justifiably confused by them. Without in-depth analysis from scientists, some people might jump to the conclusion that these systems are conscious, whereas other members of the public might dismiss or even ridicule concerns over AI consciousness.

To mitigate the risks, the AMCS comments call on governments and the private sector to fund more research on AI consciousness. It wouldn't take much funding to advance the field: despite the limited support to date, relevant work is already underway. For example, Long and 18 other researchers have developed a checklist of criteria to assess whether a system has a high chance of being conscious. The paper, published in the arXiv preprint repository in August and not yet peer reviewed, derives its criteria from six prominent theories explaining the biological basis of consciousness.

"There's lots of potential for progress," Mason says.

See the article here:

AI consciousness: scientists say we urgently need answers - Nature.com

AI Technologies Set to Revolutionize Multiple Industries in Near Future – Game Is Hard

According to Nvidia CEO Jensen Huang, the world is on the brink of a transformative era in artificial intelligence (AI) that will see it rival human intelligence within the next five years. While AI is already making significant strides, Huang believes that the true breakthrough will come in the realm of artificial general intelligence (AGI), which aims to replicate the range of human cognitive abilities.

Nvidia, a prominent player in the tech industry known for its high-performance graphics processing units (GPUs), has experienced a surge in business as a result of the growing demand for its GPUs in training AI models and handling complex workloads across various sectors. In fact, the company's fiscal third-quarter revenue roughly tripled year over year, with net income reaching an impressive $9.24 billion.

An important milestone for Nvidia was the recent delivery of the world's first AI supercomputer to OpenAI, an AI research lab co-founded by Elon Musk. This partnership with Musk, who has shown great interest in AI technology, signifies the immense potential of AI advancements. Huang expressed confidence in the stability of OpenAI, despite recent upheavals, emphasizing the critical role of effective corporate governance in such ventures.

Looking ahead, Huang envisions a future where the competitive landscape of the AI industry will foster the development of off-the-shelf AI tools tailored for specific sectors such as chip design, drug discovery, and radiology. While current limitations exist, including the inability of AI to perform multistep reasoning, Huang remains optimistic about the rapid advancements and forthcoming capabilities of AI technologies.

Nvidia's success in 2023 has exceeded expectations, as the company consistently surpassed earnings projections and witnessed its stock rise by approximately 240%. The impressive third-quarter revenue of $18.12 billion further solidifies investor confidence in the promising AI market. Analysts maintain a positive outlook on Nvidia's long-term potential in the AI and semiconductor sectors, despite concerns about sustainability. The future of AI is undoubtedly bright, with transformative applications expected across various industries in the near future.

FAQ:

Q: What is the transformative era in artificial intelligence (AI) that Nvidia CEO Jensen Huang mentions? A: According to Huang, the transformative era in AI will see it rival human intelligence within the next five years, particularly in the realm of artificial general intelligence (AGI).

Q: Why has Nvidia experienced a surge in business? A: Nvidia's high-performance graphics processing units (GPUs) are in high demand for training AI models and handling complex workloads across various sectors, leading to a significant increase in the company's revenue.

Q: What is the significance of Nvidia delivering the world's first AI supercomputer to OpenAI? A: Nvidia's partnership with OpenAI and the delivery of the AI supercomputer highlights the immense potential of AI advancements, as well as the confidence in OpenAI's stability and the critical role of effective corporate governance in such ventures.

Q: What is Nvidia's vision for the future of the AI industry? A: Nvidia envisions a future where the competitive landscape of the AI industry will lead to the development of off-the-shelf AI tools tailored for specific sectors such as chip design, drug discovery, and radiology.

Q: What are the current limitations and future capabilities of AI technologies according to Huang? A: While there are still limitations, such as the inability of AI to perform multistep reasoning, Huang remains optimistic about the rapid advancements and forthcoming capabilities of AI technologies.

Key Terms:

Artificial intelligence (AI): The simulation of human intelligence processes by machines, especially computer systems, to perform tasks that typically require human intelligence.

Artificial general intelligence (AGI): AI that can perform any intellectual task that a human being can do.

Graphics processing unit (GPU): A specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device.

Suggested Related Links:

Nvidia website

OpenAI website

Artificial intelligence on Wikipedia

Continued here:

AI Technologies Set to Revolutionize Multiple Industries in Near Future - Game Is Hard

What Is Artificial Intelligence? From Software to Hardware, What You Need to Know – ExtremeTech

To many, AI is just a horrible Steven Spielberg movie. To others, it's the next generation of learning computers. But what is artificial intelligence, exactly? The answer depends on who you ask.

Broadly, artificial intelligence (AI) is the combination of mathematical algorithms, computer software, hardware, and robust datasets deployed to solve some kind of problem. In one sense, artificial intelligence is sophisticated information processing by a powerful program or algorithm. In another, an AI connotes the same information processing but also refers to the program or algorithm itself.

Many definitions of artificial intelligence include a comparison to the human mind or brain, whether in form or function. Alan Turing wrote in 1950 about thinking machines that could respond to a problem using human-like reasoning. His eponymous Turing test is still a benchmark for natural language processing. Later, however, Stuart Russell and Peter Norvig observed that humans are intelligent but not always rational.

As defined by John McCarthy in 2004, artificial intelligence is "the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

Russell and Norvig saw two classes of artificial intelligence: systems that think and act rationally versus those that think and act like a human being. But there are places where that line begins to blur. AI and the brain use a hierarchical, profoundly parallel network structure to organize the information they receive. Whether or not an AI has been programmed to act like a human, on a very low level, AIs process data in a way common to not just the human brain but many other forms of biological information processing.

What distinguishes a neural net from conventional software? Its structure. A neural net's code is written to emulate some aspect of the architecture of neurons or the brain.

The difference between a neural net and an AI is often a matter of philosophy more than capabilities or design. A robust neural net's performance can equal or outclass a narrow AI. Many "AI-powered" systems are neural nets under the hood. But an AI isn't just several neural nets smashed together, any more than Charizard is three Charmanders in a trench coat. All these different types of artificial intelligence overlap along a spectrum of complexity. For example, OpenAI's powerful GPT-4 AI is a type of neural net called a transformer (more on these below).

There is much overlap between neural nets and artificial intelligence, but the capacity for machine learning can be the dividing line. An AI that never learns isn't very intelligent at all.

IBM explains, "[M]achine learning is a subfield of artificial intelligence. Deep learning is a subfield of machine learning, and neural networks make up the backbone of deep learning algorithms. In fact, it is the number of node layers, or depth, of neural networks that distinguishes a single neural network from a deep learning algorithm, which must have more than three [layers]."

AGI stands for artificial general intelligence. An AGI is like the turbo-charged version of an individual AI. Today's AIs often require specific input parameters, so they are limited in their capacity to do anything but what they were built to do. But in theory, an AGI can figure out how to "think" for itself to solve problems it hasn't been trained to do. Some researchers are concerned about what might happen if an AGI were to start drawing conclusions we didn't expect.

In pop culture, when an AI makes a heel turn, the ones that menace humans often fit the definition of an AGI. For example, Disney/Pixar's WALL-E followed a plucky little trashbot who contends with a rogue AI named AUTO. Before WALL-E's time, HAL and Skynet were AGIs complex enough to resent their makers and powerful enough to threaten humanity.

Conceptually: An AI's logical structure has three fundamental parts. First, there's the decision process: usually an equation, a model, or just some code. Second, there's an error function: some way for the AI to check its work. And third, if the AI will learn from experience, it needs some way to optimize its model. Many neural networks do this with a system of weighted nodes, where each node has a value and a relationship to its network neighbors. Values change over time; stronger relationships have a higher weight in the error function.

Physically: Typically, an AI is "just" software. Neural nets consist of equations or commands written in things like Python or Common Lisp. They run comparisons, perform transformations, and suss out patterns from the data. Commercial AI applications have typically been run on server-side hardware, but that's beginning to change. AMD launched the first on-die NPU (Neural Processing Unit) in early 2023 with its Ryzen 7040 mobile chips. Intel followed suit with the dedicated silicon baked into Meteor Lake. Dedicated hardware neural nets run on a special type of "neuromorphic" ASIC as opposed to a CPU, GPU, or NPU.
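
Tying the conceptual and physical views together, here is a minimal sketch of those three parts in a few lines of Python: a one-weight decision process, a squared-error check, and a gradient update. The numbers are toy values invented for illustration.

```python
# Minimal sketch: decision process (w*x + b), error function (squared error), optimization (gradient step).
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]   # toy (input, target) pairs
w, b = 0.0, 0.0                                # the model's adjustable weights
lr = 0.05                                      # learning rate

for step in range(200):
    grad_w = grad_b = error = 0.0
    for x, target in data:
        pred = w * x + b                       # 1) decision process
        diff = pred - target
        error += diff ** 2                     # 2) error function
        grad_w += 2 * diff * x                 # 3) gradients that steer the optimization
        grad_b += 2 * diff
    w -= lr * grad_w / len(data)               # nudge the weights downhill
    b -= lr * grad_b / len(data)

print(f"learned w = {w:.2f}, b = {b:.2f}, mean squared error = {error / len(data):.4f}")
```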

A neural net is software, and a neuromorphic chip is a type of hardware called an ASIC (application-specific integrated circuit). Not all ASICs are neuromorphic designs, but neuromorphic chips are all ASICs. Neuromorphic design fundamentally differs from CPUs and only nominally overlaps with a GPU's multi-core architecture. But it's not some exotic new transistor type, nor any strange and eldritch kind of data structure. It's all about tensors. Tensors describe the relationships between things; they're a kind of mathematical object that can have metadata, just like a digital photo has EXIF data.

Tensors figure prominently in the physics and lighting engines of many modern games, so it may come as little surprise that GPUs do a lot of work with tensors. Modern Nvidia RTX GPUs have a huge number of tensor cores. That makes sense if you're drawing moving polygons, each with some properties or effects that apply to it. Tensors can handle more than just spatial data, and GPUs excel at organizing many different threads at once.

But no matter how elegant your data organization might be, it must filter through multiple layers of software abstraction before it becomes binary. Intel's neuromorphic chip, Loihi 2, affords a very different approach.

Loihi 2 is a neuromorphic chip that comes as a package deal with a compute framework named Lava. Loihi's physical architecture invites, almost requires, the use of weighting and an error function, both defining features of AI and neural nets. The chip's biomimetic design extends to its electrical signaling. Instead of ones and zeroes, on or off, Loihi "fires" in spikes with an integer value capable of carrying much more data. Loihi 2 is designed to excel in workloads that don't necessarily map well to the strengths of existing CPUs and GPUs. Lava provides a common software stack that can target neuromorphic and non-neuromorphic hardware. The Lava framework is explicitly designed to be hardware-agnostic rather than locked to Intel's neuromorphic processors.
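
To get a feel for what "firing in spikes" means, here is a toy leaky integrate-and-fire neuron in plain Python. It is only a conceptual sketch of spiking behavior, not the Lava API, and unlike Loihi's spikes it carries no integer payload.

```python
# Toy leaky integrate-and-fire neuron: charge builds up, leaks away, and a spike
# is emitted only when the membrane "voltage" crosses a threshold.
LEAK, THRESHOLD = 0.9, 1.0
inputs = [0.3, 0.4, 0.5, 0.0, 0.0, 0.6, 0.7, 0.1]   # hypothetical input currents, one per time step

voltage, spike_times = 0.0, []
for t, current in enumerate(inputs):
    voltage = voltage * LEAK + current   # integrate the input, with leak
    if voltage >= THRESHOLD:
        spike_times.append(t)            # fire a spike...
        voltage = 0.0                    # ...and reset

print("spikes at time steps:", spike_times)   # [2, 6] for these inputs
```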

Machine learning models using Lava can fully exploit Loihi 2's unique physical design. Together, they offer a hybrid hardware-software neural net that can process relationships between multiple entire multi-dimensional datasets, like an acrobat spinning plates. According to Intel, the performance and efficiency gains are largest outside the common feed-forward networks typically run on CPUs and GPUs today. In the graph below, the colored dots towards the upper right represent the highest performance and efficiency gains in what Intel calls "recurrent neural networks with novel bio-inspired properties."

Intel hasn't announced Loihi 3, but the company regularly updates the Lava framework. Unlike conventional GPUs, CPUs, and NPUs, neuromorphic chips like Loihi 1/2 are more explicitly aimed at research. The strength of neuromorphic design is that it allows silicon to perform a type of biomimicry. Brains are extremely cheap, in terms of power use per unit throughput. The hope is that Loihi and other neuromorphic systems can mimic that power efficiency to break out of the Iron Triangle and deliver all three: good, fast, and cheap.

IBM's NorthPole processor is distinct from Intel's Loihi in what it does and how it does it. Unlike Loihi or IBM's earlier TrueNorth effort in 2014, NorthPole is not a neuromorphic processor. NorthPole relies on conventional calculation rather than a spiking neural model, focusing on inference workloads rather than model training. What makes NorthPole special is the way it combines processing capability and memory. Unlike CPUs and GPUs, which burn enormous power just moving data from Point A to Point B, NorthPole integrates its memory and compute elements side by side.

According to Dharmendra Modha of IBM Research, "Architecturally, NorthPole blurs the boundary between compute and memory. At the level of individual cores, NorthPole appears as memory-near-compute and from outside the chip, at the level of input-output, it appears as an active memory." IBM doesn't use the phrase, but this sounds similar to the processor-in-memory technology Samsung was talking about a few years back.

IBM's NorthPole AI processor. Credit: IBM.

NorthPole is optimized for low-precision data types (2-bit to 8-bit) as opposed to the higher-precision FP16 / bfloat16 standard often used for AI workloads, and it eschews speculative branch execution. This wouldn't fly in an AI training processor, but NorthPole is designed for inference workloads, not model training. Using 2-bit precision and eliminating speculative branches allows the chip to keep enormous parallel calculations flowing across the entire chip. Against an Nvidia GPU manufactured on the same 12nm process, NorthPole was reportedly 25x more energy efficient; even against GPUs built on more advanced process nodes, IBM reports it was 5x more energy efficient.

NorthPole is still a prototype, and IBM has yet to say if it intends to commercialize the design. The chip doesn't fit neatly into any of the other buckets we use to subdivide different types of AI processing engine. Still, it's an interesting example of companies trying radically different approaches to building a more efficient AI processor.

When an AI learns, it's different than just saving a file after making edits. To an AI, getting smarter involves machine learning.

Machine learning takes advantage of a feedback channel called "back-propagation." A neural net is typically a "feed-forward" process because data only moves in one direction through the network. It's efficient but also a kind of ballistic (unguided) process. In back-propagation, however, later nodes in the process get to pass information back to earlier nodes.

Not all neural nets perform back-propagation, but for those that do, the effect is like changing the coefficients in front of the variables in an equation. It changes the lay of the land. This is important because many AI applications rely on a mathematical tactic known as gradient descent. In an x vs. y problem, gradient descent introduces a z dimension, making a simple graph look like a topographical map. The terrain on that map forms a landscape of probabilities. Roll a marble down these slopes, and where it lands determines the neural net's output. But if you change that landscape, where the marble ends up can change.
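
Here is that marble in code: a minimal gradient-descent loop over toy one-dimensional losses, invented for illustration. Change the coefficients of the loss and the same marble settles somewhere else.

```python
# Gradient descent as a marble rolling downhill on a loss landscape.
def descend(gradient, start=3.0, lr=0.1, steps=200):
    x = start
    for _ in range(steps):
        x -= lr * gradient(x)    # take a small step against the slope
    return x

# Landscape A: loss = (x - 1)^2, so the gradient is 2*(x - 1) and the minimum sits at x = 1.
# Landscape B: loss = (x + 2)^2, so the gradient is 2*(x + 2) and the minimum sits at x = -2.
print(round(descend(lambda x: 2 * (x - 1)), 3))   # ~ 1.0
print(round(descend(lambda x: 2 * (x + 2)), 3))   # ~ -2.0
```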

We also divide neural nets into two classes, depending on the problems they can solve. In supervised learning, a neural net checks its work against a labeled training set or an overwatch; in most cases, that overwatch is a human. For example, SwiftKey learns how you text and adjusts its autocorrect to match. Pandora uses listeners' input to classify music to build specifically tailored playlists. 3blue1brown has an excellent explainer series on neural nets, where he discusses a neural net using supervised learning to perform handwriting recognition.

Supervised learning is great for fine accuracy on an unchanging set of parameters, like alphabets. Unsupervised learning, however, can wrangle data with changing numbers of dimensions. (An equation with x, y, and z terms is a three-dimensional equation.) Unsupervised learning tends to win with small datasets. It's also good at noticing subtle things we might not even know to look for. Ask an unsupervised neural net to find trends in a dataset, and it may return patterns we had no idea existed.
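
A tiny contrast makes the split concrete (all data made up): supervised learning grades predictions against labels someone provided, while an unsupervised pass, here a crude two-means clustering, has to discover structure on its own.

```python
# Supervised: predictions are checked against provided labels.
predictions = ["cat", "dog", "cat", "dog"]
labels      = ["cat", "dog", "dog", "dog"]
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print("supervised accuracy:", accuracy)               # 0.75

# Unsupervised: no labels; a crude two-means pass just looks for structure in the data.
points = [1.0, 1.2, 0.8, 5.1, 4.9, 5.3]               # hypothetical 1-D measurements
c1, c2 = min(points), max(points)                      # initial guesses for two cluster centres
for _ in range(10):
    group1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
    group2 = [p for p in points if abs(p - c1) > abs(p - c2)]
    c1, c2 = sum(group1) / len(group1), sum(group2) / len(group2)
print("discovered cluster centres:", round(c1, 2), round(c2, 2))   # ~1.0 and ~5.1
```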

Transformers are a special, versatile kind of AI capable of unsupervised learning. They can integrate many different data streams, each with its own changing parameters. Because of this, they're excellent at handling tensors. Tensors, in turn, are great for keeping all that data organized. With the combined powers of tensors and transformers, we can handle more complex datasets.
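
The tensor operation at the heart of a transformer, scaled dot-product attention, fits in a few lines of NumPy. The shapes below are toy values chosen for illustration, not drawn from any particular model.

```python
# Scaled dot-product attention: every token looks at every other token and
# rebuilds itself as a weighted mix of what it finds.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                            # toy sizes: 4 tokens, 8 features each
Q = rng.normal(size=(seq_len, d_model))            # queries
K = rng.normal(size=(seq_len, d_model))            # keys
V = rng.normal(size=(seq_len, d_model))            # values

scores = Q @ K.T / np.sqrt(d_model)                # how strongly each token attends to every other token
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)      # softmax: each row of attention weights sums to 1
output = weights @ V                               # each token becomes a weighted mix of the values

print(output.shape)                                # (4, 8): same shape in, same shape out
```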

Video upscaling and motion smoothing are great applications for AI transformers. Likewise, tensors, which describe changes, are crucial to detecting deepfakes and alterations. With deepfake tools reproducing in the wild, it's a digital arms race.

The person in this image does not exist; it is a deepfake image created by StyleGAN, Nvidia's generative adversarial neural network. Credit: Nvidia.

Video signal has high dimensionality, or bit depth. It's made of a series of images, which are themselves composed of a series of coordinates and color values. Mathematically and in computer code, we represent those quantities as matrices or n-dimensional arrays. Helpfully, tensors are great for matrix and array wrangling. DaVinci Resolve, for example, uses tensor processing in its (Nvidia RTX) hardware-accelerated Neural Engine facial recognition utility. Hand those tensors to a transformer, and its powers of unsupervised learning do a great job picking out the curves of motion on-screen and in real life.

That ability to track multiple curves against one another is why the tensor-transformer dream team has taken so well to natural language processing. And the approach can generalize. Convolutional transformers, a hybrid of a convolutional neural net and a transformer, excel at image recognition in near real-time. This tech is used today for things like robot search and rescue or assistive image and text recognition, as well as the much more controversial practice of dragnet facial recognition, à la Hong Kong.

The ability to handle a changing mass of data is great for consumer and assistive tech, but it's also clutch for things like mapping the genome and improving drug design. The list goes on. Transformers can also handle different kinds of dimensions, more than just the spatial, which is useful for managing an array of devices or embedded sensors, like weather tracking, traffic routing, or industrial control systems. That's what makes AI so useful for data processing "at the edge." AI can find patterns in data and then respond to them on the fly.

Not only does everyone have a cell phone, there are embedded systems in everything. This proliferation of devices gives rise to an ad hoc global network called the Internet of Things (IoT). In the parlance of embedded systems, the "edge" represents the outermost fringe of end nodes within the collective IoT network.

Edge intelligence takes two primary forms: AI on edge and AI for edge. The distinction is where the processing happens. "AI on edge" refers to network end nodes (everything from consumer devices to cars and industrial control systems) that employ AI to crunch data locally. "AI for the edge" enables edge intelligence by offloading some of the compute demand to the cloud.

In practice, the main differences between the two are latency and horsepower. Local processing is always going to be faster than a data pipeline beholden to ping times. The tradeoff is the computing power available server-side.
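
That latency-versus-horsepower tradeoff often reduces to a simple routing rule. The sketch below is a hypothetical illustration (the function and parameter names are invented here, not taken from any particular framework): run inference locally when a cloud round trip would blow the deadline, otherwise offload.

```python
# Hypothetical routing rule for an edge device: keep inference on-device when the
# latency budget can't absorb a network round trip, otherwise offload to the cloud.
def choose_backend(latency_budget_ms, network_rtt_ms, local_infer_ms, cloud_infer_ms):
    cloud_total = network_rtt_ms + cloud_infer_ms
    if cloud_total <= latency_budget_ms and cloud_infer_ms < local_infer_ms:
        return "cloud"   # more horsepower, and it still meets the deadline
    return "local"       # faster, or simply the only way to meet the deadline

print(choose_backend(latency_budget_ms=50,  network_rtt_ms=80, local_infer_ms=30, cloud_infer_ms=5))   # local
print(choose_backend(latency_budget_ms=500, network_rtt_ms=80, local_infer_ms=30, cloud_infer_ms=5))   # cloud
```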

Embedded systems, consumer devices, industrial control systems, and other end nodes in the IoT all add up to a monumental volume of information that needs processing. Some phone home, some have to process data in near real-time, and some have to check and correct their work on the fly. Operating in the wild, these physical systems act just like the nodes in a neural net. Their collective throughput is so complex that, in a sense, the IoT has become the AIoTthe artificial intelligence of things.

As devices get cheaper, even the tiny slips of silicon that run low-end embedded systems have surprising computing power. But having a computer in a thing doesn't necessarily make it smarter. Everything's got Wi-Fi or Bluetooth now. Some of it is really cool. Some of it is made of bees. If I forget to leave the door open on my front-loading washing machine, I can tell it to run a cleaning cycle from my phone. But the IoT is already a well-known security nightmare. Parasitic global botnets exist that live in consumer routers. Hardware failures can cascade, like the Great Northeast Blackout of the summer of 2003 or when Texas froze solid in 2021. We also live in a timeline where a faulty firmware update can brick your shoes.

There's a common pipeline (hypeline?) in tech innovation. When some Silicon Valley startup invents a widget, it goes from idea to hype train to widgets-as-a-service to disappointment, before finally figuring out what the widget's good for.

This is why we lampoon the IoT with loving names like the Internet of Shitty Things and the Internet of Stings. (Internet of Stings devices communicate over TCBee-IP.) But the AIoT isn't something anyone can sell. It's more than the sum of its parts. The AIoT is a set of emergent properties that we have to manage if we're going to avoid an explosion of splinternets, and keep the world operating in real time.

In a nutshell, artificial intelligence is often the same as a neural net capable of machine learning. They're both software that can run on whatever CPU or GPU is available and powerful enough. Neural nets often have the power to perform machine learning via back-propagation.

There's also a kind of hybrid hardware-and-software neural net that brings a new meaning to "machine learning." It's made using tensors, ASICs, and neuromorphic engineering by Intel. Furthermore, the emergent collective intelligence of the IoT has created a demand for AI on, and for, the edge. Hopefully, we can do it justice.

The rest is here:

What Is Artificial Intelligence? From Software to Hardware, What You Need to Know - ExtremeTech