‘Digital twins’ project will help clean up space junk, repair and decommission spacecrafts – University of California

Imagine Earth from space: a blue marble, a pristine orb that is our one and only home. But like many other places on the planet itself, this view is littered with the evidence of humans: more than 30,000 individual pieces of space debris larger than 10 cm float in Earth's orbit, according to a 2023 report from the European Space Agency.

A new project led by Ricardo Sanfelice, UC Santa Cruz Professor and Department Chair of Electrical and Computer Engineering, will develop technology for better spacecraft that use complex robotics to clean up space debris, as well as repair, refuel and decommission other spacecraft. A research team will create highly detailed digital twin models of spacecraft that can carry out these complex tasks in space and develop next-generation control algorithms to manipulate those models, enabling experimentation without the costs of testing on the physical system.

Sanfelice and his research team have been awarded $2.5 million from the Air Force Office of Scientific Research (AFOSR) Space University Research Initiative (SURI) for this three-year project. Co-principal investigators include UC Santa Cruz Professor of Applied Mathematics Daniele Venturi, UT Austin Professor of Aerospace Engineering Karen Willcox, and University of Michigan Professor of Aerospace Engineering Ilya Kolmanovsky. The team will collaborate with government and industry partners including the Air Force Research Lab Space Vehicles Directorate, The University of Arizona, Raytheon Technologies, Trusted Space, Inc., and Orbital Outpost X.

A digital twin is a computer model of a physical system, designed to perfectly mimic the properties of the real-world object, including all of the instruments, computers, sensors, surrounding environment, and anything else the system might include. Digital twins enable researchers to conduct experiments and run analysis in the digital world, testing what concepts might work in the real world to determine if they are worth building and manufacturing.

Unlike more traditional simulations, digital twins often incorporate machine learning that allows the model to improve itself through experimentation, providing valuable iteration toward a more accurate and detailed representation of the system.
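As a rough illustration of that iterative loop (not the project's actual software), the sketch below shows a toy digital twin of a spinning spacecraft whose damping coefficient is re-estimated from each new batch of noisy telemetry, so the model tracks the physical system as data accumulates. Every constant and name is an illustrative assumption.

```python
import numpy as np

# Toy "physical" system: angular velocity decaying with an unknown damping
# coefficient. The digital twin starts with a wrong guess and refines it
# from each new batch of noisy measurements (illustrative values only).
rng = np.random.default_rng(0)
true_damping = 0.35          # hidden property of the real spacecraft
dt, steps_per_batch = 0.1, 50

def simulate(damping, w0, steps):
    """Euler integration of w' = -damping * w."""
    w = np.empty(steps)
    w[0] = w0
    for k in range(1, steps):
        w[k] = w[k - 1] - damping * w[k - 1] * dt
    return w

twin_damping = 0.10          # initial (wrong) model parameter
w_current = 5.0              # rad/s, shared initial condition
for batch in range(3):
    # "Experiment" on the real system: noisy telemetry of angular velocity.
    measured = simulate(true_damping, w_current, steps_per_batch)
    measured += rng.normal(0.0, 0.02, steps_per_batch)

    # Update the twin: pick the damping that best reproduces the telemetry.
    candidates = np.linspace(0.0, 1.0, 501)
    errors = [np.mean((simulate(c, w_current, steps_per_batch) - measured) ** 2)
              for c in candidates]
    twin_damping = candidates[int(np.argmin(errors))]
    w_current = measured[-1]
    print(f"batch {batch}: twin damping estimate = {twin_damping:.3f} "
          f"(true value {true_damping})")
```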

Digital twins can be useful in a range of engineering disciplines, but are particularly relevant for aerospace engineering where the costs associated with building the real systems are so high.

"You can accelerate your production, you can reduce time and costs and risk of spacecraft design, because spacecraft technology is very expensive and requires a lot of certification and regulation before they can go into space," Sanfelice said. "Rather than performing those experiments, which take a lot of time in the real world, with a digital twin you can do conceptual analysis and initial validation in the computer environment. This same logic extends to other complex and costly systems: it's all about scale and reduction of production time, cost, and risk while maintaining system performance and safety."

Digital twins are also especially useful for aerospace engineering because they allow engineers to test complex scenarios and so-called corner cases, situations where multiple parameters are at their extremes, within the realm of the computer. Highly complex and extreme situations are more likely to occur in the harsh conditions of space, and can't be fully replicated for experimentation back on Earth.

The models will enable the researchers to deeply examine what is necessary to carry out the highly complex tasks of cleaning up space debris and using a spacecraft to refuel, repair, or decommission other spacecraft. Such tasks could include a situation where a robotic arm on one spacecraft is trained to grab another spacecraft that is malfunctioning and tumbling through space, potentially damaging one or both of the systems. The researchers need to teach the computers to handle the tumbling and steering, developing optimization-based techniques to quickly compute and solve unexpected problems as they arise while also allowing for possible human intervention.

Sanfelice and his Hybrid Systems Lab will focus on developing the control algorithms that allow for experimentation on the spacecraft digital twins. The digital twin models need to be highly complex to fully encapsulate the physics and computing variables of the real-world systems they represent, and this in turn requires new methods for controlling the models that go beyond the current state of the art.

"I have this massive, detailed model of my system, and it keeps updating as the system evolves and I run experiments. Can I write an algorithm that makes the digital twin do what I want it to do, and as a consequence hopefully the real physical system will do the same?" Sanfelice said.

Sanfelice's work will center on developing model predictive control algorithms, a type of optimization-based control scheme, to control the digital twins, whose creation Willcox will lead. Sanfelice's lab develops robotic manipulators for grasping and other robotic tasks, which require hybrid control schemes so that the robotic fingers can transition between contact and no contact with the object they are manipulating.
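The article does not describe the team's algorithms in detail, but a minimal receding-horizon sketch gives a feel for what model predictive control does: at every step it re-optimizes a short sequence of future inputs against a model and applies only the first one. The example below uses a one-dimensional double integrator as a crude stand-in for a single thruster-driven spacecraft axis; the dynamics, horizon, bounds, and weights are all assumptions for illustration, not the project's models.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal receding-horizon (MPC) sketch: drive position and velocity of a
# 1-D double integrator (a stand-in for one spacecraft axis) to zero by
# re-optimizing a short thrust sequence at every step. Illustrative only.
dt, horizon, u_max = 0.5, 10, 0.2

def rollout(x, us):
    """Predict states under a thrust sequence us; x = [position, velocity]."""
    cost, (p, v) = 0.0, x
    for u in us:
        p, v = p + v * dt, v + u * dt
        cost += p**2 + v**2 + 0.1 * u**2   # penalize error and thrust effort
    return cost

def mpc_step(x):
    res = minimize(lambda us: rollout(x, us), np.zeros(horizon),
                   bounds=[(-u_max, u_max)] * horizon)
    return res.x[0]                        # apply only the first input

x = np.array([5.0, 0.0])                   # start 5 m away, at rest
for k in range(40):
    u = mpc_step(x)
    x = np.array([x[0] + x[1] * dt, x[1] + u * dt])
print(f"final position {x[0]:.3f} m, velocity {x[1]:.3f} m/s")
```

In the project's setting, the model inside the optimizer would presumably be the digital twin itself, with hybrid extensions handling discrete events such as a manipulator making or breaking contact.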

While the model predictive control techniques they develop for this project will be highly relevant to aerospace applications, Sanfelice believes there is an opportunity to expand to other complex application areas and develop more advanced basic science for digital twins and their control.


University of Stuttgart picks HPE for €115 million exascale supercomputer – DatacenterDynamics

The University of Stuttgart has ordered two supercomputers from HPE for its High Performance Computing Center (HLRS).

The two systems, called Hunter and Herder, will cost €115 million ($127m), and take the HLRS up to exascale level in two stages by 2027, leading to a massively parallel GPU-accelerated system.

"The expansion will strengthen Stuttgart's outstanding position in computer simulation and artificial intelligence research," explains Professor Wolfram Ressel, Rector at the University of Stuttgart.

The systems will be used for simulation, artificial intelligence (AI), and high-performance data analysis, within computational engineering and applied science.

Hunter, a transitional supercomputer based on HPE's Cray EX4000, will begin operation in 2025, replacing HLRS's current flagship supercomputer, Hawk, and taking HLRS from its current peak performance of 26 petaflops to 39 petaflops.

It will have 136 HPE Cray EX4000 nodes, each with four HPE Slingshot high-performance interconnects. Hunter will also use the next generation of Cray's ClusterStor storage system.

Hunter will begin the move away from CPUs, adding in more energy-efficient GPUs. It will be based on the AMD Instinct MI300A accelerated processing unit (APU), which combines CPU and GPU processors with local unified memory that both can access quickly.

As well as boosting performance, Hunter will use 80 percent less energy than Hawk, the University says.

Herder, a true exascale system, will arrive in 2027. It is projected to provide speeds on the order of one quintillion (10^18) flops. Its final architecture will use accelerator chips but won't be fully determined till the end of 2025.

The €115 million budget will be jointly funded by the German Federal Ministry of Education and Research (BMBF) and the State of Baden-Württemberg's Ministry of Science, Research, and Arts, through the Gauss Centre for Supercomputing (GCS), which is an alliance of Germany's three national supercomputing centers.

Elsewhere in Germany, the GCS is funding the Jupiter supercomputer at the Jülich Supercomputing Centre, which is scheduled to be Europe's first exascale system in 2025. The Leibniz Supercomputing Centre is planning an exascale system for widescale use in 2026.

The move to GPUs will save power, says the University. "Energy efficiency with optimal support for cutting-edge science is of paramount importance for us at the University of Stuttgart," said Anna Steiger, Chancellor at the University of Stuttgart.

"With Hunter and Herder, we are responding to the challenges of reducing CO2 emissions, while also enabling both improved computing power and outstanding energy performance."

"As part of the University of Stuttgart, HLRS has a key role to play it is not just the impressive performance of the supercomputer but also the methodological knowledge that the center has assembled that helps our cutting-edge computational research to achieve breathtaking results, for example in climate protection or for more environmentally sustainable mobility," said Petra Olschowski (Baden-Wrttemberg Minister of Science, Research, and Arts).


First supercomputer that simulates entire human brain switching on in 2024 – Study Finds

PENRITH, Australia – DeepSouth, the world's first supercomputer designed to simulate the entire human brain, is now just months away from activation. Developed by researchers at the International Centre for Neuromorphic Systems (ICNS) at Western Sydney University, DeepSouth boasts the capability to mimic brain networks on the scale of an actual human mind.

DeepSouth employs a neuromorphic system, which replicates human biological processes. By simulating large networks of spiking neurons directly in hardware, it achieves an impressive 228 trillion synaptic operations per second, a rate comparable to what scientists believe the human brain can achieve. The researchers at ICNS are optimistic that by replicating brain functions, they can gain a deeper understanding of its workings and subsequently design more effective AI systems.
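For readers unfamiliar with the term, "spiking neurons" can be simulated in software with models such as the leaky integrate-and-fire neuron sketched below. This is a generic textbook example of the kind of computation that neuromorphic hardware like DeepSouth is built to accelerate, not DeepSouth's own implementation, and every constant is an illustrative assumption.

```python
import numpy as np

# Tiny leaky integrate-and-fire (LIF) network: the kind of spiking-neuron
# model that neuromorphic hardware accelerates. Generic textbook model,
# not DeepSouth's implementation; all constants are illustrative.
rng = np.random.default_rng(1)
n, dt, tau, v_thresh, v_reset = 100, 1e-3, 20e-3, 1.0, 0.0

weights = rng.normal(0.0, 0.1, (n, n))     # random synaptic weights
v = np.zeros(n)                            # membrane potentials
spike_count = 0

for step in range(1000):                   # simulate 1 second of activity
    external = rng.uniform(0.0, 2.0, n)    # noisy external input current
    spikes = v >= v_thresh                 # which neurons fire this step
    v[spikes] = v_reset                    # reset the neurons that fired
    synaptic = weights @ spikes.astype(float)
    # Leaky integration of external plus synaptic input.
    v += dt / tau * (-v + external + synaptic)
    spike_count += int(spikes.sum())

print(f"total spikes in 1 s of simulated time: {spike_count}")
```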

Professor André van Schaik, the Director of ICNS, highlights that DeepSouth is distinct from other supercomputers due to its unique design. Specifically engineered to function like networks of neurons (brain cells), it requires less power and achieves greater efficiencies. This approach stands in stark contrast to traditional supercomputers, which, optimized for conventional computing tasks, consume a considerable amount of power.

"Progress in our understanding of how brains compute using neurons is hampered by our inability to simulate brain-like networks at scale. Simulating spiking neural networks on standard computers using Graphics Processing Units (GPUs) and multicore Central Processing Units (CPUs) is just too slow and power intensive. Our system will change that," Prof. van Schaik says in a media release.

"This platform will progress our understanding of the brain and develop brain-scale computing applications in diverse fields including sensing, biomedical, robotics, space, and large-scale AI applications."

Prof. van Schaik believes that the DeepSouth system will pave the way for advancements in smart devices, such as mobile phones and sensors used in manufacturing and agriculture. Moreover, it is expected to contribute to the development of AI applications that are both less power-intensive and more intelligent. Additionally, the system will enhance our understanding of the workings of the human brain, both in healthy and diseased states.

The ICNS team at Western Sydney University has been instrumental in the development of this groundbreaking project, working in collaboration with experts across the neuromorphic field. This includes partnerships with researchers from the University of Sydney, the University of Melbourne, and the University of Aachen in Germany.

The name DeepSouth was thoughtfully chosen, serving as a tribute to IBM's TrueNorth system, which spearheaded the effort to create machines that simulate large networks of spiking neurons. It also honors Deep Blue, the first computer to defeat a world chess champion. Additionally, the name reflects its geographic location, down under in Australia.

DeepSouth is scheduled to become operational by April 2024.

The researchers highlight several advantages of the neuromorphic approach:

Artificial Intelligence: By mimicking the brain, we will be able to create more efficient ways of undertaking AI processes than our current models.

Super-fast, large-scale parallel processing using far less power: Our brains are able to process the equivalent of an exaflop – a billion-billion (1 followed by 18 zeros) mathematical operations per second – with just 20 watts of power (a back-of-the-envelope comparison follows this list). Using neuromorphic engineering that simulates the way our brain works, DeepSouth can process massive amounts of data quickly, using much less power, while being much smaller than other supercomputers.

Scalability: This system's design allows easy expansion by adding more hardware for larger systems or downsizing for portable or cost-effective applications.

Reconfigurable Design: Leveraging Field Programmable Gate Arrays (FPGAs) allows hardware reprogramming, enabling the addition of new neuron models, connectivity schemes, and learning rules. DeepSouth's remote accessibility through a Python-based front end simplifies usage without intricate hardware knowledge.

Commercial Availability: DeepSouth relies on off-the-shelf hardware, ensuring continual enhancements and easy replication at global data centers. This approach overcomes challenges associated with custom-designed hardware, which is time-consuming and costly.
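As a back-of-the-envelope check of the efficiency claim in the list above: dividing operations per second by power gives operations per joule. Only the exaflop and 20-watt figures come from the article; the roughly 20 MW draw assumed for a conventional exascale machine is an outside estimate used purely for comparison.

```python
# Back-of-the-envelope check of the efficiency figures quoted above.
# The ~20 MW figure for a conventional exascale machine is an assumption
# used only for comparison, not a number from the article.
brain_ops_per_s = 1e18        # ~1 exaflop-equivalent, per the article
brain_power_w = 20            # watts, per the article

machine_ops_per_s = 1e18      # an exascale supercomputer
machine_power_w = 20e6        # assumed ~20 MW draw

brain_eff = brain_ops_per_s / brain_power_w        # operations per joule
machine_eff = machine_ops_per_s / machine_power_w

print(f"brain:   {brain_eff:.1e} ops per joule")
print(f"machine: {machine_eff:.1e} ops per joule")
print(f"ratio:   {brain_eff / machine_eff:.0f}x")  # roughly a millionfold gap
```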

South West News Service writer Dean Murray contributed to this report.
