Are we making spacecraft too autonomous? – MIT Technology Review

Posted: July 5, 2020 at 9:45 am

Does this matter? Software has never played a more critical role in spaceflight. It has made spaceflight safer and more efficient, allowing a spacecraft to automatically adjust to changing conditions. According to Darrel Raines, a NASA engineer leading software development for the Orion deep space capsule, autonomy is particularly key for areas of critical response time, like the ascent of a rocket after liftoff, when a problem might require initiating an abort sequence in just a matter of seconds, or in instances where the crew might be incapacitated for some reason.
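The article describes this kind of time-critical abort logic only in general terms. As a purely illustrative sketch, assuming invented sensor names and limits (nothing here reflects Orion's actual abort criteria), the core idea is a fast check that any out-of-range or missing reading triggers an abort:

```python
# Illustrative sketch only: channel names and limits are invented, not Orion's.

def should_abort(telemetry: dict) -> bool:
    """Return True if any reading crosses a hypothetical abort limit."""
    LIMITS = {
        "chamber_pressure_pct": (40.0, 110.0),  # % of nominal (assumed bounds)
        "attitude_error_deg": (0.0, 5.0),       # deviation from guidance (assumed)
    }
    for channel, (lo, hi) in LIMITS.items():
        value = telemetry.get(channel)
        # Missing or out-of-range data both count as an abort condition.
        if value is None or not (lo <= value <= hi):
            return True
    return False

print(should_abort({"chamber_pressure_pct": 98.0, "attitude_error_deg": 0.4}))  # False
print(should_abort({"chamber_pressure_pct": 31.0, "attitude_error_deg": 0.4}))  # True
```

The point is not the specific thresholds but the loop: a check like this can run many times per second, which is what makes a seconds-scale abort decision feasible without a human in the loop.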

And increased autonomy is practically essential to making some forms of spaceflight work at all. Ad Astra is a Houston-based company that's looking to make plasma rocket propulsion technology viable. The experimental engine uses plasma made from argon gas, which is heated using electromagnetic waves. A tuning process overseen by the system's software automatically figures out the optimal frequencies for this heating. The engine comes to full power in just a few milliseconds. "There's no way for a human to respond to something like that in time," says CEO Franklin Chang Díaz, a former astronaut who flew on several space shuttle missions from 1986 to 2002. Algorithms in the control system are used to recognize changing conditions in the rocket as it moves through the startup sequence and act accordingly. "We wouldn't be able to do any of this well without software," he says.
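Ad Astra's actual tuning algorithm is not public, so the following is only a toy sketch of the general idea: software sweeps candidate frequencies against a measured power response and locks onto the one that couples best. The response function here is invented (a simple resonance curve peaked at an assumed 13.56 MHz), standing in for real sensor feedback:

```python
# Hedged sketch: the resonance curve and frequencies are invented stand-ins.

def absorbed_power(freq_mhz: float) -> float:
    """Invented stand-in for measured plasma heating response,
    peaked at an assumed resonance of 13.56 MHz."""
    peak, width = 13.56, 0.8
    return 1.0 / (1.0 + ((freq_mhz - peak) / width) ** 2)

def tune(lo: float, hi: float, steps: int = 1000) -> float:
    """Coarse sweep: return the candidate frequency with the highest
    measured power, as a software tuner might during startup."""
    step = (hi - lo) / steps
    candidates = [lo + i * step for i in range(steps + 1)]
    return max(candidates, key=absorbed_power)

best = tune(10.0, 20.0)  # lands near the assumed 13.56 MHz peak
```

A real controller would refine this with feedback during the millisecond-scale startup sequence, but the sketch shows why the task belongs to software: the sweep-and-select loop runs far faster than any human could react.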

But overrelying on software and autonomous systems in spaceflight creates new opportunities for problems to arise. That's especially a concern for many of the space industry's new contenders, who aren't necessarily used to the kind of aggressive and comprehensive testing needed to weed out problems in software and are still trying to strike a good balance between automation and manual control.

Nowadays, a few errors in over a million lines of code could spell the difference between mission success and mission failure. We saw that late last year, when Boeing's Starliner capsule (the other vehicle NASA is counting on to send American astronauts into space) failed to make it to the ISS because of a glitch in its internal timer. A human pilot could have overridden the glitch, which ended up burning Starliner's thrusters prematurely. NASA administrator Jim Bridenstine remarked soon after Starliner's problems arose: "Had we had an astronaut on board, we very well may be at the International Space Station right now."

But it was later revealed that many other errors in the software had not been caught before launch, including one that could have led to the destruction of the spacecraft. And that was something human crew members could easily have overridden.

Boeing is certainly no stranger to building and testing spaceflight technologies, so it was a surprise to see the company fail to catch these problems before the Starliner test flight. "Software defects, particularly in complex spacecraft code, are not unexpected," NASA said when the second glitch was made public. "However, there were numerous instances where the Boeing software quality processes either should have or could have uncovered the defects." Boeing declined a request for comment.

According to Luke Schreier, the vice president and general manager of aerospace at NI (formerly National Instruments), problems in software are inevitable, whether in autonomous vehicles or in spacecraft. "That's just life," he says. The only real solution is to aggressively test ahead of time to find those issues and fix them: "You have to have a really rigorous software testing program to find those mistakes that will inevitably be there."

Space, however, is a unique environment to test for. The conditions a spacecraft will encounter aren't easy to emulate on the ground. While an autonomous vehicle can be taken out of the simulator and eased into lighter real-world conditions to refine the software little by little, you can't really do the same thing for a launch vehicle. Launch, spaceflight, and a return to Earth are actions that either happen or they don't; there is no "light" version.

This, says Schreier, is why AI is such a big deal in spaceflight nowadays: you can develop an autonomous system that is capable of anticipating those conditions, rather than requiring each one to be learned during a specific simulation. "You couldn't possibly simulate on your own all the corner cases of the new hardware you're designing," he says.

So for some groups, testing software isn't just a matter of finding and fixing errors in the code; it's also a way to train AI-driven software. Take Virgin Orbit, for example, which recently tried to send its LauncherOne vehicle into space for the first time. The company worked with NI to develop a test bench that looped together all the vehicle's sensors and avionics with the software meant to run a mission into orbit (down to the exact length of wiring used within the vehicle). By the time LauncherOne was ready to fly, it believed it had already been in space thousands of times thanks to the testing, and it had already faced many different kinds of scenarios.

Of course, LauncherOne's first test flight ended in failure, for reasons that have still not been disclosed. If it was due to software limitations, the attempt is yet another sign that there's a limit to how much an AI can be trained to face real-world conditions.

Raines adds that, in contrast to the slower approach NASA takes for testing, private companies are able to move much more rapidly. For some, like SpaceX, this works out well. For others, like Boeing, it can lead to some surprising hiccups.

"Ultimately, the worst thing you can do is make something fully manual or fully autonomous," says Nathan Uitenbroek, another NASA engineer working on Orion's software development. Humans have to be able to intervene if the software is glitching or if the computer's memory is destroyed by an unanticipated event (like a blast of cosmic rays). But they also rely on the software to inform them when other problems arise.

NASA is used to figuring out this balance, and it has redundancy built into its crewed vehicles. The space shuttle operated on multiple computers running the same software, and if one had a problem, the others could take over. A separate computer ran entirely different software, so it could take over the whole spacecraft if a systemic glitch was affecting the others. Raines and Uitenbroek say the same redundancy is used on Orion, which also includes a layer of automatic functionality that bypasses the software entirely for critical functions like parachute release.
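The redundancy scheme described above can be sketched in miniature. This is a simplified illustration, not the shuttle's or Orion's actual redundancy management: identical primary computers vote by majority, and if no majority exists (the signature of a common-mode software fault affecting them all), control falls back to the independently programmed backup:

```python
# Simplified illustration of majority voting with a dissimilar backup;
# real flight redundancy management is far more involved.
from collections import Counter

def select_output(primary_outputs: list, backup_output):
    """Pick the majority answer from the primary computers. If no strict
    majority exists (a possible systemic software fault), defer to the
    backup computer, which runs entirely different software."""
    answer, votes = Counter(primary_outputs).most_common(1)[0]
    if votes > len(primary_outputs) // 2:
        return answer
    return backup_output

# One faulty primary is outvoted by the other three.
print(select_output([42, 42, 42, 7], backup_output=41))  # 42
# A systemic glitch splits the primaries: control passes to the backup.
print(select_output([1, 2, 3, 4], backup_output=41))     # 41
```

The design choice worth noting is the dissimilar backup: because it shares no code with the primaries, a single software defect cannot take out both layers at once.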

On the Crew Dragon, there are instances where astronauts can manually initiate abort sequences, and where they can override software on the basis of new inputs. But the design of these vehicles means it's now more difficult for a human to take complete control. The touch-screen console is still tied to the spacecraft's software, and you can't just bypass it entirely when you want to take over the spacecraft, even in an emergency.

There's no consensus on how much further the human role in spaceflight will, or should, shrink. Uitenbroek thinks trying to develop software that can account for every possible contingency is simply impractical, especially when you have deadlines to meet.

Chang Díaz disagrees, saying the world is shifting to a point where "eventually the human is going to be taken out of the equation."

Which approach wins out may depend on the level of success achieved by the different parties sending people into space. NASA has no intention of taking humans out of the equation, but if commercial companies find they have an easier time minimizing the human pilot's role and letting the AI take charge, then touch screens and pilotless flight to the ISS are only a taste of what's to come.

