The Boeing 737 Max Disaster Is A Reminder That Automation Can Be Flawed
When the first Boeing 737 Max went down a little over a year ago, pilots raised the alarm that something might be wrong with the model’s automation technology. A feature designed to prevent the plane from stalling was actually making things worse. The system, designed to stop the aircraft from climbing at too steep an angle, mistakenly believed the aircraft was climbing too steeply when it was in fact flying level, causing it to nosedive repeatedly.
Many engineers point out that there were other ways the Boeing 737 Max could have determined its angle and judged whether a nosedive was appropriate, but the company hadn’t put failsafe systems in place. Nor did Boeing give pilots a straightforward way of wresting control from the wayward machine and overriding it with human executive function. People died because machine intelligence was given primacy.
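To make the failsafe idea concrete, here is a minimal sketch in Python (with entirely hypothetical names and thresholds; this is not Boeing’s actual MCAS logic) of how an automated nose-down command could be gated on agreement between two redundant angle-of-attack sensors and on a pilot override switch:

```python
# Illustrative sketch only -- hypothetical names and thresholds, not Boeing's actual code.

DISAGREEMENT_LIMIT_DEG = 5.0   # assumed limit for cross-checking the two sensors

def nose_down_trim_allowed(aoa_sensor_a: float,
                           aoa_sensor_b: float,
                           stall_threshold_deg: float,
                           pilot_override: bool) -> bool:
    """Return True only if an automated nose-down command should be issued."""
    # 1. A human override always wins over the automation.
    if pilot_override:
        return False
    # 2. If the two angle-of-attack sensors disagree, distrust both and do nothing.
    if abs(aoa_sensor_a - aoa_sensor_b) > DISAGREEMENT_LIMIT_DEG:
        return False
    # 3. Only act when both independent readings indicate a dangerously steep climb.
    return min(aoa_sensor_a, aoa_sensor_b) > stall_threshold_deg

# Example: one faulty sensor reports a steep climb while the other reads level flight.
print(nose_down_trim_allowed(22.0, 2.5, stall_threshold_deg=15.0, pilot_override=False))
# -> False: the disagreement check blocks the spurious nosedive.
```

The point of the sketch is the design choice, not the numbers: when redundant inputs disagree, or a human has intervened, the automation stands down rather than acting on a single, possibly faulty reading.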
Automation – A Double-Edged Sword
As any aviation law firm will tell you, automation is a double-edged sword for the commercial aviation industry. Carriers want automation to make their aircraft safer and more reliable, and to a large degree it does. Despite the recent 737 Max disasters, pilot error remains a greater risk than the failure of computerized systems. Human cognition can fail.
But as many experts point out, there are downsides when the technology becomes complex. Software controls practically every aspect of a modern aircraft, from its speed to its altitude to the internal air mixture. The problem is that these systems follow rules; they cannot adapt to unusual contexts the way people can. The Boeing 737 Max that crashed had no executive function that could override its programming. It went into a series of nosedives until it slammed into the ground in Ethiopia, all the while “thinking” it was protecting the passengers on board.
And this is the fundamental issue with automation: the inability of machines to interpret context and make decisions based on common sense. The anti-stall system has no idea that it is part of a plane, or that an aircraft carries hundreds of people whose lives are valuable. It’s a mindless intelligence occupying a small world: one in which the only thing that matters is its interpretation of the trajectory of the aircraft. All else is incidental.
The Boeing Ban
Combined, the 737 Max tragedies led to the deaths of more than 300 people. Examination of the flight data shows that both planes experienced similar issues shortly after takeoff. Neither aircraft was able to maintain altitude, going into a series of repeated nosedives against the pilots’ will. 2017 was an extraordinary year for commercial aviation, with not a single passenger jet fatality. But the tragedies at the hands of the 737 Max show that automation still has a long way to go before it can be trusted to make people safer – even if the data suggests that, overall, it does.
Most developed countries grounded the 737 Max in response to the disasters, pending modifications. But is the ban going to help? Or is there something wrong at the heart of the manufacturer’s approach to automation?
The problem seems to be one of communication. Automation technology on aircraft is progressing at a rapid clip, with new features coming into service every year. The changes come so quickly that it is hard for pilots to keep up, especially when they have busy schedules. Both crews involved in the 737 Max disasters appear to have been unaware of the automation changes Boeing had made behind the scenes, suggesting a lack of “automation transparency.”
Automation’s Checkered History
The two recent, highly publicized disasters are not the only times pilots have struggled to get to grips with automation. Back in 2013, an Asiana Airlines flight crashed because the crew did not understand the automation technology aboard the plane.
The problem all along has been a lack of communication between the makers of aircraft and the people who have to pilot them. Boeing, like other manufacturers, is reluctant to share information with pilots that might prove important in the event of a malfunction. Doing so takes effort and, perhaps more importantly, amounts to admitting that these systems are flawed and can fail.
The two disasters in which people died are not the only cases in which a 737 Max malfunctioned. At the last count, a total of eight pilots said they had experienced random nosedives while flying the aircraft. The problem is that pilots do not have adequate training to cope with those situations. The plane appears to act against their will, and it is unclear to an inexperienced pilot what exactly is going wrong. The instinct is to point the nose of the aircraft upwards, which re-activates the anti-stall mechanism and tilts it back down again.
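That tug of war is easier to see in a toy simulation. The sketch below uses hypothetical names and numbers and is not a flight model; it simply shows how each pull-up keeps the system’s faulty sensed angle above its trigger, so the automation fires again on every cycle and gradually wins:

```python
# Toy model of the tug of war described above.
# All names and numbers are hypothetical; this is not a flight model.

SENSOR_BIAS_DEG = 15.0      # assumed fault: the sensor reads 15 degrees too high
TRIGGER_ANGLE_DEG = 15.0    # assumed sensed angle at which the anti-stall logic fires
AUTO_NOSE_DOWN_DEG = 2.5    # assumed nose-down correction per activation
PILOT_PULL_UP_DEG = 1.5     # assumed pull-up the pilot manages per cycle

pitch = 5.0  # actual pitch in degrees: a normal climb

for cycle in range(5):
    sensed_angle = pitch + SENSOR_BIAS_DEG   # the system's (wrong) view of the climb
    if sensed_angle > TRIGGER_ANGLE_DEG:
        pitch -= AUTO_NOSE_DOWN_DEG          # automation pushes the nose down
    pitch += PILOT_PULL_UP_DEG               # pilot instinctively pulls back up,
                                             # keeping the sensed angle above the trigger
    print(f"cycle {cycle}: actual pitch {pitch:+.1f} deg")

# The printed pitch drifts from +4.0 deg down to 0.0 deg: the automation fires on
# every cycle and gradually wins, producing the repeated dives pilots reported.
```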
Fully Autonomous Planes Closer Than Autonomous Cars
None of the recent events appear to have put the brakes on the drive toward autonomous planes. The industry sees autonomy as a way of adding value for carriers, as was made clear at the Paris Air Show. Automating vehicles in the sky is a more technically feasible challenge than automating cars on the road: aircraft face fewer obstacles, there is less unpredictability, and it is possible to fit automation technology to the majority of aircraft quickly, allowing them to interact with one another.
Automation, it is hoped, will eventually eliminate the need for human input altogether. But if that is to happen, automated systems will need to understand context and make executive decisions. The recent 737 Max disasters show what can happen when we rely on machines that understand nothing about the world beyond the narrow horizon of their programming.
Will recent events derail progress in this direction? Unlikely. Hopefully, these disasters will spur engineers and companies to improve communication with pilots and to develop systems with better failsafes. Ultimately, pilots should always be able to take over in an emergency.