> Everything is "simple" with hindsight in mind.
The fixes are still simple and cost little.
I used to work at Boeing on airliner design. The guiding principle is to ask "what happens when X fails?" and design for that. It is not "design so X cannot fail", because we do not know how to design things that cannot fail. For Fukushima, the question is "what happens if the seawall fails?", not "the seawall cannot fail".
Airliners are safe not because critical parts cannot fail, but because there is a backup plan for every critical part.
Venting explosive gas into the building seems like a complete failure to do a proper failure analysis.
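The "what happens when X fails" discipline above can be sketched as a simple design-review checklist: every critical part must name its failure consequence and a backup. This is a hypothetical illustration, not any actual Boeing process, and all the part names and fields are invented.

```python
# Hypothetical sketch of failure-mode-driven design review (invented data):
# every critical part must answer "what happens when it fails?" and name a backup.
critical_parts = {
    "seawall":  {"on_failure": "site floods",     "backup": None},        # no plan B
    "engine_1": {"on_failure": "thrust loss",     "backup": "engine_2"},  # redundancy
    "hydraulic_a": {"on_failure": "control loss", "backup": "hydraulic_b"},
}

# The review fails if any critical part has no backup plan for its failure.
unmitigated = [name for name, part in critical_parts.items()
               if part["backup"] is None]
print(unmitigated)  # any name listed here means the design is not done
```

In this toy example the seawall is flagged because "the seawall cannot fail" is not an acceptable answer; the review forces a plan for when it does.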
>at Boeing on airliner design. The guiding principle is "what happens when X fails"...Airliners are safe not because critical parts cannot fail, but because there is a backup plan for every critical part.
And yet creating a culture that is vigilant and consistently applies due diligence is hard. To that point: Boeing rated the 737 MAX's MCAS as 'hazardous' in their own analysis. Putting aside that 'catastrophic' was the more appropriate rating, they still did not design the system to cope appropriately when it failed. (By their own processes, 'hazardous' meant the system should not be designed with single-point hardware failures.)* That implies it is as much a human/cultural issue as a technical one.
* Before anyone claims the system was designed just fine because the pilots could have avoided the issue with the appropriate actions: those are administrative hazard mitigations, which are generally considered less desirable than hardware fixes, especially when engineering mitigations are already installed but not used. The hierarchy of controls is: removing the hazard >> engineering controls >> administrative controls >> PPE. To the GPP's point, hindsight is easy; managing risk, people, and processes is hard.
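The hierarchy of controls in the footnote is a ranked preference ordering, so it can be sketched as choosing the highest-ranked available mitigation. This is a toy illustration with invented option names, loosely echoing the MCAS case where an administrative control (pilot procedure) was relied on even though an engineering control was available.

```python
# Hypothetical sketch of the hierarchy of controls (invented data).
# Lower rank = more desirable:
# removing the hazard >> engineering controls >> administrative controls >> PPE
HIERARCHY = {
    "elimination": 0,
    "engineering": 1,
    "administrative": 2,
    "ppe": 3,
}

def most_preferred(mitigations):
    """Pick the mitigation highest in the hierarchy (lowest rank number)."""
    return min(mitigations, key=lambda m: HIERARCHY[m["type"]])

# Invented MCAS-flavored example: a pilot procedure is an administrative
# control; cross-checking a second sensor would be an engineering control.
options = [
    {"name": "pilot runaway-trim procedure", "type": "administrative"},
    {"name": "cross-check both AoA sensors", "type": "engineering"},
]
best = most_preferred(options)
print(best["name"])  # the engineering control should win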