Most people think "learning from experience" is easy, or natural, but neither is true in a high-consequence environment. It takes serious effort and disciplined awareness to learn well when safety is a critical concern. Some "successes" may be accepted as valuable, but only after careful analysis. Many other "successes" should be rejected as mere luck or one-off wins. What succeeds as an expedient should never be immediately accepted as a Standard Operating Procedure (SOP).
Humans are natural optimizers, and any action that succeeds is often immediately incorporated (untested) into our mental playbook of "acceptable actions." This mental process, called "normalizing," often occurs subconsciously; it happens automatically, driven by the brain chemistry that rewards "success." Without reflection, we can easily develop very bad habits.
Just because an action "succeeds" does not mean it should become a "Standard Operating Procedure" (SOP). Every piloting action must be carefully examined and reflected upon. Utility is often at odds with safety, and safety must be our primary consideration in our high-consequence activity. "Success" alone cannot validate procedures. Many unsafe actions and techniques do not immediately reveal themselves as dangerous (we got lucky)! These untested "actions that succeed" can easily get coded into our brains as "acceptable" through normalizing, and we become incrementally blind to the risk (drifting into failure). These actions may be quite hazardous but have not yet reached the "tipping point" of catastrophic failure. If we are honest, we must admit that plain dumb luck often protects us from catastrophe. But as the saying goes, "luck is not a reliable planning technique." Honest reflection is an acquired piloting skill and essential to determining which procedures are safe enough to retain. The O-ring problem was discovered on the second shuttle launch (and accepted as normal) but only became catastrophic on the 25th launch…
When we normalize (mentally accept) untested and harmful behavior without "scoring" its validity, we are often "drifting into failure." An unfortunate and very public example of "the smartest people on earth" drifting into failure was NASA and the Space Shuttle accidents. Empowered by the amazing success of the moon landings, NASA exhibited overconfidence and classic "normalizing" behavior by aggressively launching shuttles well beyond recommended risk parameters. Their luck ran out (twice), and the result was two dramatic and painful accidents. Viewed honestly, "success" (from luck) continually normalizes unacceptable risks. In this way, "success" actually becomes an *impediment* to honest learning by reinforcing behaviors and techniques that endanger future safety.
The current backcountry flying craze is following this same pattern. Risky flying that succeeds becomes “normalized” (and even celebrated on YouTube) and this continues to push the acceptable edge. Challenges taken too far inevitably “drift into failure.”
Judging any action by its results alone is a bad strategy (but this is how we commonly proceed when guided by emotional satisfaction). An example the Stanford Strategic Decision-Making Group presents in training makes this clear:
At a party, you have a few too many drinks and wisely decide to call an Uber to get home instead of driving; good or bad decision? Unfortunately, the Uber gets T-boned on the way home, resulting in multiple injuries. If we judge this decision by "results" alone, it is obviously rated "bad." Conversely, suppose you decide to drive home instead and make it without incident; is this a vindication of that behavior? You can see "drift" in action here…
In many cases, luck allows us to succeed even when we exercise pretty bad judgment; we get away with it. And without careful reflection, bad decisions can easily become SOPs; we see this every day. Our brain rewards "success" and reinforces these behaviors.
Even smart, well-educated people can struggle to learn from experience. We all know someone who's been at the office for 20 years and claims to have 20 years of experience, but really has one year repeated 20 times.

Double Loop Learning
To enable honest learning from experience, we need to reflect on every serious action or procedure and score its validity and value; this takes time, effort, and honesty. We need to honestly grade each new technique to verify that it conforms to accepted industry norms. Does it reliably and repeatedly create the predicted and desirable outcome? Are the costs greater than the value received (every action carries some risk)? Serious reflection and analysis are the required tools for learning from experience, and this does not happen "naturally!" Reflective analysis is a skill every pilot must learn and practice for safety. The military version of this same process is the after-action report (a learning opportunity)!
Real progress and improvement (learning and not just problem-solving) occurs at a higher level and involves tweaking the mental models and preventing the error in the first place. This requires time to reflect critically on our own behavior and failings, solving deeper thinking/scripting problems. Level two or “double loop” learning freely admits to errors and fixes our inner OS that is usually the root cause.
Fly safely (and often) and see you at SAFE booth B-2097/8 at Oshkosh. Our SAFE Member Dinner is at the EAA PRC Thursday evening, July 27th. We need your help at the booth (so I can escape and present at the forums)!
Enjoy the new courses available to members on the new SAFE website. And please download and use the (free) SAFE Toolkit App. It contains all the references a working CFI needs and continuously adds new safety content.
SAFE developed an insurance program just for CFIs! When you are an independent CFI, you are a business (and have legal exposure). This program is the most reasonably priced yet comprehensive insurance plan you can have (and every agent is a pilot!)
Tell us what *you* think!