
Artificial intelligence


AI is a valuable training tool but it has limits.


By Shannon Forrest
Contributing Writer

If you ask artificial intelligence (AI) what AI is, the answer is “the creation of computer systems that can perform tasks normally requiring human intelligence, such as learning, problem-solving, and decision-making.”


AI systems use large amounts of data to learn patterns and make predictions, enabling them to understand language, recognize objects, and act “autonomously.”

AI is rapidly becoming ubiquitous in all aspects of life, and aviation is no exception.

Although it’s common to ascribe anthropomorphic characteristics to AI – like thinking or emotion – this is a fallacy.

Even “understanding” as described in the definition is a misnomer.

Just like any other computer, AI still subscribes to the basic rules of programming and logic.

The power of AI lies in its ability to access near-infinite amounts of data quickly and accurately, and combine that with an algorithm.

This can be summed up nicely by the mantra that behavioral psychologists live by – “The best predictor of future behavior is past behavior.”
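
To make that mantra concrete, here's a minimal sketch in Python of prediction from past behavior: a toy model that estimates the odds of a long landing purely from how similar past approaches turned out. The data, bins, and thresholds are invented for illustration and don't represent any real system.

```python
# A toy "past behavior predicts future behavior" model.
# Every number, bin, and outcome here is hypothetical.
from collections import defaultdict

# One record per past approach: (knots fast over the threshold, landed long?)
history = [
    (2, False), (12, True), (5, False), (15, True),
    (9, True), (1, False), (11, True), (4, False),
]

def bucket(knots_fast: float) -> str:
    """Bin airspeed error into coarse categories."""
    if knots_fast < 5:
        return "on speed"
    if knots_fast < 10:
        return "slightly fast"
    return "fast"

# Count outcomes per bucket: the "database of experience."
outcomes = defaultdict(lambda: [0, 0])           # bucket -> [long landings, total]
for knots_fast, landed_long in history:
    b = bucket(knots_fast)
    outcomes[b][0] += int(landed_long)
    outcomes[b][1] += 1

def predict_long_landing(knots_fast: float) -> float:
    """Probability of a long landing, based purely on past approaches."""
    long_count, total = outcomes[bucket(knots_fast)]
    return long_count / total if total else 0.5  # no history yet, no opinion

print(predict_long_landing(13))                  # crossing the threshold 13 kt fast
```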

It’s the aviation equivalent of a pilot who has been flying for 35 years, has amassed multiple type ratings, has flown 75 different aircraft types, is a certified instructor across multiple categories and classes of aircraft, and has tens of thousands of hours.

But human skills have an unfortunate and fatal flaw – they have a shelf life. Memories fade and motor skills deteriorate. While humans get worse, AI gets better.

Now imagine that same imaginary pilot and his/her skills magnified by a factor of a million, with every single hour of experience (good or bad) readily accessible – a database of experience that can be tapped at will to make the system safer and more efficient.

That’s the direction in which AI is headed in aviation.

The pilot’s assistant

Although there’s currently a small movement to reduce the 2-pilot crew requirement to a single pilot, and eventually to make operations fully autonomous, most of us are unlikely to see that any time soon.

In fact, a recently passed funding package in Congress included the language, “shall not support reductions in flight deck crew” for commercial operations, meaning FAR Part 121 (airlines).

Instead, the trend seems to be embedding AI into automation to assist with decision-making, situational awareness (SA), emergency operations, and training.

There’s an important difference between AI and automation that must be understood. Automation is the process of completing the same task repeatedly, or carrying out a task for long periods of time, without human intervention.


AI, on the other hand, can analyze a situation and suggest, promote, or encourage a deviation from the current course of action or a continuation of the present course.

In extreme situations, it can intervene, take control, and choose another methodology entirely.

Flying an ILS on autopilot is an example of strict automation.

If left unchecked, most aircraft will fly an ILS signal right into the ground or into another aircraft that taxies onto the runway.

In the 1980s, when the head-up display (HUD) was being developed, researchers saw this scenario play out.

In simulator trials, pilots were given the task of flying an approach to minimums followed by a subsequent landing.

What they didn’t know was that the designers had programmed the simulator to depict an aircraft taxiing into position just as the arriving aircraft reached ILS minimums.

Several pilots landed on the aircraft in position and explained during the debrief that they never saw the intruder.

The underlying etiology is what psychologists call inattentional blindness. We don’t see what we don’t expect to see.

Would this happen today, given automatic dependent surveillance-broadcast (ADS-B), synthetic vision systems (SVS), and emergency vision assurance systems (EVAS)? Perhaps.

However, the offending aircraft was clearly visible for several seconds. Would AI have captured it and issued a go-around alert to the pilot? Undoubtedly. Pattern recognition is the purview of AI.
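
As an illustration only, and not the logic of any certified system, here's a hedged Python sketch of the kind of pattern check an AI layer could add on top of plain automation: watch ADS-B traffic and advise a go-around if a target occupies the runway while the arrival is below a trigger height.

```python
# A sketch of an AI-style monitor layered on top of automation:
# advise a go-around if ADS-B shows a target on the runway while the
# arrival is below a trigger height. Thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Target:
    callsign: str
    on_runway: bool          # derived from ADS-B position vs the runway area
    ground_speed_kt: float

def go_around_advised(own_height_agl_ft: float,
                      traffic: list[Target],
                      trigger_height_ft: float = 300.0) -> bool:
    """True if any target occupies the runway while we are below the trigger height."""
    if own_height_agl_ft > trigger_height_ft:
        return False
    return any(t.on_runway for t in traffic)

# An aircraft has taxied into position just as the arrival reaches minimums.
traffic = [Target("N123AB", on_runway=True, ground_speed_kt=4.0)]
if go_around_advised(own_height_agl_ft=200.0, traffic=traffic):
    print("GO AROUND - runway occupied")
```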

AI in flight training

The ability to detect things that humans can’t – and do so at the speed of light – makes AI useful in the learning environment, in terms of both training and checking.

The challenge with primary flight training is that the role of the instructor is considered an entry-level position. This equates to lower pay than other professional flying jobs, which begets a desire to move up, resulting in higher turnover.

As an instructor leaves the job for a perceived better one, his/her students are moved to another instructor. It’s not uncommon for a single student to have 3 or more instructors during primary training.

Each new instructor brings with him or her a different personality, techniques, and perhaps even a new set of standards. Ultimately the student bears the cost in terms of extended training time and additional financial obligations associated with increased flight time.

Although the FAA Airman Certification Standards (ACS) should serve as the guide, every new instructor may have his/her own syllabus, curriculum, and focus points that vary substantially from what’s required or what’s measurably important.

According to an article in the Nov 2025 edition of the Flight Safety Foundation’s AeroSafety World magazine, instructors need to understand competence-based training and assessment, and evidence-based training, because these frameworks are playing an increasing role in shaping airline and corporate training.

Competence-based training means learning specific knowledge, skills, and abilities required for the job. Evidence-based training uses real-world and historical data to design probable scenarios in lieu of rote behavior.

Think of a student going around the pattern again and again, trying to learn landings and growing frustrated because not a single one is perfect, since each one has a different error.

The solution with most instructors is to just keep the student going around the pattern while coaching (eg, “That one was long,” “That one was fast,” or “You’re too high”).

That’s not identifying the underlying cause or working to solve the problem. Instructors need better tools to cut through the clutter objectively, and that’s where AI can help.
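
As a simple illustration of what that help might look like, with made-up numbers and no claim about any vendor's method, the sketch below looks across a whole session's landings for the parameter that best explains the error instead of critiquing each landing in isolation.

```python
# Look across a session's landings for the parameter that best explains
# the error, instead of coaching each landing in isolation.
# Sample numbers are hypothetical.

# One entry per landing: knots fast over the threshold, feet past the aim point.
threshold_speed_error = [8, 2, 11, 5, 14, 3]
touchdown_distance_ft = [900, 350, 1200, 500, 1500, 400]

def pearson(x, y):
    """Plain Pearson correlation, no external libraries."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(threshold_speed_error, touchdown_distance_ft)
if r > 0.7:
    print(f"r = {r:.2f}: long landings track excess threshold speed. "
          "Work on energy management on final, not the flare.")
```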

FlightSafety International

In 2019, FlightSafety International (FSI) collaborated with IBM to develop FlightSmart – a program that uses AI and machine learning to enhance training effectiveness and improve performance.


Initial rollout and implementation occurred on 16 flight training devices and operational trainers dedicated to the T-6A Texan airframe at Columbus AFB in Mississippi.

The goal was to reduce pilot error and, accordingly, time to competence.

The FlightSmart paradigm can measure and record more than 4000 parameters in real time (dubbed “digital exhaust” by IBM).

It doesn’t matter whether that’s in a military simulator or a Falcon 7X.

Most of the information gleaned relates to things the instructor can’t see, like control force applied or calculated G force.

The data is then used to categorize a flying style that includes monikers like aggressive, risky, reactive, and timid.

Regardless, the output is used to develop personalized coaching to address very specific deficiencies – the hallmark of competence-based training.

FSI describes FlightSmart as an objective analysis of a crew member’s performance, and highlights that its purpose is to discover trends for better training management.
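
FSI hasn't published FlightSmart's internals, so the following Python sketch only illustrates the general idea: boil thousands of recorded parameters down to a few summary features and map them to a style label. The features, thresholds, and labels here are assumptions.

```python
# Reduce recorded "digital exhaust" to a few summary features and map them
# to a flying-style label. Features, thresholds, and labels are assumptions,
# not FlightSmart's actual model.
from statistics import mean

def flying_style(control_force_lb: list[float], g_load: list[float]) -> str:
    """Label a maneuver's flying style from simple summary features."""
    avg_force = mean(control_force_lb)
    peak_g = max(g_load)
    if avg_force > 25 and peak_g > 2.0:
        return "aggressive"
    if avg_force > 25:
        return "reactive"
    if avg_force < 8 and peak_g < 1.3:
        return "timid"
    return "nominal"

# A steep turn flown with heavy, abrupt inputs.
print(flying_style(control_force_lb=[30, 35, 28, 32], g_load=[1.9, 2.3, 2.1]))  # aggressive
```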

CAE

CAE also promotes AI data-driven training through its own proprietary program, called CAE Rise.

The company states that, to date, 80,000-plus training sessions have been recorded and assessed, leading to 2.7 petabytes of data, incorporating 1.5 million simulator-based maneuvers.

This is a huge dataset of what pilots are doing right and wrong. CAE Rise is more aptly described as a training universe or ecosystem that combines insight-based training and adaptive learning to develop personalized content.

It replaces “checking the box” with meaningful and deliberate practice. CAE describes some of the benefits as instilling correct habits as a function of primacy and detecting unsafe behavior early in the learning process.
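
CAE likewise hasn't published how Rise turns that dataset into personalized content, but the general pattern is easy to sketch: aggregate graded maneuvers across sessions, surface the items a pilot keeps missing, and feed those into the next lesson. The maneuvers, grades, and threshold below are invented for illustration.

```python
# Aggregate graded maneuvers across sessions and surface recurring misses
# as personalized focus items. All records and the threshold are invented.
from collections import Counter

# (session, maneuver, met the standard?)
records = [
    (1, "steep turn", True), (1, "V1 cut", False), (1, "NP approach", True),
    (2, "steep turn", True), (2, "V1 cut", False), (2, "NP approach", False),
    (3, "steep turn", True), (3, "V1 cut", False), (3, "NP approach", True),
]

misses = Counter(m for _, m, ok in records if not ok)

# Anything missed in 2 or more sessions becomes a focus item for the next lesson.
focus_items = [m for m, n in misses.items() if n >= 2]
print("Next session emphasis:", focus_items)     # ['V1 cut']
```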

Standardization of training is another highlight. The same standardization applies to evaluation. Instructors, line evaluators, DPEs, check airmen, and training center evaluators all develop reputations based on student perceptions (whether accurate or not).

Nearly every pilot can attest to having encountered “that guy” who’s known for being overly nitpicky, unreasonable, or for focusing on the seemingly unimportant items while promoting technique rather than standards.

On the other end of the spectrum is the too-easygoing, “it’s all good” guy, to whom anything short of crashing the plane is a pass.

In the standards community, this is known as the Satan-Santa effect. Personality traits drive students to prefer one examiner over another.

Both are extremes – one too harsh, the other too forgiving. Unlike a computer, every human holds biases that can subconsciously affect how he or she grades an evaluation.

It’s not impossible to imagine a day in which there’s no evaluator in the simulator for a checkride. The computer would announce the task verbally, record the pilot completing it, and then debrief and grade pass or fail based on preprogrammed standards. That’s AI for you.
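
A bare-bones version of that grading logic is easy to sketch. The tolerances below are representative of ACS-style limits for a steep turn, not an official standard, and the recorded deviations are invented.

```python
# Compare a recorded maneuver against preprogrammed tolerances and report
# each item as met or not met. Tolerances are representative, not official.
STEEP_TURN_TOLERANCES = {
    "altitude_ft": 100,   # +/- feet from entry altitude
    "airspeed_kt": 10,    # +/- knots from entry speed
    "bank_deg": 5,        # +/- degrees from target bank
}

def grade(max_deviations: dict[str, float]) -> dict[str, bool]:
    """Return met/not met per item by checking the worst-case deviation."""
    return {item: abs(max_deviations[item]) <= limit
            for item, limit in STEEP_TURN_TOLERANCES.items()}

# Worst deviations recorded during the maneuver (invented).
result = grade({"altitude_ft": 80, "airspeed_kt": 12, "bank_deg": 3})
print(result)                                          # airspeed_kt fails
print("PASS" if all(result.values()) else "NEEDS DEBRIEF")
```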

While it’s unlikely you’ll see a cyborg flight instructor giving instructions in the right seat in a Cessna 172, it’s certainly possible to collect data from flight control inputs and overlay that on a plot of the maneuvers to assist with instructor critique and standardization, especially if a student is forced to change instructors mid-training.

Interestingly, the younger and up-and-coming generation of pilots has an overwhelming preference for this level of technology.

They are products of the digital age, in which everything is recorded and broadcast. In their minds, the only thing the simulator session lacks is the ability to upload it to YouTube so their friends can like and comment on it.

AI on the horizon

There are a lot of promising things that AI can do operationally for aviators and in the training environment. Yet there’s one area that even CAE acknowledges it relies on instructors to assess accurately, and that’s the soft skill set.

AI can quantify how well pilots fly an ILS to minimums, how close they get to the touchdown zone, and how well they track the center line, but it can’t get a grasp on crew resource management (CRM).

FSI and CAE have always promoted facilitation rather than lecture among their instructor cadre, and that’s an essential element to debriefing CRM-related topics.

AI can’t read minds – at least not yet. It can quantify what a pilot did, but the reasons behind that action may only be brought out by a skilled instructor.

Things like teamwork, communication, leadership, and workload management are still best assessed by a human with knowledge and experience in a crew environment.

Human factors remain a causal element in incidents and accidents. Superior flying skills – as judged by AI – are meaningless without the soft skills to back them up.

How much AI is appropriate is like asking how much automation is appropriate. And the answer should be, “enough to improve SA, increase safety margins, and reduce workload.” The best solution is a balance between technology and humans.

Older pilots need to accept that AI is here to stay and learn to incorporate it into the job.

Management is sold on the premise that AI improves operational efficiency in everything from route selection and operating practices to training costs.

Younger, more technologically dependent pilots need to acknowledge that, despite what Google says, there’s still something to be learned from the gray-haired instructor.

Experience is a hard teacher because it gives the test first and the lesson later. Those gray hairs equate to a lot of tests and even more lessons.


Shannon Forrest is a current line pilot, CRM facilitator, and aviation safety consultant. He has more than 15,000 hrs TT and holds a degree in behavioral psychology.