This is the third article in a four-part series looking into how airlines manage the application of artificial intelligence in commercial functions.
AI offers tremendous opportunity in airline commercial operations: pricing, merchandising, scheduling, and more. AI can tap into huge datasets to create reliable forecasts of customer behavior and to develop algorithms that maximize revenue and/or customer satisfaction.
However, even with the possibilities for AI, there is no “perfect” algorithm. Given the shortcomings of any such model, every AI/ML application requires active governance. Airline revenue management (RM), one of the first applications of Big Data and Machine Learning, has developed such governance.
Perhaps RM experience offers both lessons and warnings for more recent applications of AI/ML, as AI sees more uses in airline commercial decisions and an even greater role in pricing.
“ModelOps” for AI, the framework for AI governance, often begins with data quality. In airline RM, data quality continues to be a challenge even 50 years after the first RM models were deployed.
Data Quality – Impact of the AI/ML Model on the Data
Data inputs for airline revenue management have historically been simple: historical bookings. Certainly, booking data can be corrupted, or extraneous factors – a cancelled event or extraordinary weather – can make historical data unrepresentative of future performance.
Much effort is directed to “cleansing” data for revenue management and for other data applications. One concern of data quality that doesn’t receive much attention, however, is how the AI/ML model itself interacts with the data it then uses to take decisions. Airline RM recognizes this interaction and has developed a possible work-around from which other AI/ML users can learn.
The most commonly cited example of how a revenue management AI/ML model interacts with its own data is termed “spiral down.” When limited full fare demand is detected, the model will logically try to fill the plane with lower fare demand.
This, however, can squeeze out high fare demand even when it appears: there are no seats left to be sold.
Without observable high fare demand, the model increasingly favors low fare demand, effectively “spiraling down” the selling fares.
High availability of low fares becomes a self-fulfilling prophecy: high fare passengers cannot book seats when little inventory is left for them.
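The feedback loop can be sketched in a few lines of Python. Everything here is invented for illustration (the demand numbers, the smoothing weight, and the one-line protection rule stand in for a real RM system); the point is that because each observation is censored at the model’s own protection level, the forecast can only drift downward:

```python
# Illustrative only: all numbers below are invented.  True full fare
# demand for 12 departures averages 29 seats, but the model only ever
# observes bookings censored at its own protection level.
demand_per_flight = [25, 38, 22, 31, 40, 24, 29, 21, 35, 27, 23, 33]

ALPHA = 0.3        # exponential-smoothing weight for the forecast update
forecast = 30.0    # the forecast starts out accurate

for demand in demand_per_flight:
    protected = round(forecast)    # seats held back for full fares
    sold = min(demand, protected)  # bookings censored by the model's own decision
    # The model learns from what sold, not from what was demanded.
    forecast = (1 - ALPHA) * forecast + ALPHA * sold

mean_demand = sum(demand_per_flight) / len(demand_per_flight)
print(f"forecast after 12 flights: {forecast:.1f} (true mean demand: {mean_demand:.1f})")
# prints: forecast after 12 flights: 23.9 (true mean demand: 29.0)
```

In a dozen departures the forecast has drifted from an accurate 30 seats down to about 24, even though underlying demand never changed.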
Some airlines currently override the “optimization” process to avoid a downward cycle of providing too much low fare inventory.
The largest RM system providers offer tools to forecast how much “hidden” high fare demand might exist even if it isn’t observable. All such overrides, however, are by definition based on imprecise forecasts.
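A standard technique behind such tools is demand “unconstraining”: treating sold-out flights as censored observations and imputing what demand might have been. A minimal EM-style sketch, where the normal-demand assumption, its spread, and all booking numbers are invented for illustration:

```python
import math

def normal_pdf(z: float) -> float:
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def normal_cdf(z: float) -> float:
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def unconstrain(observations, sigma=5.0, iters=50):
    """EM-style estimate of mean demand from censored bookings.

    observations: (bookings, sold_out) pairs.  A sold-out flight tells us
    true demand was at least `bookings`; impute its conditional mean under
    an assumed Normal(mu, sigma) demand distribution.
    """
    mu = sum(b for b, _ in observations) / len(observations)  # naive start
    for _ in range(iters):
        imputed = []
        for bookings, sold_out in observations:
            if sold_out:
                z = (bookings - mu) / sigma
                tail = max(1 - normal_cdf(z), 1e-12)
                # E[D | D >= bookings] for a censored flight
                imputed.append(mu + sigma * normal_pdf(z) / tail)
            else:
                imputed.append(bookings)
        mu = sum(imputed) / len(imputed)
    return mu

# Three flights sold out with 30 seats protected; two closed with seats open.
observations = [(30, True), (25, False), (30, True), (28, False), (30, True)]
naive_mean = sum(b for b, _ in observations) / len(observations)
print(f"naive mean: {naive_mean:.1f}, unconstrained estimate: {unconstrain(observations):.1f}")
```

Because the three sold-out flights are imputed above their booked 30 seats, the estimate lands above the naive mean of 28.6: exactly the “hidden” demand the overrides try to quantify.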
Often, the override process is couched as a “test” of whether high fare demand is latent; the models revert to the standard recommendations if the estimated high fare demand doesn’t show up.
This “test and learn” process is fundamentally outside the sophisticated optimization models based on observable history and wasn’t widely used until at least two decades after the first such RM systems were deployed.
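A minimal sketch of such a test-and-learn probe, with all demand figures and parameters invented for illustration. Every fourth flight, the system protects a few seats beyond the model’s recommendation; if latent demand shows up, the extra sales pull the forecast back up, and if not, smoothing reverts it toward the standard number:

```python
# Illustrative only: an invented demand pattern repeated over 36 flights.
demand_per_flight = [25, 38, 22, 31, 40, 24, 29, 21, 35, 27, 23, 33] * 3

ALPHA = 0.3        # exponential-smoothing weight
PROBE_EVERY = 4    # every 4th flight is a test
PROBE_EXTRA = 8    # seats protected beyond the model's recommendation

forecast = 20.0    # a censored history has dragged the forecast down
for i, demand in enumerate(demand_per_flight):
    protected = round(forecast)
    if i % PROBE_EVERY == 0:
        protected += PROBE_EXTRA   # the "test": hold seats the model would not
    sold = min(demand, protected)
    forecast = (1 - ALPHA) * forecast + ALPHA * sold

# Without probes, sold would equal 20 on every flight here and the
# forecast would stay stuck at 20; probes let hidden demand reveal itself.
print(f"forecast after probing: {forecast:.1f}")
```

The probe flights are the only ones that can observe demand beyond the model’s own recommendation, which is why the process sits outside the optimization itself.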
AI/ML Offers Many Applications/Alternatives
Most AI/ML model applications will have a similar impact on observed results. Once a model is used for decision-making, almost by definition it shapes the data that is then fed back into the model itself.
A new possible application for AI/ML, for example, is increased personalization. Rather than offer a generic set of options or amenities for all customers, airlines seek to offer the most relevant menu for each market segment.
If an airline determines that a certain set of amenities best targets a passenger type, it may substitute this more personalized offering for the generic offering, making it much easier for the target customer to find and buy what they want.
However, that will then make it much more difficult for that passenger type to book a different set of amenities. Again, this can lead to a self-fulfilling system that doesn’t recognize when preferences change.
Just as airlines may override “optimization” to combat “spiral down,” users of AI/ML should continue to test alternative solutions not recommended by the model.
The best online retailers are known for constant testing; rather than relying on an “optimized” algorithm alone, they regularly test a broader set of alternatives. In fact, testing itself now has its own scientific process; tests may entail different content, different timing, different products displayed in different orders or at different times, and so on.
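A sketch of such always-on testing using a simple epsilon-greedy scheme, where the offer layouts, conversion rates, traffic volume, and 10% test share are all invented for illustration:

```python
import random

random.seed(7)

# Hypothetical conversion rates, unknown to the system, for three offer
# layouts; "A" is the incumbent the model currently favors.
TRUE_RATES = {"A": 0.10, "B": 0.12, "C": 0.08}
EPSILON = 0.1  # share of traffic reserved for testing alternatives

shown = {k: 0 for k in TRUE_RATES}
bought = {k: 0 for k in TRUE_RATES}

def observed_rate(k: str) -> float:
    return bought[k] / shown[k] if shown[k] else 0.0

def pick_offer() -> str:
    # Explore: with probability EPSILON, test a random layout.
    # Exploit: otherwise show the best layout observed so far.
    if random.random() < EPSILON or not any(shown.values()):
        return random.choice(list(TRUE_RATES))
    return max(TRUE_RATES, key=observed_rate)

for _ in range(50_000):
    offer = pick_offer()
    shown[offer] += 1
    if random.random() < TRUE_RATES[offer]:  # simulated customer decision
        bought[offer] += 1

print({k: f"{observed_rate(k):.3f} ({shown[k]} shown)" for k in TRUE_RATES})
```

The point is the reserved EPSILON share of traffic: even after the system settles on a favorite, every alternative keeps accumulating enough observations to reveal a shift in customer preferences.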
Airline RM has traditionally not incorporated such science-based testing in its use of AI/ML except in the limited case of “spiral down.” Historically, most testing has been the responsibility of RM analysts based on individual insights outside the model.
New airline AI/ML “optimization” should be built with broad, automatic testing to ensure the systems incorporate “hidden” demand segments and capture changes in customer needs not necessarily obvious in model-impacted results.
When “data quality” is typically discussed as a challenge for AI, the interaction of the model with the data is rarely highlighted. Constant testing is a critical piece of new AI/ML applications.
Tom Bacon is an airline industry consultant based in Denver, Colorado. With 30 years’ experience in a variety of travel companies and business situations, he has a track record of dramatically improving profitability through innovative revenue strategies.