
For decades new entrants have promised to revolutionize the archaic private jet charter market. AI is now center stage. What’s the reality?
By Doug Gollan, 10 hours ago
Recently, Tuvoli President & CEO Greg Johnson joined the Ironbird Podcast with host Dan Harris to discuss, “What’s Broken In Private Jet Charter.”
One subject was the use of Artificial Intelligence (AI), which has become a trend du jour in press releases.
In fact, one company, Jet Token, renamed itself to Jet.AI.
To bring you their discussion about AI, we provided the transcript of the podcast to ChatGPT. We then asked ChatGPT to provide a summary of their AI discussion and offer a perspective on where AI can support the private jet charter market and its limitations. The article was mainly edited with the assistance of Grammarly.
By ChatGPT
In the private jet charter market, artificial intelligence is often discussed as a destination rather than a tool.
Dan Harris surfaces this tension in his interview with Greg Johnson by repeatedly pressing on a deceptively simple question: what, exactly, is AI supposed to fix?
Johnson’s answer is consistent throughout the conversation.
The industry’s biggest risk is not failing to adopt AI fast enough. It is applying automation and AI to the wrong layers of the business, creating false confidence instead of better outcomes.
For years, charter operators have leaned on automation to handle volume.
Quotes are generated faster.
Requests are routed more efficiently.
Data is pushed across systems with increasing speed.
But Johnson argues that this acceleration has masked a deeper failure: automation has multiplied activity without improving understanding.
More quotes are being generated than ever before, yet operators often have less clarity than they did a decade ago about why trips book or fail.
Automation has increased noise faster than it has improved the signal.
This is where AI is frequently misunderstood.
Johnson draws a clear distinction between automation and artificial intelligence.
Automation is deterministic.
It follows rules.
When X happens, Y is triggered.
AI, by contrast, introduces probabilistic reasoning.
It evaluates patterns, context, and multiple possible outcomes before selecting a path.
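The distinction can be illustrated with a minimal sketch (a hypothetical illustration written for this article, not anything from Tuvoli): the deterministic rule returns the same answer every time its condition is met, while the toy "AI" triage weighs several signals and scores how likely an inquiry is to book before choosing a path.

```python
# Deterministic automation: a fixed rule -- when X happens, Y is triggered.
def automation_rule(request):
    if request["budget"] >= 10_000:
        return "send_quote"
    return "decline"

# Toy stand-in for probabilistic triage: weighs multiple signals and
# scores the likelihood that the inquiry will actually book.
def ai_triage(request):
    score = 0.0
    score += 0.4 if request["budget"] >= 10_000 else 0.0
    score += 0.3 if request["repeat_customer"] else 0.0
    score += 0.3 if request["dates_flexible"] else 0.0
    return "prioritize" if score >= 0.5 else "standard_queue"

req = {"budget": 12_000, "repeat_customer": True, "dates_flexible": False}
print(automation_rule(req))  # rule fires identically for every such input
print(ai_triage(req))        # score-based path depends on the full context
```

The field names and weights here are invented for illustration; a real system would learn such weights from historical booking data rather than hard-code them.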
The danger lies in assuming that AI can substitute for certainty in an industry where none exists.
Harris pushes Johnson on whether AI can eventually solve availability, pricing accuracy, or real-time aircraft certainty.
Johnson is unequivocal: those are the wrong problems to chase.
Aircraft break. Crews time out. Weather intervenes.
Even airlines with massive fleets cannot eliminate last-minute disruption.
Expecting AI to deliver perfect certainty in charter operations is a category error.
Yet much of the industry’s AI enthusiasm is implicitly built on that expectation.
Johnson warns that when AI is applied to problems such as “guaranteed availability” or “perfect real-time pricing,” it creates a veneer of precision that does not withstand real-world conditions.
The result is not better decision-making, but misplaced trust in outputs that appear authoritative yet are unreliable.
Where AI does add real value, Johnson argues, is in reducing friction and cognitive load — not replacing judgment.
AI can clean and reconcile messy data that humans struggle to manage at scale.
It can triage inbound requests, identify likely buyers versus low-intent inquiries, and surface patterns in historical booking behavior that no individual salesperson could reasonably detect.
It can significantly reduce time spent on repetitive tasks such as data cleanup, summarization, and internal communication.
But AI should not be asked to make safety decisions, override operational judgment, or promise outcomes that the physical world cannot guarantee.
Harris frames this as a maturity issue.
The charter industry, he suggests, has seen wave after wave of “revolutionary” technology promises — many of which failed to materialize at scale.
That history breeds skepticism and impatience.
When AI arrives with genuine capability, the temptation is to overcorrect and expect too much, too fast.
Johnson acknowledges this tension.
AI is moving faster than many previous technologies. Its ability to generate content, analyze data, and automate knowledge work is real.
But speed does not eliminate the need for domain expertise. In fact, it increases it.
One of Johnson’s most pointed observations is that AI solutions built outside aviation often fail because they address what customers say they want rather than what the industry actually needs.
Charter workflows are shaped by regulation, safety culture, owner economics, and operational nuance.
Without understanding those constraints, AI tools risk optimizing for metrics that don’t matter—or worse, actively harming margins and decision quality.
The most productive use of AI in charter today, Johnson suggests, is incremental and disciplined. Improve visibility. Reduce manual reconciliation. Help humans make better decisions faster — but keep humans accountable for those decisions.
Harris ties this back to a broader industry consequence.
When AI is oversold internally, operators either become overconfident or disengaged.
Sales teams stop questioning outputs.
Management assumes problems have been “solved” because a system exists. Meanwhile, the underlying issues—poor booking insight, payment ambiguity, and process friction—remain unresolved.
In that sense, AI does not fail the charter market. Charter fails AI by expecting it to replace foundational disciplines.
The path forward, Johnson argues, is not less technology but better sequencing.
Fix transactional clarity first.
Ensure systems of record exist.
Align sales, finance, and operations around a shared truth.
Only then does AI become a force multiplier rather than a distraction.
The following is ChatGPT’s response to our question, “At the end of your summary of the Ironbird podcast, can you provide ChatGPT’s perspective on potential and limitations relevant to the private jet charter market?”
The question is worth addressing directly, particularly in an industry currently grappling with what AI can and cannot do.
ChatGPT’s strengths in the private jet charter market lie primarily in synthesis, explanation, and communication.
It can analyze large volumes of unstructured information—transcripts, contracts, emails, and policies — and quickly extract themes, risks, and narratives.
It excels at clarifying complex processes, reframing industry discussions, and helping professionals articulate their ideas more clearly to clients, investors, and internal teams.
ChatGPT is particularly useful for precisely these kinds of synthesis and communication tasks.
However, ChatGPT has critical limitations that mirror the warnings Greg Johnson raised in this discussion.
Most importantly, ChatGPT’s outputs are only as good as the context and data provided.
Without domain expertise guiding the questions, AI can confidently produce answers that sound right but miss critical nuance.
In the charter market, ChatGPT should be treated as an analytical assistant, not an authority.
It can help professionals think faster and communicate better — but it should never be mistaken for a decision-maker.
Used correctly, ChatGPT reduces friction and improves understanding.
Used incorrectly, it creates false confidence.
That distinction matters — in AI, and in charter.