Donald Trump’s national artificial intelligence framework is pitched as a way to free businesses from a state-by-state “patchwork” of rules, but for the travel sector it largely overlooks the most immediate pressure point: how AI, data and biometrics are transforming the experience of crossing borders, boarding planes and moving through major transport hubs.
A National AI Rulebook With Limited Space for States
The executive order signed in December 2025, Ensuring a National Policy Framework for Artificial Intelligence, and subsequent legislative recommendations seek to put federal authority at the center of US AI governance. The documents call for a “minimally burdensome” regime, tether AI oversight to national security and economic competition, and signal that conflicting state rules should give way to a single national standard.
Legal and policy analyses of the order note that federal agencies are being encouraged to treat many state AI statutes as barriers to innovation. Commentaries from law firms and policy groups describe a strategy that includes directing the Department of Justice to challenge selected state laws, tying some federal funds to state compliance, and developing draft legislation that would explicitly preempt measures viewed as out of step with federal priorities.
Yet the framework leaves important gray areas. It preserves some state authority over issues such as child safety, public infrastructure and government procurement, but offers little clarity on how far states may go in regulating algorithmic decision-making in sectors like employment, housing or consumer protection. That ambiguity sets up a likely wave of litigation as states test the limits of their remaining powers.
For travel, those uncertainties matter because airports, seaports and intercity hubs sit at the intersection of federal jurisdiction over borders and aviation, and state and local responsibility for consumer rights, privacy and civil liberties in public spaces.
State AI Laws Collide With Travel’s Data-Heavy Reality
Over the past two years, several US states have adopted or advanced AI and automated decision-making laws directed at consumer protection, transparency and anti-discrimination. Colorado's Artificial Intelligence Act, along with measures in states such as California and Connecticut, places duties on companies that deploy high-risk AI systems affecting individuals' access to housing, employment, credit and other essential services.
Although these statutes are not written specifically for tourism or aviation, they reach deeply into the travel economy. Airlines, hotel groups, online travel agencies and car rental platforms increasingly rely on recommendation engines and scoring models to set fares, allocate upgrades, target marketing and evaluate fraud risk. Where these systems fall within state definitions of high-risk or consequential AI, travel businesses must map, document and explain how automated tools affect customers.
Trump’s framework is designed to blunt that trend. Policy papers and legal briefings indicate that the administration views state-level obligations as a drag on innovation and a risk to US leadership in AI-intensive sectors, from finance to logistics. If a uniform federal framework ultimately narrows or overrides those state rules, many travel companies would see immediate compliance relief, especially those operating across multiple jurisdictions.
At the same time, a sharp curbing of state authority could weaken one of the few levers travelers currently have for asserting rights over algorithmic decisions that affect pricing, loyalty benefits or dispute resolution. That tension between smoother national compliance for industry and diminished local safeguards for consumers sits at the center of the travel sector’s stake in the new AI landscape.
The Missing Piece: Biometrics and Airport AI
While the national AI framework devotes extensive attention to innovation and national security, it has noticeably less to say about one of the most visible uses of AI for travelers: biometric identity checks and smart surveillance in airports and at land borders. Federal agencies such as the Transportation Security Administration and Customs and Border Protection have been piloting and expanding facial recognition and other biometric systems for years, backed by separate statutory and policy mandates.
Those deployments already raise questions that fall between federal and state authority. Airport terminals are often owned by city or state entities, while security and border control are federal responsibilities. Biometric boarding gates operated in partnership with airlines and technology vendors lean on both government data and private infrastructure. Privacy advocates have argued that state rules on biometric data collection and transparency should apply within terminals, while industry groups contend that federal aviation security policy should prevail.
Trump’s AI framework only lightly touches these frictions. Publicly available summaries focus on encouraging agencies to modernize identity verification, reduce bias and improve accuracy in high-stakes applications such as border control, but there is little new guidance on data retention limits, passenger choice, or how commercial partners in travel should be governed when they embed government-grade biometrics into their own processes.
As passenger volumes continue to rebound and wait times remain politically sensitive, airports and airlines face pressure to expand AI-powered identity and risk assessment tools. Without clearer rules that bridge federal objectives and state-level privacy norms, travel hubs risk operating under overlapping obligations that satisfy neither civil liberties concerns nor industry demands for predictability.
Airlines, Platforms and the Patchwork Problem
Airlines and global distribution platforms are among the travel businesses most sensitive to the AI regulatory balance. Large carriers are experimenting with generative AI for customer service, dynamic pricing and disruption management, and are testing predictive models for aircraft maintenance and crew planning. Online travel agencies and metasearch sites rely on ranking algorithms that can materially shape which destinations and suppliers travelers see first.
Industry-facing commentary on Trump’s AI order suggests that many technology and financial services firms welcome the prospect of a single federal framework, arguing that complying with dozens of slightly different state rules could slow deployment and favor only the largest enterprises. For travel, a single set of baseline standards could simplify product design, especially for tools that operate nationally but are developed and hosted across multiple states.
However, that uniformity may come with tradeoffs. If the national framework reflects a relatively light-touch approach, travel and hospitality companies may face fewer obligations to audit AI systems for disparate impacts on protected groups, or to provide individualized explanations when automated tools drive adverse outcomes such as denied boarding, downgraded refunds or flagged fraud checks. Some state laws in development seek exactly those kinds of safeguards.
Travel brands that operate internationally must also reconcile US policy with stricter regimes elsewhere, such as the European Union’s AI Act and existing data protection rules. Even if Trump’s framework succeeds in preempting some US state laws, global carriers and hotel chains will likely continue building to the highest regulatory bar, leaving smaller domestic players more exposed to the exact patchwork problem the administration says it wants to solve.
What Travelers Should Watch Next
The next phase of Trump’s AI push is expected to unfold through agency rulemaking, federal grant conditions and potential legislation that would codify elements of the national framework. Legal observers anticipate challenges from states that view the effort as an overreach, with disputes over preemption and the scope of federal authority likely to reach higher courts.
For travelers, the outcomes will be felt less in abstract doctrines and more in the everyday frictions of planning and taking a trip. Data used to train airline and hotel algorithms, the biometric checks embedded in airport journeys, and the transparency available when an automated system makes a consequential call on a ticket or refund will all be shaped by how federal and state powers are eventually balanced.
Travel companies that depend heavily on AI may favor the clarity of a single national rulebook, but they will still need to navigate local expectations on privacy and fairness, as well as international norms in cross-border markets. In that sense, the greatest pressure point for travel is not whether Washington or the states win the power struggle, but whether any resulting framework offers practical, credible safeguards for the people whose journeys AI is increasingly managing behind the scenes.