The Booking.com breach has done more than expose millions of travelers’ data — it has exposed something far more uncomfortable: the gaps that boards are still not governing.
Here’s what that looks like in the Booking.com context.
The Booking.com Breach and the Risk Document That Doesn’t Exist
After a breach like this, someone will eventually ask to see the board-level risk assessment that mapped how reservation data and in-app messaging could be used to run convincing off-platform scams against customers.
Not a generic “phishing risk” entry on a risk register. A specific, documented analysis that said: “If an attacker gains access to booking details, dates, hotel names, and prior message history, they can impersonate us convincingly enough to trick guests into paying twice. Here’s the business exposure window. Here’s what we’re doing about it.”
In most cases, that document doesn’t exist.
What exists instead is a cyber risk register with a line item called “data breach” or “unauthorised access,” rated medium or high, with controls listed as “firewalls, monitoring, incident response plan.” The board approved it. Management reported against it quarterly. Everyone thought the risk was being governed.
But nobody wrote down the specific scenario where criminals use your own data to defraud your customers in your name.
That’s the gap. And that’s where deniability ends.
What Should Have Been in the Minutes
I’ve seen this pattern across sectors. The breach happens. The forensics come back. The board asks, “Did we know this could happen?”
The honest answer is usually: sort of.
Security researchers had been documenting reservation hijacking campaigns targeting platforms like Booking.com since at least 2023. Between June 2023 and September 2024, the UK’s Action Fraud received 532 reports of Booking.com scams, with victims losing £370,000. In 2018, criminals accessed data belonging to over 4,000 Booking.com customers. The company was fined €475,000 in 2021 for reporting that breach 22 days late to the Dutch privacy regulator.
The pattern was visible. The question is: what did the board do once they knew?
What should have been written down — and usually wasn’t — is a board minute that says something like:
“Management presented evidence of reservation hijacking campaigns targeting similar platforms. The board discussed whether our current controls adequately address the specific risk of off-platform fraud using our booking data and messaging channels. The board requested a detailed assessment of our time-to-warn capability and customer communication protocols in the event of a breach involving reservation context theft. This assessment will be presented at the next meeting.”
That minute creates accountability. It shows the board saw the risk, named it specifically, and required management to address it with evidence, not assurance.
When that minute doesn’t exist, the board’s position after a breach becomes much harder to defend. You’re left arguing, “We didn’t think that part was ours to govern.” That argument collapses quickly once customers have lost money and regulators start asking questions.
Before the Booking.com Breach: The Board Approval That Looked Fine at the Time
The other document that matters is the one where the board approved the product roadmap, the UX design, or the customer communication strategy that prioritised frictionless conversion over explicit scam warnings.
I’ve been in rooms where this happens. Product presents a clean, trust-heavy interface. In-app chat between guests and properties. One-click payment links. Minimal warning text so the screens stay visually appealing. The board sees the conversion metrics, the customer satisfaction scores, the competitive positioning. They approve it.
What doesn’t get written down is the trade-off.
Nobody minuted: “The board acknowledges that this design trains customers to trust payment-related messages in chat and email, which increases the effectiveness of reservation hijacking scams if our data is compromised. The board accepts this risk in favour of conversion optimisation.”
That’s the sentence that should have been there. Because that’s the decision the board actually made, whether they realised it or not.
When you approve a frictionless, trust-heavy customer experience without requiring compensating controls — explicit warnings, technical blocks on external payment links in chat, rapid customer alerts if data is breached — you’re making a governance choice. You’re saying the business benefit outweighs the customer harm risk.
That choice is defensible if it’s documented, deliberate, and accompanied by mitigations. It’s indefensible if it’s invisible.
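One of the compensating controls mentioned above, a technical block on off-platform payment requests in chat, can be sketched in a few lines. This is a hypothetical illustration only: the domain list, keywords, and function names are my assumptions, not any platform's actual implementation.

```python
import re

# Hypothetical chat-layer control: flag messages that combine payment
# language with a link to an external domain -- the exact pattern that
# reservation-hijacking scams rely on. Domains and keywords are
# illustrative placeholders.
TRUSTED_DOMAINS = {"example-platform.com"}  # the platform's own domains

URL_PATTERN = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)
PAYMENT_KEYWORDS = ("card", "payment", "verify", "re-enter", "billing")

def flag_off_platform_payment_request(message: str) -> bool:
    """Return True if a chat message mixes payment language with a
    link pointing outside the platform's own domains."""
    has_payment_language = any(k in message.lower() for k in PAYMENT_KEYWORDS)
    external_links = [
        host for host in URL_PATTERN.findall(message)
        if host.lower() not in TRUSTED_DOMAINS
    ]
    return has_payment_language and bool(external_links)
```

A control like this is crude, but its existence (or absence) is exactly the kind of artefact a board-level risk acceptance document should reference.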
The Escalation That Never Reached the Board
The third document that often doesn’t exist is the one that shows a CISO or risk lead raised concerns about off-platform harm, scam vectors, or time-to-warn, and those concerns were deprioritised without the board ever seeing them.
Here’s how that usually plays out.
The security lead says in a product meeting: “If we ship this flow as designed, criminals will be able to copy it almost perfectly using our own booking data. I want a hard rule in the UI: we will never ask customers to re-enter card details via a link sent in chat or email. And I want that warning visible in every confirmation.”
Product and marketing push back. “That adds friction. It damages the brand experience. We’ll handle scam education in a separate campaign.”
The concern is noted. The roadmap ships without the warning. The board never hears about the tension.
What should have been written down — and escalated — is a formal risk acceptance document that says:
“The CISO has identified that the current chat and payment flow design increases the risk of convincing off-platform fraud if reservation data is compromised. Product and commercial teams have declined to implement the recommended warnings and technical controls, citing impact on conversion rates. This risk acceptance requires board approval.”
When that escalation doesn’t happen, the board loses the ability to say, “We were never told.” Because the truth is, someone was told. It just didn’t reach the people who carry the accountability.
Why the Booking.com Breach Matters Now for Board Governance
The regulatory and legal environment is tightening around exactly this gap.
In 2022, a Georgia judge refused to dismiss a case alleging that Equifax board directors “had personal knowledge of the cyber risk and vulnerabilities” and “misrepresented the strength of Equifax’s security technology.” A California judge approved the first settlement against Yahoo board directors after a cybersecurity breach, signalling a trend towards fiduciary liability.
The Delaware Court of Chancery’s Caremark decision made it clear: directors can be held personally liable for failing to “appropriately monitor and supervise the enterprise.” Recent shareholder derivative lawsuits increasingly focus on “failures surrounding duty of oversight.”
FTC Chair Lina Khan and Commissioner Alvaro Bedoya stated that “holding individual executives accountable” ensures firms and officers are better incentivised to meet their legal obligations. In 2022, Uber’s former chief security officer was criminally convicted in connection with a cybersecurity incident — believed to be the first time a U.S. company executive has been criminally prosecuted over a cyber breach.
What this means in practice is that “we didn’t think that part was ours to govern” is no longer a defensible position. Directors’ duties are framed around care, diligence, and foreseeable harm, not just compliance with the last regulatory update.
If your systems and data create a realistic pathway to off-platform customer loss, you’re expected to ask how that’s being managed — and to document that you asked.
What Needs to Be Written Down Going Forward
After a Booking.com-style breach, boards that take this seriously make three specific documentation changes.
First, they require a standing agenda item on customer harm scenarios.
Not generic cyber risk updates. A quarterly review that asks: “Given our data, our channels, and our customer interactions, what are the top three ways criminals could use our environment to defraud our customers? What controls exist? What’s the time-to-warn if those controls fail?”
That question gets documented. The answers get documented. If management can’t answer cleanly, that gets documented too.
Second, they create explicit escalation rules for security-versus-growth trade-offs.
Any time a security or risk control is deprioritised because of commercial impact, conversion rates, or UX concerns, it must be escalated to the board or executive committee with a formal risk acceptance document. No more burying discomfort in product steering groups.
The document states: what was proposed, why it was declined, what the residual risk is, and who approved the trade-off. That creates a clear line of accountability.
Third, they add time-to-warn as a board-level metric.
Most cyber dashboards still focus on time-to-detect and time-to-recover. Those are about your systems. Time-to-warn is about your customers.
The metric I’d add is brutally simple: “Average time from confirming high-risk data misuse to first explicit warning reaching affected customers.”
Not time to draft the comms. Not time to notify the regulator. The clock starts when you know attackers can use your data to impersonate you, and stops when customers receive a clear, actionable warning.
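That definition translates directly into a measurable quantity. The sketch below assumes hypothetical timestamp fields (`confirmed_misuse_at`, `first_warning_at`); any real incident register would have its own schema.

```python
from datetime import datetime

def time_to_warn_hours(confirmed_misuse_at: datetime,
                       first_customer_warning_at: datetime) -> float:
    """Hours from confirming high-risk data misuse to the first
    explicit warning reaching affected customers."""
    delta = first_customer_warning_at - confirmed_misuse_at
    return delta.total_seconds() / 3600

def average_time_to_warn(incidents: list[dict]) -> float:
    """Board-level rollup across all incidents in a reporting period.
    Each incident dict carries the two timestamps defined above."""
    hours = [
        time_to_warn_hours(i["confirmed_misuse_at"], i["first_warning_at"])
        for i in incidents
    ]
    return sum(hours) / len(hours)
```

The point of putting this on a dashboard is precisely that it cannot be gamed by fast internal detection: the clock only stops when customers are told.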
GDPR mandates notification to supervisory authorities within 72 hours of becoming aware of a breach that risks individuals’ rights and freedoms. Meta was fined €1.2 billion in 2023 for GDPR violations involving transfers of European users’ data to the United States. British Airways was fined £20 million after a breach exposing 400,000 customers’ payment details. GDPR fines can reach 4% of global annual revenue or €20 million, whichever is higher.
The U.S. SEC now requires public companies to disclose material cybersecurity incidents within four business days of determining the incident is material. U.S. banking regulators require banks to notify “as soon as possible and no later than 36 hours.”
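Those regulatory clocks are mechanical enough to compute. A minimal sketch, using only the two deadlines cited above (and ignoring public holidays in the business-day calculation):

```python
from datetime import datetime, timedelta

def gdpr_notification_deadline(aware_at: datetime) -> datetime:
    """GDPR: notify the supervisory authority within 72 hours of
    becoming aware of a qualifying breach."""
    return aware_at + timedelta(hours=72)

def sec_disclosure_deadline(materiality_determined_at: datetime) -> datetime:
    """SEC rule: disclose within four business days of determining the
    incident is material. Weekends are skipped; holidays are not
    handled in this simplified sketch."""
    deadline = materiality_determined_at
    business_days = 0
    while business_days < 4:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # Monday-Friday
            business_days += 1
    return deadline
```

Note that both clocks start at awareness or determination, not at public disclosure — which is exactly why time-to-warn belongs on the same dashboard.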
When time-to-warn sits on the dashboard, communication becomes a primary control, not just reputation management. The board sees, in hours or days, how long customers were left exposed before being told what was happening.
That’s the number regulators and plaintiffs will compare against “without undue delay.” If you can’t explain why it took several days to warn customers while scammers were already active, timing becomes part of the harm.
The Governance Standard After Booking.com
What the Booking.com breach exposes is not a technical failure. It’s a governance failure that preceded the technical one.
The stolen data — names, email addresses, phone numbers, booking details, communication history — creates what security experts describe as a “gold mine” for fraudsters. Unlike typical breaches that expose passwords or financial data, this incident highlights a different kind of vulnerability: contextual data that enables highly targeted phishing attacks.
David Shipley, CEO of Beauceron Security, explained that reservation hijacking works because attackers “know you’re booking. They wait for it to get close to the date. They email you convincingly that your booking has been cancelled and you need to contact them immediately. That is stressful. Now we’re in panic mode. And that’s when we start to make mistakes.”
Critically, he noted that “the scammers who exploit this kind of data don’t always strike immediately” — meaning the business exposure window extends far beyond the technical containment date.
That’s the governance gap boards need to close. Not after the breach. Before it.
The documents I’ve described — the risk assessment that maps specific customer harm scenarios, the board minutes that show you saw the trade-offs and demanded evidence, the escalation rules that prevent security concerns from being buried, the time-to-warn metric that makes customer exposure visible — those are the artefacts of genuine governance.
When they exist, you can defend your decisions even if a breach occurs. When they don’t exist, you’re left explaining why something entirely predictable was never governed.
The question every board should be asking after Booking.com is not “Could this happen to us?” It’s “If it did, what would our minutes and papers show we knew, and when did we know it?”
If the honest answer is uncomfortable, the time to fix that is now. Before the breach. Before the regulator asks. Before the customers lose money and the headlines write themselves.
Because plausible deniability doesn’t survive contact with a timeline that shows the risk was foreseeable, the concerns were raised, and the governance gap was never closed.
That’s not bad luck. That’s a choice. And choices are accountable.
