February 29, 2026
The Gulf conflict tested assumptions about data sovereignty, infrastructure redundancy, and team availability that most security programs had never verified. What broke, and what didn't.
Not a leap year
I arrived in Abu Dhabi in the early hours of February 28th. The city was as clean as ever, organised, safe. With my luggage, I joined the queue for a taxi and, when it was my turn, asked the driver if it was alright to go to Dubai. He said yes, no problem. I double checked. He confirmed.
I had just come off a seven-hour flight, with the usual three hours waiting at departure and another hour on arrival, so with a ninety-minute drive to Dubai ahead of me, I was looking forward to checking in, having a quick nap, and resuming normal civilisation. It was fair to say I was tired.
The sentiment, unfortunately, seemed mutual.
Every so often the driver drifted, and cars on the motorway honked as he wandered between lanes. I asked if he needed a break. “No sir, I am fine,” he said, while the car edged right again. His arm was perched on the armrest in a way that looked less relaxed than strategic, almost as if he was physically reminding himself to stay awake. I suggested we put some music on. I kept him talking. Here we were: two tired individuals, one car, one motorway, and a fairly mutual interest in making it to Dubai in one piece.
By the time we entered Dubai it was unmistakable. The Marina just has its flair, and that meant I was twenty-odd minutes from my residence. It still was not daylight, so the city was illuminated in that way Dubai tends to be: polished, excessive, slightly theatrical. I enjoyed the view. I had missed Dubai.
When I arrived, I thanked the driver, urged him to get some rest, and checked in with the host. I was exhausted and, like any reasonable person at that hour, excited to sleep. I walked into the room, took a breath, and immediately thought: is that gas?
I messaged the host. He came up with housekeeping. They insisted it was air freshener. I disagreed. After some back and forth, the host gave me another room. It was much better, and, frankly, more like the room size I had paid for in the first place. At that point I thought, well, fine. That is probably the most inconvenient thing that can happen.
In Dubai, hospitality is premium, safety is part of the offering, and luxury is abundant. Even problems tend to arrive with a solution.
My mentee asked for a video, so I showed him the view from the balcony: the water, boats drifting past, the Burj Khalifa to my right, the SLS tower to my left.
Then I fell asleep.
I woke up to a message from that same mentee asking if I had seen the news.
I had not. I do not tend to follow world news; I scope it fairly strictly to cybersecurity. But this, it seemed, had the decency to affect me directly. Overnight, US and Israeli forces had struck Iranian nuclear and military infrastructure. Nobody seemed entirely sure what would happen next, but the mood around me in the Gulf was oddly nonchalant. Life was still proceeding as normal. We assumed we were safe. We assumed this would pass by the weekend. The region, on balance, did not appear to be preparing for apocalypse.
So I did what one does in Dubai when one believes the world is not ending: I went out for coffee with a friend, came back, and fell asleep by 11pm.
Then, just after midnight, my phone went off.
The sound of that siren is difficult to describe, but it did not sound procedural. It sounded urgent in the way one imagines these things are supposed to sound, which is unsettling when you have gone your whole life without hearing one for real. The warning told us to stay away from the windows due to an incoming missile threat.
My entire apartment was windows.
And I was near the top of the building.
So I went downstairs immediately, to the bottom, where people were already gathering indoors. This was clearly the first time many of us in that room had dealt with anything like that warning. No one really prepares for a ballistic missile. Some people become practical. Others panic. I saw a group of guys who, by all normal metrics, looked easy-going enough; one of them was now having a full-blown panic attack.
His room was on the first floor, but they had already decided they were not sleeping there that night. Their plan was to go to the underground parking and sleep in their car instead, which, under the circumstances, had the ring of improvised doctrine. He did not feel confident enough to take the lift alone, and security were blocking access to the stairs, so I offered to go in the lift with him.
He told me I was the most nonchalant person he had ever met.
I do not think that was true. I think there are moments where panic feels like a luxury, and practicality takes over because it has to.
Anyway, once they got to where they were going and seemed alright to continue about their way, I went back to my apartment, found a corner away from the glass, and slept on the hardwood floor with one eye open.
Then, in what I still consider a perfectly reasonable internal response to the situation, I told myself:
Ah.
Must be a leap year.
“Plans are worthless, but planning is everything.” — Dwight D. Eisenhower
In many parts of the world, leap years are said to carry bad luck. 2026 isn’t one, but February’s last day didn’t check the calendar.
Now imagine you’re not one person in a serviced apartment. You’re a CISO, and instead of one balcony full of windows, you’re responsible for an organisation’s entire security posture: the infrastructure, the vendors, the staff, the compliance trail, the risk model that’s supposed to account for all of it. None of it has been tested either. The BCP says “failover to secondary region.” The runbook references a team lead who left six months ago. The contact list has three numbers. Two go to voicemail. One is disconnected.
The plan existed. Nobody in the building could execute it.
February 29
Iranian retaliation reached across all six GCC states. The UAE absorbed 174 ballistic missiles, 689 drones, and 8 cruise missiles. Air defence intercepted the majority, but not all.
An AWS data centre in Bahrain took a direct hit. Banking services went dark across the region. The Strait of Hormuz was effectively closed. Twenty million barrels per day of global oil transit, shut down. Stock exchanges suspended trading. The DIFC facade in Dubai sustained blast damage.
Those are the facts. The question that matters is different: which of your assumptions just got tested?
What your last board briefing assumed
Five assumptions that looked solid on paper. Each one failed, and each failure made the next one worse.
“Your infrastructure is geographically redundant”
Sovereign data mandates pushed workloads into single region deployments, compliance teams signed off because the architecture kept data within jurisdiction, and the DR plan referenced a failover process that had been documented once and tested never. That process assumed the team who wrote it would still be around to run it, that the vendor who architected it would still be under contract, and that the secondary region could handle production load even though nobody had ever put production load on it. A single region is a single point of failure.
While AWS supports deployment across multiple regions, whether your application architecture, your data replication, your DNS configuration, and the human being who owns the cutover can all function simultaneously when the primary region takes a ballistic missile is not a question you answer by reading the provider’s documentation. You answer it by testing. Most organisations hadn’t.
The failover that existed on paper required two things to work: available vendors and available staff. On February 29, it had neither.
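For teams that want to make that testing concrete, here is a minimal sketch of what a scheduled failover drill could look like, assuming hypothetical endpoints that stand in for your own primary and secondary regions: write a timestamped marker to the primary, time how long it takes to appear in the secondary, and confirm the secondary actually serves traffic. It is a sketch, not a reference implementation; the point is that the drill runs on a schedule and produces numbers you can compare against your stated RPO and RTO.

```python
"""Minimal failover drill sketch. The endpoints below are placeholders;
point them at your own primary/secondary stack."""
import json
import time
import urllib.request
from datetime import datetime, timezone

# Hypothetical endpoints -- these do not exist; substitute your own.
PRIMARY_WRITE_URL = "https://primary.example.internal/drill-marker"
SECONDARY_READ_URL = "https://secondary.example.internal/drill-marker"
SECONDARY_HEALTH_URL = "https://secondary.example.internal/healthz"


def post_marker() -> str:
    """Write a timestamped marker to the primary region."""
    marker = datetime.now(timezone.utc).isoformat()
    body = json.dumps({"marker": marker}).encode()
    req = urllib.request.Request(
        PRIMARY_WRITE_URL, data=body, method="POST",
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)
    return marker


def measure_replication_lag(marker: str, timeout_s: int = 300) -> float:
    """Poll the secondary until the marker appears; return the lag in seconds."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        with urllib.request.urlopen(SECONDARY_READ_URL, timeout=10) as resp:
            if json.load(resp).get("marker") == marker:
                return time.monotonic() - start
        time.sleep(5)
    raise RuntimeError("Marker never replicated: the RPO assumption just failed the drill.")


def secondary_is_healthy() -> bool:
    """Confirm the secondary serves traffic, not just that it exists."""
    with urllib.request.urlopen(SECONDARY_HEALTH_URL, timeout=10) as resp:
        return resp.status == 200


if __name__ == "__main__":
    lag = measure_replication_lag(post_marker())
    print(f"replication lag: {lag:.1f}s, secondary healthy: {secondary_is_healthy()}")
```

Run it from a scheduler, quarterly at minimum, and treat every failure as a finding rather than a footnote.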
“The vendors will still be there”
Hormuz closed and took more than oil with it. Insurance carriers activated war exclusion clauses, vendors with Gulf operations paused contracts, shipping routes for replacement hardware rerouted around the Cape, adding three weeks to delivery, and incident response firms couldn’t get staff into the country because commercial flights were suspended and visa processing had stalled.
Vendor risk models are built for cyber incidents: a vendor gets breached, their SaaS goes down, their credentials are compromised. Nobody modelled the scenario where the vendor’s regional office is physically inaccessible, their insurance won’t cover the engagement, their staff can’t get entry visas, and the shipping lane they’d use to send you a replacement appliance no longer exists as a viable route. Supply chain risk that only considers digital disruption is half a model, and half a model is worse than no model because it gives you confidence you haven’t earned.
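One way to turn the other half into something reviewable is to record the physical and geopolitical failure modes next to the digital ones. A rough sketch follows; the field names are illustrative, not any standard schema.

```python
"""Sketch of a vendor continuity entry that covers more than digital failure modes.
Field names are illustrative, not a standard."""
from dataclasses import dataclass


@dataclass
class VendorContinuityEntry:
    name: str
    service: str
    # The usual cyber dimensions
    breach_impact: str                 # what a compromise of this vendor exposes
    saas_outage_workaround: str        # what you do if their platform is down
    # The dimensions February 29 tested
    regional_office_dependency: bool   # do they need physical access to your region?
    war_exclusion_in_their_insurance: bool
    staff_entry_requirements: str      # visas, clearances, flight availability
    hardware_shipping_route: str       # and the alternative if that route closes
    alternative_vendor: str = ""
    last_reviewed: str = ""            # date the answers above were verified, not assumed


def untested_entries(register: list[VendorContinuityEntry]) -> list[str]:
    """Vendors whose continuity answers have never been verified are findings."""
    return [v.name for v in register if not v.last_reviewed]
```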
When vendors can’t reach you and you can’t fly contractors in, the only resource left is your own people.
“Your team will be available”
I watched someone have a panic attack in a hotel lobby at midnight because a missile siren went off and he couldn’t figure out whether the lift or the stairs were safer. Security were blocking the stairs. He didn’t trust the lift. The documented evacuation procedure didn’t account for someone being too scared to move.
Scale that up. Half of prospective hires in the region paused relocation. Existing staff made personal safety decisions that, correctly, took priority over SLA commitments. Incident response contractors couldn’t get into the country. Flights were grounded.
Key person dependency sits on every risk register. The mitigation column says “document the runbook” and everyone signs off because a wiki page exists somewhere, except that wiki page was written by someone who’s since left, references procedures that assume familiarity with the environment, and has never once been executed by the people who’d actually need to run it when the author isn’t there. When three critical staff are simultaneously unreachable, a runbook nobody remaining can execute is just a document. It’s not a process. It’s a memory.
With no failover infrastructure, no vendor support, and no available personnel, the first thing to stop was evidence generation: the manual screenshots, spreadsheet updates, and email confirmations that regulators require as proof of continuous compliance. And the regulators didn’t stop requiring them.
“Regulators will grant conflict exemptions”
Saudi Arabia’s NCA ECC framework doesn’t have a conflict exemption. Neither does the UAE IAS. The compliance clock doesn’t pause because your team is sheltering from missile alerts.
It kept running through every week of the crisis, through the strikes, through the flight suspensions, through the period when security teams were focused on physical safety rather than producing artifacts. The organisations that relied on manual evidence collection now have gaps in their compliance trail that map precisely to the days when nobody was sitting at a desk capturing screenshots. The framework doesn’t have a line item for “armed conflict.” The gap just exists, and it’ll be there when the auditor asks for it.
Every one of these failures (infrastructure, vendors, personnel, evidence) was supposed to have been accounted for in the risk model. None of them were modelled as correlated.
“You’ve modelled for this”
“Data centre struck by ballistic missile” wasn’t in most risk registers. This isn’t a failure of imagination; it’s a failure of calibration. S&P affirmed the UAE’s AA/A-1+ sovereign rating throughout the crisis, which tells you something about national resilience. It tells you nothing about yours.
National infrastructure absorbed the strikes and recovered. Individual organisations’ recovery depended entirely on whether they’d tested their own assumptions. The models those assumptions fed into treated infrastructure availability, vendor availability, personnel availability, and evidence generation as independent variables: each with its own probability distribution, each assessed in isolation. The entire lesson of February 29 is that they all failed at the same time, because the same event caused all of them. Your risk model had five inputs. It needed one correlation. If your risk quantification doesn’t account for that, it’s not a model. It’s a spreadsheet with confidence intervals.
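A toy simulation makes the gap visible. The probabilities below are invented for illustration, and the structure is deliberately crude; the only thing it demonstrates is that when one regional event can cause all four failures, the chance of losing everything at once is orders of magnitude higher than the independent model suggests.

```python
"""Toy illustration of why correlation matters. The probabilities are made up;
the point is the gap between the independent and correlated estimates."""
import random

P_REGIONAL_EVENT = 0.02   # chance of a region-wide disruption in a given year (assumed)
P_FAIL_ALONE = 0.05       # chance each input fails on its own (assumed)
INPUTS = ["infrastructure", "vendors", "personnel", "evidence"]
TRIALS = 100_000


def simulate(correlated: bool) -> float:
    """Return the simulated probability that every input fails in the same year."""
    all_failed = 0
    for _ in range(TRIALS):
        regional_event = correlated and random.random() < P_REGIONAL_EVENT
        failures = [regional_event or random.random() < P_FAIL_ALONE for _ in INPUTS]
        all_failed += all(failures)
    return all_failed / TRIALS


if __name__ == "__main__":
    print(f"treated as independent: {simulate(correlated=False):.6f}")
    print(f"with one shared cause:  {simulate(correlated=True):.6f}")
```

With these assumed numbers, the independent model puts the joint failure somewhere around six in a million; add one shared cause and it lands near two in a hundred.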
الصديق وقت الضيق
There’s an Arabic proverb: الصديق وقت الضيق (a friend is known in times of hardship). Systems, like friends, reveal what they actually are under pressure. The organisations that kept operating through the crisis shared patterns. None of these are surprising. All of them are hard to sustain without discipline.
The tested plan won
Organisations that executed DR failover quarterly recovered. Not annually. Not “when we get around to it.” Quarterly. A BCP that’s been tested beats one that’s 200 pages long, because a documented plan that’s never been executed is a hypothesis and a tested plan is evidence.
You wouldn’t mark a vulnerability “remediated” without rescanning it. Why would you mark a BCP “current” without executing it?
When the expert leaves
Organisations with repeatable processes encoded in systems weathered personnel disruption. Defined steps. Assigned owners. Automated handoffs. Losing a team member was a staffing problem. For the organisations where the process lived in someone’s expertise rather than in a system, it was an operational crisis.
Can the process run without any specific person present? If the answer is “it depends who,” that’s your finding.
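One hedged sketch of what “encoded in a system” can mean in practice: steps as data, owners as roles rather than names, and an explicit record of when each step was last exercised. The structure below is illustrative, not a framework.

```python
"""Sketch of a runbook encoded as data rather than as one person's memory.
Structure and field names are illustrative."""
from dataclasses import dataclass


@dataclass
class RunbookStep:
    action: str
    owner_role: str          # a role, not a named individual
    automated: bool          # can a system perform this without a human?
    last_exercised: str      # date of the last execution in an exercise; empty means never


FAILOVER_RUNBOOK = [
    RunbookStep("Promote secondary database", "platform-oncall", True, ""),
    RunbookStep("Repoint DNS to secondary region", "platform-oncall", True, ""),
    RunbookStep("Notify regulator within 4 hours", "compliance-oncall", False, ""),
]


def findings(runbook: list[RunbookStep]) -> list[str]:
    """Steps that have never been exercised are hypotheses, not process."""
    return [step.action for step in runbook if not step.last_exercised]
```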
The machines didn’t stop
Automated evidence generation kept running through the crisis. Log exports, configuration snapshots, control validation records. Systems don’t care about missile sirens. They produce artifacts on schedule. The compliance trail from automated collection is continuous.
Manual evidence collection stopped. The gap maps precisely to the period when teams were focused on staying alive. Auditors don’t accept “we were under missile attack” as an explanation for missing evidence. They shouldn’t have to. The system should produce the evidence whether or not a human is watching.
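It doesn’t have to be elaborate. A scheduled job that snapshots a control’s state, hashes it, and writes it to a trail already closes most of the gap; the sketch below uses a placeholder collector standing in for whatever your real sources are.

```python
"""Minimal evidence-collection sketch: snapshot a control's state on a schedule,
hash it, and append it to a trail. The collector is a placeholder for your own sources."""
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

EVIDENCE_DIR = Path("evidence")


def collect_firewall_config() -> dict:
    """Placeholder: pull the current firewall ruleset from your own API."""
    return {"default_inbound": "deny", "rule_count": 42}


def record_evidence(control_id: str, payload: dict) -> Path:
    """Write a timestamped, hashed snapshot so the trail exists whether or not a human is at a desk."""
    EVIDENCE_DIR.mkdir(exist_ok=True)
    captured_at = datetime.now(timezone.utc)
    body = json.dumps(payload, sort_keys=True)
    record = {
        "control_id": control_id,
        "captured_at": captured_at.isoformat(),
        "sha256": hashlib.sha256(body.encode()).hexdigest(),
        "payload": payload,
    }
    out = EVIDENCE_DIR / f"{control_id}-{captured_at.strftime('%Y%m%dT%H%M%SZ')}.json"
    out.write_text(json.dumps(record, indent=2))
    return out


if __name__ == "__main__":
    # Run this from a scheduler (cron, CI, etc.), not from someone's laptop.
    record_evidence("FW-01", collect_firewall_config())
```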
1.5 million barrels against 20 million
The Abu Dhabi Crude Oil Pipeline exists as a strategic bypass for Hormuz. It works. But its capacity is roughly 1.5 million barrels per day against Hormuz’s 20 million. The backup exists and it can’t match primary capacity under full load.
The same pattern shows up in DR environments. The secondary region exists, it passed a checkbox review, but it was never tested at production load. When failover actually happened, the environment couldn’t handle the traffic, the data was hours stale, and the team that was supposed to manage the cutover was unreachable. A backup that can’t handle the real load isn’t a backup. It’s a checkbox.
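The arithmetic deserves to be explicit, for pipelines and DR regions alike. A trivial sketch, with the numbers from this section plus an invented DR example:

```python
"""Toy capacity check: a secondary that cannot absorb the production peak is a checkbox."""

def coverage_ratio(secondary_capacity: float, primary_peak_load: float) -> float:
    """Fraction of the primary's peak load the backup can actually carry."""
    return secondary_capacity / primary_peak_load


# ADCOP against Hormuz: 1.5 million bpd of bypass against 20 million bpd of flow
print(f"pipeline bypass: {coverage_ratio(1.5e6, 20e6):.1%} of primary load")    # 7.5%

# An invented DR region sized for 3,000 req/s against a 40,000 req/s production peak
print(f"secondary region: {coverage_ratio(3_000, 40_000):.1%} of primary load")  # 7.5%
```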
Five questions for Monday morning
These aren’t recommendations. They’re questions. If you can’t answer them with evidence, that’s the finding.
Infrastructure. When did you last execute a failover? Not plan one. Not document one. Not schedule one for next quarter. Actually execute it. What broke?
People. If your three most critical security staff left within 30 days, which processes stop completely? Which ones degrade? Which ones keep running without intervention?
Evidence. Can you produce compliance evidence for a period when your team was unavailable? Not “we’ll reconstruct it after.” Can your systems produce it autonomously?
Insurance. Pull your cyber policy. Read the war exclusion clause. Does the current Gulf situation trigger it? If your broker says “probably not,” ask for it in writing.
Vendors. Which critical security vendors have Gulf operations? What are their continuity plans? Have you read them, or just confirmed they exist?
The planning, not the plan
The Gulf conflict didn’t create weaknesses in security programs. It revealed them. Every failure pattern here exists at smaller scales in daily operations: the vulnerability marked “remediated” but never rescanned, the control “implemented” but never tested, the DR plan “current” but not executed in 18 months.
Documentation is not evidence. Verification is.
The organisations that kept operating through February and March weren’t the ones with the best plans. They were the ones that had done the planning: the testing, the exercising, the uncomfortable discovery that their assumptions were wrong, followed by the work to fix them before the assumptions were tested for real.
I went from a balcony overlooking the Burj Khalifa to a hardwood floor in a dark corner, away from the windows, in twelve hours. Your security program’s distance between “ready on paper” and “ready under pressure” might be shorter than mine was. But you won’t know until it’s tested.
February 29, 2026 isn’t on any calendar. For those in the Gulf, it hasn’t ended yet. We hope it does soon, and that everyone is safe.
If your DR and security program assumptions haven’t been tested recently, book a resilience architecture review.
References
1. NCA Essential Cybersecurity Controls (ECC), National Cybersecurity Authority, Saudi Arabia
2. UAE Information Assurance Standards, Telecommunications and Digital Government Regulatory Authority
3. Abu Dhabi Crude Oil Pipeline (ADCOP), ADNOC
4. Strait of Hormuz shipping and oil flow data, US Energy Information Administration