The Short Version

  • The fine is the last cost, not the first. UnitedHealth Group reported $3.1 billion in total Change Healthcare incident costs across 2024. An HHS fine, if one materializes, will be a line item, not the headline.
  • Multiple deadlines run in parallel. A single incident can trigger GDPR’s 72-hour clock, NIS2’s 24-hour early warning, DORA’s 4-hour initial notification, HIPAA’s 60 days, Singapore’s three calendar days, California’s 30 days, and the SEC’s four business days for public companies, all at once, with different recipients and content.
  • Unpatched vulnerabilities are now a regulatory finding. The ICO, CNIL, and FTC have each penalized organizations specifically for failing to do what their own risk assessments or accepted security standards already required.
  • Acquisitions inherit breaches. Marriott discovered the Starwood breach two years after closing. The ICO held that cybersecurity due diligence is not a one-off acquisition step. M&A insurance now carves out pre-existing cyber liabilities.
  • Insurance has stopped being a backstop. Cyber policies require MFA, EDR, immutable backups, and patching commitments as conditions of coverage rather than as discounts. The absence of any of them at the time of an incident can produce a claim denial.
  • Personal exposure is expanding. The SEC charged a CISO personally in 2023. The case was ultimately dismissed in late 2025, but the precedent landed. Delaware courts have recognized cybersecurity as mission-critical for Caremark purposes, putting directors in scope for failure-of-oversight claims.


In February 2024, ransomware hit Change Healthcare, a subsidiary of UnitedHealth Group that processes roughly half of all medical claims in the United States. By the time the company finished notifying affected individuals, the count had reached 192.7 million people. Some received their breach notifications eleven months after the incident. Under HIPAA, organizations must notify affected individuals within sixty days of discovering a breach. Change Healthcare’s timeline was not sixty days. For many of the 192.7 million affected Americans, it was close to a year.

Change Healthcare is the largest healthcare data breach in US history by volume. It is also one of the most visible illustrations of a pattern that plays out at every scale of server compromise: the breach itself is one event. Everything that happens afterward, every notification deadline, every regulatory investigation, every enforcement decision, is a separate set of problems that run in parallel, move on their own timelines, and arrive whether the organization is ready for them or not.

On May 3, 2026, Optimed Medical Laboratories in Poland was breached through what investigators identified as a cPanel vulnerability. Patient health data, national identification numbers, and laboratory results were confirmed stolen. The company notified regulators and sent individual notifications to patients. The breach was one data point in a campaign that had already compromised military and government infrastructure in the Philippines and Laos through the same cPanel authentication bypass.

For every organization caught in that window, the same question applied the moment the compromise was confirmed: what does the law require now, how quickly, and what happens if we get it wrong?

The answers differ by jurisdiction, by the type of data involved, and by whether the breach stems from a failure that regulators will classify as a preventable one. On all three dimensions, the answers have become more expensive.

More expensive in ways that often become visible only from inside an incident. UnitedHealth Group reported $3.1 billion in total costs and business disruption attributable to the Change Healthcare incident across 2024. The figure encompasses direct response costs, provider advance payments, IT remediation, and lost operating income. The regulatory fine, if one materializes from the ongoing HHS investigation, will be a line item in that total, not the total itself.

The financial structure of a serious breach plays out in a sequence. Investigation costs arrive first. Regulatory clocks are running before answers are available. Notification logistics constitute a mass communications project most organizations have never run before. The fine, the litigation, and the compliance obligations arrive afterward and run for years. What follows is a breakdown of each layer, jurisdiction by jurisdiction, and what the enforcement record shows.

The 72-Hour Clock and What Starting It Means

The General Data Protection Regulation applies to any organization that processes personal data of people in the European Union, regardless of where the organization itself is located. A US company with European customers is subject to the GDPR for how it handles those customers’ data. A hosting provider that stores data on EU residents is operating under GDPR obligations whether its servers are in Frankfurt or Dallas. This is not a technicality that applies only to large enterprises: any organization running a customer database, an email list, or an e-commerce platform that includes EU residents has GDPR obligations.

Under Article 33 of the GDPR, a data controller (the organization that decides what personal data to collect and how to use it) must notify the relevant supervisory authority of a personal data breach within 72 hours of becoming aware of it. Each EU member state has its own supervisory authority, the government body responsible for enforcing data protection law: CNIL in France, the BfDI in Germany, the Garante in Italy, UODO in Poland, and so on. The United Kingdom, post-Brexit, has its own UK GDPR enforced by the Information Commissioner’s Office (ICO), substantively similar to the EU framework but a separate regulatory regime.

The phrase “becoming aware” carries more weight than it appears to. The European Data Protection Board’s Guidelines 9/2022 specify that an organization is “aware” when it has a “reasonable degree of certainty” that a security incident has occurred and has compromised personal data. The clock does not start at the moment of the breach; it starts when the organization reaches that threshold of certainty. But the distinction is less protective than it sounds: failing to investigate promptly enough to become aware is itself a compliance failure.

The 72-hour window is not designed to give organizations time to investigate fully. The EDPB explicitly acknowledges that investigations frequently take longer than 72 hours to complete. The expected response is an initial notification with whatever is known at that point: the nature of the breach, an approximate number of affected individuals, the likely consequences, and a contact for follow-up. Additional information follows as the investigation continues. Organizations are not excused from the 72-hour obligation by not yet knowing everything. The notification goes first; the investigation continues in parallel.
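The phased structure can be sketched as a minimal record of what the initial filing must contain under Article 33(3). The class and field names below are illustrative, not any regulator's submission schema; the point is that the initial notification ships with estimates while follow-ups accumulate as the investigation continues.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Article33Notification:
    """Illustrative sketch of a phased GDPR Article 33 notification."""
    # Known (or estimated) at awareness time; filed within 72 hours.
    aware_at: datetime
    nature_of_breach: str                # Art. 33(3)(a): what happened
    approx_data_subjects: Optional[int]  # Art. 33(3)(a): may be an estimate
    approx_records: Optional[int]        # Art. 33(3)(a): may be an estimate
    dpo_contact: str                     # Art. 33(3)(b): contact point
    likely_consequences: str             # Art. 33(3)(c)
    measures_taken: str                  # Art. 33(3)(d): taken or proposed
    # Supplied "in phases" as the investigation firms up the facts.
    followups: list = field(default_factory=list)

    @property
    def authority_deadline(self) -> datetime:
        # The 72-hour clock runs from awareness, not from the breach.
        return self.aware_at + timedelta(hours=72)
```

The useful property of modeling it this way is that the deadline is a function of awareness alone: none of the content fields, complete or not, moves it.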


On the 72-hour window after a compromise

For any organization that hosted customer data on infrastructure compromised via CVE-2026-41940, the “when did you become aware” question has sharp edges. The advisory was published April 28, 2026. If a server was exploited before patching, the moment incident response confirmed unauthorized access to systems containing personal data was the moment the clock started. On a cPanel server where the compromise may have occurred weeks earlier and the attacker may have had unrestricted access to all hosted accounts, confirming unauthorized access is not a five-minute exercise. It requires log analysis, forensic examination, and in many cases external incident response resources that take days to mobilize. The clock runs regardless.

When You Also Have to Tell the People Whose Data Was Taken

Article 33 requires notification to the supervisory authority. Article 34 of the GDPR imposes a separate and additional obligation: notification to the affected individuals themselves, “without undue delay,” when the breach “is likely to result in a high risk to the rights and freedoms of natural persons.”

High risk is not a subjective judgment call left entirely to the organization. The EDPB has established that certain categories of data essentially always meet this threshold. Health data, which Article 9 of the GDPR classifies as a special category, is one of them. A breach exposing health records is, by the structure of the regulation, a high-risk breach requiring individual notification. The organization cannot decide otherwise.

Individual notification under Article 34 must describe, in clear and plain language: the nature of the breach, the contact details of the data protection officer, the likely consequences, and the measures taken or proposed to address the breach. For a medical organization with potentially hundreds of thousands of patients, this means a mass communication exercise, at speed, in the middle of an active incident response, while regulatory engagement is simultaneously running. Organizations that have never designed or tested this workflow discover its complexity at the worst possible time.


On the Article 34 encryption exemption

The GDPR provides one narrow exemption to individual notification: if the organization has implemented technical measures that render the personal data unintelligible to unauthorized persons, such as encryption applied in a way that renders the keys inaccessible to the attacker, individual notification may not be required. The exemption is narrow. A cPanel authentication bypass that gives an attacker full WHM-level access does not produce this outcome. Encryption at rest does not protect data from an attacker who has the same access as the system administrator.

HIPAA, the US federal law governing healthcare data, operates on a parallel structure. A breach affecting 500 or more individuals requires notification to HHS and to affected individuals within 60 days of discovery. The 60-day window is longer than GDPR’s 72 hours but carries the same premise: notification is not contingent on completing the full investigation. Change Healthcare’s handling of the 2024 breach, where many individuals waited eleven months for notification, is the clearest recent illustration of what happens when the timeline slips. As of early 2026, HHS’s Office for Civil Rights has an open investigation into Change Healthcare and UnitedHealth Group specifically on breach notification compliance.

Article 32 and the Unpatched Vulnerability Problem

Parallel to the notification obligations runs a question that determines regulatory appetite for enforcement: did the organization take appropriate technical and organizational measures to secure the personal data it held? Article 32 of the GDPR requires this, calibrated to “the state of the art, the costs of implementation, and the nature, scope, context and purposes of processing.” In plain terms: did the organization do what a reasonable counterpart in its position, with its resources, would have done? The answer regulators have repeatedly reached when examining breaches that followed known vulnerabilities is no.

Failing to apply a security patch for a vulnerability with a CVSS score of 9.8 is not a defensible “state of the art” position. The UK Information Commissioner’s Office fined Advanced Computer Software Group Ltd £3.07 million in March 2025 for a 2022 ransomware attack that exposed the personal data of 79,404 people, including home-entry details for 890 individuals receiving home care. The ICO’s finding explicitly cited inadequate patch management, insufficient vulnerability scanning, and failure to deploy multi-factor authentication as the violations. The attacker gained initial access through a customer account that lacked MFA. The company was a software provider for NHS healthcare services. The breach disrupted NHS 111 and prevented clinical staff from accessing patient records.

In France, CNIL fined France Travail, the national employment agency, €5 million in January 2026 for a March 2024 breach that exposed the personal data of approximately 43 million jobseekers. CNIL’s findings included weak authentication for third-party advisors, inadequate logging and monitoring, excessive data access permissions, and, critically, a failure to implement security measures that had already been identified in the organization’s own risk assessments. The regulator found it unacceptable that the organization had documented the risks and then declined to address them. The fine included an additional daily penalty of €5,000 for failure to meet corrective action deadlines.

US regulators have reached similar conclusions through different legal frameworks. In May 2025, the FTC finalized an order against GoDaddy, one of the world’s largest web hosting providers, for data security failures stretching back to 2018. The FTC’s findings included a breach discovered in April 2020, six months after it occurred, in which attackers accessed SSH credentials for approximately 28,000 customer accounts and 199 employee accounts. The FTC found that GoDaddy had misrepresented the security protections it provided to customers and had failed to implement reasonable safeguards including multi-factor authentication, adequate logging, and timely vulnerability patching. The consent order requires GoDaddy to implement a comprehensive security program and submit to regular third-party security assessments. The FTC’s findings, applied specifically to a web hosting company, make the order a direct statement of what regulators consider the baseline standard of care for the industry.

The pattern regulators have established is consistent across jurisdictions: a breach that results from a known, addressable vulnerability, where the organization had been notified of the risk and had time to act, is treated as a security failure, not an unavoidable incident. CVE-2026-41940 was disclosed publicly on April 28, 2026. Organizations breached after that date, while the patch was available and the advisory had been circulated, will face that standard in any regulatory examination. Organizations breached before that date, during the 64-day zero-day window, have a different but not necessarily better position: they may not have been obligated to patch a vulnerability they could not have known about, but they still face questions about whether broader security posture, including firewall controls on administrative ports, met the applicable standard.

The Shared Hosting Multiplier

The legal structure of shared hosting creates a specific complication that a single-tenant compromise does not. A typical shared cPanel server hosts multiple separate customers, each of whom is, under the GDPR framework, an independent data controller (the party that decides what personal data to collect and how to use it) for their own users, customers, and employees. The hosting provider, who controls the underlying infrastructure, is typically their data processor (the party handling data on the controller’s behalf) under Article 28 of the GDPR.

When that server is compromised at the WHM level, the event is a data breach simultaneously for every customer whose data was accessible. Each of those customers, as data controllers, carries their own independent obligation to notify their supervisory authority within 72 hours. The hosting provider, as data processor, carries a separate obligation under Article 33(2): to notify the affected controllers “without undue delay after becoming aware of a personal data breach.” The contract between the hosting provider and its customers, the data processing agreement required under Article 28, should specify this obligation. In practice, many SMB hosting relationships operate without one. The absence of such an agreement does not extinguish the legal obligations. It makes them harder to navigate and creates additional regulatory exposure for the hosting provider.

A shared hosting server compromised via cPanel’s CVE-2026-41940 is therefore not one regulatory event. It is a potential cascade of dozens or hundreds of simultaneous events, each with its own 72-hour clock, each requiring an independent notification to a supervisory authority, and each requiring individual notification if the compromised data meets the high-risk threshold. Whether any of those customers had health data, financial data, or data on EU residents across multiple member states determines whether one-stop-shop rules (which allow a single lead supervisory authority to handle cross-border processing on behalf of others) apply, or whether multiple regulators must be notified in different jurisdictions at the same time.
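The fan-out can be made concrete in miniature: one confirmed WHM-level compromise expands into one breach record per hosted controller. The tenant names are hypothetical, and the sketch starts every controller's 72-hour clock at the confirmation time, which assumes the processor discharges its Article 33(2) duty immediately; in reality each controller's clock starts when that controller becomes aware.

```python
from datetime import datetime, timedelta

def fan_out(confirmed_at: datetime, tenants: list[str]) -> list[dict]:
    """One server-level compromise -> one breach event per data controller.

    Each tenant is an independent controller with its own Article 33
    obligation; the processor's Article 33(2) notice to them comes first.
    """
    return [
        {
            "controller": tenant,
            "processor_notice_due": "without undue delay",  # Art. 33(2)
            # Assumes the controller is made aware at confirmation time.
            "authority_deadline": confirmed_at + timedelta(hours=72),
        }
        for tenant in tenants
    ]

# Three hypothetical tenants on one compromised shared server.
events = fan_out(datetime(2026, 5, 3, 9, 0), ["clinic-a", "shop-b", "blog-c"])
```

A server with two hundred accounts produces two hundred such records, each potentially pointing at a different supervisory authority.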

Parallel EU Obligations: NIS2 and DORA

The GDPR is the most visible piece of EU regulation for hosting providers and their customers, but it is not the only piece with incident reporting obligations. Two newer frameworks have come into force in the past eighteen months, both of which impose their own notification timelines, penalty regimes, and supervisory bodies. Both can be triggered by the same incident that triggers a GDPR Article 33 notification.

The Network and Information Security 2 Directive (NIS2) reached its transposition deadline on October 17, 2024. NIS2 classifies organizations as either essential entities or important entities based on size and sector. Digital infrastructure providers, including cloud computing service providers, data center service providers, content delivery network operators, managed service providers, and managed security service providers, are explicitly named in NIS2’s scope. A hosting provider above the size thresholds (typically 50 employees or €10 million in turnover) is subject to NIS2 directly, not only through its customers’ obligations. DNS service providers and TLD registries are in scope regardless of size.

The notification timeline under Article 23 of NIS2 is: an early warning within 24 hours of becoming aware of a significant incident, an incident notification within 72 hours, and a final report within one month. The penalty ceiling for essential entities is at least €10 million or 2 percent of global annual turnover, whichever is higher. Important entities face at least €7 million or 1.4 percent. Several member states have transposed with higher local ceilings; Germany set its NIS2 maximum at €20 million for essential entities.

The Digital Operational Resilience Act (DORA) entered into application on January 17, 2025. DORA applies to financial entities and to the ICT third-party service providers that serve them. Hosting providers, cloud providers, and data center operators serving banks, insurers, payment institutions, and other regulated financial entities at scale are within DORA’s scope. The largest of them can be designated as critical ICT third-party service providers subject to direct supervision by the European Supervisory Authorities.

The DORA reporting timeline for major ICT-related incidents is the most stringent currently in force in the EU: an initial notification within 4 hours of an incident being classified as major (or no later than 24 hours after detection), an intermediate report within 72 hours, and a final report within one month. The penalty regime for critical ICT providers permits periodic penalty payments of up to 1 percent of average daily worldwide turnover, per day, for up to six months.

The practical consequence is that a hosting provider serving European customers can be subject to GDPR Article 33 (controller’s notification to supervisory authority, 72 hours), GDPR Article 33(2) (processor’s notification to controller, without undue delay), NIS2 (24-hour early warning, 72-hour notification, one-month final), and DORA (4-hour initial notification, 72-hour intermediate, one-month final, for any in-scope financial customer’s environment) simultaneously. Each obligation runs on its own clock, requires different content, goes to different recipients, and carries its own penalty structure. None is a substitute for any of the others.
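A rough way to see the pile-up is to project every clock from a single start time. This is a simplifying assumption: in practice each framework's clock starts at its own trigger (awareness for GDPR, classification as major for DORA, and so on), so the real picture is messier than this sketch.

```python
from datetime import datetime, timedelta

# Headline EU deadlines triggered by one incident, measured from the
# moment each framework's clock starts. Illustrative, not exhaustive.
CLOCKS = {
    "DORA initial (major incident)": timedelta(hours=4),
    "NIS2 early warning":            timedelta(hours=24),
    "GDPR Art. 33 to authority":     timedelta(hours=72),
    "NIS2 incident notification":    timedelta(hours=72),
    "DORA intermediate report":      timedelta(hours=72),
    "NIS2 final report":             timedelta(days=30),
    "DORA final report":             timedelta(days=30),
}

def deadline_matrix(clock_start: datetime) -> list[tuple[str, datetime]]:
    """Order the obligations by when they fall due from one start time."""
    return sorted(((name, clock_start + delta) for name, delta in CLOCKS.items()),
                  key=lambda pair: pair[1])

for name, due in deadline_matrix(datetime(2026, 5, 3, 9, 0)):
    print(f"{due:%Y-%m-%d %H:%M}  {name}")
```

Even under the simplification, the shape is clear: a DORA-scoped incident demands a filing before most organizations have finished assembling an incident response call.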

Outside the EU: US, Australia, Singapore

The GDPR is the most comprehensive breach notification framework currently in force, but it is not the only one, and organizations with customers beyond the EU face obligations that do not align neatly with each other or with GDPR timelines.

United States. There is no single federal breach notification law. All 50 US states plus the District of Columbia, Guam, Puerto Rico, and the US Virgin Islands have enacted their own breach notification statutes, each with different definitions of personal information, different triggering conditions, different notification timelines, and different enforcement mechanisms. Notification timelines range from 30 to 60 days in states that specify numeric deadlines; most remaining states use qualitative language such as “without unreasonable delay.” California law effective January 1, 2026 imposes a 30-day deadline for individual notification; when a breach affects more than 500 California residents, the state attorney general must additionally be notified within 15 days of notifying consumers. The National Conference of State Legislatures maintains a tracker of state-level laws. For an organization serving customers across multiple states, a single breach may trigger notification obligations in all 50 states simultaneously, each with its own requirements for content, method, and recipient.

Federal sector-specific laws add further obligations on top of state requirements. HIPAA for healthcare data requires notification to HHS and affected individuals within 60 days of discovery, with public posting on the HHS breach portal for incidents affecting 500 or more people. The Gramm-Leach-Bliley Act and FTC Safeguards Rule cover financial institutions; as of May 2024, non-banking financial institutions regulated by the FTC must report breaches affecting 500 or more consumers. The SEC’s cybersecurity disclosure rule, effective December 2023, requires public companies to disclose material cybersecurity incidents on Form 8-K within four business days of determining materiality, a deadline that runs independently of any state breach notification timeline and applies regardless of how the investigation is progressing.
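The SEC deadline is counted in business days, not calendar days, which matters when a materiality determination lands late in the week. A minimal sketch of the counting (ignoring US market holidays, which also pause the clock in practice):

```python
from datetime import date, timedelta

def add_business_days(start: date, n: int) -> date:
    """Count forward n business days, skipping weekends only."""
    current = start
    while n > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Mon=0 .. Fri=4
            n -= 1
    return current

# A Thursday materiality determination pushes the Form 8-K deadline
# across the weekend into the following Wednesday.
print(add_business_days(date(2026, 5, 7), 4))  # 2026-05-13
```

The asymmetry with calendar-day regimes is the point: four business days can mean six elapsed days, while Singapore's three calendar days never stretch.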

Australia. The Notifiable Data Breaches scheme under the Privacy Act 1988 requires notification to the Office of the Australian Information Commissioner as soon as practicable once an eligible data breach is confirmed. If an organization suspects a breach but has not confirmed it, the law allows a 30-day assessment period. Once confirmed, notification to the OAIC and to affected individuals must follow immediately. The Privacy Legislation Amendment (Enforcement and Other Measures) Act 2022 substantially increased penalties: the maximum fine for a serious or repeated interference with privacy is now the greater of A$50 million, three times the value of any benefit obtained from the conduct, or 30 percent of the organization’s adjusted Australian turnover during the breach period.

In October 2025, a court ordered Australian Clinical Labs to pay A$5.8 million in civil penalties in connection with a 2022 breach of its Medlab pathology unit that affected 223,000 patients. The breakdown is instructive: A$4.2 million for failure to take reasonable security steps, A$800,000 for failure to investigate a suspected incident, and A$800,000 specifically for failure to report the breach as soon as practicable. It was the first time an Australian court had imposed civil penalties under the Privacy Act’s mandatory data breach provisions, and the structure of the penalty reflects a clear regulatory position: delayed notification is itself a discrete, separately punishable failure, not just a procedural shortcoming attached to the underlying breach.

Singapore. The Personal Data Protection Act requires notification to the Personal Data Protection Commission within three calendar days of determining that a data breach is notifiable. Notification to affected individuals must occur at the same time or immediately after. A breach is notifiable if it results in, or is likely to result in, significant harm to affected individuals, or affects 500 or more individuals. Fines under the PDPA reach S$1 million or 10 percent of annual Singapore turnover, whichever is higher, for organizations above S$10 million in Singapore revenue.

Singapore’s three-day window is nominally the same length as the GDPR’s 72 hours, but stricter in two practical respects. The GDPR’s Article 33 includes the qualifier “where feasible” before the 72-hour deadline, which provides some regulatory latitude when an organization can demonstrate why the timeline was not achievable. The PDPA contains no equivalent qualifier. Additionally, where GDPR separates supervisory authority notification (72 hours) from individual notification (without undue delay, no specific deadline), Singapore requires both to happen simultaneously or in immediate succession. For an organization managing a breach that triggers both frameworks, Singapore’s requirements set the pace: no flexibility qualifier, and individual notifications cannot trail regulator notification by days or weeks as they often do under GDPR.

Notification Timelines at a Glance

For each framework: authority notification first, then individual notification.

  • GDPR (EU): 72 hours (Art. 33, “where feasible”); individuals without undue delay if high risk (Art. 34).
  • NIS2 (EU): 24-hour early warning, 72-hour notification, one-month final report; individual notification where the significant impact requires it.
  • DORA (EU, financial sector ICT): 4-hour initial, 72-hour intermediate, one-month final; individual notification per the sectoral financial regulator.
  • UK GDPR: 72 hours (mirroring GDPR Art. 33); individuals without undue delay if high risk.
  • HIPAA (US): 60 days to HHS for breaches affecting 500 or more people; individuals within 60 days.
  • SEC Form 8-K (US public companies): four business days from the materiality determination; disclosure is to the public market.
  • California (eff. Jan 2026): attorney general within 15 days of consumer notice when 500 or more residents are affected; individuals within 30 days.
  • Australia NDB: OAIC as soon as practicable after confirmation; individuals immediately after.
  • Singapore PDPA: PDPC within 3 calendar days; individuals at the same time or immediately after.

When the Fine Is a Rounding Error

The Cegedim Santé case illustrates the gap between a regulatory event and the outcome that follows it. On September 5, 2024, France’s CNIL fined Cegedim Santé €800,000 for illegally processing health data. CNIL investigators had found in 2021 that the company was sharing patient data with third parties under the claim that it was anonymized; in fact it was only pseudonymized, meaning patients could be re-identified. The company had no legal authorization for the processing as required under French law.

Sixteen months later, Cegedim Santé’s MonLogicielMedical platform was breached. Hackers extracted 15.8 million patient administrative files. According to available reporting, approximately 165,000 of those files contained doctors’ free-text clinical notes, including records revealing HIV status, sexual orientation, mental health conditions, and psychiatric diagnoses. Politicians and security officials were among those whose records were compromised. The company filed a criminal complaint in October 2025 and did not disclose the breach publicly until late February 2026, four months later, a delay that may itself constitute a violation of the individual notification obligation. Cegedim confirmed 15.8 million records were compromised on March 3, 2026.

Cegedim Group, the parent company, had revenues of €616 million in 2023. The €800,000 fine was 0.13 percent of annual group revenue. The GDPR permits fines of up to 4 percent of global annual turnover, which would have been approximately €24.6 million for Cegedim Group. CNIL imposed about 3.2 percent of that maximum.
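The arithmetic behind those percentages:

```python
revenue = 616_000_000  # Cegedim Group 2023 revenue, EUR
fine = 800_000         # CNIL fine, September 2024, EUR

share_of_revenue = fine / revenue        # fine as a fraction of turnover
gdpr_maximum = 0.04 * revenue            # 4% global-turnover ceiling
share_of_maximum = fine / gdpr_maximum   # fine as a fraction of the ceiling

print(f"{share_of_revenue:.2%} of revenue, "
      f"{share_of_maximum:.1%} of the GDPR maximum (€{gdpr_maximum:,.0f})")
```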

The Cegedim sequence is the data: a fine set at 0.13 percent of group revenue, no public disclosure of structural changes to data practices in the sixteen months that followed, and 15.8 million patient records in the hands of attackers. Whether that outcome reflects the fine amount, enforcement timing, or organizational factors internal to Cegedim is a question the available record does not answer. What it does answer is that the regulatory event in September 2024 did not prevent the breach event in late 2025.

The Optimed Sequence in Real Time

Optimed Medical Laboratories, the Polish medical laboratory introduced at the start of this article, is a concrete example of how the obligations described above apply in a single incident, all at once.

The compromise was confirmed on or around May 3, 2026. From that moment, multiple clocks started running in parallel. As a data controller processing health data, Optimed was bound by Article 33 to notify UODO, Poland’s data protection authority, within 72 hours. The records affected included laboratory results and national identification numbers, placing the breach inside Article 9 (special category data) and Article 34 (high-risk individual notification). Optimed sent individual notifications to patients alongside its UODO filing, which is the sequence the regulation requires for a health data breach.

If Optimed met the size threshold for NIS2 essential or important entity status, its potential obligations also included a 24-hour early warning to the Polish national CSIRT, a 72-hour incident notification, and a one-month final report under NIS2’s parallel reporting track. Healthcare providers above the size threshold are within NIS2 scope. The NIS2 obligations would run alongside, not instead of, the GDPR obligations.

The suspected vector was CVE-2026-41940, a cPanel authentication bypass disclosed publicly on April 28, 2026. That timing exposes Optimed to two questions in any regulatory review: were appropriate technical and organizational measures in place to limit the cPanel administrative attack surface (firewall controls on administrative ports, restricted IP allowlists for cPanel and WHM access, mandatory MFA on administrative accounts), and was the patch applied within a window the regulator considered reasonable. The advisory had been public for five days when the breach occurred. UODO’s enforcement history under similar circumstances, including the 2019 Morele.net case where insufficient organizational and technical safeguards drew a PLN 2.8 million fine after a customer database breach, suggests these questions will be asked.

UODO does not maintain a public breach registry equivalent to HHS’s HIPAA Breach Portal, but the agency publishes selected enforcement decisions when it deems doing so to be in the public interest. For a healthcare data breach affecting an identifiable laboratory, the precedent suggests publication is likely if a fine is imposed. By the time UODO finishes its examination, Optimed’s name will have appeared in the press, in patient communications, and potentially in a published enforcement decision, well before any of the other tracks have closed. None of the obligations described in the preceding sections substitutes for any of the others. Optimed is running all of them at once.

When the Breach Comes With an Acquisition

A specific category of breach exposure that boards routinely underestimate is the breach that was already in progress when an acquisition closed, and that was discovered only afterward. The defining case is Marriott International’s acquisition of Starwood Hotels.

Marriott completed its acquisition of Starwood in September 2016. The Starwood reservation database had been compromised since July 2014, more than two years before the acquisition closed, by an attacker that Marriott’s internal security tooling did not detect until September 2018. The breach was disclosed publicly on November 30, 2018. The eventual count of affected records reached approximately 339 million.

The buyer inherits the breach and the regulatory liability for what follows.

On M&A and pre-existing compromise

The UK ICO’s enforcement response is the part of the case most relevant to acquirers and boards. The ICO initially announced an intent to fine Marriott £99.2 million in July 2019, ultimately reduced to £18.4 million in October 2020 after consideration of mitigating factors including the economic impact of the pandemic. The reduction is sometimes treated as the headline of the case. The substantive ICO finding is more important: Marriott had failed to conduct sufficient due diligence into Starwood’s cybersecurity posture during the acquisition process, and the duty to conduct such due diligence “is not time-limited or a ‘one-off’ requirement.” Acquiring an organization with a pre-existing breach does not transfer the breach risk to the seller. The buyer inherits the breach and the regulatory liability for what follows.

Representations and warranties insurance, which buyers traditionally rely on to cover unknown liabilities discovered post-closing, has responded to this exposure with increasingly explicit cyber carveouts: pre-existing cyber incidents discovered after closing are excluded from indemnification caps. Standalone cyber insurance is now a precondition to many M&A transactions, with policy inception dates and the treatment of pre-existing incidents scrutinized closely by underwriters.

The practical effect is that cyber due diligence has shifted from a checklist item to a substantive technical and legal review. External forensic engagement during diligence to test for indicators of compromise in target environments before signing is increasingly standard, particularly in deals where the target operates infrastructure or holds large volumes of personal data. For hosting providers and infrastructure companies that are themselves acquisition targets, or that hold acquirer customer data through their own consolidation activity, the Marriott precedent applies in both directions. An acquirer of a hosting business inherits the breach history. A hosting business with an unresolved compromise can lose deal value, and in some cases the deal entirely, when diligence surfaces it.

Personal Liability: When the CISO and the Board Are in the Frame

The financial and operational obligations described in this article apply to organizations. Personal exposure for the individuals running them has grown in parallel, particularly in the United States and in Delaware corporate law. The trajectory is worth tracking even where final outcomes have walked back the most aggressive enforcement actions.

On October 30, 2023, the SEC filed fraud charges against SolarWinds Corporation and its Chief Information Security Officer, Timothy G. Brown, personally. The complaint alleged that Brown had made or approved misrepresentations about SolarWinds’ cybersecurity posture, both in public statements and in materials used to assess the company’s controls, ahead of and through the SUNBURST attack disclosed in December 2020. It was the first SEC enforcement action that named a CISO personally as a defendant in a cybersecurity disclosure case.

The case did not end where it started. On July 18, 2024, US District Judge Paul Engelmayer dismissed substantially all of the SEC’s claims, sustaining only a narrow set of fraud allegations related to specific statements on SolarWinds’ corporate security webpage. On November 20, 2025, the SEC and SolarWinds jointly stipulated to dismiss the remaining claims with prejudice. The final outcome was a dismissal, not a settlement, and Brown faced no enforcement penalty.

The fact pattern that produced the case nonetheless changed the operating environment for CISOs and the executives they report to. The SEC demonstrated willingness to charge an individual security officer personally for representations about controls that did not match the underlying reality. Italian and German data protection authorities have imposed personal fines on data protection officers in some cases. The risk that a regulatory action will name an individual, even where it eventually fails, is sufficient to require a director and officer insurance review and to change how internal security communications are documented.

For boards specifically, Delaware corporate law has been moving in a parallel direction. The Caremark standard, established by Chancellor William Allen in 1996, imposes a duty on directors to implement reasonable monitoring systems for legal compliance. The Marchand v. Barnhill ruling by the Delaware Supreme Court in June 2019 sharpened that duty, allowing oversight claims to proceed against directors where the board had implemented no monitoring system for “mission-critical” risks. Cybersecurity has been recognized as mission-critical in recent Delaware cases, including the 2022 Construction Industry Laborers Pension Fund v. Bingle ruling involving SolarWinds, where the court treated cybersecurity oversight as mission-critical for an online software company.

The answer to that question is usually decided years before the breach.

On board oversight under Caremark

The practical implication for boards is direct. A documented, periodically reviewed cybersecurity oversight process, recorded in board minutes and committee charters, is no longer a governance nicety under the Caremark standard. It is the evidence directors will be asked to produce in a derivative suit after a breach. When the demand letter arrives, either the record exists or it does not. The answer to that question is usually decided years before the breach.

What Regulators Look for First

The enforcement cases described in this article share a common structure: investigators arrive after the breach and work backward through what the organization had in place before it. The findings that produce the largest penalties are not the breach itself, but the specific operational gaps that made it worse, broader, or undisclosed longer than it needed to be. The ICO cited patch management and MFA. CNIL cited documented risks that were not acted on. The FTC cited logging failures and misrepresented security practices. The Australian court broke its A$5.8 million penalty into three parts, with two of them assigned entirely to what happened after the breach was suspected, not to the breach itself.

What prepared organizations have in common, judging by the gaps regulators have penalized, is a short list of practices:

  • An incident response retainer with an external firm, signed and tested before it is needed. This eliminates the mobilization lag that compounds notification deadlines.
  • Data processing agreements with all customers that specify the provider’s obligation to notify controllers without undue delay. For hosting providers, this is a legal requirement under Article 28 of the GDPR, not an optional contract upgrade.
  • Patch management with documented timestamps showing when a critical advisory was received, when remediation was scheduled, and when it was applied. This is the primary evidence an organization can produce to demonstrate Article 32 compliance when a breach follows a known vulnerability.
  • A notification workflow written, reviewed, and rehearsed before a breach, so that GDPR’s 72-hour clock, NIS2’s 24-hour early warning, DORA’s 4-hour initial notification, Singapore’s three-day window, or HIPAA’s 60-day limit does not also require inventing a communications process from scratch.
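The parallel clocks in that last practice can be made concrete. The sketch below derives each notification deadline from a single discovery timestamp, using the windows named in this article. It simplifies in two labeled ways: the SEC clock actually runs from the determination of materiality, which is approximated here by the discovery time, and business-day handling skips weekends only, ignoring market holidays:

```python
from datetime import datetime, timedelta

def add_business_days(start: datetime, days: int) -> datetime:
    """Advance by N business days, skipping weekends (holidays ignored)."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday-Friday
            days -= 1
    return current

def notification_deadlines(discovered: datetime) -> dict:
    """Deadlines per framework, all running from one discovery timestamp."""
    return {
        "DORA initial notification (4h)": discovered + timedelta(hours=4),
        "NIS2 early warning (24h)": discovered + timedelta(hours=24),
        "GDPR supervisory authority (72h)": discovered + timedelta(hours=72),
        "Singapore PDPA (3 calendar days)": discovered + timedelta(days=3),
        # SEC clock properly runs from the materiality determination,
        # approximated here by the discovery timestamp.
        "SEC Form 8-K (4 business days)": add_business_days(discovered, 4),
        "California (30 days)": discovered + timedelta(days=30),
        "HIPAA individuals (60 days)": discovered + timedelta(days=60),
    }

for framework, due in notification_deadlines(datetime(2026, 5, 3, 9, 0)).items():
    print(f"{framework}: {due:%Y-%m-%d %H:%M}")
```

The point of running the numbers is not precision but sequencing: by the time the 72-hour GDPR notice is due, the DORA and NIS2 clocks have already expired, each with a different recipient and required content.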

None of these are novel or complex. They appear as absent in enforcement decisions precisely because they are easy to deprioritize until the investigation arrives. France Travail had documented the risks in its own assessments before the breach occurred. GoDaddy’s FTC findings cited the absence of MFA and adequate logging on a platform serving millions of customers. Advanced Computer Software’s ICO findings cited inadequate patch management at a company providing infrastructure to healthcare services. In each case, the enforcement finding pointed to gaps that the organization’s own processes or applicable security standards had already identified.

Cyber Insurance Has Stopped Being a Backstop

The cost calculation a CFO might make based on the enforcement record above frequently assumes cyber insurance will absorb a meaningful share of the loss when an incident actually happens. That assumption was reasonable in the cyber insurance market of a decade ago. It has become substantially less so.

Cyber insurance rate increases in the corporate market peaked at approximately 133 percent in late 2021. Direct written premiums in the cyber market grew on the order of 50 to 75 percent through 2022 according to NAIC data, driven by insurer losses from a wave of ransomware claims. The market has stabilized since 2024, but the stabilization has come through tighter coverage conditions rather than relaxed premiums.

Modern cyber policies typically require the following as conditions of coverage rather than as discounts:

  • Multi-factor authentication for all users, including email, VPN, remote access, and administrator accounts.
  • Endpoint detection and response deployed across servers and endpoints, with host isolation capability.
  • Immutable or offline backups tested at documented intervals.
  • Patching commitments of 30 days or less for general vulnerabilities and 7 days or less for critical ones.

Insurers require documented proof of these controls before binding coverage. The absence of any of them at the time of an incident can produce a claim denial.
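In practice, the underwriting questionnaire reduces to a set-membership check: the controls the policy names as conditions, against the controls the insured can evidence. A minimal sketch, with control names taken from the list above (identifiers are illustrative; the exact conditions and wording vary by carrier):

```python
# Controls commonly named as conditions of coverage (wording varies by carrier).
POLICY_CONDITIONS = {
    "mfa_all_users",
    "edr_with_host_isolation",
    "immutable_backups_tested",
    "critical_patching_within_7_days",
}

def coverage_gaps(evidenced_controls: set) -> set:
    """Conditions the insured cannot evidence.

    Any entry here is a potential claim-denial ground if an incident
    occurs while the control is absent.
    """
    return POLICY_CONDITIONS - evidenced_controls

gaps = coverage_gaps({"mfa_all_users", "edr_with_host_isolation"})
print(sorted(gaps))  # ['critical_patching_within_7_days', 'immutable_backups_tested']
```

The asymmetry is the point: the insured must prove the full set at binding and again at claim time, while the insurer only needs one absent element to contest coverage.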

The Lloyd’s of London market introduced standard exclusions through Market Bulletin Y5381, requiring all standalone cyber-attack policies incepted or renewed after March 31, 2023 to contain exclusions for losses arising from state-backed cyber attacks. Claims for incidents attributed to nation-state actors, including ransomware operations linked to state-affiliated groups, may be denied under these exclusions even where the underlying loss would otherwise be within scope.

The Merck v. ACE American litigation that prompted insurers to tighten this language was, in its own outcome, a policyholder win. New Jersey courts ruled in 2022 that the existing war exclusion did not apply to the NotPetya attack, the appellate court affirmed in May 2023, and Merck and its insurers settled the $1.4 billion claim in January 2024. The market’s response was not to challenge that result but to write more explicit exclusions for future policies.

Insurance does not transfer the cost of a breach. It conditionally subsidizes that cost.

On the cyber insurance market

For hosting providers specifically, enterprise customers increasingly require their hosting vendors to carry cyber insurance with specified coverage minimums as a contract condition, with the insurance requirement built into Article 28 GDPR data processing agreements. The premiums for that coverage are now a meaningful operating cost line for any hosting provider serving enterprise customers, and the conditions of coverage shape what security controls the provider must demonstrably maintain. Insurance does not transfer the cost of a breach. It conditionally subsidizes that cost, provided the insured can demonstrate that the controls required under the policy were in place at the time of the incident.

The Investigation Nobody Plans For

The direct costs of regulatory fines are the most visible part of a breach’s financial impact. The investigation that precedes them is frequently larger, and it arrives first.

Forensic analysis to determine the scope and nature of a breach, a prerequisite for any accurate regulatory notification, typically costs between tens of thousands and hundreds of thousands of dollars depending on server complexity, log availability, and whether the attacker deleted evidence. External incident response retainers run from $30,000 to $150,000 annually before any actual incident occurs. Without a retainer, the per-incident cost is higher and the mobilization time is longer, both of which compound the notification deadline problem.

Log availability is a specific and underappreciated constraint. Forensic investigators reconstructing what an attacker accessed and when depend entirely on what the compromised system logged. Default cPanel logging configurations do not necessarily capture all file access events across all hosted accounts. An attacker with WHM-level access may have selectively exfiltrated data in ways that generate no distinctive log signature distinguishable from normal system activity. The forensic conclusion that “data was likely accessed but we cannot determine exact scope” is legally and operationally different from “data was not accessed,” but it does not satisfy a regulator who requires a notification describing the approximate number of people affected. Organizations are required to provide their best estimate with appropriate qualifications, and then update that estimate as investigation continues.

On top of forensic costs sit legal counsel fees for regulatory engagement across multiple frameworks and jurisdictions, communications and public relations management, technical remediation, and exposure to private litigation in jurisdictions where affected individuals can sue directly.

Under Article 82 of the GDPR, individuals can claim compensation from both data controllers and data processors for material and non-material damage arising from a breach. The Court of Justice of the European Union ruled in Case C-300/21 in May 2023 that a claim for non-material damage does not need to cross any threshold of seriousness, though a mere infringement of the GDPR without identifiable damage is also insufficient. Subsequent CJEU rulings have clarified that reasonably grounded fear of misuse of one’s data can itself constitute compensable non-material damage. The practical effect is a lower bar for individual claims and a higher tail of potential plaintiffs in any large breach. Article 80 of the GDPR additionally permits non-profit organizations to bring representative actions on behalf of data subjects, and most EU member states have implemented complementary collective redress mechanisms in national law.

In the United States, plaintiff firms have organized data breach class actions that have produced settlements measured in hundreds of millions of dollars. The 2022 T-Mobile settlement reached $350 million for cash compensation, with an additional $150 million committed to security improvements, covering the 2021 breach affecting more than 76 million customers. The Equifax settlement totaled up to $700 million for the 2017 breach affecting 147 million people, of which up to $425 million was allocated to a consumer relief fund. The litigation track operates independently of regulatory enforcement, runs on its own timeline, and the settlement figures in major cases routinely exceed the regulatory fines imposed for the same incidents.

The full cost of a serious breach (forensics, regulatory engagement across multiple frameworks, legal defense, notification logistics, remediation, and litigation reserves) typically runs to multiples of the penalty figure that eventually appears in any single enforcement announcement. For Change Healthcare, the gap between the fine (pending) and the reported total costs and business disruption ($3.1 billion through 2024, per UnitedHealth Group’s earnings disclosures) is a reference point for what that multiple looks like at scale.