Executive Summary
NIS2 is not a documentation exercise. Directive (EU) 2022/2555 demands an early warning inside 24 hours of incident awareness, a structured notification inside 72 hours, and a final report inside one month. The reports must carry investigation depth, not alert summaries. Regulators expect to see the attack path, the evidence chain, and the reasoning behind every containment decision.
An alert is not an incident. NIS2’s 24-hour clock starts from awareness of a significant incident, and awareness is itself produced by investigation. The average enterprise team faces roughly 3,000 alerts a day, takes 70 minutes to investigate one, and leaves 63 percent untouched. Incidents that should start the clock are discovered late or not at all. When one is confirmed, analysts pivot across five to eight tools to reconstruct what happened. Compressed into a 24-hour window, both stages collapse.
Legacy Security Orchestration, Automation and Response (SOAR) was built for the era before this clock. Static playbooks fail on novel threats. API drift breaks integrations silently. L1 automation stops the moment it sees an unexpected result. Gartner placed static SOAR past its architectural ceiling on the 2024 Hype Cycle for Security Operations.
The answer is an autonomous SOC. Morpheus AI triages 95 percent of alerts at Level 2 depth in under two minutes, generates runtime playbooks per incident, produces auditable investigation records, and assembles NIS2-grade reports from structured case data. The same architecture covers DORA, NIST CSF, and most sub-24-hour reporting mandates.
Table of Contents
- The 24-Hour Clock Is an Architecture Problem
- Scope, Liability, and the Global Spillover
- Why Legacy SOAR Cannot Meet NIS2
- SIEM Detects. It Does Not Investigate.
- The Autonomous SOC: Four Capabilities
- Autonomy That Regulators Can Audit
- The NIS2 Reporting Stack: 24 Hours, 72 Hours, One Month
- One Architecture, Many Mandates
- Defending Management-Body Liability
- Why Now: The Enforcement Window Is Open
- Buyer’s Checklist: What to Evaluate
The 24-Hour Clock Is an Architecture Problem
NIS2 codifies three reporting checkpoints. Each one carries a specific evidentiary standard. The failure mode is not late reporting. The failure mode is reporting on time with thin evidence and getting a follow-up from a regulator demanding the depth the report should have carried.
| Stage | Deadline | Evidentiary requirement |
|---|---|---|
| Early warning | 24 hours from awareness | Classification of the incident, suspected threat actor or cause, and a preliminary assessment of cross-border or essential-service impact. |
| Incident notification | 72 hours from awareness | Updated assessment, indicators of compromise, and initial containment status. The text should already be drawn from case evidence, not narrative. |
| Final report | One month from notification | Complete attack path, root cause, mitigation taken, lessons learned, and residual risk. This is the document that will be examined during audit. |
| Intermediate status | On request | Competent authorities can demand interim updates throughout the investigation. The case must stay live and queryable. |
What regulators actually want inside 24 hours
Member-state transposition and ENISA guidance point in the same direction. An NIS2 early warning is not a bare header. It should name the affected systems, describe the attacker’s movement to the extent known, and outline the containment in progress. The absence of that depth is itself a finding.
This is why NIS2 is an architecture problem, and the problem has two halves that trigger two different articles.

First, detection, which falls under Article 21. Awareness of a significant incident is produced by investigation, so NIS2 exposure begins long before the 24-hour clock formally starts. A SOC that investigates 37 percent of its alerts is not in breach of Article 23 for an incident it has not yet detected. It is, however, exposed under Article 21 for failing to operate appropriate and proportionate detection. Guidance from ENISA and national authorities signals that awareness will be assessed against the telemetry an entity collected or should have collected, though NIS2 enforcement case law is still forming.

Second, reporting, which falls under Article 23. Once awareness lands, the early warning must carry evidenced depth within 24 hours. Manual workflows fail at both halves. The architecture that succeeds is one where every alert is triaged to L2+ depth at the moment of arrival, so awareness is established early and defensibly, and the same investigation output is already report-shaped when the clock does start.
Scope, Liability, and the Global Spillover
NIS2 is an EU directive with global reach. Any organization with EU subsidiaries, regulated EU customers, or critical-sector suppliers inside the Union inherits the clock and reporting obligations.
Sectors covered
The directive expands scope well beyond the original NIS framework. Essential entities include energy, transport, banking, financial market infrastructure, health, drinking water, wastewater, digital infrastructure, ICT service management, public administration, and space. Important entities add postal services, waste management, chemicals, food, manufacturing, digital providers, and research.
Penalties
| Category | Maximum fine | Applies to |
|---|---|---|
| Essential entities | €10M or 2% of global annual turnover, whichever is higher | Sectors identified as critical to EU economy or society |
| Important entities | €7M or 1.4% of global annual turnover, whichever is higher | Expanded sectors with lower criticality but systemic reach |
| Management bodies | Personal sanction, including temporary management ban | Article 20 imposes oversight duties; Article 32(5) permits member states to bar managerial functions in cases of repeated violation by essential entities |
Article 20 reaches the executive suite
Article 20 requires management bodies of essential and important entities to approve the cybersecurity risk-management measures, oversee their implementation, and undergo regular training. Article 32(5) then authorizes member states to impose temporary bans on managerial functions at essential entities in cases of repeated violation. The clauses work in tandem: the duty sits at the top, and the sanction reaches the individuals who hold it. The result is personally accountable oversight, and an autonomous SOC converts that accountability into auditable evidence.
Article 21 pushes supply-chain security into the risk-management duty, and contracts with suppliers become the enforcement mechanism. Essential entities must account for the security posture of direct suppliers. Suppliers without NIS2-grade incident reporting become unacceptable procurement risk. The architecture problem propagates outward from the regulated entity to every vendor it depends on.
Why Legacy SOAR Cannot Meet NIS2
Security Orchestration, Automation and Response (SOAR) platforms were designed before NIS2 existed. They automate sequences that someone pre-built. When the incident matches the playbook, they work. When the incident does not, they stall. NIS2 incidents rarely match a pre-built script.
Static playbooks fail on variants
A phishing playbook runs the same whether the target is the CFO or a new hire, whether the payload is known or novel. Static logic collapses the moment an attacker deviates.
Silent integration failures
Vendor API updates break SOAR integrations quietly. Playbooks continue executing. Results become wrong. Organizations discover the breakage during an active incident, not before.
The L1 analyst gap
SOAR playbooks are designed by L2+ engineers and executed by L1 analysts. Automation halts the moment it returns something unexpected. The senior time the tool was meant to protect is spent unblocking it.
Architect dependency
A SOAR architect earns $150K to $250K and is a single point of failure. When they leave, playbook development stalls. New threats outrun the team’s ability to codify responses.
Brittle across tool boundaries
Playbooks hard-code data paths across EDR, SIEM, identity, and cloud. Any schema change breaks the chain. The investigation depth NIS2 expects requires traversal that static logic cannot sustain.
No audit-ready output
SOAR case notes are sequence logs, not investigations. A regulator reading them finds steps, not evidence. Narratives have to be assembled by analysts after the fact, consuming the clock.
The detection math does not work
Industry alert-fatigue research places the average enterprise SOC at roughly 3,000 alerts per day and 70 minutes of full investigation per alert. Triaging every one to L2 depth manually would require nearly 150 analysts working around the clock. No enterprise staffs that team, and hiring cannot close the gap. ENISA’s workforce analysis puts the EU shortfall at close to 300,000 unfilled cybersecurity roles, and analyst tenure averages under three years because of burnout. The consequence is not that NIS2 reports are slow. The consequence is that NIS2-significant incidents hide inside the uninvestigated stream. Awareness lands late or never. The 24-hour clock under Article 23 has not started yet, but the risk-management obligation under Article 21 is already failing, and a regulator reconstructing the incident will know when it should have been caught.
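The staffing arithmetic above can be checked directly, using the figures the text assumes (3,000 alerts per day, 70 minutes per full investigation):

```python
# Back-of-envelope check on the staffing claim, using the assumed
# figures from the text: 3,000 alerts/day, 70 minutes per investigation.
ALERTS_PER_DAY = 3_000
MINUTES_PER_INVESTIGATION = 70

analyst_minutes_per_day = ALERTS_PER_DAY * MINUTES_PER_INVESTIGATION  # 210,000
analyst_hours_per_day = analyst_minutes_per_day / 60                  # 3,500

# Analysts needed working continuously, around the clock:
analysts_around_the_clock = analyst_hours_per_day / 24                # ~146

print(f"{analyst_hours_per_day:.0f} analyst-hours/day "
      f"= {analysts_around_the_clock:.0f} analysts working 24/7")
```

With realistic eight-hour shifts and non-investigation duties, the real headcount would be several times higher still, which is why hiring cannot close the gap.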
SIEM Detects. It Does Not Investigate.
Security Information and Event Management (SIEM) is the data foundation of most SOCs. It aggregates logs, correlates signals, and fires alerts. NIS2 auditors appreciate a strong SIEM. They will not mistake one for an investigation.
A SIEM dashboard is a list of detections. An NIS2 report describes an attack: where it started, how it moved, what it accessed, and how it was stopped. That is a different artifact. It requires correlation across tool boundaries, not only within the SIEM’s own index.
Single-pane correlation is not enough
Modern attacks cross domains. An identity compromise begins in email, pivots through EDR, traverses network segments, and ends at a cloud storage object. The SIEM sees each telemetry stream, but it correlates within its own schema. The attack path is visible only when an investigation tool traverses the stack the same way the attacker did.
Beyond SIEM, beside SIEM
The market response is to add an investigation layer that sits beside the SIEM. Morpheus AI reads SIEM alerts as input, reaches into EDR, identity, email, and cloud telemetry for corroboration, and returns a structured case with an attack path, a containment recommendation, and a report draft. The SIEM stays. The investigation gap closes. When NIS2 asks for the incident investigation, the answer that passes is the full attack-path case record, not a SIEM dashboard.
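The shape of an investigation layer beside the SIEM can be sketched in a few lines. Everything here is illustrative: the connector names, fields, and two-source corroboration rule are assumptions for the sketch, not Morpheus AI’s actual API.

```python
# Hypothetical sketch of an investigation layer beside the SIEM.
# Connector names, fields, and the verdict rule are illustrative only.

def investigate(siem_alert: dict, connectors: dict) -> dict:
    """Corroborate a SIEM alert across other telemetry sources and
    return a structured case ready for report assembly."""
    entity = siem_alert["entity"]            # e.g. a user or host
    evidence = {
        name: connector(entity)              # each connector returns findings
        for name, connector in connectors.items()
    }
    corroborated = [n for n, findings in evidence.items() if findings]
    return {
        "alert_id": siem_alert["id"],
        "entity": entity,
        "evidence": evidence,
        "verdict": "incident" if len(corroborated) >= 2 else "benign",
    }

# Toy connectors standing in for EDR, identity, and email telemetry:
case = investigate(
    {"id": "A-1", "entity": "user42"},
    {
        "edr": lambda e: ["suspicious child process"],
        "identity": lambda e: ["impossible-travel login"],
        "email": lambda e: [],
    },
)
print(case["verdict"])  # corroborated by two sources -> "incident"
```

The point of the sketch is the output type: not a score, but a case object whose evidence fields are the raw material for the NIS2 filings discussed later.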
The investigation gap is the attack surface
Attackers exploit the space between detection and response. The longer that gap, the farther they move. NIS2 regulates the gap by putting a 24-hour clock on the evidence that it was closed. An architecture that leaves the gap open cannot meet the directive, regardless of the SIEM behind it.
The Autonomous SOC: Four Capabilities
An autonomous SOC is defined by what it produces, not by what it automates. The output is an evidenced investigation per alert, delivered at machine speed, auditable by a regulator, and ready to roll up into an NIS2 report. Four capabilities make that output possible.
Attack Path Discovery
Morpheus AI traces attacker movement East-West across the security stack and North-South through 90 days of telemetry. It correlates across EDR, SIEM, identity, network, and cloud in a single view. For NIS2, this is the evidence a regulator asks to see: the path from initial access to containment, built automatically.
Runtime Playbooks
Playbooks are generated contextually per incident, not drawn from a static library. Morpheus reads the case, selects the right investigation and containment steps, and executes them with the tools actually connected. Variants stop being edge cases. The investigation adapts to the attack, not the other way around.
QA’d AI decisions
Every autonomous decision carries a reasoning chain and a confidence band. Morpheus runs continuous quality assurance on its own triage, measured against analyst review and against historical outcomes. NIS2 regulators can inspect the record and see how a classification was reached. Automation becomes auditable, not opaque.
Unified case assembly
Structured case data is the input to the regulatory report, not the output. Morpheus assembles the NIS2 early warning, the 72-hour update, and the final report from evidence already captured during investigation. Multi-authority dispatch is a single step. The analyst reviews and approves instead of writing from scratch.
Autonomy That Regulators Can Audit
Autonomy without accountability is the category mistake most “AI SOC” products make. A classifier that labels an alert malicious without evidence is not a SOC. NIS2 regulators will not accept that output. Neither will a board that carries Article 20 liability.
The architecture ratio that earns trust
Morpheus AI is built on a 70 to 80 percent deterministic framework paired with 20 to 30 percent large language model reasoning. The deterministic side handles tool orchestration, evidence collection, and execution. Every decision is logged with the tool calls, the evidence reviewed, and the reasoning trace.
Cybersecurity Triage LLM
Purpose-built for SecOps. 24 months of training, 60 specialists, domain tuning on attack techniques and IR playbooks.
Continuous quality evaluation
Every decision is scored against analyst review and retrospective outcome data. Accuracy is measured, reported, and defended, not asserted.
Self-healing integrations
800+ integrations that detect API drift and generate corrective code. Playbooks stop failing silently when vendor schemas change.
Flat-rate economics
No per-alert or per-token billing. $0.27 per AI-triaged alert against $25 to $45 for manual triage at the same depth.
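The self-healing integration idea above reduces, at its core, to noticing schema drift before a playbook consumes wrong data. A minimal sketch, with hypothetical field names:

```python
# Minimal illustration of drift detection: flag when a vendor API
# response no longer matches the schema a playbook expects.
# Field names are hypothetical.

EXPECTED_FIELDS = {"host", "severity", "timestamp"}

def detect_drift(api_response: dict) -> set:
    """Return the expected fields missing from the live response.
    A non-empty result means the integration has drifted and needs
    regeneration, instead of silently feeding wrong data onward."""
    return EXPECTED_FIELDS - api_response.keys()

# Vendor renamed "host" to "hostname" in a schema update:
drift = detect_drift({"hostname": "srv-01", "severity": "high",
                      "timestamp": "2026-01-01T00:00:00Z"})
print(drift)  # {'host'} -- a static playbook would have failed silently
```

A production system would go further and generate corrective mapping code; the sketch only shows why the check must run continuously rather than at integration build time.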
Runtime playbooks, not static ones
A contextual playbook changes based on the target, the attacker technique, the tooling available, and the evidence collected so far. The same phishing alert generates different investigation paths for a CFO under active pursuit versus an intern receiving a recycled template. Static SOAR cannot model that gap.
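The CFO-versus-intern gap can be made concrete with a toy playbook builder. Step names and branching conditions are invented for illustration; the point is that steps are chosen from context at runtime, not fetched from a library.

```python
# Illustrative runtime playbook: the same alert type yields different
# steps depending on context. Step names are hypothetical.

def build_playbook(alert: dict) -> list[str]:
    steps = ["collect email headers", "detonate attachment in sandbox"]
    if alert["target_role"] == "executive":
        # High-value target: assume active pursuit, widen the scope.
        steps += ["pull 90-day sign-in history",
                  "hunt for related lures across mailboxes",
                  "notify executive protection"]
    if alert["payload"] == "novel":
        steps.append("extract and pivot on infrastructure indicators")
    steps.append("contain: block sender and quarantine copies")
    return steps

cfo = build_playbook({"target_role": "executive", "payload": "novel"})
intern = build_playbook({"target_role": "staff", "payload": "known"})
print(len(cfo), len(intern))  # different depth for the same alert type
```

A static playbook is the `steps` list frozen at authoring time; the branches are what a runtime system generates per incident.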
The NIS2 Reporting Stack: 24 Hours, 72 Hours, One Month
NIS2 defines three reporting checkpoints against a single incident. The directive expects one investigation that matures across the month and yields three filings drawn from the same evidence base. An autonomous SOC treats the reporting stack as one live case with three dispatch moments.
| Window | What NIS2 requires | What Morpheus supplies from the live case |
|---|---|---|
| 24-hour early warning | Classification of the incident, suspected threat actor or cause, preliminary cross-border and essential-service impact assessment. | Fields populated from the L2 triage record and the Attack Path Discovery output. An analyst reviews and dispatches. The case stays live. |
| 72-hour incident notification | Indicators of compromise, initial containment status, revised impact assessment. | Morpheus appends new telemetry to the existing case. The 72-hour notification is a delta against the 24-hour warning, assembled from evidence captured during the interval. |
| One-month final report | Complete attack path, root cause, mitigation taken, lessons learned, residual risk. | Filtered view of the mature investigation record. Full evidence chain, tool calls, and approvals preserved for audit. |
One case, three filings
The advantage of treating the stack as a single case is auditability. A regulator reading the one-month report can trace every claim back to the evidence captured during the live investigation, with timestamps, tool calls, and approval chains preserved. Each dispatch moment draws from a record that was already there.
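The one-case, three-filings model can be sketched as a single record from which each filing is a filtered view. Field names and values below are invented for the sketch, not a real NIS2 schema.

```python
# Sketch of "one case, three filings": one live case record, each NIS2
# filing drawn from it as a filtered view. Fields are illustrative.

CASE = {
    "classification": "ransomware",
    "suspected_cause": "phishing-delivered loader",
    "impact": "cross-border, essential service degraded",
    "iocs": ["198.51.100.7", "evil.example"],
    "containment": "affected hosts isolated",
    "attack_path": ["initial access", "lateral movement", "encryption"],
    "root_cause": "unpatched VPN gateway",
    "lessons": "enforce MFA on all remote access",
}

FILING_FIELDS = {
    "early_warning_24h": ["classification", "suspected_cause", "impact"],
    "notification_72h":  ["classification", "iocs", "containment", "impact"],
    "final_report_1m":   ["attack_path", "root_cause", "containment", "lessons"],
}

def assemble(filing: str) -> dict:
    """Draw a filing from the live case: nothing is rewritten, only
    filtered, so every claim traces back to the same evidence base."""
    return {field: CASE[field] for field in FILING_FIELDS[filing]}

print(assemble("early_warning_24h"))
```

As the investigation matures, new evidence lands in `CASE` and the later filings pick it up automatically; the 72-hour notification becomes a delta, not a rewrite.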
What a regulator will ask
At each checkpoint the question is the same: show the evidence behind the claim. An autonomous SOC answers with the case record already in hand. A manual SOC asks the analysts to reconstruct what they can remember. The reporting stack rewards the first posture and exposes the second.
One Architecture, Many Mandates
NIS2 is the most visible short-window reporting regime. Organizations subject to multiple regulators discover quickly that the same evidence base and architecture serve all of them. Investing in an autonomous SOC to satisfy NIS2 pays down compliance surface across the portfolio.
| Regulation | Reporting clock | What the architecture provides |
|---|---|---|
| NIS2 (EU) | 24h / 72h / 1 month | Early warning, notification, final report assembled from case evidence |
| DORA (EU finance) | 4h after classification (24h outer bound from detection) | Article 18 materiality classification at ingestion, multi-authority dispatch |
| SEC cyber rule (US) | 4 business days from materiality determination | Material-incident determination, 8-K filing evidence pack |
| NIST CSF 2.0 | Per-sector | Investigation depth and documentation suitable for audit |
| HIPAA breach rule | 60 calendar days from discovery | PHI impact scoping, affected-individual enumeration, notification evidence |
| PCI DSS 4.0 | Varies by card brand and acquirer | Cardholder data flow reconstruction, compensating control evidence |
DORA sets the precedent
DORA Article 19 requires an initial notification within 4 hours of an incident being classified as major, inside an outer 24-hour window from detection. The autonomous SOC architecture meets both DORA’s classification clock and NIS2’s 24-hour awareness clock with the same workflow: automated classification at alert ingestion, assembly from structured case data, and multi-authority dispatch in a single step.
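The overlapping clocks are easiest to see as a worked example. The detection and classification times below are illustrative; the deadline rules follow the text above (DORA: 4 hours from classification within an outer 24-hour bound from detection; NIS2: 24 and 72 hours from awareness).

```python
# Worked example of the overlapping DORA and NIS2 clocks.
# Timestamps are illustrative.
from datetime import datetime, timedelta

detected = datetime(2026, 3, 1, 9, 0)       # detection / NIS2 awareness
classified = detected + timedelta(hours=3)  # DORA "major" classification

deadlines = {
    # 4h after classification, but never past 24h from detection:
    "DORA initial notification": min(classified + timedelta(hours=4),
                                     detected + timedelta(hours=24)),
    "NIS2 early warning": detected + timedelta(hours=24),
    "NIS2 incident notification": detected + timedelta(hours=72),
}

for name, due in sorted(deadlines.items(), key=lambda kv: kv[1]):
    print(f"{name}: due {due:%Y-%m-%d %H:%M}")
```

With classification three hours after detection, the DORA filing is due first, which is why automated classification at alert ingestion matters: every hour classification slips, the 4-hour window slips with it until the 24-hour bound cuts it off.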
The surface-area argument
Organizations that respond to each regulation with a separate workflow end up with parallel, overlapping processes. Each one carries its own maintenance burden, its own analyst time, and its own audit finding risk. An autonomous SOC architecture collapses the workflows into one. The investigation is the shared input. The report is the filtered output.
Defending Management-Body Liability
Article 20 is the clause that moves NIS2 from IT risk to personal risk. Management bodies of essential and important entities are required to approve cybersecurity risk-management measures, oversee their implementation, and undergo regular training. Member states must provide for sanctions, including temporary management bans, where oversight failure contributes to a material incident.
What a defensible posture looks like
A board or executive whose organization suffers a reportable incident will be asked three questions. What measures did you approve. How did you verify they were operating. What evidence do you have that the response met the regulatory standard. An autonomous SOC answers each one with a document, not an assertion.
Approved measures
Policy commitments, risk register, SOC capability baselines, all tied to the tool set that enforces them. Morpheus generates the inventory automatically from connected integrations.
Operating evidence
Daily investigation counts, L2 depth coverage, MTTR, integration health. Reports flow to a board dashboard from the same case data that feeds regulatory filings.
Incident-specific trail
For any reportable incident, the reasoning chain behind every autonomous decision, the human approvals applied, and the final report are preserved. The audit trail is the defense.
Training attestation
Article 20 training records captured with content, completion, and refresh cadence. The same platform that runs the SOC holds the governance evidence for management.
The board conversation
Boards do not want to manage cyber incidents. They want to know that the organization can. An autonomous SOC changes the conversation from capability claims to delivered evidence. What is our L2 triage coverage this quarter. What was our mean response time on critical incidents. What would a 24-hour NIS2 report look like, if we had to file one today. Each question has a numeric answer generated from the platform.
Why Now: The Enforcement Window Is Open
NIS2 enforcement is active in 2026. Transposition is closing, supervision is shifting from registration to inspection, and the BSI and other competent authorities have signaled the move from orientation to active enforcement. Organizations still treating NIS2 as a next-year decision are already in the exposure window.
Three clocks are running
- Regulatory. The Commission has issued reasoned opinions to member states over incomplete transposition. Member states are now enforcing, with investigations underway across sectors.
- Commercial. Article 21(2)(d) pushes supply-chain security into the essential entity’s duty, enforced by contract rather than by regulator. Banks, energy operators, and public administrations are issuing supplier RFIs on cybersecurity posture through 2026. Suppliers without evidenced investigation capability are being dropped from vendor lists before any fine is issued.
- Retrospective. Article 21 governs preparedness. When a reportable incident occurs, the regulator audits the architecture that existed before it. A SOC upgraded after a breach does not satisfy the duty that existed before the breach. Every month of delay is a month of retroactive exposure that surfaces at the first reportable event. The benchmark is the ENISA Technical Implementation Guidance (June 2025), which competent authorities now reference to define “appropriate and proportionate.”
The board’s three questions this quarter
Under Article 20 oversight duties, a management body has three questions to answer before the next reportable incident:
- Does our SOC produce an auditable investigation record for every significant alert, or only for incidents we identified ourselves?
- Can we demonstrate the effectiveness of our cybersecurity measures on demand, as Article 21(2)(f) requires, or only after a failure surfaces the gap?
- If a supplier relationship were audited tomorrow, would our contract evidence satisfy Article 21(2)(d), or would it expose a procurement gap?
An autonomous SOC answers each question with a document. A manual SOC answers with an assertion. The directive is increasingly uninterested in the difference.
Buyer’s Checklist: What to Evaluate
Use this checklist when assessing whether a platform is NIS2-capable. The questions are written to separate autonomous SOC architectures from repackaged SOAR, from SIEM add-ons, and from classifier-only “AI SOC” products.
- Investigation depth: Does the platform produce L2+ investigation per alert, or does it return a score. Scores are not reports.
- Attack Path Discovery: Can it reconstruct attacker movement across EDR, SIEM, identity, email, network, and cloud in a single view, across 90 days of telemetry.
- Runtime playbooks: Are playbooks generated contextually per incident, or drawn from a static library. Static playbooks fail on novel threats.
- Reasoning transparency: Is every autonomous decision logged with tool calls, evidence reviewed, and confidence. Opaque classifiers do not survive audit.
- Deterministic ratio: How much of the stack is deterministic versus LLM. All-LLM SOCs are brittle under regulator scrutiny.
- Self-healing integrations: Does the platform detect API drift and correct it, or does it fail silently when vendors update schemas.
- Report assembly: Can NIS2 early warnings and 72-hour notifications be drafted automatically from case evidence. Manual authoring eats the clock.
- Multi-authority dispatch: Can the same case data feed multiple regulatory filings with different formats in a single step.
- Economics: Flat-rate pricing or per-alert billing. Per-alert billing penalizes the organization for investigating thoroughly.
- QA on AI decisions: Is there a continuous accuracy measurement program against analyst review and retrospective outcomes. Accuracy should be measured, not asserted.
What Morpheus AI covers
A product that passes fewer than eight of these criteria is an adjacent tool, not an autonomous SOC, and the gap surfaces during the first reportable incident. Morpheus AI meets every item in the checklist. Attack Path Discovery, runtime playbook generation, the purpose-built Cybersecurity Triage LLM, 800+ self-healing integrations, and flat-rate economics form a single platform. Customers report 99 percent alert reduction and 80 percent MTTR improvement, with cost per AI-triaged alert at $0.27 against $25 to $45 for manual triage at the same depth.

