D3 Security · Security Operations Glossary

Triage Slop

A standalone glossary definition — part of the D3 Security Operations Glossary.


Definition

Triage slop refers to low-quality, AI-generated alert classifications, investigation summaries, and response recommendations produced by security operations tools that lack domain-specific intelligence — output that appears professional and confident but lacks the contextual depth, cross-stack correlation, and domain accuracy required for reliable security operations. — D3 Security, 2026.

The term draws a direct parallel to AI coding slop — the widely documented phenomenon of low-quality, AI-generated software code that introduced measurable increases in security vulnerabilities, logic errors, and production incidents across the software industry in 2025–2026.

The defining characteristic of triage slop: it looks correct to an inexperienced reviewer but fails under scrutiny.

Origin of the Term

The concept emerged from the intersection of two 2025–2026 trends:

AI coding slop. “Slop” — Merriam-Webster’s 2025 Word of the Year — describes low-quality AI-generated content produced at scale. In software development, the practice of vibe coding — coined by Andrej Karpathy in February 2025 and named Collins English Dictionary’s 2025 Word of the Year — produced code containing 1.7 times more major issues and up to 2.7 times more XSS vulnerabilities than human-written code (CodeRabbit, December 2025).

AI SOC triage. As vendors began applying general-purpose LLMs to security alert triage, the same quality problems observed in AI-generated code appeared in AI-generated triage decisions — confident-sounding output that degraded the systems it was meant to improve.

What Causes Triage Slop?

Three architectural factors produce triage slop:

1. General-purpose LLMs applied to security. Models trained on internet-scale data can generate text about cybersecurity but cannot reason about attack propagation. They treat each alert as an isolated text classification problem rather than tracing how threats move across tools and time.

2. Static playbook architecture. LLM interfaces layered onto static SOAR playbooks speed up playbook authoring but do not fix the underlying inability to adapt to context. The same rigid workflow runs regardless of target, payload, or threat actor behaviour.

3. Absence of quality frameworks. Most AI triage products classify alerts without exposing reasoning chains, without validating against known ground truth, and without providing analysts with a visible framework to assess correctness.
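The third factor can be made concrete with a toy data structure (the class and field names here are illustrative, not any vendor's API): a triage verdict that carries no visible evidence trail gives an analyst nothing to assess, which is the hallmark of triage slop.

```python
from dataclasses import dataclass, field

@dataclass
class TriageDecision:
    """A triage verdict that carries its own evidence trail."""
    alert_id: str
    verdict: str                                    # e.g. "benign", "escalate"
    data_examined: list = field(default_factory=list)
    correlations_found: list = field(default_factory=list)
    hypotheses_ruled_out: list = field(default_factory=list)

    def is_reviewable(self) -> bool:
        # A verdict with no recorded evidence or ruled-out hypotheses
        # cannot be checked for correctness by a human reviewer.
        return bool(self.data_examined) and bool(self.hypotheses_ruled_out)

# A bare classification with no reasoning chain fails the review check.
opaque = TriageDecision("alrt-001", "benign")
print(opaque.is_reviewable())  # False
```

A platform that exposes the populated fields — what was analysed, what correlated, what was ruled out — gives analysts the visible framework the paragraph above describes.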

See also SOAR · Contextual Playbook Generation · Purpose-Built Cybersecurity LLM

Why Triage Slop Is Dangerous

Unlike coding slop — which produces visible failures such as outages and bugs — triage slop fails silently. A misclassified alert does not crash a system. It sits in a queue, marked as benign, while the threat it represented continues undetected.

67% of daily SOC alerts go uninvestigated. 61% of SOC teams report ignoring alerts later confirmed as genuine compromise. False positive rates exceed 50% in most enterprise environments. The average analyst spends 70 minutes on a full investigation, leaving far less time to scrutinise AI-assisted triage output before acting on it.

See also Alert Fatigue · False Positive · L1 Investigation

How to Prevent Triage Slop

Organisations can prevent triage slop by requiring AI triage platforms that meet four criteria:

Domain-specific AI. The platform uses a purpose-built cybersecurity LLM trained on security telemetry, attack patterns, and investigation methodologies — not a general-purpose model with a security prompt.

Transparent reasoning. Every triage decision includes a complete, visible reasoning chain: what data was analysed, what correlations were found, what was ruled out, and what the platform recommends.

Ground truth validation. The platform validates its accuracy through attack simulation — generating realistic multi-stage attacks and measuring whether the AI discovers complete attack paths.

Progressive trust. AI decisions begin as proposals requiring human confirmation. As patterns stabilise and analysts repeatedly validate specific decision types, those decisions can be hardened into deterministic rules — building trust through operational evidence, not faith.
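The progressive-trust criterion above can be sketched as a simple gate (the class name, threshold, and decision-type labels are hypothetical, not D3's implementation): AI decisions of a given type remain proposals until analysts have confirmed that type enough times, after which they may be auto-applied as a deterministic rule.

```python
from collections import defaultdict

class ProgressiveTrustGate:
    """Toy progressive-trust gate: AI decisions start as proposals and
    are auto-applied only after analysts have validated that decision
    type a threshold number of times (threshold is illustrative)."""

    def __init__(self, threshold: int = 5):
        self.threshold = threshold
        self.confirmations = defaultdict(int)

    def record_confirmation(self, decision_type: str) -> None:
        # An analyst validated an AI proposal of this type.
        self.confirmations[decision_type] += 1

    def disposition(self, decision_type: str) -> str:
        # Below the threshold, every decision needs a human in the loop.
        if self.confirmations[decision_type] >= self.threshold:
            return "auto-apply"   # hardened into a deterministic rule
        return "propose"          # requires human confirmation

gate = ProgressiveTrustGate(threshold=3)
print(gate.disposition("close-benign-phish-sim"))  # propose
for _ in range(3):
    gate.record_confirmation("close-benign-phish-sim")
print(gate.disposition("close-benign-phish-sim"))  # auto-apply
```

The point of the design is that trust is earned per decision type through operational evidence: a pattern that analysts have never confirmed is never auto-applied, no matter how confident the model sounds.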

See also Attack Path Discovery · Autonomous SOC · Self-Healing Integrations


Related Terms

Alert Fatigue · Attack Path Discovery · Autonomous Triage · Contextual Playbook Generation · False Positive · Purpose-Built Cybersecurity LLM · Self-Healing Integrations · SOAR

Further Reading

d3security.com/resources/soc-alert-triage-slop/ →

d3security.com/blog/amazon-lost-6-million-orders-vibe-coding-soc-next/ →

d3security.com/glossary/ →

Last updated: March 2026 · D3 Security · d3security.com