What “Good Coverage” Actually Means

(and why you probably don't have it yet)
If you’ve invested significantly in cybersecurity, you most likely believe you have “good coverage.”

But if you can’t show a map proving, behaviour by behaviour, what’s covered, what’s partial, and what’s missing - with dates and artefacts - you don’t have good coverage; you have guesswork.

True “good coverage” is an evidence-backed, threat-led map that ties adversary tactics, techniques, and procedures (TTPs) to concrete controls, with required telemetry specified, exact detection logic referenced, owners named, validation cadence recorded, and measurable outcomes tracked (time-to-detect, false-positive rate, time-to-contain).

As a starter, try this:

1. Start with two axes

On one axis sits your Threat Profile: the tactics, techniques, and specific procedures most relevant to your sector, attack surface, and operating model.

On the other axis sits your Defensive Stack: not marketing categories, but the concrete controls, content, and telemetry you actually run - EDR rules, SIEM analytics, identity protections, email controls, network sensors, SaaS audit logs, data egress guards, and the humans and processes around them.

2. Cross the two, and you have a Coverage Map

Every cell answers a practical question: do we reliably detect or prevent this behaviour, under realistic conditions, with evidence?

  • If the answer is “yes”: what artefacts prove it? Content IDs, playbook links, last validation date, dependencies, and owners.
  • If “partial”: what fails in the chain? Telemetry gaps, weak logic, noisy thresholds, missing enrichment, or response friction.
  • If “no”: is it a conscious risk, a backlog item, or something you didn’t know about until now?

Why most teams don’t have this is not a mystery:
  • Tool portfolios grew faster than detection engineering capacity, so effort spreads thinly and lands on the easy things: rule toggles, default packs, and indicator ingestion.
  • Telemetry is patchy with high-fidelity endpoint data here, blind SaaS or identity signals there, and normalisation isn’t consistent enough to write threat-led content once and reuse it confidently.
  • Ownership is ambiguous; SIEM, EDR, identity, and data teams each assume someone else has the gap.
  • Finally, validation is episodic. Red-team weeks are valuable, but if your content isn’t re-tested after a change, drift will quietly rot coverage until the next big exercise re-discovers it.
Make coverage concrete by writing it down at the procedural level

Take a technique like credential dumping and list the procedure variants you care about in your context - direct LSASS access, minidumps, handle duplication, and memory scraping via legitimate tooling.

For each, specify the telemetry you require (process lineage, handle access events, command-line, image loads), the detection logic you rely on (exact rules or analytics with their IDs), the suppression or tuning needed to keep noise acceptable, and the response path if it fires (investigation steps, isolation, identity resets). The net result is automated threat detection and fewer manual hours.

Record where the logic lives (EDR, SIEM, detection-as-code repo), when it was last validated, and by whom. Do the same for persistence via scheduled tasks, discovery via PowerShell, lateral movement via SMB, and data staging to cloud storage.

You don’t need the whole matrix perfect on day one, but you do need the most relevant rows written with this level of precision.
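Written down, one procedural row might look like the following sketch. Every ID, rule name, and path here is a hypothetical placeholder for your own content, not a reference to real rules:

```python
# A hypothetical procedural row for credential dumping via direct LSASS access.
# All rule IDs, playbook names, and dates are illustrative assumptions.
lsass_direct_access = {
    "technique": "T1003.001",                 # OS Credential Dumping: LSASS Memory
    "procedure": "direct LSASS handle access",
    "telemetry": ["process lineage", "handle access events",
                  "command line", "image loads"],
    "detections": ["EDR-LSASS-001", "SIEM-CRED-017"],  # exact rule/analytic IDs
    "tuning": "suppress known AV and backup processes opening LSASS",
    "response": ["triage playbook PB-12", "host isolation", "identity reset"],
    "lives_in": "detection-as-code repo",
    "last_validated": "2024-05-02",
    "validated_by": "purple-team",
}
print(len(lsass_direct_access["detections"]))  # 2
```

Repeating this shape for minidumps, handle duplication, and memory scraping gives you the other variants of the same row, each independently testable.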
A living Coverage Map then drives decisions at speed

When a fresh report drops about a campaign using a new flavour of signed binary proxy execution, you don’t scramble for generic rules; you check the row for that group of behaviours and see exactly what you have. If the answer is partial, the map tells you whether the blocker is missing telemetry on specific hosts, a blind identity signal for certain privileged groups, or a rule that was tuned away last quarter because it was too noisy.

That precision is how you convert an intelligence input into an engineering output within the sprint, instead of letting it drift into backlog fog.

Coverage also gives you rationalisation without ideology. Instead of arguing which platform is best in the abstract, you can show which controls currently defend against which threats. Overlap might be healthy (defence in depth) or wasteful (two tools covering the same procedure with similar logic and cost). Under-coverage is usually the painful surprise: teams discover that entire classes of threats - initial access via OAuth consent, for example - are functionally invisible because the telemetry is missing or not centralised in a way analytics can use.

The map shows the cheapest route to reduce risk

Add the missing signal, promote a successful hunt to a managed detection, or retire a redundant rule set and free up analyst time.

Metrics matter, but only the ones that anchor back to the techniques used by an adversary. Track validation cadence per behaviour (when did we last emulate it successfully?), time-to-detection by behaviour family, false-positive rate per analytic, and time-to-contain for the associated playbook.
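Once the map is structured, these metrics roll up mechanically. A minimal sketch; the field names and figures are assumptions for illustration, not a standard schema:

```python
from datetime import date

# Hypothetical per-behaviour records; field names are assumptions.
rows = [
    {"behaviour": "credential dumping", "last_validated": date(2024, 4, 1),
     "alerts": 120, "false_positives": 18, "ttd_minutes": 12},
    {"behaviour": "lateral movement via SMB", "last_validated": date(2023, 11, 5),
     "alerts": 40, "false_positives": 30, "ttd_minutes": 95},
]

today = date(2024, 6, 1)
for r in rows:
    stale_days = (today - r["last_validated"]).days   # validation cadence
    fp_rate = r["false_positives"] / r["alerts"]      # false-positive rate per analytic
    print(f"{r['behaviour']}: {stale_days}d since validation, "
          f"FP rate {fp_rate:.0%}, TTD {r['ttd_minutes']}m")
```

The second row is the kind of thing the rollup surfaces: a behaviour last validated seven months ago with a 75% false-positive rate is a coverage claim in name only.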

Roll these up for leadership in plain language: which threats you are well defended against, where coverage is improving, and what residual risk remains. Boards and regulators understand risk framed around what attackers actually do far better than vanity scores.

Ready to take the next step?

If you want a head start, gather your top threats, your current controls, and one thorny use case; we’ll put you in touch with an expert who can walk you through a sample Coverage Map together and identify the three fastest ways to turn uncertainty into evidence.

Introducing Threat-Led Defense

For decades, cybersecurity efforts have primarily focused on identifying and patching vulnerabilities—flaws in the software we rely on that adversaries exploit to launch attacks.

While addressing critical vulnerabilities is essential, the relentless pace at which new ones emerge makes it nearly impossible for even the most well-resourced organizations to keep every system fully patched.

About this Sponsor

Built by the Team Behind ATT&CK®

Tidal Cyber is powered by the practitioners who helped make MITRE ATT&CK® the industry’s common language for adversary behaviour.

With deep roots in ATT&CK stewardship, evaluation programs, and hands-on threat-informed defense, their team has productised the approach they pioneered, making it practical, scalable, and ready for your day-to-day defense.


We're a community where IT security buyers can engage on their own terms.

We help you to better understand the security challenges associated with digital business and how to address them, so your company remains safe and secure.

Interested in what you see? Get in touch, and let’s start a conversation.