Red teaming is useful for plenty of other things, but it's the wrong protocol for answering this specific question about defense efficacy.
By their nature, red-team exercises test only a few specific variants of a few of the attack techniques an adversary could use. The root of the issue is that red teams don't test enough of the possible attack variants to judge the overall strength of defenses.
One technique I've examined had 39,000 variations.
The organization's security team chooses and deploys an intrusion prevention system (IPS), endpoint detection and response (EDR), user and entity behavior analytics (UEBA), or similar tools, and trusts that the selected vendor's software will detect the behaviors it says it will.
Testing Against Tens of Thousands of Variants

Although testing each variant of an attack technique isn't practical, I believe testing a representative sample of them is.
To do this, organizations can use approaches like atomic testing, supported by Red Canary's open source Atomic Red Team project, where techniques are tested individually using multiple test cases for each.
If a red-team exercise is like a football scrimmage, Atomic Testing is like practicing individual plays.
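To make that concrete, here is a minimal Python sketch of what an atomic-style test harness could look like. Everything in it is hypothetical: the AtomicTest fields, run_atomics, and the detection_check callback are stand-ins for however your detection stack exposes alerts, and the echo commands are harmless placeholders for real test telemetry.

```python
# Minimal sketch of an atomic-style harness (hypothetical, not Red
# Canary's actual tooling). Each test case is one concrete way of
# exercising a technique; detection_check stands in for whatever
# query your SIEM/EDR exposes.
import subprocess
from dataclasses import dataclass
from typing import Callable

@dataclass
class AtomicTest:
    technique_id: str   # MITRE ATT&CK ID, e.g. "T1003" (credential dumping)
    name: str           # human-readable label for this specific test case
    command: list[str]  # harmless stand-in command that emits the telemetry

def run_atomics(tests: list[AtomicTest],
                detection_check: Callable[[AtomicTest], bool]) -> dict[str, bool]:
    """Run each test case individually and record whether detection fired."""
    results = {}
    for test in tests:
        subprocess.run(test.command, check=False)   # practice one "play"
        results[test.name] = detection_check(test)  # did the control see it?
    return results

# Usage sketch: two variants of one technique, with a stubbed check.
tests = [
    AtomicTest("T1003", "credential-dump-variant-1", ["echo", "variant-1"]),
    AtomicTest("T1003", "credential-dump-variant-2", ["echo", "variant-2"]),
]
print(run_atomics(tests, detection_check=lambda t: False))  # stub: wire to your SIEM
```

The point of the structure is the loop: one technique, many individually runnable test cases, one pass/fail result per case.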
Next, they need a set of test cases that collectively covers the full range of possible variants for the technique in question. Building these test cases is a crucial task for defenders; their quality directly determines how well the testing assesses security controls. Like a good map, good test cases leave out unimportant details and highlight the essential ones, creating a lower-resolution but broadly accurate representation of the threat.
How to build these test cases is a problem I'm still wrestling with.
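As a starting point, here is a small Python sketch of the sampling idea: enumerate a toy variant space, then draw a sample from it rather than testing everything. The dimensions and their values are invented for illustration; a real variant model would come from analyzing the technique itself, and a stratified sample (say, one guaranteeing every procedure appears) may beat a purely random one.

```python
# Illustrative sketch: model a technique's variant space as a cross
# product of dimensions, then sample it. The dimensions below are
# made up; a real model would come from studying the technique.
import itertools
import random

procedures   = ["mimikatz", "dumpert", "comsvcs", "procdump"]
obfuscations = ["none", "base64", "string-split", "reflective-load"]
launch_paths = ["cmd", "powershell", "wmi", "scheduled-task", "service"]
parent_procs = ["explorer.exe", "winword.exe", "w3wp.exe"]

# Full cross product: 4 * 4 * 5 * 3 = 240 variants here; real techniques
# can reach tens of thousands once more dimensions are added.
variants = list(itertools.product(procedures, obfuscations,
                                  launch_paths, parent_procs))

random.seed(7)                          # reproducible sample
sample = random.sample(variants, k=25)  # a representative slice to test
for v in sample:
    print(v)
```

Even this toy model shows why exhaustive testing breaks down quickly, and why a well-drawn sample is the practical alternative.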
Another proposed solution to the shortcomings of current threat detection is purple teaming: getting red and blue teams to work together instead of treating each other as opponents.
More cooperation between red and blue teams is a good thing, hence the rise of purple-team services.
Even with more cooperation, assessments that look at only a few attack techniques and variants are still too limited.
Building Better Test Cases

Part of the challenge of building good test cases is that the way we categorize attacks obscures a lot of detail.
Cybersecurity looks at attacks through a three-layered lens: tactics, techniques, and procedures.
A technique like credential dumping can be accomplished by many different procedures, like Mimikatz or Dumpert, and each procedure can have many different sequences of function calls.
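A toy Python model makes that fan-out visible. The structure below is illustrative only: the call sequences are simplified stand-ins, not accurate traces of Mimikatz or Dumpert, but they show how many distinct behaviors can hide under a single technique label.

```python
# Toy model of the tactic -> technique -> procedure -> call-sequence
# hierarchy. Call sequences are simplified illustrations, not real
# attack code.
ttp = {
    "tactic": "credential-access",
    "technique": "credential-dumping (T1003)",
    "procedures": {
        "mimikatz": [
            ["OpenProcess", "ReadProcessMemory"],
            ["OpenProcess", "MiniDumpWriteDump"],
        ],
        "dumpert": [
            ["NtOpenProcess", "NtReadVirtualMemory"],  # direct syscalls
        ],
    },
}

# A detection written against one procedure's call sequence can miss
# every other path to the same technique.
total = sum(len(seqs) for seqs in ttp["procedures"].values())
print(f'{ttp["technique"]}: {len(ttp["procedures"])} procedures, '
      f'{total} call sequences')
```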
If you're looking to put your threat detection to the test, look for ways to build representative samples that cover a wider swath of possibilities; that strategy will produce more meaningful improvements. It will also help defenders finally answer the question that red-team exercises struggle with: how strong are the defenses overall?