I’ve extolled the virtues of false positives before. Talking about the Boy Who Cried Wolf, I’ve pointed out that the villagers who chose to ignore his false alarms rather than correct his behavior or replace him were taking an unnecessary risk. The story and a pack of wolves bear me out on this.
I still hear professionals, not just salespeople in this case, talk about how few false positives they get with such-and-such a tool as if it’s a good thing.
Once more with [my] feeling: precision and accuracy are manufacturing concepts. If I am producing 1,000 widgets a day, I almost always want them to be identical within “six sigma” if that level of precision is called for. If I’m writing code to automatically process insurance claims, the more accurately the code identifies which claims to pay and which to reject, the better. The more widgets that end up as spoilage, or claims that end up as out-sorts, the more cost I’ve incurred (and cost is bad).
I GET IT. BEING CERTAIN IS REALLY, REALLY GOOD. Makes you feel secure.
But what about that scene in at least half the action movies ever made? The hero is walking…
- …through the woods…
- …down a long hallway with lots of turns…
- …in a parking lot…
…and every time they…
- …reach a turn…
- …hear a branch crack…
- …approach a pillar…
…then they quickly snap their gun toward the source of possible danger. No one, and I mean no one, ever says “the hero in the last movie pointed their gun at 15% fewer animals cracking tree branches that were not really bad guys. That makes them a better hero.”
So, my not-too-subtle point is: accuracy and precision ARE NOT security concepts when it comes to detective controls. If you are in the business of data protection and you are proud that you have tuned your detection systems to eliminate 100% of false positives, then you probably erred on the side of tuning out real threats too. In fact, the wolves are counting on the fact that you’d rather turn down the alarms and get some sleep.
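To make that tradeoff concrete, here is a minimal sketch of an “alert if traffic exceeds a threshold” rule. The events, traffic volumes, and thresholds are all invented for illustration, not taken from any real tool; the point is only that pushing the threshold high enough to silence every false positive also silences a real attack.

```python
# Toy illustration: tuning an alert threshold to eliminate false positives
# also tunes out real attacks. All numbers here are invented.

# Daily outbound traffic volumes (MB) for a host: mostly benign noise,
# with a couple of genuine exfiltration events mixed in.
events = [
    {"mb": 120, "malicious": False},
    {"mb": 310, "malicious": False},   # noisy backup job
    {"mb": 450, "malicious": True},    # low-and-slow exfiltration
    {"mb": 200, "malicious": False},
    {"mb": 900, "malicious": True},    # bulk exfiltration
    {"mb": 520, "malicious": False},   # one-off patch download
]

def evaluate(threshold_mb):
    """Count hits and misses for a simple 'alert if traffic > threshold' rule."""
    alerts = [e for e in events if e["mb"] > threshold_mb]
    true_pos = sum(e["malicious"] for e in alerts)
    false_pos = len(alerts) - true_pos
    missed = sum(e["malicious"] for e in events) - true_pos
    return true_pos, false_pos, missed

for threshold in (250, 600):
    tp, fp, missed = evaluate(threshold)
    print(f"threshold={threshold} MB: {tp} real attacks caught, "
          f"{fp} false positives, {missed} attacks missed")

# threshold=250 MB: 2 real attacks caught, 2 false positives, 0 attacks missed
# threshold=600 MB: 1 real attack caught,  0 false positives, 1 attack missed
```

The quieter configuration looks better on a dashboard and worse in an incident report.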
There’s another part to this line of reasoning about false positives. We should, in fact, expect our detection technologies to yield false positives if we are truly trying to detect the majority of threats. Beyond that, we also need to reject the idea that an explained false positive is OK to live with.
For example, let’s say that you have an orphaned process on a Linux server that kicks off every day at the same time and causes a spike in internal network traffic. There’s no business reason for it, but you’ve also ruled out that it is being caused by any malware. You’re done, right?
No. Tempting as it is to “not bother” the Ops folks with a request to clean up something you have determined is not itself an exploit, you must make that request. The more of that kind of “noise” you allow to live inside your detection environment, the more likely it is that a real exploit will sneak in undetected.
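Here is a toy sketch of how that happens. The host names, times, and the suppression rule are hypothetical; the pattern is the standing “known noise” exception that analysts add once the recurring spike has been explained, which then covers anything else that lands in the same window.

```python
# Toy illustration: a standing "known noise" suppression rule for an explained
# false positive also hides a real attack that arrives in the same window.
# Host names, times, and the rule itself are hypothetical.

from datetime import time

# The orphaned job on 'app-01' spikes traffic around 02:00 every day,
# so a suppression rule is added instead of cleaning up the process.
SUPPRESSIONS = [
    {"host": "app-01", "window": (time(1, 55), time(2, 10)),
     "reason": "known orphaned job"},
]

def is_suppressed(alert):
    """Return True if an alert matches an accepted 'known noise' rule."""
    for rule in SUPPRESSIONS:
        start, end = rule["window"]
        if alert["host"] == rule["host"] and start <= alert["time"] <= end:
            return True
    return False

alerts = [
    {"host": "app-01", "time": time(2, 0),  "desc": "traffic spike (orphaned job)"},
    {"host": "app-01", "time": time(2, 3),  "desc": "traffic spike (attacker staging data)"},
    {"host": "db-02",  "time": time(14, 12), "desc": "unusual outbound connection"},
]

for a in alerts:
    status = "SUPPRESSED" if is_suppressed(a) else "ESCALATED"
    print(f"{a['host']} {a['time']} {a['desc']}: {status}")

# The attacker's 02:03 staging activity on app-01 is suppressed right along
# with the orphaned job. Removing the noisy process removes that cover.
```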
To sum up: first, detection tools that yield too few or no false positives are not sensitive enough. Second, ongoing false positives that are accepted as part of detection reporting can easily mask actual attacks.
That’s what I’m certain about.