This will start with the first law of thermodynamics and end up with change management. All the while, we will keep information security in focus.
So, simply put, the first law of thermodynamics says that the amount of energy in a closed system cannot be increased or decreased. If we substitute “effectiveness of security controls” for “amount of energy,” we get the statement that the effectiveness of security controls in a closed system cannot be increased or decreased. Is that useful? I think so. Because it is exactly when a user opens an e-mail, i.e., lets something new into the environment, that the effectiveness of security controls can decrease. In other words, there are so few systems that are even remotely “closed” at this point that we may need to re-think how we think about controls.
I imagine that in the world of the military, there are those who think of defensive systems in a way that is similar to this approach. But traditionally, in information security, we talk about perimeters (or the lack of them) and mostly think of controls as barriers and alarms (the “block, stop, scream” model), not dynamic systems. To think seriously about information security controls, we need to think about systems that are dynamic and environments that cannot (ever) be completely controlled.
There are lots of metaphors for security/vulnerabilities out there that refer to the physical world and have been around awhile. They contribute to this idea of a closed system:
1. A chain (only as strong as its weakest link)
2. A wall (that can have cracks or gaps in it)
3. A tunnel (vulnerable on either end)
4. A lock (that can be cracked)
5. A key (that can be stolen or guessed—i.e., picked)
6. A vault (that can be broken into)
7. A door (that can be broken down)1
All useful but insufficient ideas. Ask a software vendor for a drawing of how their product maintains security. My unscientific survey says you have a 40% chance of being presented with a diagram that employs a picture of at least one of these physical objects (locks and brick walls are the most common clip art used in these).
If you’ve been following this blog, you know I try to point out where the information security professional runs the risk of being too satisfied.
The only way to answer the question “have I done enough?” with a “yes” is if you are focused only on job security, not information security. And even then you might be wrong.
We need to accept that the physical metaphors for security are increasingly hard to maintain. Even the controls that are truly physical are, as we know, no longer airtight for most organizations. People work outside the confines of your organization’s four walls. And Cloud-based service providers are not always giving you dedicated physical hardware, even if you manage to get your own “logical partition” (referring to a concept, not a brand).
Which leads inescapably to Change Management. I have encountered various attitudes about Change Management in talking to my peers and colleagues. Attitudes/policies run from “change management is an Operations control—Security should have a limited role in it” to “nothing gets into Production without Security’s approval”. (More fundamentally, I am taking for granted that Security has a role in Change Management; if you are in charge of Information Security for your organization and you are not at least monitoring Change Management, well, we need to talk.)
Regardless of the role of the Security department in your Change Management process, here’s the question I would pose: does Security look at potential changes as closed systems and evaluate the effectiveness of the controls within that system or does Security consider how that change might introduce new risk into other parts of the environment as well?
This article, of course, argues for the latter, heavily involved approach. Going back to the rephrasing of the first law of thermodynamics, it is only if a system is truly closed that we can treat the effectiveness of its controls as static. And since it is rare for a system to be truly closed, the scope of Security’s review of changes to production must be broad and must consider how overall control effectiveness might be impacted.
In short: Security needs to care about Change Management. A lot.
♦ (if you don’t want gory details, stop reading the article here.)
[GORY DETAILS: Think of the things NOT usually tested when an organization tests a change in a non-production environment. For example: how does vendor support work? What new trust relationships are established?

Consider also the way Change Management might be compartmentalized into categories that mask new risks. Take a new application that requires a firewall rule change (opening a new port). Is the firewall rule change considered separately in Change Management, as its own “ticket”? Is the rule change justified by the mere fact that the application needs it (and, after all, Security already reviewed the application and said it was secure)? Or is the rule change considered in terms of the application’s functionality? If we are talking about opening a port on the firewall, are we just taking the application representative’s (a vendor’s or developer’s) word for it that the traffic on the port will be limited to what they say, or do we monitor it for a while? What else might be going on on that port? What if the developer thought it would be good to use a port which, unbeknownst to them, is used by known exploits (445 comes to mind)? Shouldn’t that at least be noted, in case it impacts your ability to detect the exploit?]
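As a minimal sketch of the kind of check described above, the snippet below flags a proposed firewall port when it overlaps with ports commonly abused by known exploits, so the ticket gets extra Security scrutiny rather than rubber-stamp approval. The port list, ticket IDs, and function name are all illustrative assumptions, not taken from any real change-management tool.

```python
# Hypothetical helper: flag proposed firewall ports that overlap with
# ports commonly associated with known exploits. Port list is
# illustrative, not exhaustive.
RISKY_PORTS = {
    135: "MS-RPC",
    139: "NetBIOS session service",
    445: "SMB, a frequent exploit target",
    3389: "RDP",
}

def review_firewall_change(ticket_id: str, port: int) -> str:
    """Return a review note for a proposed firewall rule change."""
    if port in RISKY_PORTS:
        # Overlap with known exploit traffic: note it so detection
        # capability on that port is considered before approval.
        return (f"{ticket_id}: port {port} ({RISKY_PORTS[port]}) overlaps "
                "with known exploit traffic; monitor and document before approval")
    return f"{ticket_id}: port {port} has no known exploit overlap; standard review"

print(review_firewall_change("CHG-1234", 445))
print(review_firewall_change("CHG-1235", 8443))
```

The point is not the lookup itself but where it lives: if this check only runs on the application ticket and never on the separately filed firewall ticket, the compartmentalization described above has already masked the risk.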
1. The “door” as a metaphor for cybersecurity was used famously in the movie “Live Free or Die Hard”. The computers of the Cybersecurity Division of the FBI get compromised in some way, and when it is reported to the Deputy Director of Cybersecurity, he asks, “We were hacked?”. The answer is rather odd: “It wasn’t denial of service level, but they definitely cracked our door.” Considering that denial-of-service (DoS) attacks can be accomplished without ever gaining entry to the system under attack, comparing any kind of hack that does get into the system with a DoS attack is not exactly accurate. Yet, for an untrained audience, the image of the “door” being “cracked” is what matters. And it works pretty well.