Relying on people to be dumb is not an effective control. “Won’t” is not the same as “can’t” and never will be. Yet this reasoning is used all the time to justify controls and to assume that lost records are somehow “safe”.
Consider the “users are dumb” argument. It comes up when you find weak application security. Once an IT department is focused on getting an application to run, it is tempting to blur or ignore the distinction between what a user can’t do and what a user won’t do. Consider that the majority of the time spent in application product selection is *not* spent on implementation issues. The salespeople for the product will happily review a technical specification sheet with IT so that IT can declare whether the application can be installed successfully in their environment. But those specs are designed for just that purpose: defining the minimum acceptable environment for implementation. Those specs are the only place where a vendor will openly list the limitations of the product. Application security? At best, it’s an afterthought on the specification sheet. And if it’s mentioned at all, it will only mention strengths. (If anyone finds any material from a software vendor that brags “easy access to our fully unencrypted database”, please let me know.)
Here’s an example: a part of an application can only be run by individuals logged in with administrative privileges to the application’s database. The fastest way to get that application into production is either to restrict that process to your current database administrators or to grant more individuals administrator access.
I have had vendor implementation engineers tell me that elevating someone’s privileges was OK because “those users aren’t technical, so they won’t figure out how to do anything but what we show them”. In the example above, you are counting on users being “too stupid” to know how to execute SQL commands from, say, Excel. Even ignoring for the moment any exploit that steals that individual’s credentials, this is an awful way to mitigate risk: it hands elevated privileges to someone on the theory that they are “too stupid” to do any damage.
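To see just how little stands behind that “won’t”, here is a minimal sketch. The table name and data are hypothetical, and Python’s built-in sqlite3 module stands in for the ODBC connection a real user might open from Excel; the point is that the database only checks *credentials*, never which tool or which query the vendor intended.

```python
import sqlite3

# Stand-in for the application's back-end database (hypothetical schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, ssn TEXT)")
db.execute("INSERT INTO patients (ssn) VALUES ('123-45-6789')")

# The vendor's application only ever runs this one sanctioned query...
sanctioned = db.execute("SELECT COUNT(*) FROM patients").fetchone()[0]

# ...but nothing stops a user holding the same admin credentials from
# issuing any SQL at all from any client: a SELECT *, an UPDATE, or
# a DROP TABLE. The "won't" is the only thing in the way.
leaked = db.execute("SELECT ssn FROM patients").fetchall()
```

Swap the in-memory connection for the vendor database’s connection string and the same two lines of “arbitrary SQL” work from Excel, a script, or any query tool the user stumbles onto.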
Sometimes, system developers will build a system that assumes all users have local admin privileges on their workstations. That happens because the developer was either too lazy or not focused enough on security when writing the application. But when you challenge the maker of the software, they’ll mumble things about controlling the workstation with GPOs or, again, about how the user won’t be able to figure out how to mess things up.
There is, of course, a fatal flaw in the argument that someone should be trusted with authority on a system because they DON’T know how to use that authority. And then there is the ever-present danger that someone or something (malware) will compromise that user’s credentials. There’s some awfully “smart” malware out there.
There is another type of “can’t”-versus-“won’t” reasoning that appears after an organization loses customer data. In that argument, the organization makes the questionable claim that the only things protecting the lost data from exposure are the technical expertise, hardware, and software needed to read the media the records are stored on.
When TRICARE, a Federal Government agency that administers health care, lost data [possibly including Social Security Numbers and prescription drug information] on approximately 4.9 million patients, they explained why the affected individuals should not worry:
“The risk of harm to patients is judged to be low despite the data elements involved since retrieving the data on the tapes would require knowledge of and access to specific hardware and software and knowledge of the system and data structure,” TRICARE officials said. [http://www.govhealthit.com/news/lost-tricare-backup-tapes-could-expose-nearly-5-million-records ]
Security should never depend on the ignorance of users. Or thieves.
Thanks to Rodney Meryweather for reading this and exposing me to the phrase “Security through Obscurity” as a way of describing this flawed reasoning.