Why is human usable security so broken?
Why do we have so many basic problems in cyber defense? The reason, imho, is that vendors, and specifically developers, do a terrible job of designing human usable security solutions. This domain is dominated by the geeky coolness of the technology itself. Further, instead of fixing core problems, we insist we need more new geeky technology. To complicate matters, marketing departments differentiate themselves by how many new terms and concepts they can coin to describe the same things. Everyone spins the same story a different way to sound unique. This industry is drunk on new geeky technology, and the rest of us are sick of eating buzz-word salad.
Let’s look at a few key problems:
1) Why are passwords such a problem?
- Developers do not always implement password storage in a securely salted and hashed fashion
- Organizational password policy is terrible. Users are forced to change their passwords frequently, yet expected to type them manually hundreds of times a day, often on tiny touch screens. This pushes users toward passwords that are easy to remember and trivial to type.
- Password reset procedures are the weakest link in the entire system
- Developers and vendors are slow to adopt better solutions like FIDO
Solution: Regulators need to produce a document that outlines all of the requirements for using passwords and password resets. Organizations that continue to use passwords should then be legally accountable for implementing them according to that regulation. A better and more proven model is how people protect their homes and cars: they carry physical keys. Organizations need to move to the FIDO standard and get people to treat their FIDO keys just like their home and car keys.
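For the salted-storage point above, here is a minimal sketch of what "secure salted fashion" means in practice, using only Python's standard library (the iteration count is an illustrative choice, not a mandated value):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a unique random salt using PBKDF2-HMAC-SHA256."""
    salt = os.urandom(16)  # a fresh salt per user defeats precomputed rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)
```

The point is that the database stores only the salt and the digest, never the password itself, so a breached table cannot be reversed cheaply.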
2) Why is software so vulnerable?
- Organizations rush concepts to market with the idea that they will fix or patch them in the field
- Developers are often not well trained on how security researchers break things.
- Developers and QA engineers focus mainly on how to make something work, not how to make it break.
Solution: Developers need to be trained in basic red-team activities. Further, at some level, organizations need to be held legally liable for the quality of the software they produce. Regulation is probably a necessary evil to force positive change.
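The "make it work vs. make it break" gap can be illustrated with a hypothetical input validator. The happy-path test is what QA usually writes; the adversarial loop is what a red-teamer tries first (the `parse_port` function and its inputs are invented for illustration):

```python
def parse_port(value: str) -> int:
    """Parse a TCP port number, rejecting anything outside 1-65535."""
    if not value.isdigit():          # rejects "-1", "", "80.0", "80; rm -rf /"
        raise ValueError(f"not a number: {value!r}")
    port = int(value)
    if not 1 <= port <= 65535:       # rejects "0" and "65536"
        raise ValueError(f"out of range: {port}")
    return port

# Happy-path test: how to make it work.
assert parse_port("8080") == 8080

# Adversarial tests: how to make it break.
for bad in ["-1", "0", "65536", "80.0", "", "80; rm -rf /"]:
    try:
        parse_port(bad)
        raise AssertionError(f"accepted hostile input {bad!r}")
    except ValueError:
        pass  # good: the validator refused it
```

Training developers to write the second half of this file is exactly the red-team mindset the solution calls for.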
3) Why are users reluctant to install corporate security software?
- Users are worried about what the software is doing and what data it is reporting.
- Organizations want visibility into the devices on their network and want to remotely control those devices for the user, and this scares end users.
- Security software is viewed as making systems slow and sluggish, making day-to-day tasks harder or impossible, and often requiring multiple software solutions to do what the user believes is the same thing.
- Corporate security software is available only for company-owned devices, ignoring the reality that users probably do some amount of corporate work on many personal devices.
Solution: Users should have full visibility into what data is being sent back to the central servers and which policies are enacted on the system. There should be no hidden settings or changes. Imagine if a user could see and easily read all of the data being sent back. There should also be no difference between the consumer version and the corporate version of the product, and all updates and reports should come from or go to public cloud services, not to services behind corporate VPNs.
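The "no hidden telemetry" idea above can be sketched as a toy endpoint agent that records every payload in a plain, human-readable local log before anything leaves the machine (the class and field names are hypothetical, not any vendor's API):

```python
import json
import time

class TransparentAgent:
    """Toy security agent: every report is logged locally, in the clear, first."""

    def __init__(self):
        self.user_visible_log = []  # the user can open and read this at any time

    def report(self, event: dict) -> str:
        # Serialize exactly what would be uploaded, as readable JSON.
        payload = json.dumps({"ts": time.time(), **event}, sort_keys=True)
        # Record it BEFORE any upload, so the local log can never lag reality.
        self.user_visible_log.append(payload)
        # A real agent would now send `payload` to the public cloud service.
        return payload
```

Because the log is written before transmission and in the same format as the upload, the user's view and the server's view cannot silently diverge.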
4) Why are systems not human usable?
- Systems are usually designed by first working on the backend data and internal storage. Then APIs are written to access that data. At the very end of the process the UX is developed based on the data at hand.
- Developers will often respond to feature requests with "that is not how it works under the covers"
Solution: Organizations should design the UX first, then figure out what data is needed to drive that UX.
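UX-first design can be sketched in a few lines: define the data contract for the screen the human actually needs, then write an adapter from whatever the backend happens to store (the field names and the raw record shape here are invented for illustration):

```python
from dataclasses import dataclass

# Step 1: design the UX contract first -- what an analyst's alert card must show.
@dataclass
class AlertCard:
    title: str        # one-line, plain-English summary
    severity: str     # "low" | "medium" | "high"; drives the card's color
    next_action: str  # the single recommended step, not a raw log dump

# Step 2: only now adapt the backend's internal records to that contract.
def to_card(raw: dict) -> AlertCard:
    return AlertCard(
        title=raw.get("summary", "Unknown alert"),
        severity=raw.get("sev", "low"),
        next_action=raw.get("playbook_step", "Escalate to SOC"),
    )
```

The backend schema is now forced to serve the human's needs; "that is not how it works under the covers" stops being a valid objection, because the covers were designed second.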