Victorian-era fears about being buried alive led to the patenting of clever mechanisms to allow hapless individuals to escape from their own interment. Feel free to chuckle over that, but the next time you get into an argument over infosecurity risk, take a moment to stop and consider whether one of you is overly concerned about coffin escape hatches.
After political issues, risk perception issues represent the biggest challenge for the security professional. Accurately understanding risk and effectively communicating that understanding to others is core to any risk management role.
Conveying an accurate picture of risk requires more than building awareness of organizational priorities; it also involves teaching employees how to respond. Most importantly, it requires inculcating a culture in which people want to help. We humans actually have a marvelously effective mechanism for coping with familiar risks. Unfortunately, the natural confidence this engenders is counterproductive when we confront the unfamiliar risk situations of today's wired world. For example, when code is written without regard for security best practices, the developers have either assumed they already understand the risks adequately or overestimated their knowledge of the techniques for reducing them.
Humans' perception of risk severity is proportional to the degree to which they feel the risky conditions were imposed on them, and inversely proportional to their expectation of benefit from the risk-causing conditions. This explains why we place a relatively higher priority on preventing death by terrorism than on highway fatalities. Common understanding of risk and appropriate responses appear when everyone can take part in the relevant decisions. Committees may be tedious and inefficient, but the buy-in process is an effective way to build consensus.
Best practices are often lacking because their implementation would require discipline and inconvenience. Instead of actually managing these awkward risks, it's easier to target some sort of straw man, like screening for nail clippers and lighters on airplanes. It's a cheap way to create the perception that terrorism is being addressed. Likewise, security professionals often find it expedient to be more concerned with hackers than with the more insidious and awkward matter of internal threats.
The disconnect between desired security behavior and typical in-the-wild human behavior has little to do with the precise wording of policy documents; it has to do with the perception of what constitutes real risk and what needs to be done about it. Humans are naturally fascinated by disaster, which leads to inappropriate preoccupation with the unusual and dramatic. You are subject to this, and the people you deal with are often affected by it to a significant degree.
What it boils down to is that we are not wired to deal well with low-probability threats. Given that we work in a profession that exists to respond to such risks, the potential for dumb decisions is high. Not only must you avoid being blown about by the winds of risk perception, you must protect your user base from them, too. You are competing with both the media and the grapevine to help your users act safely online. You can expect only a finite amount of attention from any of the people you are trying to influence, so use it wisely. Choose your risk battles, and don't let your risk awareness capital be squandered on counterproductive risk myths, whether they originate in your department or outside it.