Governance and Policy in Practice


Part 4. Rethinking cybersecurity from the viewpoint of risk


There are two principles for planning good governance: automate away toil to ensure reliability and quality, but never automate decision-making. We always have to know, and take responsibility for, our basic intent.


Having security policies (indeed, policies in general) is bread-and-butter stuff for management. The confusion that afflicts many executives is that policy sounds like a responsibility only for suits. It’s not. Informed policy requires expertise from the very deepest levels of an organization. Intent isn’t something it makes sense to impose top-down onto an organization, at least not without aligning it with the detailed knowledge of the front-line troops. Policy is only useful if it reflects what each individual, from their own perspective, would promise to do of their own volition when the moment arrives. Everyone has to support policy on an intuitive level. Intent, after all, lies with trained individuals, from the bottom to the top of the organization. That cries out for coordination and alignment.


The role of policy is to capture an intuitive summary of all the distributed intent, in the form of automatable rules and procedures, to remind everyone of what they already know to be the right approach in the right context. If it’s too simplistic, it won’t capture nuance. If it’s too detailed, it’s not a helpful summary and details will fall through the cracks. Policy has to be distilled from the “bottom up”, by the supply chain of expertise:


DISCOVERY → EXPERIENCE → DETAIL → COORDINATION → POLICY
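The idea of policy as a distilled, automatable summary can be sketched in a few lines of code. The rule names, thresholds, and context fields below are invented for illustration; they don’t come from any real product or standard:

```python
# A hypothetical sketch: policy as a small set of automatable rules,
# each a checkable summary of what practitioners already agree is right.
# Rule names and thresholds here are illustrative assumptions.

RULES = {
    "patch_within_days": 30,   # patches applied within a month
    "mfa_required": True,      # multi-factor authentication everywhere
}

def evaluate(rules, context):
    """Return the names of rules the current context fails to satisfy."""
    failures = []
    if context["days_since_patch"] > rules["patch_within_days"]:
        failures.append("patch_within_days")
    if rules["mfa_required"] and not context["mfa_enabled"]:
        failures.append("mfa_required")
    return failures

# A host that has drifted on patching but satisfies the MFA rule:
print(evaluate(RULES, {"days_since_patch": 45, "mfa_enabled": True}))
```

Too few rules and the summary misses nuance; too many and it stops being a summary: the same trade-off described above.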


But what does bottom up mean? Instead of thinking “org-chart”, think “edge and centre”. The part of an organization that faces and interacts directly with the world is … all of it! Management interacts with a different world than customer-facing sales, but everyone is exposed to a version of the outside world. That makes everyone vulnerable, but it also means that everyone has a different practical experience of what leads to vulnerability. We need all of it.


Of course, it’s the “discovery” phase where Orchestra’s business model enters the fray. Scanning technology helps automate the collection of state from inside devices and datastores, which are difficult for humans to see or assess. Automation technology can even suggest remedial actions, drawing on common knowledge across industry norms, compliance advice, and vendor-specific knowledge. Indeed, this takes me back to my work with CFEngine and configuration automation: configuration changes should always be automated for reliability and accuracy, given the mind-numbing repetition needed to keep actual state the same as desired state. In a similar way, the tools for discovery of human and technology behaviour can keep track of changing patterns of circumstance, and point out where different configurations might be needed to cope with the mission.
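The convergence idea behind that kind of configuration automation can be sketched as a loop that compares actual state with desired state and reports (or applies) the repairs. The state keys and the simulated drift below are hypothetical, not taken from CFEngine itself:

```python
# A minimal sketch of desired-state convergence, the idea behind
# CFEngine-style configuration automation. State keys and the
# simulated "actual" state are illustrative assumptions.

DESIRED = {
    "ssh_root_login": "disabled",
    "disk_encryption": "enabled",
    "audit_logging": "enabled",
}

def observe_actual():
    # A real agent would scan the system; here we simulate drift.
    return {
        "ssh_root_login": "enabled",   # drifted from desired
        "disk_encryption": "enabled",  # compliant
        "audit_logging": "enabled",    # compliant
    }

def converge(desired, actual):
    """Return the repairs needed to restore desired state."""
    return {
        key: (actual.get(key), want)
        for key, want in desired.items()
        if actual.get(key) != want
    }

# Running this repeatedly is what keeps actual state pinned to desired
# state, however often circumstances cause it to drift.
for key, (have, want) in converge(DESIRED, observe_actual()).items():
    print(f"repair {key}: {have} -> {want}")
```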


The trivial part of security policy is whether or not to use these kinds of tools. Policy’s more important role is to automate the basic skeleton of both human and technological procedure for dealing with the expected. That’s the minimum. To be more fully prepared, it’s the unexpected we need to worry about. So part of policy is to expect the unexpected!


As Patrick Debois and I point out in our new book, Promising Digital Risk Management, the human aspect of policy is the glue between all the tooling that keeps the mission together. To keep the glue strong, and to avoid dissent and interference, policy’s role is to promote agreement, mutual understanding, and a sense of common purpose. Although it’s certainly not a bad idea, it’s not enough to install CFEngine or Harmony Purple: an organization needs to be aligned with its policy, person by person, from top to bottom, or bottom to top.


The policy funnel bow-tie, described in our book, is a straightforward way to plan and manage that organization-wide process. It balances advice from regulators, experts, and other outside sources, with their broadly accumulated knowledge, against everyone’s local mission objectives, informed by deep knowledge and in situ experience.

The continuous policy evaluation process isn’t rocket science (unless, of course, your mission actually is rocket science!). All it takes is a promise of goodwill and cooperation from all, and a willingness to manage by promise rather than by edict.
