We have always talked about the need for a proactive approach to security and its effectiveness and benefits in managing risk. For example, it is always more effective and economical to build secure software than to test and fix it after development or in production. In fact, we learned this in school: "Prevention is better than cure."

In risk management we come across controls which are:

Preventive: Controls which ensure that exposures don't or can't occur

Detective: Controls which help us capture exposures if they happen or are happening

Corrective: Controls which enable us to correct exposures

Nowadays there is a lot of focus on detective controls, which includes the deployment of technology solutions that detect and capture unwanted network activity, access attempts, patterns, and so on. Needless to say, these investments and this focus are fine, but we need to move our risk management posture more towards the preventive side. That is, we must do more to ensure we don't have weak areas which can be exploited.

I recently read the book "Superfreakonomics" by Steven Levitt and Stephen Dubner. I am usually apprehensive of sequels as they never match up to the original, but happily this book was a good read. I came across two examples in the book which illustrate the point about preventive controls.


After the 7/7 terrorist attacks in London, a team was formed to use statistical information to identify terrorists. The data points used to identify suspects were banking usage patterns such as:

- They make large deposits in cash and withdraw small amounts

- PO boxes are used as addresses, and these change often

- There are regular wire transfers to other countries, but always below the threshold that triggers bank reporting requirements

- They never use savings accounts or fixed deposits even though the account has idle money

- Transactions don't show normal living expenses and regular outflows such as insurance payments

As one can imagine, it would be difficult to come up with an algorithm that makes the system accurate. Let's say a system is developed with 99% accuracy and that there are 500 terrorists in the UK; 495 of them would be identified, which would be great. The problem is that with 50 million adults living in the UK, the system would also wrongly identify 1% of them, which is 500,000 people. This would be a huge problem to manage, and it is the same "false positive" issue we see in the information risk management world. Hence even the best detective control system or technology will always carry a false positive problem, which significantly reduces the benefits from the system.
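To see how quickly the false positives swamp the true positives, here is a minimal back-of-the-envelope sketch in Python using the same illustrative figures from the example (99% accuracy, 500 terrorists, 50 million adults); these are assumed numbers from the book's scenario, not data from any real screening system:

```python
# Base-rate arithmetic for the hypothetical screening system described above.
accuracy = 0.99          # assumed accuracy of the detection algorithm
terrorists = 500         # assumed number of actual terrorists in the UK
population = 50_000_000  # approximate number of adults in the UK

true_positives = accuracy * terrorists                          # ~495 correctly flagged
false_positives = (1 - accuracy) * (population - terrorists)    # ~500,000 innocents flagged

# Of everyone the system flags, what fraction is actually a terrorist?
precision = true_positives / (true_positives + false_positives)

print(f"Correctly identified: {true_positives:.0f}")
print(f"Wrongly flagged:      {false_positives:,.0f}")
print(f"Chance a flagged person is actually a terrorist: {precision:.2%}")
```

Running this shows that a flagged person has roughly a 0.1% chance of actually being a terrorist, which is why the detective control alone buys far less than its headline accuracy suggests.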

Another example is the detective control deployed at airports which requires us to remove our shoes at the security check / scan. This started after one Richard Reid tried to ignite a shoe bomb; fortunately he failed, but statistically he still succeeds in killing the equivalent of 14 lives a year in the US!

Let's say it takes on average one minute to remove and replace your shoes in the airport security line. In the US this happens about 560 million times a year, which adds up to roughly 1,065 years of time. Average US life expectancy is 77.8 years, which yields a total of about 14 person-lives a year.
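The arithmetic behind that figure is simple enough to verify; here is a short Python sketch using the same assumed numbers from the book (one minute per passenger, 560 million screenings a year, a 77.8-year life expectancy):

```python
# Back-of-the-envelope cost of the shoe-removal control, using the book's figures.
minutes_per_passenger = 1          # assumed time to remove and replace shoes
screenings_per_year = 560_000_000  # approximate US airport screenings per year
life_expectancy_years = 77.8       # average US life expectancy

minutes_per_year = 60 * 24 * 365   # minutes in a calendar year
years_spent = (minutes_per_passenger * screenings_per_year) / minutes_per_year
person_lives = years_spent / life_expectancy_years

print(f"Time spent on shoes: {years_spent:,.0f} years per year")
print(f"Equivalent to about {person_lives:.0f} person-lives per year")
```

The output comes to roughly 1,065 years, or about 14 person-lives, spent every year on this single detective control.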


The above examples may sound dramatic (statistics and economics can be used to communicate any message depending on which side you are on!). However, the underlying theme makes sense: we have to focus on a proactive approach to security to be more effective and economical in comparison to other approaches.