We all know that the traditional find-and-fix practice of vulnerability management has many challenges, and as a result it is not well equipped to manage exploits in today's information security landscape.

In this article I want to talk about the 4 things you should do to make it work better.




There is a lot of concern out there about zero-day vulnerabilities, but did you know that of all the vulnerabilities exploited in 2015, the majority (92.5%) had CVEs published before 2015, i.e. in 2014 or earlier? This means organizations had years to remediate or mitigate them, yet they failed to do so.

You can read more about this and other trends in vulnerability management here: Time to Re-think Vulnerability Management? These 5 Facts Say So...


Coming back to today's topic of how to make it work: based on my experience working on many VM programs across industries and geographies, I would recommend the following 4 changes.


1. Priority ruled by "real" severity

In the information security world, priority is ruled by the severity of a vulnerability. The typical approach is to fix high-severity vulnerabilities first, followed by medium and finally low. Severity is decided by following a standard such as the Common Vulnerability Scoring System (CVSS). A vulnerability is rated high if an exploit is available, if it severely impacts any element of the CIA triad (Confidentiality, Integrity, Availability), and if it affects a common service.

This approach misses scenarios where we can improve the security posture with minimum effort and then gradually improve it further by following a repeatable process. The priority of the same vulnerability can differ depending on how the asset is used: it would be lower for an internal server than for a server in the DMZ (De-Militarized Zone). A defacement vulnerability gets higher priority if it is found on a website that can be targeted by hacktivists, as is the case with government websites.

For example, if there are 3,000 vulnerabilities, and you can identify 500 of them that have a CVSS rating of 8.0 and affect DMZ servers, focusing on those 500 first makes more sense. Similarly, we do not have to fix a CSRF vulnerability immediately if all major transactions already pass through two pairs of eyes via a maker-checker approach, which significantly reduces the probability of a successful attack.

Therefore, categorizing vulnerabilities and deciding their priority through risk-based analysis can be much more effective.
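The idea above can be sketched in a few lines of code. This is a minimal, hypothetical example: the zone weights and the mitigation discount (halving the score for a maker-checker control) are illustrative assumptions, not a published formula.

```python
# Hypothetical sketch: rank vulnerabilities by contextual risk, not raw CVSS alone.
ZONE_WEIGHT = {"dmz": 2.0, "internal": 1.0}      # DMZ exposure weighs more (assumed values)
MITIGATION_DISCOUNT = {"maker-checker": 0.5}     # compensating controls reduce urgency (assumed)

def risk_score(vuln):
    """Combine the CVSS score with asset context and existing mitigations."""
    score = vuln["cvss"] * ZONE_WEIGHT.get(vuln["zone"], 1.0)
    for control in vuln.get("controls", []):
        score *= MITIGATION_DISCOUNT.get(control, 1.0)
    return score

vulns = [
    {"id": "V-1", "cvss": 8.0, "zone": "dmz", "controls": []},
    {"id": "V-2", "cvss": 9.1, "zone": "internal", "controls": []},
    {"id": "V-3", "cvss": 8.8, "zone": "dmz", "controls": ["maker-checker"]},
]

# A DMZ-facing 8.0 outranks an internal 9.1 under this weighting.
for v in sorted(vulns, key=risk_score, reverse=True):
    print(v["id"], round(risk_score(v), 1))
```

The exact weights matter less than the principle: the ranking should be a function of severity plus asset context, so the same CVE lands at different priorities on different servers.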



2. Valuable inputs from threat intelligence 

In the traditional approach you typically miss opportunities to use data to identify patterns in the output of VA/PT exercises, even though you have been running these tests for ages and have accumulated plenty of data.

Analytics on the data generated by regular scans can help identify patterns. These patterns can feed predictive models that build threat intelligence, which can be a shot in the arm for the vulnerability management exercise because it enables proactive decisions. Proactive decisions will always keep you a step ahead from a security point of view.

For instance, an organization can develop a more effective standard operating procedure for adding a server to the DMZ when it runs a service that has recurring vulnerabilities.

Threat intelligence can also provide direct input to improve process security and cultivate a "security aware" culture in the organization.
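A first step towards such analytics can be as simple as counting which findings keep reappearing across scan cycles. The sketch below uses made-up scan records; the service and finding names are assumptions for illustration.

```python
from collections import Counter

# Illustrative scan history: each record is one finding from one scan cycle.
scan_records = [
    {"scan": "2015-Q1", "service": "openssh", "finding": "weak-ciphers"},
    {"scan": "2015-Q2", "service": "openssh", "finding": "weak-ciphers"},
    {"scan": "2015-Q2", "service": "apache", "finding": "dir-listing"},
    {"scan": "2015-Q3", "service": "openssh", "finding": "weak-ciphers"},
    {"scan": "2015-Q3", "service": "apache", "finding": "dir-listing"},
]

# Count how often each (service, finding) pair recurs across scans.
recurring = Counter((r["service"], r["finding"]) for r in scan_records)

# Findings seen in more than one cycle are candidates for a hardening
# step in the server-build standard operating procedure.
for (service, finding), count in recurring.most_common():
    if count > 1:
        print(f"{service}: '{finding}' recurred in {count} scans")
```

Real threat intelligence pipelines go much further, but even this frequency view tells you which configuration mistakes your build process keeps reintroducing.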


3. Asset inventory in real-time

As per Symantec, there was a 125% increase in the number of zero-day vulnerabilities between 2014 and 2015, and the number was expected to be even higher in 2016.

Vulnerabilities for which no fix is yet available (often because the vendor is not yet aware of them) are called zero-day vulnerabilities.

An organization can never be ready for a zero-day vulnerability if it manages its asset inventory manually, because assets keep getting added on a daily basis, and these assets may include servers running different services with different versions of various software.

If the list is static and based on periodic scans, there is an information gap that makes zero-day vulnerabilities hard to manage.
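To make the point concrete, here is a minimal sketch of a queryable, continuously updated inventory. It assumes some mechanism (agents, discovery jobs, deployment hooks) calls `register` whenever a service changes; the host names, services, and versions are hypothetical.

```python
# hostname -> {service: version}; updated on every deployment or upgrade.
inventory = {}

def register(host, service, version):
    """Record (or update) a service version on a host as it changes."""
    inventory.setdefault(host, {})[service] = version

def affected_hosts(service, vulnerable_versions):
    """On a zero-day disclosure: which hosts run a vulnerable version right now?"""
    return [host for host, services in inventory.items()
            if services.get(service) in vulnerable_versions]

# Illustrative registrations, e.g. fired from deployment tooling.
register("web-01", "nginx", "1.8.0")
register("web-02", "nginx", "1.9.1")
register("db-01", "mysql", "5.6")

print(affected_hosts("nginx", {"1.8.0"}))  # → ['web-01']
```

The value is in the query: when a zero-day drops, an up-to-date inventory answers "where are we exposed?" in seconds, while a quarterly spreadsheet cannot.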


4. Visibility for quicker decision making

One of the key success factors of a vulnerability management program is giving visibility of the right things to the right people in the organization. Though security is a cross-organization concern, it means different things to different individuals. A VA/PT (Vulnerability Assessment / Penetration Testing) report is definitely not suitable for all stakeholders.

The CIO wants a high-level qualitative view that can help her/him allocate funds. The stats should help them understand the current security posture of the infrastructure, and should also help with ROI calculations.

The CISO/CSO is concerned with tracking the status of vulnerabilities and the progress made, because tracking the vulnerabilities of the entire infrastructure is a mammoth task. Imagine an organization with 500-odd servers, 300-odd services running different versions, 1,213 new critical vulnerabilities, and 1,255 high vulnerabilities. It is not easy to track all of this and remain sane.

Executives are more interested in knowing which critical business assets may be affected by a vulnerability. They are concerned about the business and need to take strategic decisions based on the vulnerabilities impacting these assets.

The IT team needs to know every detail about vulnerabilities. They should be able to reproduce them if required, fix them with all the necessary technical steps, and verify them before marking them as closed.

As you can see, the same report would not cater to the needs of all the above stakeholders. Providing each of them with exactly what they need helps them take quicker decisions.
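The different views above can all be derived from one dataset. This is a toy sketch with made-up records and field names; a real program would pull the same slices from its VM tooling or a reporting layer.

```python
from collections import Counter

# One shared dataset of findings (illustrative records).
vulns = [
    {"id": "V1", "severity": "critical", "asset": "payment-gw",
     "business_critical": True, "status": "open", "fix": "apply vendor patch"},
    {"id": "V2", "severity": "high", "asset": "intranet-wiki",
     "business_critical": False, "status": "fixed", "fix": "disable TRACE method"},
]

# CIO/CISO view: aggregate counts by severity and remediation status.
summary = Counter((v["severity"], v["status"]) for v in vulns)

# Executive view: only open findings on business-critical assets.
exec_view = [v["asset"] for v in vulns
             if v["business_critical"] and v["status"] == "open"]

# IT view: full technical detail, including remediation steps, for open items.
it_view = [(v["id"], v["asset"], v["fix"]) for v in vulns if v["status"] == "open"]

print(dict(summary))
print(exec_view)
print(it_view)
```

The point is not the code but the separation: one source of truth, several role-specific projections, so nobody has to dig through a raw VA/PT report to find their answer.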


The traditional find-and-fix method of vulnerability management can still be made to work in today's world if you make the above 4 changes.