Software and network vulnerabilities are gateways for attackers to gain unauthorized access. They offer instant opportunities to run malicious code, install unwanted software, or even cause data loss. Managing vulnerabilities is a continuous process that requires proactive monitoring to identify, analyze, and respond to these risks.
Here are ten questions to verify whether your vulnerability program is functioning at optimal levels:
- Are you getting overwhelmed with the number of vulnerabilities?
There are 76,206 known vulnerabilities today, of which 2,712 were added in 2016; on average, more than seven new vulnerabilities a day. There are many reasons why vulnerabilities are increasing and will continue to be critical for information security.
- With the introduction of new technologies such as mobile computing, virtualization, and cloud, the number of vulnerabilities continues to rise.
- Cloud computing has made governance more difficult as the organization shares responsibility with cloud providers.
- Third-party security is an issue, as it increases the attack surface.
- Target's 2013 breach shows how third-party vulnerability management can affect organizations: Fazio Mechanical, a heating and air-conditioning company that worked with Target, was breached, which led to infiltration of Target's network. (source: KrebsOnSecurity)
- Do zero-days sound like a "heads I win, tails you lose" kind of game?
A zero-day threat means there is no fix. These attacks are not discovered immediately; it takes approximately eight months to detect zero-day exploitation. (source: FireEye)
Meanwhile, it takes organizations an average of 103 days to remediate vulnerabilities. (source: NopSec)
- Many organizations don’t have adequate means of detection in LANs/VLANs, nor do they have ways to protect data from leaving the environment (SSL, data loss prevention, etc.)
- Traditional defenses rely on malware signatures or URL reputation, which identify only known threats.
- Do you feel CVSS has inadequacies?
Not surprisingly, researchers have found that CVSS scores focus too heavily on impact without emphasizing risk (such as the prevalence of exploitation). As a result, CVSS can produce inaccurate assessments that overweight improbable, high-impact scenarios, so vulnerability management teams waste time and money remediating issues that are not high-risk. (source: University of Trento)
- Applying data analytics to CVSS scores can help organizations determine the least and most vulnerable departments. While CVSS is useful, analytics solutions can pull in data from other sources to better provide context to vulnerabilities. Sources can include threat intelligence feeds and vendors, IDS/IPS, behavioral monitoring, and incident management.
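As a minimal sketch of how such context can be combined, the function below adjusts a raw CVSS base score using exploitation intelligence and asset criticality. The weighting scheme and parameter names are illustrative assumptions, not a published standard:

```python
# Illustrative sketch: adjust a raw CVSS base score with exploitation
# context and asset value. The weights here are assumptions for
# demonstration, not a standard formula.

def contextual_risk(cvss_base: float, actively_exploited: bool,
                    asset_criticality: float) -> float:
    """Blend CVSS impact with exploit prevalence and asset value.

    cvss_base: 0.0-10.0 CVSS base score
    actively_exploited: True if threat intel reports exploitation in the wild
    asset_criticality: 0.0-1.0 weight from the asset inventory
    """
    exploit_factor = 1.0 if actively_exploited else 0.4
    return round(cvss_base * exploit_factor * asset_criticality, 1)

# A high-impact but unexploited flaw on a low-value asset ranks below a
# moderate flaw being actively exploited on a critical asset.
low = contextual_risk(9.8, actively_exploited=False, asset_criticality=0.3)
high = contextual_risk(6.5, actively_exploited=True, asset_criticality=1.0)
```

The point of the sketch is the ranking inversion: pure CVSS would prioritize the 9.8, while the contextualized score prioritizes the actively exploited 6.5.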
- Do you need to scan to know if your assets are vulnerable, and still feel there are blind spots?
62% of survey respondents reported using scanners to determine risk levels and prioritize patches. However, only 51% are satisfied with the capabilities and information produced by scanning. (source: Skybox Security)
- Vulnerability scanning tools often don't have access to the entire network (e.g., many network segments may be missed), and any change to firewall rules can further limit the scanning tool.
- Asset scanning is useful, but scans may run authenticated or unauthenticated, with differing coverage. Asset scanning also often depends on IT asset management (ITAM) databases, which are frequently incomplete or not updated regularly, creating further blind spots.
- The output from vulnerability scanning can overwhelm less mature organizations. It is essential to identify asset owners and to recognize false positives that can obfuscate the overall results.
- Vulnerability assessments should go beyond just scanning – they should include thorough asset classification and reviews to prevent blind spots.
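One simple way to surface such blind spots is to diff the ITAM inventory against the set of hosts the scanner actually reached. The sketch below uses invented host names and assumes both sources export plain host lists:

```python
# Minimal sketch: compare the asset inventory (e.g., from an ITAM
# database) against the hosts a scanner actually reached, to surface
# blind spots. Host names are invented for illustration.

itam_inventory = {"web-01", "web-02", "db-01", "hr-app", "legacy-erp"}
scanned_hosts = {"web-01", "web-02", "db-01"}

# Assets the scanner never touched: potential blind spots.
blind_spots = itam_inventory - scanned_hosts

# Hosts the scanner found that the inventory doesn't know about:
# a sign the ITAM database is stale or incomplete.
unknown_hosts = scanned_hosts - itam_inventory
```

Either non-empty set is actionable: `blind_spots` drives scanner coverage fixes, `unknown_hosts` drives inventory cleanup.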
- Do you wish you knew if you tended to fall for specific types of exploits?
Historical data can help organizations create a minimum security baseline configuration standard for systems, devices, and applications; having this baseline can ensure an organization-wide standard level of security.
- Many organizations go through a vulnerability management process but fail to conduct reporting and reviews of organizational trends.
- Analytics and metadata analysis can be used to provide insight into whether organizations are more prone to specific types of exploits (e.g., vulnerabilities in network security devices, vulnerabilities in custom applications due to a lack of penetration testing, etc.).
- Historical data can enable organizations to move from a reactive stance to a more proactive stance, better preventing future exploitation. (source: BeyondTrust)
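A basic version of this trend analysis is just a tally of past findings by exploit category. The records and category labels below are invented sample data, assuming each historical finding has been tagged with a category:

```python
# Illustrative sketch: tally historical findings by exploit category to
# see which types an organization falls for most often. Records and
# category names are invented sample data.
from collections import Counter

historical_findings = [
    {"cve": "CVE-2014-0160", "category": "network-device"},
    {"cve": "CVE-2017-5638", "category": "custom-app"},
    {"cve": "CVE-2018-7600", "category": "custom-app"},
    {"cve": "CVE-2019-0708", "category": "network-device"},
    {"cve": "CVE-2020-1472", "category": "custom-app"},
]

trend = Counter(f["category"] for f in historical_findings)
most_common_category, count = trend.most_common(1)[0]
```

Here the tally would point at custom applications, suggesting penetration testing of in-house code as the proactive investment.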
- Do you feel patching is a long, drawn-out process, and that you often miss the urgency?
According to a survey, 46% of respondents reported only partial implementation of the patch management process, and another 12% reported having no patch management process (source: TrustWave).
- While many organizations may have asset criticality defined (based on required uptime, information classification, etc.), it is often not taken into account when prioritizing patch management. Highly critical assets require urgent patch processes based on the severity of the vulnerability and the potential impact of its exploitation. (source: Gartner)
- Patch development and release can be time-consuming. How do you mitigate risk for assets affected by this delay? Fixes can include improved access management for the system or application (by following a "zero trust" model that can prevent outsiders from exploiting vulnerabilities). Compensating controls (a concept from PCI DSS) can bridge long patch cycles; other controls include removing network access and hardening configurations. (source: Gartner)
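To make the criticality-times-severity idea concrete, here is a hedged sketch of a patch-urgency tier function. The thresholds and tier labels are assumptions for illustration, not Gartner's or any vendor's actual scheme:

```python
# Hedged sketch: combine asset criticality with vulnerability severity
# to pick a patch-urgency tier. Thresholds and labels are illustrative
# assumptions, not a published scheme.

def patch_urgency(asset_criticality: int, cvss_severity: float) -> str:
    """asset_criticality: 1 (low) to 5 (mission-critical);
    cvss_severity: 0.0-10.0 CVSS score."""
    score = asset_criticality * cvss_severity  # ranges from 0 to 50
    if score >= 35:
        return "emergency"   # patch out-of-band, immediately
    if score >= 20:
        return "urgent"      # next maintenance window
    return "routine"         # normal patch cycle

tier = patch_urgency(asset_criticality=5, cvss_severity=9.8)
```

The same CVSS 9.8 on a criticality-1 lab box would land in the routine tier, which is exactly the prioritization that a severity-only process misses.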
- Do you wish you could wish away old vulnerabilities and focus on newer, fancier ones?
The Verizon DBIR 2015 report says that 99.9% of exploited vulnerabilities were compromised more than a year after the CVE was published.
- After a vulnerability is discovered, the remediation process can often drag on longer than it should because of the handoff to resolver and operations teams. (source: Forrester)
- Because of these remediation delays, many vulnerabilities are forgotten and never remediated. While attackers constantly mutate and develop new attack methods, they also use old vulnerabilities for easy infiltration and exploitation.
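A simple guard against forgotten findings is an age check against the CVE publication date, since the one-year mark is exactly where the DBIR figure above says most exploitation happens. The dates below are illustrative:

```python
# Minimal sketch: flag open findings whose CVE has been public for more
# than a year. Dates are invented for illustration.
from datetime import date

def is_stale(published: date, today: date, max_age_days: int = 365) -> bool:
    """True if the vulnerability has been public longer than max_age_days."""
    return (today - published).days > max_age_days

today = date(2017, 6, 1)
stale = is_stale(date(2014, 4, 7), today)   # three-year-old CVE: stale
fresh = is_stale(date(2017, 3, 6), today)   # ~3 months old: not yet stale
```

Running such a check in each review cycle keeps aging vulnerabilities visible instead of letting them quietly drop off the queue.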
- Do you struggle to create good reviews/reports for the right stakeholders?
- Many different scanners are used across organizations, though most lack a way to consolidate all of the information to present a standard view of vulnerabilities. This leads to a disconnect on how to prioritize vulnerabilities and asset criticality.
- Reporting on metrics (number of vulnerabilities, exploitation, remediation, the average time to patch, etc.) can provide insight to all parties in the organization and create a common view of priorities and goals. (source: SecureState)
- Many companies use many scanners to get views into their environments – such as database scanners, web application scanners, SAP scanners, and traditional vulnerability scanners. Using large numbers of scanners creates confusion among the different groups in IT security. They can’t see what is specifically applicable to them, nor do they have a macro view of vulnerability management across the organization. (source: Forrester)
- 63% of the Skybox survey respondents stated that they use two or more scanners in the environment. Respondents included all types of roles, including CISO, network operations, risk managers, and security operations analysts. Only 44% of respondents were satisfied with analysis activities around scanning, including reporting and data visualization. Generally, the higher up a person was (e.g., Director or CISO versus an analyst), the less satisfied they were with the tools, possibly because the information presented was too detailed and not tangible. (source: Skybox)
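Consolidation usually means normalizing each scanner's export into one common schema and de-duplicating by host and CVE. The sketch below assumes two hypothetical scanner exports with different field names; all data is invented:

```python
# Illustrative sketch: normalize findings from two hypothetical scanner
# exports into one schema and de-duplicate by (host, CVE), producing a
# single consolidated view. Field names and data are assumptions.

web_scanner = [
    {"target": "web-01", "cve_id": "CVE-2017-5638", "score": 10.0},
]
db_scanner = [
    {"host": "db-01", "cve": "CVE-2016-0640", "cvss": 8.1},
    {"host": "web-01", "cve": "CVE-2017-5638", "cvss": 10.0},  # duplicate
]

consolidated = {}
for f in web_scanner:
    consolidated[(f["target"], f["cve_id"])] = f["score"]
for f in db_scanner:
    # setdefault keeps the first-seen record for a duplicate (host, CVE)
    consolidated.setdefault((f["host"], f["cve"]), f["cvss"])

unique_findings = len(consolidated)  # one row per unique (host, CVE)
```

From a table like `consolidated`, the same data can feed an analyst's detailed queue and a rolled-up count for executives, addressing the satisfaction gap the survey describes.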
- Are you keeping track of everything that is happening in the threat landscape outside your organization?
Most organizations recognize the importance of being able to update vulnerability data quickly following a newly discovered vulnerability or threat announcement. (source: Skybox Security)
- It is challenging to understand the threat of exploitation in the context of your environment just by reading about it online in news articles and intelligence feeds.
- Breach simulation technologies provide a "what-if" testing framework that gives visibility and context into your IT environment to understand how applicable specific threats are and what their impact would be. Adding this context helps prioritize assets and vulnerabilities. (source: Forrester)
- Many companies like Splunk and Aujas offer vulnerability data analysis solutions, helping organizations visualize data and gain better insight into vulnerabilities and risks. (Source: SANS Institute)
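At its core, a "what-if" check asks whether an attacker can actually reach a vulnerable asset through the allowed network paths. The toy sketch below illustrates the idea with a breadth-first reachability search over an invented network model; real breach-simulation products model far more:

```python
# Toy "what-if" sketch: given allowed network paths, check whether an
# attacker on the internet can reach a vulnerable asset. The network
# model is invented for illustration.

allowed_paths = {("internet", "dmz"), ("dmz", "app"), ("app", "db")}

def reachable(src: str, dst: str) -> bool:
    """Search over allowed network paths for a route from src to dst."""
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for a, b in allowed_paths:
            if a == node and b not in seen:
                seen.add(b)
                frontier.append(b)
    return False

exposed = reachable("internet", "db")       # chained via dmz -> app -> db
isolated = reachable("internet", "backup")  # no path exists
```

A vulnerability on the `db` host deserves more urgency than the same vulnerability on the unreachable `backup` host, which is the prioritization context the bullet above describes.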
- Do you feel it is a restart every day, and hate the fact that vulnerability management is cyclical?
Management teams often forget that vulnerability management is a cyclical process. Instead, they fall back on a simple checklist method, which causes them to miss the prioritization of assets and vulnerabilities. They focus their efforts on low-impact, unlikely threats and vulnerabilities, often skipping reporting, remediation, or other vulnerability management steps.
Aujas offers the Security Analytics and Visualization Platform (SAVP), which leverages advanced machine learning, NLP, and clustering algorithms to understand enterprise security environments and provide personalized vulnerability intelligence.
To know more about how SAVP can help your enterprise, write to us at contact@aujas.com.