See Bill Frank’s biography and contact information at the end of this article.
[Note: This is an updated version of the original article posted on March 21, 2024. I replaced the term “Governance” Controls with “Performance” Controls to eliminate any confusion with the NIST Cybersecurity Framework 2.0 use of the term “Governance.”]
I focus here on automated controls that monitor and measure the “performance” of “Defensive” controls that directly block threats or at least alert on suspicious activities.
How well are your cybersecurity controls performing? Measuring control efficacy is challenging. In fact, under-configured, misconfigured, and poorly tuned controls, as well as variances in security processes, are the Achilles’ heel of cybersecurity programs.
A mismatch between a control’s risk-reduction potential and its actual performance results in undetected threats (false negatives) as well as an excessive number of false positives. Both increase the likelihood of loss events.
All controls, whether people, processes, or technologies, can be categorized in one of two ways – Defensive or Performance.
Most controls are easily categorized. Firewalls and EDR agents are examples of Defensive Controls. We categorize Offensive Controls as Performance because their purpose includes testing the efficacy of Defensive controls.
Vulnerability management (discovery, analysis, and prioritization) is a Performance Control because vulnerabilities, whether in security controls, application code, or infrastructure, are a type of control deficiency.
Patching is a Defensive Control because patched vulnerabilities prevent threats targeting those vulnerabilities from being exploited.
Attempting to conduct Performance functions manually is time-consuming, limited in scope, and error-prone. Human penetration testing has been the go-to Performance Control for decades. However, only the very largest organizations can afford to fund a Red Team that provides anything close to continuous testing.
Most organizations hire an outside firm to perform pen testing. Due to high costs, the scope of human pen testing is limited. In addition, it is typically performed only once a year or once a quarter. Therefore, for most organizations, human pen testing is little more than a checkbox exercise.
Note that human pen testers use a variety of tools to address many of the standard and repetitive tasks associated with pen testing. However, in general, these tools are not revealed to the client.
Having said that, I am not here to denigrate human pen testing. There are surely many pen testers whose deep expertise and creativity go beyond what any automated tool can provide. This is why bug bounty programs are popular.
The cybersecurity market has responded to the need for automated Performance Controls. Since no two organizations are the same, my goal for this article is to describe different types of Performance Controls to help you decide which approach is right for you.
There are five types of automated Performance Controls I will discuss:
1. Attack Simulation
2. Risk-Based Vulnerability Management
3. Automated Metrics
4. Security Control Configuration Management
5. Process Mining
Note that since virtually all of these tools are SaaS platforms, factors including costs, support and training, community, data security, and compliance must always be evaluated!
Attack Simulation is my simplified term that covers a variety of vendors who use terms like Automated Penetration Testing, Breach and Attack Simulation, and Security Control Validation.
The one thing they all have in common is executing simulations of known threats against deployed controls. However, the vendors in this space use a variety of architectures to accomplish their goals.
The key factors to consider when evaluating Attack Simulation tools are (1) the number of agents that are required or recommended, (2) integrations with deployed controls, (3) the degree to which the simulation software mimics adversarial tactics, techniques, and procedures (TTPs), (4) the vendor’s advice on running their software in a production environment, (5) firewall / network segmentation validation, (6) threat intelligence responsiveness, and (7) the range and quality of simulated techniques and sub-techniques.
Agents. The number of agents needed for internal testing ranges from a single agent needed to start the test to a requirement for agents on all on-premises workstations and workloads. No agents may be needed for testing cloud-based controls.
Defensive Control Integrations. Integrating Attack Simulation tools with Defensive Controls enables blue/purple teamers to better understand how a control reacted to a specific technique generated by the attack simulation tool.
Simulation. An indicator of how close a vendor gets to simulating real attackers is its approach to discovering and using passwords to execute credentialed lateral movement. Are clear-text passwords taken from memory? Are password hashes cracked in the vendor’s cloud environment (or in the vendor’s locally deployed software)? Adversaries use these techniques regularly; your attack simulation tool should too.
Production / Lab Testing. Attack Simulation vendors vary in their recommendations regarding running their tools in production vs lab environments. Of course, it’s advisable to perform initial evaluations in a lab environment first. But to get maximum value from an attack simulation tool, you should be able to run it in a production environment.
Firewall / Network Segmentation. There is a special case for testing firewall/intrusion detection efficacy. Agents may be deployed on each side of the firewall. This allows for validating firewall policies in a production environment without running malware on any production workstations or workloads.
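To make this idea concrete, here is a minimal sketch of how a paired-agent approach might validate segmentation policy. It assumes a simple expected-policy table and plain TCP connection probes; the hosts, ports, and policy entries are illustrative and not drawn from any vendor’s product.

```python
# Minimal sketch: validate firewall segmentation policy by probing from an
# agent on one side of the firewall to hosts on the other side.
# The hostnames, ports, and policy table below are illustrative assumptions.
import socket

# Expected policy: (destination_host, destination_port) -> whether the
# connection should succeed ("allow") or be blocked ("deny").
EXPECTED_POLICY = {
    ("10.1.2.10", 443): "allow",   # web tier may reach app tier over TLS
    ("10.1.2.10", 3389): "deny",   # RDP across segments should be blocked
    ("10.1.3.20", 1433): "deny",   # direct DB access from the DMZ should fail
}

def probe(host: str, port: int, timeout: float = 3.0) -> str:
    """Attempt a TCP connection; no payload is sent, so no malware runs."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "allow"
    except OSError:  # refused, unreachable, or timed out -> treated as blocked
        return "deny"

def validate() -> list[str]:
    findings = []
    for (host, port), expected in EXPECTED_POLICY.items():
        observed = probe(host, port)
        if observed != expected:
            findings.append(f"{host}:{port} expected {expected}, observed {observed}")
    return findings

if __name__ == "__main__":
    for finding in validate():
        print("POLICY VIOLATION:", finding)
```

Because the probe sends no payload, this style of check is safe to run in production; a real tool would also model protocols and address ranges and correlate results with the firewall’s own logs.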
Threat Intelligence Responsiveness. New threats, vulnerabilities, and control deficiencies are discovered with alarming regularity. How quickly does the attack simulation vendor respond with safe variations for you to test against your controls? Do you need to upgrade the tool, or just deploy the new simulated TTPs?
Range and Quality of Techniques and Sub-techniques. Attack simulation vendors should be able to show you their supported MITRE ATT&CK® techniques and sub-techniques. The quality of those techniques and sub-techniques is much harder to determine. The data generated via the integrations with deployed controls surely helps. We recommend testing at least two similarly architected tools in your environment to determine the quality of their attack simulations. A quick first step is to diff the vendors’ supported technique lists, as sketched below.
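The sketch below assumes each vendor can export its supported ATT&CK technique IDs as a text file, one ID per line; the file names and format are hypothetical.

```python
# Minimal sketch: compare the MITRE ATT&CK technique coverage claimed by two
# attack simulation vendors. Assumes each vendor exports its supported
# technique IDs one per line (an illustrative format).

def load_techniques(path: str) -> set[str]:
    with open(path) as f:
        return {line.strip().upper() for line in f if line.strip()}

vendor_a = load_techniques("vendor_a_techniques.txt")  # e.g., {"T1003.001", ...}
vendor_b = load_techniques("vendor_b_techniques.txt")

print(f"Vendor A: {len(vendor_a)} techniques, Vendor B: {len(vendor_b)}")
print("Covered by both:", len(vendor_a & vendor_b))
print("Only Vendor A:", sorted(vendor_a - vendor_b))
print("Only Vendor B:", sorted(vendor_b - vendor_a))
```

A coverage diff says nothing about the fidelity of each simulated technique, which is why hands-on testing of at least two tools remains the recommendation.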
Vulnerability management is a cornerstone of every cybersecurity compliance framework, maturity model, and set of best practice recommendations. However, most organizations are overwhelmed with the number of vulnerabilities that are discovered, and do not have the resources to remediate all of them.
In response to this triage problem, vendors developed a variety of prioritization methods over the years. Despite its limitations, the Common Vulnerability Scoring System (CVSS) is the dominant means of scoring the severity of vulnerabilities. However, even NIST itself states that “CVSS is not a measure of risk.” Furthermore, NIST states that CVSS is only “a factor in prioritization of vulnerability remediation activities” (https://nvd.nist.gov/vuln-metrics/cvss).
Risk-based factors for vulnerability management include the following (a minimal scoring sketch follows the list):
Business Context. What is the criticality of the asset in which the vulnerability exists? For example, production systems vs development systems.
Likelihood of exploitability. A combination of threat intelligence and factors associated with the vulnerability itself determine the likelihood that a vulnerability will be exploited. The Exploit Prediction Scoring System (EPSS) is an example of this approach.
Known Exploited Vulnerabilities. The Cybersecurity & Infrastructure Security Agency (CISA) maintains the Known Exploited Vulnerabilities (KEV) Catalog. Vulnerabilities on the KEV list should get the highest priority for remediation.
Asset Location. What is the location of the asset with the vulnerability in question? Internet-facing assets get the highest priority.
Compensating Defensive Control. Is there a Defensive Control that can prevent the vulnerability from being exploited?
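Here is a minimal sketch of how these factors might be combined into a single priority score. The weights, the compensating-control discount, and the sample records are illustrative assumptions, not an established standard; a real program should calibrate them to its own risk model.

```python
# Minimal sketch: combine risk-based factors into a vulnerability priority
# score. Weights and sample data are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    vuln_id: str
    epss: float                 # EPSS probability of exploitation, 0.0-1.0
    on_kev: bool                # listed in CISA's KEV catalog?
    asset_criticality: int      # 1 (dev system) .. 5 (critical production)
    internet_facing: bool
    compensating_control: bool  # a Defensive Control blocks exploitation

def priority_score(v: Vulnerability) -> float:
    score = v.epss * 10                      # likelihood of exploitation
    score += 10 if v.on_kev else 0           # known-exploited outweighs prediction
    score += v.asset_criticality * 2         # business context
    score += 5 if v.internet_facing else 0   # exposure / asset location
    if v.compensating_control:
        score *= 0.3                         # deprioritize, but don't ignore
    return round(score, 2)

# Placeholder IDs and values, not real CVE records.
vulns = [
    Vulnerability("CVE-EXAMPLE-0001", 0.92, True, 5, True, False),
    Vulnerability("CVE-EXAMPLE-0002", 0.40, False, 3, False, True),
]
for v in sorted(vulns, key=priority_score, reverse=True):
    print(v.vuln_id, priority_score(v))
```

The key design choice is that a compensating Defensive Control discounts, rather than zeroes out, the score: the vulnerability still exists and the control protecting it may itself be deficient.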
Modern Defensive Controls generate large amounts of telemetry that can be used to monitor their performance and effectiveness. Automating metrics reporting enables continuous monitoring and measurement of the performance of a larger number of deployed controls.
While automated cybersecurity performance management platforms are not always considered an alternative to Attack Simulation and Risk-based Vulnerability Management solutions, they do have the advantage of being less intrusive because they are passive. All they need is read-only access to the Defensive Controls. There are no agents to deploy and no risk of unplanned outages.
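A minimal sketch of that passive pattern follows, assuming the Defensive Control exposes a read-only REST endpoint. The URL, token handling, and response fields are hypothetical placeholders for whatever your control actually provides.

```python
# Minimal sketch of passive, read-only metrics collection from a Defensive
# Control's REST API. The endpoint URL, auth scheme, and response fields are
# hypothetical; substitute your control's actual read-only API.
# Requires the third-party 'requests' package.
import requests

API_URL = "https://edr.example.com/api/v1/metrics"  # hypothetical endpoint
TOKEN = "read-only-api-token"                       # least-privilege credential

def fetch_metrics() -> dict:
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

metrics = fetch_metrics()
# Example derived metric: share of endpoints running an up-to-date agent.
coverage = metrics["agents_current"] / metrics["agents_total"]
print(f"EDR agent currency: {coverage:.1%}")
```

Because the collector only reads, it can run continuously against production controls without the outage risk that active testing carries.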
The key factors when evaluating automated metrics solutions include the following:
Scope of Coverage. The range of metrics based on your priorities such as vulnerability management, incident detection and response, compliance, and control performance.
Integrations. Does the metrics solution vendor support integrations to your controls? If not, are they willing to add support for your controls? Will they charge extra for that?
Reporting flexibility. How flexible is the report building interface? What, if any, constraints are there to generate the reports you want? Can you build customized dashboards for different users? Is trend analysis supported?
Ease-of-Use. How easy is it to generate custom reports?
Scalability and Performance. Given the amount of data you want to retain, how fast are the queries/reports generated?
All security controls need to be configured and maintained to meet an individual organization’s policy requirements, threat profile, and risk culture. The amount of time and effort needed to initially implement the controls and then keep them up to date varies depending on the control type and the functionality provided by the vendor.
Firewalls are at or close to the top of the list of controls requiring the most care and feeding. Therefore, it’s not surprising that the first security control configuration management tools were created two decades ago to improve firewall policy (rule) management. These tools eliminate unused and overlapping rules, and improve responsiveness to the steady stream of requests for changes, additions, and exceptions.
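To illustrate one of these checks, here is a simplified sketch of shadowed-rule detection: flagging a rule that can never match because an earlier rule fully covers it. Rules are reduced to exact strings or a "*" wildcard; real tools model CIDR ranges, port ranges, and protocols far more completely.

```python
# Simplified sketch: flag firewall rules that are "shadowed" (fully covered
# by an earlier rule and therefore never matched). Fields are exact strings
# or the wildcard "*"; real tools handle address and port ranges.

Rule = tuple[str, str, str, str]  # (source, destination, port, action)

def field_covers(earlier: str, later: str) -> bool:
    return earlier == "*" or earlier == later

def covers(earlier: Rule, later: Rule) -> bool:
    # Only the match fields matter for shadowing, not the action.
    return all(field_covers(e, lt) for e, lt in zip(earlier[:3], later[:3]))

def find_shadowed(rules: list[Rule]) -> list[int]:
    shadowed = []
    for i, later in enumerate(rules):
        if any(covers(earlier, later) for earlier in rules[:i]):
            shadowed.append(i)
    return shadowed

rules = [
    ("10.0.0.0/8", "*", "443", "allow"),
    ("10.0.0.0/8", "db-subnet", "443", "deny"),  # shadowed by rule 0
]
# The second rule is the interesting case: a deny shadowed by a broader
# allow silently weakens the intended policy.
print("Shadowed rule indexes:", find_shadowed(rules))
```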
Security Information and Event Management (SIEM) systems are also at or near the top of the list of controls requiring extensive care and feeding. One critical aspect of a SIEM’s effectiveness is the extent of its coverage of MITRE ATT&CK® techniques and sub-techniques. This also maps back to the SIEM’s sources of log ingestion. Furthermore, SIEM vendors provide hundreds of rules which generally need to be tailored to the organization.
To reduce the level of effort needed to tune SIEMs, consider tools that evaluate SIEM rule sets and provide assistance to detection engineers.
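The sketch below shows the underlying idea, assuming each detection rule carries the ATT&CK technique IDs it targets (as Sigma rules do via tags). The rule records are illustrative.

```python
# Minimal sketch: summarize a SIEM rule set's MITRE ATT&CK coverage, assuming
# each rule record lists the technique IDs it is meant to detect. The rules
# below are illustrative examples, not a vendor's actual content.
from collections import defaultdict

rules = [
    {"name": "Suspicious LSASS access", "techniques": ["T1003.001"], "enabled": True},
    {"name": "New scheduled task",      "techniques": ["T1053.005"], "enabled": True},
    {"name": "Pass-the-hash logon",     "techniques": ["T1550.002"], "enabled": False},
]

coverage: dict[str, int] = defaultdict(int)
for rule in rules:
    if rule["enabled"]:
        for technique in rule["techniques"]:
            coverage[technique] += 1

for technique, count in sorted(coverage.items()):
    print(f"{technique}: {count} enabled rule(s)")

# Disabled rules represent coverage the SIEM claims on paper but does not
# actually provide.
disabled = [r["name"] for r in rules if not r["enabled"]]
print("Disabled rules leaving gaps:", disabled)
```

A report like this is only as good as the log sources feeding the SIEM: a rule for an endpoint technique is dead weight if the relevant endpoint logs are never ingested.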
The variety of tools available for managing security control configurations will continue to grow, encompassing additional types such as endpoint agents, email security, identity and access management, data security, and cloud security.
Process mining is a method used to analyze and optimize business processes by collecting and analyzing event logs generated by information systems. These logs contain details about process execution, such as the sequence of activities, the time taken to complete each activity, and the resources involved. Process mining algorithms use this data to automatically generate process models that visualize how a process is executed in reality, as opposed to how it is expected to be executed.
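A core primitive behind those generated models is the directly-follows graph: how often activity A is immediately followed by activity B within the same case. Here is a minimal sketch over a toy access-request log; the event data is illustrative.

```python
# Minimal sketch of a core process mining primitive: build a directly-follows
# graph (activity A immediately followed by activity B, with frequency) from
# an event log. The log entries are illustrative.
from collections import Counter, defaultdict

# (case_id, activity, timestamp) tuples; timestamps simplified to integers.
event_log = [
    ("case1", "request_access", 1), ("case1", "manager_approval", 2),
    ("case1", "provision_account", 3),
    ("case2", "request_access", 1), ("case2", "provision_account", 2),
]

# Group events into per-case traces, ordered by timestamp.
traces: dict[str, list[str]] = defaultdict(list)
for case_id, activity, _ts in sorted(event_log, key=lambda e: (e[0], e[2])):
    traces[case_id].append(activity)

# Count each directly-follows pair across all traces.
dfg: Counter = Counter()
for activities in traces.values():
    for a, b in zip(activities, activities[1:]):
        dfg[(a, b)] += 1

for (a, b), freq in dfg.items():
    print(f"{a} -> {b}: {freq}")
# Note the variance: case2 shows provisioning without approval, a deviation
# from the expected process.
```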
While process mining is not a new concept, it is new for cybersecurity processes. For cybersecurity process mining to be useful, logs must be collected from non-security sources as well as cybersecurity controls.
Process mining is actually a separate class of higher-level analysis and measurement. All the other Performance Controls discussed here, with the exception of security operations platforms (SIEMs), test, measure, or obtain data on individual controls. Having said that, at present, process mining does not specifically measure the effectiveness of Defensive Controls.
An example of a common cybersecurity process use case is user on-boarding and off-boarding. To perform this analysis, the process mining tool must integrate with human resource systems in addition to authentication and authorization systems.
In addition to (1) improving compliance with defined processes, process mining will (2) expose bottlenecks, (3) reveal opportunities for additional process automation, and (4) make it easier for stakeholders to understand how processes are executed through visual representations of those processes.
While scalability, performance, and integrations are important, the way processes and variances are rendered in the user interface, and the ways you can interact with them, are critical to understanding the causes of variances and the opportunities for improvement.
Having reviewed the types of Performance Controls available to monitor and measure Defensive Control efficacy, it’s worth noting that they all monitor and measure control effectiveness individually.
The process mining folks might disagree with the above statement in the sense that process mining aggregates multiple control functions by the processes in which they play a role. However, process mining does not actually measure the efficacy of the individual controls within those processes; it focuses on improving the effectiveness of the processes themselves.
While there is no doubt about the value of discovering and remediating deficiencies in individual controls, there is another function needed from a risk management perspective. That is calculating Aggregate Control Effectiveness. How well does your portfolio of Defensive Controls work together to reduce the likelihood of a loss event?
Aggregate Control Effectiveness must consider attack paths into and through an organization. A Defensive Control that has strong capabilities and is well configured will not reduce risk as much as anticipated if it is on a path that does not see many threats or is on a path with other strong controls.
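One simple way to reason about this, as a sketch only: if the controls on a path are assumed to block independently (a strong simplification that real graph-based models relax), the probability that a threat traverses the path is the product of each control’s miss rate. The block probabilities below are illustrative.

```python
# Minimal sketch: likelihood that a threat traverses an attack path, assuming
# each control on the path blocks independently. This independence assumption
# is a strong simplification that real graph-based risk models relax.
from math import prod

# Per-control probability of blocking a given technique (illustrative values).
path_a = [0.90, 0.80, 0.70]   # e.g., email gateway, EDR, egress filtering
path_b = [0.95]               # a single strong control on another path

def traversal_probability(block_probs: list[float]) -> float:
    # A threat gets through only if every control on the path misses it.
    return prod(1 - p for p in block_probs)

print(f"Path A traversal: {traversal_probability(path_a):.3%}")  # 0.600%
print(f"Path B traversal: {traversal_probability(path_b):.3%}")  # 5.000%
```

Even this toy calculation shows the point in the preceding paragraph: a strong control contributes little marginal risk reduction if it sits on a path already covered by other strong controls, or on a path few threats actually take.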
In addition to discovering and prioritizing Defensive Control deficiencies, a Performance Control measurement program will improve the accuracy and precision of Aggregate Control Effectiveness calculations.
My next article will address the issue of Aggregate Control Effectiveness and its relevance to risk management. Stay tuned!
Next Steps: WEI provides enterprises with increased visibility at all touch points of the IT estate, and that includes at the edge and applications within the data center. How can we help your enterprise with its current and future cybersecurity architecture? Contact our experts today to get started.
Mr. Frank is one of two inventors of Monaco Risk’s patented Cyber Defense Graph™. It is the core innovation for Monaco Risk’s cyber risk quantification software which enables a more accurate estimate of the likelihood of loss events.
Prior to Monaco Risk, Mr. Frank spent 12 years helping clients select and implement cybersecurity controls to strengthen their cyber posture. Projects focused on controls to protect, detect, and respond to threats across a wide range of attack surfaces.
Prior to his consulting work, Mr. Frank spent most of the 2000s at a SIEM software company where he designed a novel approach to correlating alerts from multiple log sources using finite state machine-based, risk-scoring algorithms. The first use case was user and entity behavior analysis. The technology was acquired by NitroSecurity, which in turn was acquired by McAfee.
Bill Frank's contact information: