
Effective Use of Benchmarks in Evaluating Your Information Security Awareness Program (Part 2)

Publication Date: December 28, 2023


In the previous article, we discussed the importance of choosing effective metrics for both external comparisons (with other institutions) and internal comparisons (across units within the organization) when evaluating your Information Security Awareness Program, and how those metrics relate to your position in your organization’s maturity model.

In this second article of the series, we cover leveling (also known as tiering), which makes the analysis of phishing drill data far more meaningful by enabling an “apples to apples” comparison. The leveling concept addresses the content of the phishing scenario, which metrics to use, and how difficult the phishing message is to detect.

The SANS leveling model defines five levels, described below. Each level is characterized by three areas: target audience, experience, and the characteristics of the simulation scenario. Although target audience and experience may vary from level to level, the main factor that distinguishes the levels is the characteristics of the simulation scenario.

As the diagram below shows, a Level-1 simulation is easy to spot as phishing because it is very “spammy” in nature. As the levels increase, phishing becomes harder to detect because scenarios use known brands or content that is familiar to the target audience: for example, a scenario impersonating a well-known brand at Level-2, or a business-themed Zoom scenario at Level-3.

Level-4 covers phishing scenarios that use personalization, credibility, and some prior information about the target. Level-5 targets high-profile individuals (board members, C-level executives, etc.), with scenarios personalized for executive leaders.
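To make the structure of the model concrete, here is a minimal sketch of how the five levels might be encoded for tagging simulation data. The field names and the paraphrased descriptions are our own assumptions for illustration, not SANS’s official wording:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SimulationLevel:
    """One level of the SANS phishing-simulation leveling model."""
    level: int
    target_audience: str   # who the simulation is aimed at
    scenario_traits: str   # what makes the scenario easier or harder to spot
    # The model's third area, "experience", is omitted here for brevity.

# Paraphrased summary of the five levels described above -- illustrative
# wording, used only as a convenient structure for tagging drill data.
SANS_LEVELS = [
    SimulationLevel(1, "all employees", "generic, 'spammy' content with obvious indicators"),
    SimulationLevel(2, "all employees", "impersonates a well-known brand"),
    SimulationLevel(3, "business units", "business-context content (e.g. a Zoom scenario)"),
    SimulationLevel(4, "selected staff", "personalized, credible, uses some prior information"),
    SimulationLevel(5, "executives / board", "highly targeted, tailored to executive leaders"),
]

def level_of(simulation_tag: int) -> SimulationLevel:
    """Look up the level metadata for a tagged simulation (levels 1-5)."""
    return SANS_LEVELS[simulation_tag - 1]
```

Tagging every drill with its level up front is what later makes same-level benchmark comparisons possible.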

As the Level-1 simulation in the example below shows, the indicators are easy to identify and should be quickly spotted by trained employees (the target audience).

SANS - Awareness Example
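Because Level-1 messages are defined by obvious indicators, even a trivial checker can flag them. The indicator list below is a hedged assumption of typical “spammy” markers, not an official checklist:

```python
import re

# Illustrative Level-1 indicators -- an assumed checklist, not an official one.
SPAMMY_PATTERNS = [
    r"urgent|immediately|act now",        # artificial urgency
    r"verify your (account|password)",    # generic credential lure
    r"dear (customer|user)",              # no personalization
]

def looks_like_level_1(subject: str, body: str) -> bool:
    """Return True if the message matches any obvious 'spammy' indicator."""
    text = f"{subject}\n{body}".lower()
    return any(re.search(pattern, text) for pattern in SPAMMY_PATTERNS)

print(looks_like_level_1("URGENT: verify your account", "Dear customer, act now."))  # True
```

Higher-level scenarios deliberately avoid these markers, which is exactly why detection rates are not comparable across levels.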

Now let’s take a look at why leveling is important when comparing with external organizations.

As mentioned in our previous article, benchmarking is rarely straightforward, because many variables affect the validity of the results. Many of these variables are easy to define, such as target audience demographics or the variables of the phishing scenario.

Phishing difficulty, however, is harder to define: it depends on the simulation tools used and on the credibility of the scenarios.

Many institutions use phishing simulations as an awareness tool, and we can easily find peers in verticals such as banking and finance. We can narrow the peer set further by organization size and even by employee (target audience) maturity. Yet the actual characteristics of the simulations behind the shared data are typically hard to compare. The SANS leveling model makes benchmark data as comparable as possible, allowing you to run better simulations.

For example, if an organization wants its overall security posture benchmarked against its peers, past simulation results can be evaluated at a selected level. When each participating organization shares metrics at a specific level, those metrics can be compared far more effectively than metrics aggregated across all levels and difficulty levels.
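As a sketch of why this matters, the snippet below averages a shared metric only across simulations run at the same level. The record layout, organization names, and rates are hypothetical assumptions for illustration:

```python
from statistics import mean

# Hypothetical shared benchmark records: (organization, level, click_rate).
shared_results = [
    ("peer-A", 1, 0.31), ("peer-A", 3, 0.12),
    ("peer-B", 1, 0.27), ("peer-B", 3, 0.18),
    ("peer-C", 3, 0.15),
]

def benchmark_at_level(results, level):
    """Average the shared metric only across simulations run at one level."""
    same_level = [rate for (_org, lvl, rate) in results if lvl == level]
    return mean(same_level) if same_level else None

# Comparing Level-3 results only to other Level-3 results keeps the
# comparison "apples to apples"; mixing all levels blurs scenario difficulty.
print(benchmark_at_level(shared_results, 3))  # ~0.15
```

Averaging across all levels would instead reward whoever ran the easiest scenarios.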

The ideal comparison uses simulations at the same level, run in similar sectors, under similarly mature cyber security programs. If target audience demographics can also be matched, the Undesired Action Rate (UAR) provides a reliable benchmark, along with the proportion of employees who report suspected phishing.
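Assuming UAR is computed as undesired actions divided by delivered messages (a common convention; the article does not spell out the formula), the two metrics reduce to simple ratios. The drill numbers below are illustrative only:

```python
def undesired_action_rate(undesired_actions: int, delivered: int) -> float:
    """UAR: share of recipients who took the undesired action (e.g. clicked)."""
    return undesired_actions / delivered

def reporting_rate(reports: int, delivered: int) -> float:
    """Share of recipients who reported the message as suspected phishing."""
    return reports / delivered

# Illustrative drill: 1,000 delivered emails, 140 clicks, 320 reports.
print(f"UAR: {undesired_action_rate(140, 1000):.1%}")    # 14.0%
print(f"Reported: {reporting_rate(320, 1000):.1%}")      # 32.0%
```

Tracking both together matters: a falling UAR with a rising reporting rate is a stronger signal of awareness than either number alone.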

Such comparisons depend on communication and collaboration among external organizations, which can be difficult given the typical workloads of the person responsible for the information security awareness program and of the operational teams. When strategic priorities, simulation execution time, and resources are determined jointly, teams can rely on analytical evaluations and comparisons of past results that take leveling into account. Teams may then choose to prepare simulations aimed at a similar benchmark level, reducing resource needs while planning for impact.