From EO to Action: Human Factors of Enabling a Cyber Safety Review Board

President Biden’s executive order (EO) on improving the nation’s cybersecurity was a call to action to prioritize cyber safeguards in both the public and private sectors.

A key component of the EO is Section 5, which mandates a Cyber Safety Review Board (CSRB) to systematically review significant cyber incidents. Creation of the CSRB falls to stakeholders from organizations such as the Department of Homeland Security (DHS) and the Cybersecurity and Infrastructure Security Agency (CISA), as well as private industry.

Recent media commentary has noted potential challenges in operating a review board. It has also highlighted the value of differentiating between common and rare events, and the importance of recognizing issues specific to cybersecurity compared to, say, aviation and public health.

But discourse on the CSRB hasn’t addressed a vital element: human factors. The following three imperatives highlight the value of prioritizing people, both to better serve those directly involved in cybersecurity incidents and to ensure a balanced approach within the CSRB.

Effectively Balance People, Process, and Technology

The CSRB must include individuals who are subject matter experts in cognitive science and human performance. Members of the review board with expertise in "people" will be critical to providing balanced assessments of incidents and to creating effective mitigation plans.

The People, Process, Technology (PPT) operating model identifies three central factors that must be balanced to optimize organizational performance. To achieve that balance, sufficient attention must be dedicated to each component. However, technology-focused domains such as cybersecurity often treat the human component as an afterthought. While the right technology is needed to protect information assets, and the right processes are necessary for implementing good security practices, the strengths and weaknesses of the people working with the technology and following the processes are rarely considered. If the "people" part of PPT is underrepresented, or not represented at all, on the CSRB, there is a risk that assessments will be inadequate and proposed solutions will fail.

Recognize Cybersecurity as a Human-Machine System

Recognizing cybersecurity as a human-machine system will promote an interdisciplinary approach to incident reviews and recommendations that include readily adoptable criteria. It is unlikely that human beings will be redesigned in the near future, so CSRB recommendations must highlight opportunities to build resilient technologies that bolster human performance and protect against known human weaknesses.

The relationship between people and technology has never been closer, and one could argue that the boundary between the two is increasingly difficult to define. Yet in cybersecurity incidents, people are frequently called the "weakest link." What often goes unmentioned is that in many situations resulting in failures or breaches, people were asked to perform unrealistic tasks, or to perform with unrealistically high levels of consistency.

Technology research and development processes include stringent performance tests, along with myriad additional assessments. However, we often overlook parallel strategies for assessing human performance, including processes to better understand how people interact with technology. This oversight perpetuates a vicious cycle in which responsibility for human performance issues, such as clicking a bad link or misconfiguring a system, falls solely on the person or people who made the observable mistake.

Make Robust Recommendations

Cybersecurity incidents vary dramatically, and building a comprehensive understanding of the factors that contributed to an incident, or that could protect against its recurrence, will be a continuous challenge for the CSRB. To create and communicate robust recommendations, the CSRB must focus on people.

Recommendations prompt changes that often have unanticipated consequences. The CSRB’s ability to plan for, and mitigate the impact of, such consequences requires an assessment of how recommended actions might impact human performance as well as the performance of cybersecurity technologies.

In addition, the types and strengths of the recommendations made by the CSRB will play a major role in their effectiveness. The CSRB can leverage findings from other industries such as aviation and healthcare, adapting lessons learned elsewhere to improve recommendations for updating security strategies and requirements. For instance, adapting concepts from existing frameworks such as the hierarchy of intervention effectiveness (Institute for Safe Medication Practices, 1999) could promote the adoption of stronger, system-focused changes.

Consider training: most organizations require cybersecurity awareness training for all employees, plus additional training for those working directly on building or maintaining IT systems. While training will always be a valuable component of improving cybersecurity, it is not a strong standalone solution. In the hierarchy of intervention effectiveness, training is categorized as a weak, people-focused intervention. In contrast, interventions such as reducing complexity, developing standards, and using automation are categorized as stronger, system-oriented interventions.
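
To make that contrast concrete, here is a minimal sketch (in Python, using hypothetical configuration fields and policy rules of our own invention) of what a system-oriented intervention can look like in practice: an automated pre-deployment check that blocks a risky configuration outright, rather than relying on a trained administrator to notice it.

```python
# Illustrative sketch of a system-focused intervention: automate a policy
# check instead of depending on trained human vigilance. The config fields
# and rules below are hypothetical examples, not a real product's API.

from dataclasses import dataclass


@dataclass
class BucketConfig:
    name: str
    public_read: bool
    encryption_enabled: bool


def validate(config: BucketConfig) -> list[str]:
    """Return a list of policy violations; an empty list means the config passes."""
    violations = []
    if config.public_read:
        violations.append(f"{config.name}: public read access is not permitted")
    if not config.encryption_enabled:
        violations.append(f"{config.name}: encryption at rest must be enabled")
    return violations


if __name__ == "__main__":
    # A misconfiguration that awareness training alone might not catch is
    # blocked automatically, before deployment.
    risky = BucketConfig(name="customer-data", public_read=True, encryption_enabled=False)
    for problem in validate(risky):
        print("BLOCKED:", problem)
```

A check like this removes an entire class of human error from the deployment path; training then supplements, rather than substitutes for, the system-level safeguard.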

In sum, recommendations must address both human-focused and systemic factors to promote holistic progress toward a more resilient cybersecurity domain.

It’s important to note that, both at the organizational level and across government and industry, efforts to improve cybersecurity will be incremental. By building interdisciplinary teams, and by engaging experts in human behavior and performance to consciously integrate human factors into our security assessments and responses, we can identify the solutions with the greatest potential for improving our cyber safeguards.

About Margaret Cunningham
Margaret Cunningham, Ph.D., is the principal research scientist for analytics and behavioral science in the Global Government and Critical Infrastructure (G2CI) group at Forcepoint.