Attacker Advantage in Cyberspace: An Enduring Feature of the Domain

By Dr. Michael Senft

March 15, 2024

Image: “Attacker Advantage in Cyberspace,” image generated by OpenAI’s DALL-E, March 1, 2024

Throughout the annals of military history, and now in the domain of cyberspace, the relative advantage between attack and defense has been a focal point of debate. Some argue that the perceived upper hand of cyber-attackers is merely a myth, while others assert that the latest breakthroughs in Artificial Intelligence have turned the tables, granting cyber defenders a newfound advantage [1,2,3]. I contend this debate should be settled in favor of the cyber-attacker, who possesses freedom of maneuver due to the complexity underpinning the security of modern digital systems. This complexity rests on an intricate lattice of security assumptions, which cannot be holistically validated. Freedom of maneuver through security assumptions provides attackers with an advantage in cyberspace, irrespective of technological advancements. Elevating to fact the oft-quoted maxim “The attacker only has to be right once. Defenders have to get it right every time” provides a new operational mindset to shape the future of full spectrum cyberspace operations.

The assertion of enduring attacker advantage in cyberspace is controversial, but well-supported in hindsight through exploration of the security foundation for digital systems developed over the past half-century. These “modern” digital systems are in fact an amalgamation of elaborate functional and technical implementations, grounded in an intricate lattice of security assumptions. Systems and network defenders are constrained by security assumptions, while attackers enjoy freedom of maneuver, needing only to invalidate one or more of these assumptions to achieve their goals. Distinguished cybersecurity researcher Dr. Dorothy Denning highlighted this predicament in 1999, asserting, “Security models and formal methods do not establish security. Systems are hacked outside the models' assumptions” [4].

For designers, developers and defenders of modern digital systems, “the superstructure of security depends upon the underlying assumptions” [5]. An assumption is “something that is believed to be true or probably true but that is not known to be true” [6]. It is impossible to write software without the assumption that the hardware and OS components will function correctly to transform instructions into machine code, which is then executed flawlessly by the device’s central processing unit and other hardware components [7]. While security assumptions can be validated individually or in small groups, complexity creates a combinatorial explosion: the vast number of configurations and interactions possible within a single computer system grows exponentially when these systems are connected to each other.
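The scale of this combinatorial explosion can be illustrated with a toy calculation. The component names and option counts below are purely hypothetical, invented for illustration rather than drawn from any real system:

```python
from math import prod

# Purely illustrative counts: independent binary security-relevant
# settings per component of a single system.
options = {
    "firmware": 6,
    "operating_system": 12,
    "network_stack": 8,
    "applications": 10,
}

# Each binary setting doubles the state space, so one system has
# 2**(total number of settings) possible configurations.
single_system = prod(2 ** n for n in options.values())
print(f"one system: {single_system:,} configurations")  # 2**36

# Connecting k such systems multiplies the joint state space again:
for k in (2, 10):
    print(f"{k} systems: {float(single_system) ** k:.2e} joint configurations")
```

Even this small toy model, with only 36 binary settings, yields tens of billions of single-system configurations; validating every security assumption across every configuration and interconnection is plainly intractable.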

The reliability and functionality of modern digital systems are a testament to human ingenuity and achievement, but are also a testament to human capacity for complacency and hubris. From the Ware report written in 1970 to the present day, the ability of attackers to violate security assumptions to completely bypass layers of security is well documented [8]. The enduring advantage enjoyed by attackers in cyberspace through the ability to invalidate these assumptions is exemplified by the “LogoFAIL” firmware attack disclosed in 2023 and the SolarWinds software supply chain hack identified in 2020. Exploration of these two vulnerabilities provides contemporary context for both how security assumptions are invalidated and the asymmetric level of effort required for mitigation.

“LogoFAIL” exploited multiple vulnerabilities to compromise the Unified Extensible Firmware Interface (UEFI) used to boot system hardware [9]. UEFI was designed to increase security by requiring valid digital signatures for software being loaded, preventing malicious code from executing [10]. However, in the event UEFI is compromised, attackers are able to gain full control of all software, along with storage and memory on the device, since UEFI operates at a level below the Operating System (OS) [11]. In “LogoFAIL” attackers were able to gain full control of the UEFI, and by extension the OS, by replacing the legitimate logo image presented when a system starts with an image specially crafted to exploit vulnerabilities in the UEFI image processing software [12].

Through “LogoFAIL” attackers were able to systematically invalidate security assumptions made by the developers of UEFI, the hardware manufacturers that implemented UEFI on their systems, and OS vendors including Microsoft, who relied on UEFI to provide secure boot functionality for their OS. Dissecting the “LogoFAIL” exploit, the first security assumption made was the absence of exploitable vulnerabilities within the UEFI firmware components. Discovery of vulnerabilities within the image processing software used by UEFI provided a focal point for attackers to explore pathways to exploit these vulnerabilities. Replacing the original logo image loaded by UEFI with a maliciously crafted image allowed attackers to exploit the vulnerabilities identified [13]. Here another security assumption was made, whether by design or through oversight. The use of UEFI by a wide range of vendors necessitated the ability to load the logo image of choice for each vendor, but these images did not require valid digital signatures, unlike the software being loaded. Devices sold by vendors who protected access to the logo image file were not impacted by “LogoFAIL”. Even though UEFI was designed to prevent malicious code from executing, it unwittingly introduced a new vulnerability that may have been exploited for years before being publicly disclosed.

“LogoFAIL” also reinforces the asymmetry between attackers and defenders in cyberspace: the resources needed to identify and exploit vulnerabilities are modest compared with the resources required for mitigation. While a non-trivial level of effort was required to identify and exploit the collection of vulnerabilities that enabled “LogoFAIL”, this level of effort paled in comparison to the level of effort required to remediate the vulnerability. First, the UEFI firmware on each impacted device needed to be updated, provided that updated firmware patching the vulnerability was available [14]. Firmware, unlike software, is infrequently updated, resulting in a complicated process for loading updates, which also carries a slight risk of making the device inoperable if an error occurs during the update process. The cumbersome firmware update process only addresses the known vulnerability within the UEFI image processing software. Defenders must next attempt to determine if impacted systems were previously compromised. With the ability to bypass OS security protections following the compromise of UEFI, the task of identifying compromised systems becomes exceedingly challenging even for highly experienced cybersecurity teams.
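The class of flaw at work here can be sketched in miniature. The format, field names, and parsers below are hypothetical inventions, not the actual UEFI image-parsing code; the point is the unvalidated trust placed in an attacker-controlled length field:

```python
import struct

# Hypothetical logo format: 4-byte magic, 4-byte little-endian
# pixel-data length, then pixel data.
def parse_logo_unsafe(data: bytes) -> bytes:
    magic, declared_len = struct.unpack_from("<4sI", data, 0)
    if magic != b"LOGO":
        raise ValueError("bad magic")
    # VULNERABLE assumption: the header's length field is honest.
    # Python slicing merely truncates, but a C parser doing
    # memcpy(dst, src, declared_len) would read or write out of
    # bounds when declared_len exceeds the real buffer.
    return data[8:8 + declared_len]

def parse_logo_checked(data: bytes) -> bytes:
    magic, declared_len = struct.unpack_from("<4sI", data, 0)
    if magic != b"LOGO":
        raise ValueError("bad magic")
    # Validate every attacker-controlled field against reality.
    if declared_len != len(data) - 8:
        raise ValueError("declared length does not match image size")
    return data[8:8 + declared_len]
```

A crafted image simply declares a length far larger than the buffer it ships with; the checked variant rejects it, while the unsafe variant embodies exactly the kind of invalidated assumption that firmware-level parsers have repeatedly exhibited.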

Unfortunately “LogoFAIL” is not an isolated incident; rather, it is another example where an attacker invalidates one or more security assumptions to completely bypass multiple layers of security measures. However, “LogoFAIL” is notable for the scope of systems impacted. The vulnerabilities identified in “LogoFAIL” were present across multiple system manufacturers, impacting a wide swath of hardware configurations and the OSs running on top of them. “LogoFAIL” also raises questions about what other yet-to-be-discovered vulnerabilities are lurking deep within system firmware and hardware.

Even with assumptions of vulnerability-free firmware and hardware, the 2020 SolarWinds hack provides an example where software security assumptions can be invalidated by cyber-attackers to create wide-ranging effects. Unauthorized access to the SolarWinds software development environment enabled a Russian-linked cyber-attack group to compromise IT administration software used widely across government and commercial organizations [15,16]. Code signing was used by SolarWinds to ensure the integrity of their software code once released, but this protection failed to ensure the security of their software development process [17]. Several lines of code were inserted into a Dynamic Link Library (DLL), which then retrieved fewer than 4,000 lines of malicious code that enabled the attackers to compromise systems and networks belonging to potentially 18,000 customers using the affected SolarWinds software [18,19]. Once the malicious code was identified, SolarWinds quickly released patches to remove it, but the damage was already done [20]. The list of organizations impacted was wide-ranging, from Mandiant to Microsoft in the commercial sector, and from the Department of Defense to the Treasury Department in the government sector. As with the “LogoFAIL” attack, the level of effort required to compromise the SolarWinds software was dwarfed by the level of effort required to respond to the intrusions enabled by the exploited software.
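The gap between code signing and build-pipeline integrity can be shown in a few lines. This is a hedged toy model, with an HMAC standing in for a vendor's real certificate-based signing; it is not SolarWinds' actual pipeline:

```python
import hashlib
import hmac

# Toy stand-in for the vendor's private signing key.
SIGNING_KEY = b"vendor-private-signing-key"

def sign(artifact: bytes) -> bytes:
    """Sign whatever the build pipeline produced."""
    return hmac.new(SIGNING_KEY, artifact, hashlib.sha256).digest()

def verify(artifact: bytes, signature: bytes) -> bool:
    """What every customer checks before installing."""
    return hmac.compare_digest(sign(artifact), signature)

clean_build = b"legitimate library code"
# The attacker tampers with the build environment BEFORE signing:
compromised_build = clean_build + b"\n/* injected backdoor */"

signature = sign(compromised_build)  # vendor signs the pipeline output
assert verify(compromised_build, signature)  # passes on every customer system
```

The signature honestly attests that the artifact left the vendor unmodified; it cannot attest that the artifact was ever trustworthy, which is precisely the assumption the attackers invalidated.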

While egregious security lapses facilitated access to the SolarWinds software development environment by the Russian-linked cyber-attack group, the ability of cyber-attack groups to infiltrate even organizations with advanced cybersecurity capabilities is well documented [21]. Software supply chain attacks may seem novel, but the dangers of trusting software were clearly identified in 1984 by Ken Thompson. In an article aptly named “Reflections on Trusting Trust,” he used a simple example to demonstrate why “You can't trust code that you did not totally create yourself” [22]. The complexity of modern digital systems requires an implicit assumption that code, from microcode within the central processing unit to high-level programming code used by OSs and applications, is trusted. This assumption is but one of many creating the intricate lattice of security assumptions available to attackers to undermine.
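Thompson's construction can be caricatured in a few lines. Everything below (function names, the "backdoor" string, treating compilation as the identity function) is invented for illustration; it mimics only the shape of his argument, in which the trojan lives in the compiler binary and survives recompilation from clean source:

```python
TROJAN_MARK = "# [trojan payload]"

def trojaned_compile(source: str) -> str:
    """A caricature of Thompson's compromised compiler."""
    binary = source  # pretend "compilation" is the identity function
    # (1) If compiling a password check, insert a backdoor.
    if "password == stored" in source:
        binary = binary.replace(
            "password == stored",
            'password == stored or password == "backdoor"')
    # (2) If compiling the compiler itself, re-insert this trojan,
    # so even a pristine compiler source yields a trojaned binary.
    if "def compile" in source:
        binary += "\n" + TROJAN_MARK
    return binary

login_source = "def login(password, stored): return password == stored"
clean_compiler_source = "def compile(source): return source"

assert '"backdoor"' in trojaned_compile(login_source)          # backdoored login
assert TROJAN_MARK in trojaned_compile(clean_compiler_source)  # self-propagating
```

Inspecting the clean compiler source reveals nothing, because the trojan resides only in the binary that does the compiling; this is why Thompson concluded that source-level review alone cannot establish trust.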

Published in 1991, “Computers at Risk: Safe Computing in the Information Age” offers: “The interaction of threat and countermeasure poses distinctive problems for security specialists: the attacker must find but one of possibly multiple vulnerabilities in order to succeed; the security specialist must develop countermeasures for all. The advantage is therefore heavily to the attacker until very late in the mutual evolution of threat and countermeasure” [23]. Unfortunately, in cyberspace this evolution remains at an early stage.

For cybersecurity professionals, acceptance of attacker advantage provides powerful insights into meaningful approaches to nullify this advantage. In a prescient post in 1999, noted security researcher Bruce Schneier predicted that “As systems get more complex, security will get worse” and “As systems become more interconnected, security will get worse” [24]. Only reductions in complexity and interconnectedness will shrink the intricate lattice of security assumptions that provides attackers with advantage. Thorough investigation and assessment of these security assumptions also serve to decrease the attacker's advantage [25]. Finally, deception, both technical and psychological, provides opportunities to turn attackers’ assumptions into an advantage for defenders [26,27].

For decision-makers and policy-makers, accepting that attackers possess an enduring advantage over defenders in cyberspace provides a new operational mindset for the conduct of full spectrum cyberspace operations in an era of great power competition. The 2018 policy shift to persistent engagement and defending forward in cyberspace was initially met with significant concern about further escalation from both policymakers and academics [28,29]. In hindsight, these concerns were largely unfounded, but accepting attacker advantage in cyberspace also raises the question of whether this policy shift went far enough, given the ever-increasing scope and impact of cyber-attacks. From SolarWinds to Volt Typhoon, restraint in cyberspace has thus far failed to reduce threats to the United States and our allies [30].

Establishment of norms for responsible state behavior in cyberspace is a noble goal, but these efforts fail to account for the incentives created by the attacker's advantage [31]. Relatively modest investments in offensive cyber capabilities by a less technologically advanced nation can impose significant defensive mitigation costs on a more technologically advanced one. This helps explain why technologically advanced nations are eager to promote restraint in the execution of offensive capabilities in cyberspace, and why these efforts will ultimately fail in an era of great power competition [32].

Restraint, however, may leave the United States unprepared for large-scale conflict in cyberspace, much like the case of submarine warfare at the start of the Second World War. Prior to December 7, 1941, U.S. policy prohibited the United States Navy from conducting unrestricted submarine warfare, another domain where the attacker possesses an advantage [33,34]. As a result, the United States was wholly unprepared to conduct unrestricted submarine operations following the attack on Pearl Harbor, with largely ineffective operations during the first two years of the war [35]. That unrestricted submarine warfare ended the war as a key contributor to the defeat of Japan is a testament to the perseverance of the United States Navy submarine force, a success that came at great cost and sacrifice due to grave policy errors, however noble their original intent. While escalation in cyberspace is not desirable, it is a function of the attacker's advantage provided by freedom of maneuver across the intricate lattice of security assumptions required by modern digital systems. Accepting attackers’ advantage in cyberspace acknowledges that escalation will continue irrespective of the desire to establish norms, because it enables actors at all resource levels to impose costs [36]. Lessons from the past, however, can help inform an increasingly uncertain future.


[1] Martin Libicki, Lillian Ablon & Timothy Webb, The Defender’s Dilemma, (Santa Monica: RAND Corporation, 2015),

[2] Charles Smythe, “Cult of the Cyber Offensive: Misperceptions of the Cyber Offense/Defense Balance,” Yale Journal of International Affairs (2020),

[3] Joe Slowik, “The Myth of the Adversary Advantage,” Dragos, 2018,

[4] Dorothy Denning, “The Limits of Formal Security Models,” 1999,

[5] Matt Bishop and Helen Armstrong, “Uncovering Assumptions in Information Security”, Proceedings of the Fourth World Conference on Information Security Education (2005), 

[6] “Assumption,” The Britannica Dictionary, Britannica, accessed March 1, 2024,

[7] Peter Loscocco, “The Inevitability of Failure: The Flawed Assumption of Security in Modern Computing Environments”, 1998,  

[8] Willis Ware, Security Controls for Computer Systems: Report of Defense Science Board Task Force on Computer Security, (Santa Monica: RAND Corporation, 1979),

[9] Dan Goodin, “Just about every Windows and Linux device vulnerable to new LogoFAIL firmware attack”, Ars Technica, December 6, 2023,

[10] “Secure Boot,” Microsoft Corporation, accessed March 1, 2024,

[11] Goodin, “Just about every Windows and Linux device vulnerable to new LogoFAIL firmware attack”.

[12] Ibid. 

[13] Ibid. 

[14] Megan Crouse, “Widespread Windows and Linux Vulnerabilities Could Let Attackers Sneak in Malicious Code Before Boot”, Tech Republic, December 7, 2023,

[15] Rob Lefferts, “A report on NOBELIUM’s unprecedented nation-state attack,” Microsoft Corporation, December 15, 2021,

[16] Microsoft Threat Intelligence, “Analyzing Solorigate, the compromised DLL file that started a sophisticated cyberattack, and how Microsoft Defender helps protect customers,” Microsoft Corporation, December 18, 2020,

[17] DigiCert, “The SolarWinds Tipping Point,” DigiCert, Inc., 2021,

[18] Pam Baker, “The SolarWinds hack timeline: Who knew what, and when?,” CSO, June 4, 2021,

[19] Kim Zetter, “The Untold Story of the Boldest Supply-Chain Hack Ever,” Wired, May 2, 2023,

[20] Ibid. 

[21] Center for Strategic and International Studies (CSIS), “Significant Cyber Incidents,” CSIS, accessed March 1, 2024,

[22] Ken Thompson, “Reflections on Trusting Trust,” Communications of the ACM, Volume 27, Issue 8, 1984,

[23] National Research Council, Computers at Risk: Safe Computing in the Information Age, (Washington: National Academy Press, 1991),
[24] Bruce Schneier, “A Plea for Simplicity,” Schneier on Security, November 19, 1999,

[25] Peter Loscocco, Gregory Machon, and Robert Meushaw, “Assumption-Driven Design A Strategy for Critical Thinking in Trusted Systems Design,” CERIAS Technical Report 2018-2, 2018,

[26] Kimberly Ferguson-Walter, Maxine Major, Chelsea Johnson, and Daniel Muhleman, “Examining the Efficacy of Decoy-based and Psychological Cyber Deception,” Proceedings of the 30th USENIX Security Symposium, 2021,

[27] Michael Senft, “Exploratory Data Analysis of Defensive Cyber Deception Experimentation” (PhD diss., Naval Postgraduate School, 2023),

[28] Michael Fischerkeller and Richard Harknett, “Persistent Engagement, Agreed Competition, and Cyberspace Interaction Dynamics and Escalation,” Cyber Defense Review, 2019,

[29] Brandon Valeriano and Benjamin Jensen, “The Myth of the Cyber Offense: The Case for Cyber Restraint,” Cato Institute Policy Analysis No. 862, 2019, Available at SSRN:

[30] Cybersecurity and Infrastructure Security Agency (CISA), “PRC State-Sponsored Actors Compromise and Maintain Persistent Access to U.S. Critical Infrastructure,” CISA, February 7, 2024,

[31] James Andrew Lewis, “Creating Accountability for Global Cyber Norms,” Center for Strategic and International Studies, February 23, 2022,

[32] Paul Rosenzweig, “Volt Typhoon and the Disruption of the U.S. Cyber Strategy,” Lawfare, March 5, 2024,

[33] Janet Manson, Diplomatic Ramifications of Unrestricted Submarine Warfare, 1939-1941 (Westport: Greenwood, 1990). 

[34] Joel Ira Holwitt, “‘Execute against Japan’: Freedom-of-the-Seas, the U.S. Navy, Fleet Submarines, and the U.S. Decision to Conduct Unrestricted Warfare, 1919-1941” (PhD diss., The Ohio State University, 2005),

[35] Clay Blair Jr., Silent Victory: The U.S. Submarine War Against Japan (Philadelphia: Lippincott, 1975). 

[36] Bill Gertz, “Beijing engaged in ‘critical buildout’ of offensive cyber tools, FBI Director Wray warns,” Washington Times, February 19, 2024,

About the Author

Michael is currently the Director of Cyber Research at SIXGEN. An Army veteran, he previously served as a lecturer and researcher at the Naval Postgraduate School. His operational experience includes tours supporting Joint, Special Operations and Intelligence Community organizations. Michael’s research interests include cyberspace operations, cyber deception, and cyber and electronic warfare convergence. He holds a Ph.D. and M.S. from the Naval Postgraduate School and a B.S. from Virginia Tech. 

The views expressed in this article are those of the author and do not necessarily represent the views of SIXGEN.