The following excerpts are adapted from Cyber Persistence Theory: Redefining National Security in Cyberspace, by Michael P. Fischerkeller, Emily O. Goldman, and Richard J. Harknett, with a foreword by General Paul M. Nakasone, published by Oxford University Press.
Advances in machine learning and deep learning, hereafter referenced as artificial intelligence or AI, will impact cyber security and, we argue, cyber stability. There are valid competing arguments for whether AI platforms will provide more advantage to those wanting to secure the international status quo vice those seeking to alter it.
Schneier believes that “AI has the capability to tip the scales more toward defense” yet offers arguments supportive of both offense and defense. AI will support discovering new vulnerabilities for offensive operations to exploit along with new types of vulnerabilities for defensive operations to patch, enabling automatic exploitation and patching. AI will support reacting and adapting to an adversary’s actions, both offensively and defensively. AI will support abstracting lessons from individual incidents, generalizing them across devices, systems, and networks, and applying those lessons to increase overall attack and defense effectiveness. And AI will support identifying strategic and tactical trends from large datasets and using those trends for both offense and defense.
Others emphasize how AI could advantage attackers by automating tasks and thus alleviating the existing trade-off between scale and efficacy of attacks. AI could support rapid and wide-scale identification of vulnerabilities and their automated exploitation. AI may expand the threat of labor-intensive cyberattacks like spear phishing. AI could enable novel attacks to exploit human vulnerabilities, such as using speech synthesis for impersonation. AI could also identify and exploit the vulnerabilities of others’ AI systems through adversarial inputs, data poisoning, and model extraction. Absent a contribution from AI that is more qualitative than quantitative in character, some believe the defense will eventually gain the upper hand.
At the time of this writing, the technical community has not reached consensus on whether AI will advantage the offense or defense. From the perspective of cyber persistence theory, this lack of consensus is immaterial because the analytical measure itself (offense-defense balance) is not applicable to the cyber strategic environment, except in limited tactics. However, in the logic and lexicon of cyber persistence theory, a couple of observations can be made about the impact of AI on cyber stability.
Greater adoption of AI has the prospect of further compressing time to observe, orient, decide, and act. The literature on stability suggests that time compression exacerbates destabilizing tendencies. Additionally, AI holds out the prospect of automating the exploitation of vulnerabilities as well as their prevention and mitigation. Given that testing of AI defensive systems has demonstrated that most are vulnerable to exploitation through a discovery process of repeated intrusion attempts, Wyatt Hoffman proposes that “unlike other domains where engagements between attackers and defenders might be episodic, e.g. autonomous weapon systems in kinetic warfare,” competitors will be inclined, instead, toward persistent behavior to seek out vulnerabilities in adversary defenses. Although Hoffman argues this will introduce instability, we argue at the outset of this chapter that a strategic environment of persistent action, that is, the cyber strategic environment, is not inherently unstable.
A question worthy of critical examination is “Where must the human decision maker sit on the loop to make consequential decisions over introducing destabilizing tendencies or controlling the choice to breach cyber agreed competition?” Policymakers also should be concerned with the global rates of diffusion and adoption of AI platforms. Should diffusion be slow or adoption be limited, imbalances of initiative may emerge and incentivize States on the “losing ends” to resort to armed attack to remedy their perceived loss of relative power.
Intriguingly, the process of machine learning could, itself, be leveraged as a stabilizing reinforcement. If core security algorithms leveraged by States for cyber faits accomplis or direct cyber engagements emphasize tactics, operations, and campaigns that cumulatively advance national interests short of actions and outcomes that would produce instability, the algorithms can be taught and trained to promote cyber stability.
The voluminous activity comprising cyber agreed competition does not preclude the coexistence of stability and fluidity. For this to occur, States must align with the structural imperative of the environment and effectively execute strategies that map to the prescriptions and expectations of cyber persistence theory. The GGE (Group of Governmental Experts), OEWG (Open-Ended Working Group), and other explicit, formal approaches to establishing norms of acceptable behavior in support of cyber stability are not aligned with key features of either the current geostrategic or the cyber strategic environments. To mitigate the consequences of this misalignment, tacit coordination and tacit bargaining should play more prominent roles in States’ efforts to construct granular, mutual understandings of acceptable and unacceptable behaviors.
Copyright © Oxford University Press