A copy of the full analysis can be downloaded by clicking on the link at the bottom of this blog entry.
In Part 1: Game Theory Basics of War, I described the three potential options players may take in traditional war: don't arm, arm, or attack. I described the benefits and costs associated with arming for defense, as well as the benefits and costs associated with arming for offense or attacking. I indicated that there are ambiguities in how players interpret the actions other players are taking: sometimes arming can increase a player’s security, while other times, arming can decrease a player’s security. I explained that there are two crucial variables that contribute to the ambiguity: (i) whether defensive weapons can be distinguished from offensive weapons, and (ii) whether defense or offense has the advantage.
In Part 2: Defining Cyberwar, I indicated that cyberattacks represent a new form of attack, and that attempting to frame cyberattacks in terms analogous to those of traditional, real-world attacks has proven problematic, in part because cyberwar represents a new kind of war. I distinguished cyberattacks, cyberwar, and cyberterrorism from one another based on (i) whether the actor was government or civilian and (ii) whether the motivation was personal/commercial or political in nature.
In Part 3: Unique Properties of Cyberwar, I discussed some of the more significant unique properties of cyberwar that distinguish it from traditional war: (i) Cyberwar creates a security dilemma since (a) it's difficult to distinguish offensive from defensive actions in cyberspace, and (b) offense has the advantage over defense; (ii) in cyberspace it's difficult to know who the perpetrator of a cyberattack is; (iii) in cyberwar, there are generally no human injuries or death; and (iv) many cyberweapons are one-time-use in nature.
In this section, the last section of the analysis, I explore several defensive and offensive strategies for players engaged in cyberwar.
Decrease Incentives to Attack
As I mentioned in the previous section, cyberwar has the potential to create a security dilemma, since (i) offensive weapons cannot be distinguished from defensive weapons and (ii) cyberwar tends to favor offense. One player might simply be trying to defensively arm itself against cyberattacks from other players. However, by doing so he may induce insecurity in other players that might lead them to arm themselves for cyberwar and possibly even attack the player preemptively.
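To make this dynamic concrete, the security dilemma can be sketched as a simple two-player game. The payoff numbers below are purely illustrative (my own, not drawn from any source); they are chosen only to reproduce the structure in which arming is each player's best response, yet mutual arming leaves both players worse off than mutual restraint.

```python
# Illustrative sketch of the security dilemma as a 2x2 game.
# All payoff numbers are hypothetical, chosen only to reproduce the
# prisoner's-dilemma structure described in the text.

STRATEGIES = ["don't arm", "arm"]

# PAYOFF[(my_choice, their_choice)] = my utility
PAYOFF = {
    ("don't arm", "don't arm"): 3,  # mutual restraint: best joint outcome
    ("don't arm", "arm"):       0,  # unarmed against an armed rival: worst case
    ("arm",       "don't arm"): 4,  # armed advantage over an unarmed rival
    ("arm",       "arm"):       1,  # costly mutual arming
}

def best_response(their_choice):
    """Return the strategy that maximizes my payoff given the rival's choice."""
    return max(STRATEGIES, key=lambda mine: PAYOFF[(mine, their_choice)])

# Arming is the best response to either rival strategy ...
assert best_response("don't arm") == "arm"
assert best_response("arm") == "arm"

# ... yet mutual arming leaves each player worse off than mutual restraint:
assert PAYOFF[("arm", "arm")] < PAYOFF[("don't arm", "don't arm")]
```

In this structure, each player arms even if purely defensive intentions would have made mutual restraint preferable, which is exactly the dilemma transparency and signaling are meant to relieve.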
To mitigate the potential for a security dilemma – and particularly the possibility of preemptive attacks by others – a player who wishes to arm himself to defend against cyberattacks can increase the transparency of his actions to better clarify (signal) his intentions to other players.
Nicholas C. Rueter explains in more detail how this could work (emphasis mine):
It is difficult to conceive of a way in which offensive and defensive cyber weapons will ever become fully distinguishable. However, the trouble of distinguishing between offensive and defensive cyberwarfare programs and policies could be reduced through proper signaling. Militaries are necessarily secretive about their cyberwarfare programs and capabilities. But by clarifying and improving military doctrine on cyberwarfare, and by increasing the transparency of various cyberwarfare programs and units, states could better signal their intentions to potential adversaries and thus partially reduce the fear and uncertainty that exacerbate the security dilemma.
For example, states could clearly divide responsibility for cyber defense and cyber offense into separate military units. Similarly, they could increase the transparency, where possible, of both military and civilian cybersecurity organizations and efforts. By doing so, they might be able to allocate further resources and personnel to cybersecurity programs without substantially threatening their adversaries.
States could also reduce fear and uncertainty by clarifying their military doctrines regarding cyberwarfare. For instance, a doctrine holding that offensive cyber capabilities should only be used in response to a kinetic or cyber attack might help reassure adversaries that weapons are not being developed for hostile or aggressive uses. While there can be clear advantages to doctrinal ambiguity, these may be outweighed by the benefits of reducing an adversary’s fear. Of course, actions speak louder than words, and in order to realize these benefits, states will be expected to comply with their own doctrine. Thus, doctrine should be carefully crafted, and promises should not be made if they cannot be kept.
As an alternative to clarifying his intentions to others, a player might decrease the possibility that others will attack by decreasing the gains that others could potentially generate from attacking. Of course, the problem with this approach as it applies to cyberattacks is that one must be able to identify who the adversary is before one can determine how to decrease the gains that attacker would generate. And as I indicated above, the attribution problem associated with cyberattacks makes identifying the perpetrator extremely difficult. Conversely, it follows that the ability to identify a perpetrator of cyberattacks will, in and of itself, create deterrence, by making it possible to hold the perpetrator responsible for his actions.
Peter W. Singer and Allan Friedman, in “What about deterrence in an era of cyberwar?”, explain in more detail the ideas about decreasing the potential gains from an attack and decreasing the attribution problem (emphasis mine):
When we think about deterrence, what most often comes to mind is the Cold War model of MAD, mutually assured destruction. Any attack would be met with an overwhelming counterstrike that would destroy the aggressor as well as most life on the planet, making any first strike literally mad.
Yet rather than just getting MAD, deterrence really is about the ability to alter an adversary’s actions by changing its cost-benefit calculations. In addition to massive retaliation, the adversary’s decisions can also be affected by defenses, in what has been called “deterrence by denial.” If you can’t get what you want by attacking, then you won’t attack in the first place.
Theorists and strategists have worked for decades to fully understand how deterrence works, but one of the key differences in the cyber realm, as we have explored, is the problem of “who” to deter or retaliate against. Specifically, this is the issue of attribution we explored earlier…
This problem has made improving attribution (or at least making people think you have improved attribution) a key strategic priority for nations that believe themselves at risk of cyberattack. So, in addition to considering the massive retaliatory forces outlined by the Defense Science Board, the United States has grown its messaging efforts on this front. In 2012, for example, then Secretary of Defense Panetta laid down a public marker that “Potential aggressors should be aware that the United States has the capacity to locate them and to hold them accountable for their actions that may try to harm America.” In turn, these potential aggressors must now weigh whether it was bluster or real.
As far as “the ability to alter an adversary’s actions by changing its cost-benefit calculations” goes, this might be accomplished, say, by increasing the benefits to potential attackers of cooperating. For example, the US might deter other nations from committing cyberattacks by increasing offers of financial aid or decreasing barriers to trade. Another means of changing the cost-benefit calculation is to increase the costs of arming or attacking, say, by increasing sanctions for doing so.
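As a rough sketch of how such a cost-benefit calculation might look, consider the toy model below. All of the quantities (benefits, sanction levels, attribution probabilities, aid amounts) are hypothetical, chosen only to illustrate how better attribution and larger rewards for cooperating can flip an attacker's decision.

```python
# Hypothetical sketch of "deterrence by changing the cost-benefit
# calculation." All numbers are illustrative, not empirical.

def expected_gain_from_attack(benefit, cost_of_arming, p_attribution, sanction):
    """Attacker's expected net gain: benefit minus arming costs, minus
    the sanction discounted by the probability of being identified."""
    return benefit - cost_of_arming - p_attribution * sanction

def gain_from_cooperating(aid, trade_gains):
    """Value of cooperating instead: financial aid plus trade benefits."""
    return aid + trade_gains

# Baseline: weak attribution and few inducements -> attacking pays.
attack = expected_gain_from_attack(benefit=10, cost_of_arming=2,
                                   p_attribution=0.1, sanction=20)
cooperate = gain_from_cooperating(aid=1, trade_gains=2)
assert attack > cooperate  # 6 vs. 3: the attacker attacks

# Deterrence: better attribution (raising the expected sanction) plus
# increased aid and trade flips the attacker's calculation.
attack = expected_gain_from_attack(benefit=10, cost_of_arming=2,
                                   p_attribution=0.6, sanction=20)
cooperate = gain_from_cooperating(aid=3, trade_gains=4)
assert attack < cooperate  # -4 vs. 7: the attacker cooperates
```

Note that the attribution probability multiplies the sanction, which is why Panetta's public claim of attribution capability matters even before any sanction is imposed.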
Layers of Defense
Perhaps the most well-known defense against cyberattacks is “defense in layers,” which entails the use of multiple layers of defensive devices in an attempt to at least slow, if not entirely block, uninvited entry into one’s system. (Wikipedia provides a list of potential defense layers.)
Amit Sharma in “Cyber Wars: A Paradigm Shift from Means to Ends” provides a caveat to the use of a “defense in layers” strategy.
The notion of ‘defence in layers’ is a tried and tested dictum which is extensively used to protect both the commercial and the defence networks. It relies on installing multiple layers of defences so as to make the penetration almost impossible. Even though this notion has extensively been used to protect cyber infrastructure, it is a known fact that such a system is as strong as its weakest link. No matter how much the system is hardened and no matter how many layers are used to secure the system, there is still no guarantee that the system security is foolproof. It is safe only up until the time when someone doesn’t find any vulnerability or an exploitable construct in the system, which can be exploited to gain access in to the system. Yes, this notion of defence at least assures that the penetrator will require time to defeat multiple layers of security. It is this time that is crucial for defenders to take necessary action to thwart the threat. Hence this provides for a minimum deterrence, but nevertheless is not a complete and foolproof solution.
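The arithmetic behind Sharma's two points (each added layer helps, yet a single weak link can undo one) can be sketched as follows, under the simplifying and optimistic assumption that layers fail independently. The layer names and probabilities are hypothetical.

```python
# Sketch of the "defence in layers" arithmetic, assuming (optimistically)
# that layers fail independently. All probabilities are hypothetical.
import math

def breach_probability(layer_breach_probs):
    """Chance an attacker gets through every serial layer."""
    return math.prod(layer_breach_probs)

# Hypothetical per-layer probabilities that an attacker defeats the layer:
layers = {"firewall": 0.3, "IDS": 0.4, "host hardening": 0.5}
print(round(breach_probability(layers.values()), 3))  # 0.06: each layer helps

# The weakest-link caveat: an unknown vulnerability that bypasses one layer
# effectively sets its breach probability to 1, and the rest must hold alone.
layers["firewall"] = 1.0  # e.g., a zero-day bypass
print(round(breach_probability(layers.values()), 3))  # 0.2

# Layers also buy time, which Sharma notes is what matters to defenders:
delays = [2, 6, 12]  # hypothetical hours each layer delays the attacker
print(sum(delays), "hours for defenders to detect and respond")
```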
Cyberspace is a medium for transmitting information. It should come as no surprise, then, that any targets in cyberspace will necessarily be information-related. And the most vital information in a country is that which controls its critical infrastructure. It follows that a country’s critical infrastructure is an ideal target of cyberwar. Jason Rivera discusses this issue in more detail (emphasis mine):
In cyberspace operations, the object of offense or defense will almost always be some form of information. Even cyberattacks that yield physical effects are informational in nature... Information is the functional aspect of cyberspace and is the core purpose of its existence. In general, information can be created, stored, transferred, modified, deleted, secured, and processed. Humanity derives usefulness from information via the means by which it is processed; therefore, a national targeting strategy should specifically target information processes that are most critical to a nation’s adversaries. Information, in general, is processed in one of two ways:
Information processed by humans exists in the form of ideas; the most valuable ideas within an organization comprise that organization’s intellectual property. Intellectual property is comprised of plans, schematics, formulas, strategic communications, etc.
Information processed by machines exists in the form of protocol; the most critical protocol within an organization comprises that organization’s critical control systems. Control systems include those protocols that operate network-centric weapons, life support systems, sensors, communications systems, transportation systems, etc.
Randall R. Dipert proposes three sets of targets for cyberwar: military communications, weapons systems, and joint-use infrastructure. These sets of targets overlap with, if not mirror, Jason Rivera’s proposed targets. From Dipert:
Against a nation’s military targets, there are essentially three categories for targets of cyberweapons.
First, the cyberweapons may target and impair a main function of military chains of command, namely command and control: communications and information gathering, as well as the communication of precise orders to maneuver, defend, or attack, may be harmed. Command and control data may be blocked, altered, or false reports and commands inserted...
Second, weapons or weapon systems can be rendered inoperable for a time or even physically sabotaged by faulty messages or intrusions into their controlling information systems; in the extreme case, they could be directed by an enemy to attack a false target. The most sophisticated computerized weapon systems, such as the Patriot and other anti-aircraft and anti-missile systems, and the Aegis system of the US Navy, could conceivably be rendered inoperable or directed to fire at false targets...
Finally, cyberweapons could target joint-use infrastructure, that is, systems and structures for both civilian and military uses, or even civilian targets with the goal of demoralizing, weakening, or confusing an enemy’s military and civilian leadership. Joint-use infrastructure would include a wide variety of computer-guided systems, such as the satellite global positioning system (GPS) network, and energy, communication, water, or sanitation infrastructure, or petroleum and chemical refining systems. Targeting joint-use industries and systems in a lawful military conflict is generally permitted by international law and Just War Theory... Targeting primarily civilian structures and networks is prohibited by international law and by almost all theories of morality in warfare.
Another potential target for cyberwar is software vulnerabilities. Cyberwar perpetrators can use vulnerabilities discovered in a target’s software system to perpetrate an attack on the system. Allan Friedman, Tyler Moore, and Ariel D. Procaccia use software vulnerabilities as the basis of their game theoretic model to see which actions players will be led to take upon discovery of just such a vulnerability.
A vulnerability that has not been detected before, and that a program’s creator does not know exists, is called a “zero day”… A mechanism for exploiting an undiscovered vulnerability is referred to as a “zero day” attack...
… [W]hat should a cybersecurity organization do upon discovery of a previously unknown software vulnerability? We argue that a civilian or uniformed manager faces two conflicting options: to use the knowledge as a weapon in a cyber arsenal, or to treat the knowledge as an opportunity to secure our own systems. The choice is to behave aggressively [stockpile] or defensively [defend]...
[Under the model specified] Without any social cost [i.e., civilian damage], both actors will pursue an aggressive strategy of always stockpiling, regardless of one’s technical advantage. This is because neither has a strong incentive to defend: the worst case is that both end up with large stockpiles pointing at each other without any explicit cost. Even with a low degree of technical sophistication, there is always a positive probability that the other state will not discover the vulnerability, leading to a pure advantage.
Increased social costs impose an externality… for any equilibria under substantial social cost, someone will elect to share the vulnerability information with the [software] vendor, making the world safer. Who ends up bearing the cost of this externality? It will be borne by the less technically sophisticated nation…
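The flavor of this result can be reproduced with a toy version of the game. To be clear, the payoff function below is a drastic simplification of my own, not the authors' actual model: a state gains an exclusive-exploit advantage by stockpiling only when its rival fails to find the same vulnerability, and both bear the social cost of the unpatched bug when both stockpile rather than share it with the vendor.

```python
# Toy sketch inspired by the stockpile-vs-defend game described above.
# The payoff function is my own simplification, NOT the authors' model.
from itertools import product

ADV = 5  # hypothetical value of an exclusive exploit

def payoff(my_move, rival_move, p_rival_finds, social_cost):
    gain = 0.0
    if my_move == "stockpile":
        gain += ADV * (1 - p_rival_finds)  # advantage only if rival misses it
    if my_move == "stockpile" and rival_move == "stockpile":
        gain -= social_cost  # both bear the unpatched-bug externality
    return gain

def equilibrium(p_a_finds, p_b_finds, social_cost):
    """Brute-force the pure-strategy Nash equilibria of the 2x2 game."""
    moves = ["stockpile", "defend"]
    eqs = []
    for a, b in product(moves, repeat=2):
        ua = payoff(a, b, p_b_finds, social_cost)
        ub = payoff(b, a, p_a_finds, social_cost)
        a_ok = all(payoff(alt, b, p_b_finds, social_cost) <= ua for alt in moves)
        b_ok = all(payoff(alt, a, p_a_finds, social_cost) <= ub for alt in moves)
        if a_ok and b_ok:
            eqs.append((a, b))
    return eqs

# No social cost: the unique equilibrium is mutual stockpiling,
# regardless of the gap in technical sophistication.
print(equilibrium(p_a_finds=0.9, p_b_finds=0.2, social_cost=0))
# [('stockpile', 'stockpile')]

# Substantial social cost: mutual stockpiling is no longer an equilibrium;
# in every remaining equilibrium, someone defends (shares with the vendor).
print(equilibrium(p_a_finds=0.9, p_b_finds=0.2, social_cost=6))
```

Even in this stripped-down version, the paper's qualitative claim survives: with no social cost both states stockpile, and once the social cost is substantial, every equilibrium has someone disclosing the vulnerability.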
Other Offensive Strategies
Jason Rivera lists “four general categories of offensive capabilities” for cyberattacks: cyber espionage, psychological operations, denial of service, and cyber sabotage:
Once a target is acquired and the decision is made to engage the target, there are four general categories of offensive capabilities that can be delivered through cyberspace. The first of these capabilities is the most passive: cyber espionage. Cyber espionage is the process of engaging a networked target for the purposes of conducting state-sponsored intelligence operations. The second offensive capability is the execution of network-enabled psychological operations. Psychological operations include the alteration of information processed by humans (ideas and intellectual property) in order to sow confusion, distrust, rebellion, or other such emotional uncertainties amongst the adversary’s combatant or non-combatant population. The third offensive capability in cyberspace is the denial of service. This category includes those actions taken through cyberspace to either deny Internet service entirely or deny access to authorized users. The last and potentially most dangerous offensive capability in cyberspace is cyber sabotage. Cyber sabotage includes those actions conducted through the use of cyberspace designed to sabotage computers, computer networks, or networked machines in such a manner as to cause mechanical failure, procedural error, or physical destruction.
Michael Joseph Gross, in “A Declaration of Cyber-War”, describes how the Stuxnet worm exemplifies a perfect cyber weapon. While the malware made its way through a large portion of cyberspace, it was perfectly honed to accomplish its mission – sabotaging Iranian centrifuges – without causing damage to any other systems.
In spite of Stuxnet’s many muddling effects, it also offers a clear answer to one of cyber-war’s most difficult problems. Academics and software developers have long wondered how cyber-attacks could be weaponized but remain side-effect-free. If you aim a cyber-weapon at a power station, how do you avoid taking out a hospital at the same time? “Stuxnet is a really good example of how to do that,” Rieger says, “how to make sure that you actually only run on the system that you’re targeting.” To Rieger, Stuxnet’s success on this point “shows that the effort put into its development has been on not just a technical level but a strategic level too, thinking through: How should the proper cyber-weapon be constructed?”