Incidents of hackers ("Black Hats", that is, those with nefarious intentions) breaking into technology systems are certainly nothing new. But each new report reminds us that we’re all vulnerable to falling victim to some hacker’s attack on some system that would cause us harm.
Hackers hacking into systems that contain our financial information can cause us financial harm. Examples include incidents last year in which hackers stole customer credit card information from Target and Home Depot.
Hackers hacking into systems that contain our personal information can cause us reputational harm. The recent hack of the Ashley Madison website, together with the release of participants’ identities, provides a good example of this.
Hackers will also increasingly be able to cause physical harm, for example, by hacking into automobile systems.
While we cannot expect systems developers to make systems completely impenetrable, we can surely expect them to make sufficient efforts to render their systems reasonably robust to hacker attacks. Unfortunately, in many environments, systems developers simply do not have the incentives needed to make their systems nearly as robust as we – the potential users and victims of attacks – would like them to be.
This analysis examines the game between Systems Developers, Hackers, and Users (Victims) to determine when developers have too few incentives to make their systems robust and what might be changed to incentivize them to take more care.
Overview of the Game
Systems Developers (SysDev) develop and/or host products with software components, which Users use. Hackers may hack into SysDev software through vulnerabilities in the software, generating benefits for themselves, while causing damages to both SysDev and Users. The possibilities of having their systems hacked generally create incentives for SysDev to invest some amount of resources (e.g., time, man-hours, money) into making their systems robust to attacks by Hackers. If Hackers are successful at hacking into SysDev systems, then SysDev may subsequently invest resources to patch the vulnerabilities, thereby mitigating potential losses to SysDev and Users of SysDev products.
Fundamentally, examination of the SysDev-Hacker Game is an examination of how SysDev acts under different scenarios. It’s taken as given that
• SysDev’s systems are vulnerable,
• Hackers will attempt to break into systems and many will be successful, and
• SysDev and Users will be harmed by the breaches.
While SysDev know that Users suffer damages after successful breaches, SysDev only care about Users’ damages to the extent that SysDev has to pay for them. This is the heart of the divergence between private and socially optimal outcomes. As long as Users suffer costs from breaches that SysDev don’t have to pay for, SysDev will continue to invest too little effort from a social perspective into making their systems robust to attacks.
Systems Developers may patch software after vulnerabilities have been discovered, both to mitigate damages to current Users and to prevent damages to future Users. However, since patching vulnerabilities is costly, SysDev may also choose to not invest in fixing the vulnerabilities, that is, leave them unpatched. Regardless of whether or not SysDev choose to fix vulnerabilities, SysDev may also suffer damages caused by hacking, including systems repair and costs associated with remunerating Users for damages.
How SysDev choose to address hacks generally has an impact on SysDev reputations, where such impacts affect and reflect future sales and profitability potential.
In other words, there are two separate cost components to SysDev associated with having vulnerabilities in their software. First, there are direct costs of fixing vulnerabilities, installing (or having Users install) patches, and fixing any other damages caused by the hack. And second, there are indirect, reputational costs, which affect and reflect future profitability potential.
If hacks become publicly known, then SysDev will generally maximize their reputation by both (i) patching the vulnerabilities and (ii) announcing that they have fixed the problems as soon as possible. Making sure the public knows that SysDev have fixed problems will reassure Users that damages to current Users have been mitigated and damages to future Users have been prevented. SysDev will still suffer reputational losses, but not as large as they would if Users knew that SysDev refused to address the problems.
If hacks do not become publicly known, then SysDev may do better by hiding the vulnerabilities. By leaving vulnerabilities unpatched, SysDev save the direct costs that would have been required to fix them. As just mentioned, any public announcement that vulnerabilities have been found will generally have a negative impact on SysDev reputations. However, if Users are unaware that hacks have occurred, then Users will not punish SysDev through losses in reputations. The problem for SysDev comes if SysDev do not announce hacks when they occur, but Users subsequently find out about them later on. In these cases, Users tend to brutally punish SysDev by severely downgrading their reputations.
Based on the discussion so far, we see that when confronted with software vulnerabilities, SysDev have two choice variables:
• To patch the vulnerabilities or not to patch, and
• To announce the vulnerability and/or patch or not announce.
SysDev also have a third choice variable:
• The amount of effort (resources) to spend beforehand to make software robust to vulnerabilities.
The greater the amount of effort SysDev take beforehand, the greater the direct costs of development, but the lower the subsequent probabilities of being hacked.
Hackers hack into systems for different reasons. The two most obvious motivations are (i) money and (ii) fame (notoriety, ideology).
Hackers motivated by money generally break into systems to steal information that they or others can use to steal money (via financial data) or identities (via medical or other personal identity data). In these cases, Hackers will generally maximize the value of their hacks by keeping the hacks confidential. By not making their actions public, they buy time to capture as much information as possible and use it for as long as possible, before the owners of the information are able to mitigate damages.
Hackers motivated by notoriety generally break into systems to steal sensitive (private) information that the owners don’t want others to know. In these cases, Hackers will generally maximize the value of their hack by publicizing the information, and thus the fact of the hack, as widely as possible.
After a successful hack, Hackers can choose whether or not to announce their hack publicly. It follows from the discussion that Hackers who seek financial rewards will tend not to announce their hacks, while those who seek notoriety will tend to announce their hacks. Of course, some Hackers are motivated by both money and fame, in which case they may, for example, delay announcing their hacks, so as to benefit from both financial and reputational rewards.
Based on the discussion of Hackers, we see that after completing a successful hack, Hackers have one choice variable:
• To announce or not announce the hack.
When software is hacked, Users generally face costs, for example, financial costs, reputational costs, identity theft, etc. The magnitudes of Users’ costs are generally affected by actions taken by both Hackers and SysDev.
Costs to Users are generally minimized when (i) SysDev quickly patch vulnerabilities, thereby stemming losses to Users, and (ii) either SysDev and/or Hackers announce hacks to Users, thereby enabling Users to take further actions to stem their losses.
Costs to Users are generally moderate when (i) SysDev refuse to patch vulnerabilities, and (ii) Hackers announce hacks to Users, thereby enabling Users to take further actions to stem their losses. Note that if SysDev choose not to patch vulnerabilities, then it will not be in their interest to announce this fact to Users, since Users would impose large reputational losses on such SysDev.
Costs to Users are generally largest when (i) SysDev refuse to patch vulnerabilities, and (ii) Hackers do not announce hacks to Users, thereby preventing Users from taking actions to stem their losses.
I realize that there are plenty of exceptions to these three sets of generalities. For example, financial losses may be easy to mitigate, even when Users are not aware of breaches. In particular, credit card companies are very proactive in trying to minimize fraud on Users’ accounts and often find cases of fraud even when Users are unaware of them. As another example, the generalities may break down for some types of reputational losses, which can be far more devastating than financial losses.
Timeline of Game
t = 1: SysDev choose effort level e to minimize total costs associated with vulnerabilities.
SysDev have ex ante and ex post costs associated with vulnerabilities:
• Ex ante costs come in the form of efforts taken during the developmental stage to minimize the probability of Hackers finding vulnerabilities, and
• Ex post costs come in the form of patching, repair, and reputational costs incurred during the post-release stage to address hacks performed by Hackers.
t = 2: Hackers find vulnerabilities in software.
t = 3: If Hackers have found vulnerabilities during t = 2, then SysDev choose
• Whether or not to patch vulnerabilities, and
• Whether or not to announce vulnerabilities.
Hackers choose whether or not to announce hacks.
If SysDev and/or Hackers announce vulnerabilities, then Users will
• Take action to mitigate damages, and
• Punish SysDev with a decrease in reputation.
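To make the sequence concrete, the timeline above can be sketched as a single pass through the game. This is purely an illustration: the function name, the linear hack probability (1 - effort), and all probability values here are my own inventions, not part of the article's model.

```python
import random

# Hypothetical walk through the timeline (names and probabilities invented
# for illustration; the hack probability 1 - effort is a placeholder form).
def play_round(effort, p_patch, p_dev_announce, p_hacker_announce, rng):
    # t = 2: Hackers succeed with probability decreasing in SysDev effort.
    hacked = rng.random() < max(0.0, 1.0 - effort)
    if not hacked:
        return {"hacked": False, "patched": False, "users_warned": False}
    # t = 3: SysDev decide whether to patch and/or announce; Hackers may announce.
    patched = rng.random() < p_patch
    warned = (rng.random() < p_dev_announce) or (rng.random() < p_hacker_announce)
    return {"hacked": True, "patched": patched, "users_warned": warned}

rng = random.Random(0)
rounds = [play_round(0.7, 0.8, 0.5, 0.5, rng) for _ in range(1000)]
share_hacked = sum(r["hacked"] for r in rounds) / len(rounds)  # roughly 0.3 here
```

With more effort, fewer rounds end in a breach; Users are warned only when someone, SysDev or Hackers, announces.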
Model of the Game
In order to analyze the game and better understand the motivations SysDev and Hackers face, I filled in the payoff matrix in Figure 2 with some sample values. I wanted to make the payoffs as widely applicable to different scenarios as possible. However, once I started playing with the numbers and thinking about alternative scenarios, I realized that the payoffs aren’t as generic as I’d like them to be. In other words, the payoffs for each player in each of the payoff scenarios, and/or the relative magnitudes of those payoffs
• SysDev: cDRL vs. cDRM vs. cDRH
• Hackers: VpvtL vs. VpvtM vs. VpvtH
• VpubL vs. VpubM vs. VpubH
• Vpvtk vs. Vpubk for k = L, M, H
• Users: cUL vs. cUM vs. cUH
can vary dramatically across different scenarios.
For example, the payoffs/costs to each set of players, SysDev, Hackers, and Users, differ radically across the following examples:
• Hacking into Target or Home Depot to steal credit card numbers
• Hacking into the Ashley Madison website and revealing User names
• Hacking into driverless cars and causing collisions
• Hacking into US DoD systems to steal information
• Hacking into a system to force the system to go down (e.g., Amazon, NYSE, Facebook, the US power grid)
• Hacking into a school system to change someone’s grades
In the following sections on players’ incentives, I do my best to analyze how each player’s payoffs are generally affected by the parameters of the system; however, I do realize that the players’ payoffs across different scenarios, and thus their incentives to take action, may vary widely.
SysDev choose e, p, s to Minimize Total Costs associated with Vulnerabilities:
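The equation referenced as Eq (1) did not survive in this copy. Based purely on the descriptions of its terms that follow, a hedged reconstruction (the functional forms c(e) and h(e), and the arguments of cDR, are my guesses, not the original) might read:

```latex
% Hedged reconstruction of Eq (1), not the original: c(e) is the cost of effort,
% \beta the discount factor, h(e) the probability a hack succeeds given effort e,
% and the braces collect SysDev's ex post costs (patching, other repair, reputation).
\min_{e,\,p,\,s} \quad c(e) \;+\; \beta\, h(e)\,
  \Bigl\{\, p\, c_{DP} \;+\; c_{DO} \;+\; c_{DR}(p,\, s,\, a) \,\Bigr\} \tag{1}
```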
The first term in Eq (1) is the cost to SysDev of taking effort level e to make their systems robust to hacking.
The second term in Eq (1) consists of three components:
1. The discount factor, beta
2. The probability Hackers will be successful, given the effort SysDev expends on making the system robust, h(e), and
3. The magnitude of the costs to SysDev if Hackers are successful, the quantity in curly brackets
SysDev’s optimal choice of e involves a tradeoff between:
• Expending more effort now (having a larger 1st term in Eq. (1)) and decreasing expected future costs of system break-ins (having a smaller 2nd term in Eq. (1))
• Expending less effort now (having a smaller 1st term in Eq. (1)) and increasing expected future costs of system break-ins (having a larger 2nd term in Eq. (1)).
SysDev’s optimal choice of e will balance out these two sets of costs.
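As a numeric illustration of this tradeoff: the sketch below uses a hypothetical hack probability h(e) = exp(-lam * e) and made-up parameter values of my own, not the article's actual Eq (1).

```python
import math

# Illustrative sketch, not the article's actual Eq (1): total expected cost to
# SysDev as a function of ex ante effort e, with hypothetical hack probability
# h(e) = exp(-lam * e), per-unit effort cost c, discount factor beta, and
# ex post damages D if a hack succeeds.
def total_cost(e, c=1.0, beta=0.9, lam=0.5, D=100.0):
    return c * e + beta * math.exp(-lam * e) * D

def optimal_effort(c=1.0, beta=0.9, lam=0.5, D=100.0, grid=10000, e_max=50.0):
    # Grid-search the cost-minimizing effort level on [0, e_max].
    best_e, best_cost = 0.0, total_cost(0.0, c, beta, lam, D)
    for i in range(1, grid + 1):
        e = e_max * i / grid
        cost = total_cost(e, c, beta, lam, D)
        if cost < best_cost:
            best_e, best_cost = e, cost
    return best_e

e_star = optimal_effort()  # more effort first lowers, then raises, total cost
```

With this functional form the interior optimum solves c = beta * lam * D * exp(-lam * e), i.e. e* = ln(beta * lam * D / c) / lam, which the grid search recovers.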
The lower the ex post repair and cleanup costs associated with a hack, the less incentive SysDev have to invest in preventing attacks ex ante. This will generally be the case when
• The costs of patching a hacked vulnerability, cDP, are lower
• The costs of repairing other damages associated with a break-in, cDO, are lower
• Reputational damages, cDR, are lower
• The probability Hackers will announce their attacks, a, is lower
Hackers choose a to Maximize Total Expected Benefits:
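The expression for Hackers’ objective is likewise missing in this copy. A hedged sketch, with invented benefit parameters (bfame for notoriety, bfin for financial gain, both my notation), is:

```latex
% Hedged sketch, not the original: Hackers choose announcement probability a;
% b_{fame} (notoriety value, realized on announcing) and b_{fin} (financial value,
% typically larger while the hack stays quiet) are hypothetical placeholders.
\max_{a} \quad h(e)\, \bigl[\, a\, b_{fame} \;+\; (1 - a)\, b_{fin} \,\bigr]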
In my model, Hackers’ choices of whether or not to announce successful attacks are basically just reflections of their motivations for attacking. What this means is that if SysDev think Hackers would be more likely to break into their systems for ideological reasons, then they should expect successful Hackers to announce their feats. Alternatively, if SysDev think Hackers would be more likely to break into their systems for financial reasons, then they should expect successful Hackers not to announce their feats.
Hackers’ expected benefits are generally greater when
• SysDev expend less effort, e, to make their systems robust to attacks
• SysDev are less likely to patch the vulnerabilities discovered and exploited by Hackers
Expected Costs to Users associated with hacks are:
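The expression itself is missing here; a hedged placeholder consistent with the bullets that follow (the function cU(·) is my notation, not the article's) is:

```latex
% Hedged placeholder, not the original: Users' expected costs rise as effort e,
% patch probability p, and announcement probabilities (1 - s) and a fall.
\mathrm{E}[\,c_U\,] \;=\; \beta\, h(e)\, c_U\bigl(p,\; 1 - s,\; a\bigr)
```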
The expected costs to Users associated with hacking are generally greater when
• SysDev expend less effort, e, to make their software robust to attacks,
• The probability SysDev patch vulnerabilities after successful attacks, p, is lower
• The probability SysDev announce successful attacks, (1 - s), is lower
• The probability Hackers announce successful attacks, a, is lower
Now that we have a general framework for analyzing the SysDev-Hacker Game, we can better examine some issues.
SysDev’s Choice Variables Are Generally Too Low
To find the optimal choices of e, p, and s, SysDev minimize their total expected costs associated with systems vulnerabilities, as indicated in Eq (1). The socially optimal levels of the choice variables, however, are determined by minimizing the total expected costs for SysDev and Users combined. What this means is that unless SysDev’s total expected costs include or otherwise cover Users’ total expected costs, SysDev’s optimal choices of e, p, and (1 - s) will be too low. The smaller the portion of cUk that is covered by SysDev, the greater the shortfall in SysDev’s optimal choice variables from a social perspective. Of course, this problem could be mitigated by increasing SysDev’s liability for Users’ damages, that is, forcing them to cover cUk.
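The divergence can be illustrated numerically. Everything below is a sketch under my own assumptions (the form h(e) = exp(-lam * e) and all damage figures are invented), not the article's calibration.

```python
import math

# Illustrative sketch: compare SysDev's privately optimal effort with the
# socially optimal effort when SysDev internalize none of Users' damages.
# The form h(e) = exp(-lam * e) and all numbers are hypothetical.
def argmin_effort(damages, c=1.0, beta=0.9, lam=0.5, grid=10000, e_max=50.0):
    # Grid search for the effort e minimizing c*e + beta*exp(-lam*e)*damages.
    best_e, best_cost = 0.0, beta * damages  # cost at e = 0
    for i in range(1, grid + 1):
        e = e_max * i / grid
        cost = c * e + beta * math.exp(-lam * e) * damages
        if cost < best_cost:
            best_e, best_cost = e, cost
    return best_e

dev_damages = 100.0   # hypothetical ex post costs borne by SysDev (the cD terms)
user_damages = 200.0  # hypothetical costs borne by Users (cU), not internalized

e_private = argmin_effort(dev_damages)                # SysDev count only cD
e_social = argmin_effort(dev_damages + user_damages)  # planner counts cD + cU
# e_private falls short of e_social: too little effort from a social perspective
```

Raising SysDev's liability for Users' damages amounts to moving user_damages into the first call, which pushes e_private up toward e_social.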
Separately, governments have enacted laws, Forced Disclosure Laws, which compel SysDev to disclose to Users system breaches instigated by Hackers. From Wikipedia:
Security breach notification laws or data breach notification laws are laws that require an entity that has been subject to a data breach to notify their customers and other parties about the breach, and take other steps to remediate injuries caused by the breach. Such laws have been enacted in most U.S. states since 2002. These laws were enacted in response to an escalating number of breaches of consumer databases containing personally identifiable information.
Such laws have clearly been designed to give Users fair warning of breaches so they can take actions to minimize associated costs or damages. The fact that such laws had to be enacted suggests that SysDev do not otherwise face adequate incentives from a social perspective to announce breaches. In turn, this suggests that SysDev's expected costs do not, in fact, adequately cover Users' costs.
When Systems Breaches Can Lead to Loss of Life
The stakes always increase when lives are at risk. When Companies’ products cause loss of life to Users, the financial costs, notably compensation to Users’ families for loss of life, can be high. Yet, the reputational costs that Companies suffer – in the form of drops in stock prices – generally significantly outweigh any financial costs.
The advent of driverless cars, together with the recently announced hack into driverless car systems, has forced this issue out into the open.
When the potential costs to SysDev increase, they will be led to invest more effort in preventing attacks. Of course, SysDev cannot completely eliminate the possibility of attacks on their systems that lead to loss of life. Yet, the amount of damage SysDev suffer when such attacks do occur may very well depend on how they manage the situation.
If SysDev think that Hackers will not announce hacks that lead to Users’ deaths, then SysDev may do best by hiding the fact of the breach from Users. And if patching the vulnerability is expensive, and SysDev think the issue can be kept hidden from Users, then SysDev may even do best by not patching the vulnerability. However, if SysDev choose to hide the vulnerabilities and attacks from Users, and Users subsequently become aware that SysDev knew but hid this information from them, then SysDev will probably face a massive reputational loss (i.e., a large decrease in stock price).
On the other hand, if SysDev think that Hackers will announce hacks that lead to Users’ death, then potential losses in reputation may force SysDev to openly acknowledge the vulnerability, patch it, and announce the patch.
Possibility of Hacker Extortion
As alluded to in the previous section on Loss of Life, there are times when Hackers are able to breach systems and SysDev would save money in reputational (or financial) costs by keeping either the fact of the breach or the subject of the breach hidden from Users. That is, they would prefer not to announce the hack, and at the same time they would prefer that Hackers didn’t announce the hack. In such cases, Hackers could potentially extort payments from SysDev in exchange for keeping knowledge of, and/or the knowledge gained from, the attack secret. I would not be surprised to learn that such events happen more often than we would like to think. Of course, the question then becomes how Hackers can credibly commit to SysDev that they will not announce the attack after SysDev pay the extortion fee.
Thanks for the idea, Steve!
Other Regulations to Ensure Appropriate Actions by SysDev
In situations where SysDev develop systems that house valuable User information, it is generally the SysDev, rather than the Users, who are most capable of protecting the information contained in the systems. In such cases, efficiency would dictate that SysDev retain the liability for keeping the information safe. This is, in fact, generally how things work. However, the problem remains that in many cases SysDev’s liability does not cover the entire amount of damages caused to Users after a breach.
As mentioned above, governments have enacted forced disclosure laws so as to enable Users to take as much action as possible themselves to mitigate damages subsequent to a breach. Yet, SysDev liability and forced disclosure laws together still generally fail to cover Users’ total damages.
One problem is that in most cases, Users are not uniformly damaged by breaches; that is, some Users stand to incur larger costs than others. Another problem is that some Users put themselves into riskier situations than others. A case in point is the Users who chose to participate in or otherwise join the Ashley Madison community.
The non-uniformity of damages and of risky behavior across Users suggests that perhaps the best way to address the problem is by facilitating the provision of insurance for Users to insure themselves against damages caused by system breaches.
Ideologically vs. Financially Motivated Attacks
Which is worse for SysDev, attacks that are ideologically motivated or those that are financially motivated?
I’m inclined to say that ideological attacks are worse. I would say that financial attacks are more clearly defined in terms of what’s been damaged and how to fix it, they’re less emotionally charged, and they can generally be addressed through financial means alone. Ideological attacks, on the other hand, tend to be more emotionally charged and less well-defined in terms of action and damages. I think they provide a much trickier challenge for SysDev because they generally cannot be addressed simply through financial means, but rather, they require much more strategic actions and communications (spin) with the public.