Imagine the US was just hit with a cyberattack. What happens next?
A group of well-resourced hackers has been combing through the networks of a gas pipeline operator for almost a year, harvesting crucial information. By now the hackers know the network better than the pipeline company does: every piece of equipment, the company’s entire workforce, usernames and passwords. They have the privileges needed to access both the firm’s desktop computers and the machinery of the pipeline itself. They are ready to strike.
The US has a lot of cyber-enemies. It trades blows with China, Russia, Iran, and North Korea on a daily basis. A full-blown cyberwar, thankfully, remains the stuff of theory and tabletop exercises. But what happens when one breaks out for real?
To better understand how it would play out, we talked to a number of experts in cybersecurity and national security. We asked them to consider hypothetical scenarios, including the one described above, in which unknown hackers have accessed the computers, networks, and hardware of gas pipelines in New England.
The potential consequences would range from espionage and intellectual-property theft to more devastating attacks that could leave Boston without power or, in the worst case, cause fires and life-threatening damage. What happens next—and whether it escalates into a real cyberwar—depends on who is behind the attack, what their goals are, and how the US responds.
The variables at play mean there’s no telling exactly how this would go. But imagining the worst might help us better understand how conflict is changing, and let us plan how to act when cyberwar lands on our doorstep.
Our panel was made up of some of the US’s leading experts in cyberwarfare.
Sandra Joyce is senior vice president of global intelligence at the cybersecurity firm FireEye, the first company to openly name Chinese government hackers working against US companies.
Richard Clarke has worked in the administrations of Bill Clinton, George W. Bush, and Barack Obama. He was among the first high-level White House officials to focus on cybersecurity.
Michael Daniel was cybersecurity czar under President Obama. He now leads the Cyber Threat Alliance, a team of cybersecurity companies sharing information on threats.
Eric Rosenbach was the chief of staff to former secretary of defense Ash Carter. He led the Defense Department’s cyber activity and crafted the military’s cyber strategy.
John Livingston is the CEO of Verve Industrial Protection, a company that handles management of industrial cybersecurity for projects including natural-gas pipelines and other critical infrastructure.
Representative Mike Gallagher is a former counterintelligence officer in the US Marine Corps and now cochair of the Cyberspace Solarium Commission, a panel of experts charged with formulating a US cybersecurity doctrine.
Senator Angus King is a member of the Senate Select Committee on Intelligence and cochair of the Cyberspace Solarium Commission.
Rosenbach: The first thing that would happen is the NSA [National Security Agency] collecting intelligence abroad. When this first comes through there’s just kind of a fuzzy gray picture that someone is operating in natural-gas infrastructure. And you don’t know necessarily whether they intend to immediately pull the trigger on the attack.
King: The first problem is attribution [i.e., who is behind the attack]. That’s one of the key challenges in this field, because the adversaries are getting smarter all the time about covering their tracks.
I’m proposing to the Cyberspace Solarium Commission that the US government should have an attribution center that would combine resources from NSA, FBI, CIA, and other intelligence agencies so that there’d be one central place to go.
Clarke: The attribution problem is not as bad as people think it is. With regard to cyber, if you are in the enemy systems, then you’ll know who did it because you’ll see them doing it. If you can see it live, you’ve got a very good chance to figure out attribution. If it’s a post hoc analysis or forensic analysis, then attribution can be harder, especially since we know now that many nations, possibly including the US, are using attack tools created by other countries. What if they used a computer with a certain kind of keyboard, or used other techniques that fingerprint to another country? That creates a problem.
Rosenbach: Next, you would see whether there could be a cooperative relationship between [various US government agencies] to try to figure out where the attack might occur and look for certain types of malware that adversaries may have used in the past.
You see whether you can get more granular intelligence about that. The whole time that you’re working on all those kinds of domestic mitigation issues, you can try to think about what would happen in the case that the attack is successful and what you do. What is incident response during winter if there are hundreds of thousands of people, or millions of people, without heat?
You think about what you would say about that. At the same time, you’re thinking about whether or not you would confront the adversary nation with this information. Do you go to them and say, “We know that you have malware in the natural-gas infrastructure and grid”? Do you actually threaten them? And then, just like in the case of the 2016 elections, and also for the first time during the cyberattack on Sony [Pictures], the president would have to talk to all the senior advisors and staff about whether he goes public with this information. Is it fair to the public, if you know that there’s an attack about to occur, that you keep it to yourself? What are the pros and cons of publicly attributing?
Gallagher: One problem we have to deal with in cyber is whether the difficulty of attribution creates deterrence problems and deterrence failures. If you’re trying to deter an adversary from conducting a cyberattack, you need to be able to establish who the adversary is and also signal clearly what your response will be. There’s an open debate as to whether we should have such a declaratory policy in cyber, or whether that would just incentivize adversaries to keep their behavior below whatever threshold we determine is acceptable or unacceptable.