In November 1988, a graduate student at Cornell University named Robert Morris, Jr. inadvertently sparked a national crisis by unleashing a self-replicating computer worm on a VAX 11/750 computer in the Massachusetts Institute of Technology's Artificial Intelligence Lab. Morris had no malicious intent; it was merely a scientific experiment to see how many computers he could infect. But he made a grievous error, setting his reinfection rate much too high. The worm spread so rapidly that it brought down the entire computer network at Cornell University, crippled those at several other universities, and even infiltrated the computers at Los Alamos and Livermore National Laboratories.
Making matters worse, his father was a computer scientist and cryptographer who was the chief scientist at the National Security Agency's National Computer Security Center. Even though it was unintentional and witnesses testified that Morris didn't have "a fraudulent or dishonest bone in his body," he was convicted of felonious computer fraud. The judge was merciful during sentencing. Rather than 15–20 years in prison, Morris got three years of probation with community service and had to pay a $10,000 fine. He went on to found Y Combinator with his longtime friend Paul Graham, among other accomplishments.
The "Morris Worm" is just one of five hacking cases that Scott Shapiro highlights in his new book, Fancy Bear Goes Phishing: The Dark History of the Information Age in Five Extraordinary Hacks. Shapiro is a legal philosopher at Yale University, but as a child, his mathematician father—who worked at Bell Labs—sparked an interest in computing by bringing home various components, like microchips, resistors, diodes, LEDs, and breadboards. Their father/son outings included annual attendance at the Institute of Electrical and Electronics Engineers convention in New York City. Then, a classmate in Shapiro's high school biology class introduced him to programming on the school's TRS-80, and Shapiro was hooked. He moved on to working on an Apple II and majored in computer science in college but lost interest afterward and went to law school instead.
With his Yale colleague Oona Hathaway, Shapiro co-authored a book called The Internationalists: How a Radical Plan to Outlaw War Remade the World, a sweeping historical analysis of the laws of war that spans from Hugo Grotius, the early 17th century father of international law, all the way to 2014. That experience raised numerous questions about the future of warfare—namely, cyberwar and whether the same "rules" would apply. The topic seemed like a natural choice for his next book, particularly given Shapiro's background in computer science and coding.
Despite that background, "I honestly had no idea what to say about it," Shapiro told Ars. "I just found it all extremely confusing." He was then asked to co-teach a special course, "The Law and Technology of Cyber Conflict," with Hathaway and Yale's computer science department. But the equal mix of law students and computer science students trying to learn about two very different highly technical fields proved to be a challenging combination. "It was the worst class I've ever taught in my career," said Shapiro. "At any given time, half the class was bored and the other half was confused. I learned nothing from it, and neither did any of the students."
That experience goaded Shapiro to spend the next few years trying to crack that particular nut. He brushed up on C, x86 assembly code, and Linux and immersed himself in the history of hacking, achieving his first hack at the age of 52. But he also approached the issue from his field of expertise. "I'm a philosopher, so I like to go to first principles," he said. "But computer science is only a century old, and hacking, or cybersecurity, is maybe a few decades old. It's a very young field, and part of the problem is that people haven't thought it through from first principles." The result was Fancy Bear Goes Phishing.
The book is a lively, engaging read filled with fascinating stories and colorful characters: the infamous Bulgarian hacker known as Dark Avenger, whose identity is still unknown; Cameron LaCroix, a 16-year-old from south Boston notorious for hacking into Paris Hilton's Sidekick II in 2005; Paras Jha, a Rutgers student who designed the "Mirai botnet"—apparently to get out of a calculus exam—and nearly destroyed the Internet in 2016 when he hacked Minecraft; and of course, the titular Fancy Bear hack by Russian military intelligence that was so central to the 2016 presidential election. (Fun fact: Shapiro notes that John von Neumann "built a self-reproducing automaton in 1949, decades before any other hacker... [and] he wrote it without a computer.")
But Shapiro also brings some penetrating insight into why the Internet remains so insecure decades after its invention, as well as how and why hackers do what they do. And his conclusion about what can be done about it might prove a bit controversial: there is no permanent solution to the cybersecurity problem. "Cybersecurity is not a primarily technological problem that requires a primarily engineering solution," Shapiro writes. "It is a human problem that requires an understanding of human behavior." That's his mantra throughout the book: "Hacking is about humans." And it portends, for Shapiro, "the death of 'solutionism.'"
Ars spoke with Shapiro to learn more.
Ars Technica: Your overarching theme is that hacking is ultimately about humans. The defect is not in the programming, it's in human cognition and human behavior—what you describe as "upcode," as opposed to "downcode" (the programs). Our culture and our biases and our assumptions actually shape the programs.
Scott Shapiro: It's like to understand God, you've got to understand the people who made him. My first draft of this book was just about downcode. And then I read it over, as one does, and I recognized that I was giving two different explanations for vulnerabilities. One was the technical explanation, and the other was the political, human one. I'm a legal philosopher. I talk all the time about how law and norms guide conduct. It's amazing that I forgot that. So I rewrote it and then realized that there was a third explanation: the philosophical explanation. So I had to rewrite the book again, but it came into shape by the third time.
Ars Technica: You write about Alan Turing's seminal 1936 paper and the notion of "metacode," which is what hackers target. What is metacode, and why is it so central to these issues?
Scott Shapiro: As a philosopher, I'm most interested in metacode. In 1936, this 24-year-old British mathematician, Alan Turing, decides that he's going to try to show that not every problem can be solved by an algorithm, by a computing device. He first has to come up with a model of computing devices and then show that while they can solve solvable problems, there will always be an infinite number of problems that you can't solve.
One principle of metacode is the idea that computation, the act of computation, is a physical act of manipulating symbols. That sounds complicated, but it isn't, because when you add two numbers, you're manipulating symbols. We learn how to do that in elementary school. I call that physicality. Physicality ensures that a computing device can be built to solve a solvable problem. But what Turing also showed was that you could build not just a computing device but a general computing device that can solve any solvable problem. Instead of building the program logic into the hardware, the way it was done for several decades, it would be piped in through software, through binary strings.
This is the second principle of metacode that Turing discovered. Despite the fact that code and data are so different from one another, they can still be represented by the same symbols, namely numerical symbols. That makes general computing devices possible. Now we have computing devices, but also, we can load software on them using the same sort of symbols that we use for data. I call that duality, the idea that code and data can be represented by the same symbols.
These two basic principles that make our world possible are the very principles that hackers exploit. This framing allows us, first of all, to group together very different kinds of technical hacks. All these different technical hacks are really motivated by exploiting a philosophical principle of computation, of metacode. The second thing is to show why perfect cybersecurity is impossible, because the very principles that make hacking possible are the ones that make general computing possible. So you can't get rid of one without the other, because you cannot patch metacode.
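Shapiro's duality principle can be made concrete in a few lines of Python (an illustrative sketch, not an example from the book): the same string of symbols sits inert as data until the machine is told to execute it as code.

```python
# Duality: one and the same string of symbols can be treated as inert
# data or as executable code, depending on what the machine does with it.
program_text = "result = 6 * 7"

# Treated as data: just a sequence of characters to measure or transform.
length = len(program_text)
upper = program_text.upper()

# Treated as code: handed to the interpreter, it computes something.
namespace = {}
exec(program_text, namespace)

print(length)               # 14
print(namespace["result"])  # 42
```

This is exactly the property attackers lean on: if a machine can be tricked into executing bytes it was only supposed to store, data becomes code.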
Ars Technica: Your conclusion is what you call the "death of solutionism." We don't like feeling helpless, and we don't like to feel like we can't solve a problem. But you're saying we cannot solve this problem. The cat-and-mouse game never ends. All you can do is make sure the cat mostly wins.
Scott Shapiro: That's right. In a way, Turing himself showed that perfect cybersecurity is impossible through the proof that he gave. It's easy to extend the proof just to see that among the problems that cannot be solved is finding bugs in computer programs. So in a way, what I'm saying is uncontroversial as a conclusion. In the epilogue, I try to lay out the Turing proof. It's a bit hard to understand. But I think this explanation seems very straightforward. The five hacks I write about in the book are all very different kinds of hacks, but they can be grouped into these two categories. By the end of the book, I hope to convince you that this is the way it's done. Computers are built this way and hacking works this way, so how are you going to fix it? So I think it's an easier way to get the same conclusion.
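The extension of Turing's argument that Shapiro mentions can be sketched in Python (a loose illustration of the diagonal argument, not his formal proof; the oracle and helper names here are invented for the example). Any fixed "perfect bug detector" can be defeated by a program built to do the opposite of whatever the detector predicts about it:

```python
def optimistic_oracle(func) -> bool:
    """A stand-in 'bug detector' that predicts whether func() halts.
    This one always answers False ("it loops forever"); the same trick
    below defeats any fixed oracle, which is the heart of the proof."""
    return False

def make_contrarian(oracle):
    """Build a program that asks the oracle about itself, then does the
    opposite of the oracle's prediction."""
    def contrarian():
        if oracle(contrarian):
            while True:      # oracle said "halts" -> loop forever
                pass
        return "halted"      # oracle said "loops" -> halt immediately
    return contrarian

contrarian = make_contrarian(optimistic_oracle)
prediction = optimistic_oracle(contrarian)  # False: "it never halts"
outcome = contrarian()                      # ...but it halts
print(prediction, outcome)
```

Whatever the oracle answers, the contrarian program makes it wrong, so no general bug-finding algorithm can exist.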
Ars Technica: I was struck by your description of the different cultures of the scientific and the military communities. When the Internet was being developed, the scientists wanted open sharing, and they were willing to sacrifice security for that. The military wanted the exact opposite. So is there a way to "hack the hackers," so to speak, in terms of their culture—by altering their upcode?
Scott Shapiro: It's a great question, and I do believe there is. Education has to change. As professors, as teachers, as cultural figures, we have to present hacking as a very interesting subject but a very dangerous one. It's not fun and games. I mean, it is if it's done safely. So I teach students how to hack, but I teach them how to do it safely and legally. I could tell them, "Hey, the odds of you getting caught are pretty low. Go out, have a good time, you'll get better at it." That would be a bad idea. That would be telling people that this is an OK thing to do. The way in which computer science education has changed, people are starting to realize that this isn't a joke. Hacking is very serious, and it needs to be taught and introduced in a very responsible manner.
I taught at Tel Aviv University in Israel, and I was really surprised by the different culture that Israel has with respect to technology, and also with respect to cybersecurity, compared to the United States. For example, one of the parents said that their 7-year-old child is going to a camp where they learn how to write viruses. I don't think that that's a great idea. Also, the NSO Group is an Israeli cyberweapons manufacturer and provider. They're the developers of Pegasus, a spyware tool that has been used against human rights activists and journalists. At least in the United States, the response has been that this is a bad thing for NSO to do, for any kind of company to do. There have been sanctions placed on NSO.
This is an important message to the American people that this is not a legitimate way to run your business. On the other hand, when I was in Israel, students would tell me that their parents would be very proud if they worked for NSO because NSO is a big success story in Israel, or at least it was. So Israel and the United States have very different cultures. One is a security state, the other is not. It's a different view about the relationship that people should have toward hacking. And I think leaders and educators are the ones who are responsible for changing cultural attitudes.
Ars Technica: The scientific community in various disciplines has struggled with this in the past. There's an attitude of, "We're just doing the research. It's just a tool. It's morally neutral." Hacking might be a prime example of a subject that you cannot teach outside the broader context of morality.
Scott Shapiro: I couldn't agree more. I'm a philosopher, so my day job is teaching that. But it's a problem throughout all of STEM: this idea that tools are morally neutral and you're just making them and it's up to the end user to use them in the right way. That is a reasonable attitude to have if you live in a culture that is doing the work of explaining why these tools ought to be used in one way rather than another. But when we have a culture that doesn't do that, then it becomes a very morally problematic activity. We're now seeing a lot of hand-wringing about AI. We always see hand-wringing about every single new technology. There are the techno-utopians and the techno-dystopians, and usually a couple of years later, the cooler heads prevail.
Ars Technica: There are advocates of hiring the hackers. Teaching young kids how to code a virus can be useful if they grow up to be cybersecurity experts and help solve the problem. You do say there is a great need for experts in cybersecurity.
Scott Shapiro: That's exactly right. I'm not sure that a 7- or 8-year-old is ready for that, to be honest with you. But I teach people how to hack. Anybody can learn how to hack. But we're constantly reminding people about their ethical and legal responsibilities. We are not teaching them just to hack. We're teaching them the ideas behind hacking, how the Internet works, how operating systems work, so they can appreciate the powerful technology that we're showing them how to exploit. I hope we do it in a very responsible fashion because it isn't a joke, and it needs to be taken seriously. But I want people to do this because I think it's the only way to learn how to protect yourself. Plus, it's fun. Everyone's talking about it and almost nobody understands it, but it's not that hard.
Ars Technica: You write that for most people, taking some basic precautions means that 90 percent of the time, they're going to be OK.
Scott Shapiro: The book's not trying to make you feel bad, like, "Hey, your password's too short." And I'm not trying to say that we're all going to die. The truth is in the middle. For most people, the risks are not big at all. The culture presents to us a picture of hackers which is a sensational caricature: somebody who is almost completely asocial, maybe has mental illness, maybe is morbidly overweight. There's the 400-pound person sitting in their pajamas in their parents' basement, a socially maladapted human being who is malicious and evil. There have been hackers in the last several decades who've challenged that picture.
Yes, of course, hacking is a real risk. But the vast majority of hacking, of cybercrime, is financially motivated: the goal is to make money. They do not want to break into your computer specifically. They want to break into lots of computers easily to create a botnet or to distribute spam or ransomware. They don't really want to spend that much time on you. So for most of us, basic precautions make it just a little bit more expensive to attack you. They're more likely to move on to somebody else because these attacks rely on cheap, automated, low-level tools.
So most of us are low-value targets. But there are people who are high-value targets: journalists, activists, CFOs, CEOs, celebrities. They are under attack. There's just no question. They can protect themselves, but they probably should seek the help of a professional. So some people really do need to worry about it because there really are people who are after you. But for most of us, it's not true.
Ars Technica: Do you have a favorite hacker among those you write about in your book?
Scott Shapiro: That's tough. What's my favorite child? But I feel a connection to Robert Morris just because we're the same age, our dads worked in the same building, and we're both kind of obsessed with Unix. He was not only the first one to crash the Internet, his case raises a whole set of questions of legal interpretation. What downcode did he write? But also, what upcode applies to him? And going forward, how are we going to deal with people like this? When anybody teaches cybersecurity law, United States v. Morris is the first case you teach because it's so seminal.