Battlestar Galactica Lessons from Ransomware to the Pandemic
Networked vulnerabilities are thorny
Currently, only about one fourth of gas stations in North Carolina (where I am) are reporting that they have any gas. A ransomware attack shut down a key pipeline supplying these stations. Will this attack be the wake-up call? I doubt it. The problem has been evident for so long that I’m afraid we won’t really address the threat until there is a true catastrophe.
Before the pandemic, I wrote a lot about digital security. Well, the lack thereof. I once compared what we are doing to “building skyscraper favelas in code — in earthquake zones.” It still feels the same; we’re just hearing more rumbles.
The dynamics of digital insecurity, ransomware, and related threats are eerily similar to the lead-up to the pandemic. Just like with the pandemic, the alarm has been ringing about digital security for decades, but we keep hitting snooze instead of waking up and dealing with the threat. Of course, while the parallels in the nature of the threat and some of its dynamics are fairly striking, the specifics are different and matter a great deal.
The fictional Battlestar Galactica series explains a key similarity: networked systems are vulnerable. That ship survived the initial attack by the Cylons (humanoid robots) simply because it was old and had just been decommissioned in the process of being turned into a museum. Being older, it had never been networked into the system. The “shutdown” command sent by the attackers never reached it, and, unlike every other battleship in the human fleet, it was thus spared.
In pandemic terms, Galactica was an island with no travel to it.
For digital (in)security, what we have is a thorny combination built upon our expanding networked infrastructure: technical complexity and “debt” (more on that in a bit) have met negative externalities, which have now met, well... Bitcoin.
Let me back up to discuss the roots of it all.
Technically, our software infrastructure was not built with security in mind. That’s partly because a lot of it depends on older layers, and also because there has long been little to no incentive to build software infrastructure that prioritizes security. Operating systems could have (and should have) been built with features like “sandboxing”: that’s when a program can only play in a defined, walled area called a “sandbox,” where it can’t reach anything else. So if that program is malicious, the only damage it can do is within that sandbox, and only in that sandbox. (This is similar to the idea of “air-gapping”: essentially, unplugging critical parts of the infrastructure from the network.)
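To make the confinement idea concrete, here is a minimal sketch in Python: it runs an untrusted command as a child process inside a throwaway directory, with hard resource limits and an empty environment. This is only a toy illustration of the principle, under assumptions I’m making up (the specific limits, the POSIX-only setup); a real sandbox relies on operating-system features like seccomp, namespaces, or virtual machines.

```python
import resource
import subprocess
import tempfile

def run_sandboxed(command: list[str], timeout_s: int = 5) -> int:
    """Run `command` confined to a throwaway directory with strict limits."""
    def limit_resources():
        # Cap CPU seconds and memory so a runaway program can't starve the host.
        resource.setrlimit(resource.RLIMIT_CPU, (timeout_s, timeout_s))
        resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))
        # Forbid the child from writing files larger than 1 MB.
        resource.setrlimit(resource.RLIMIT_FSIZE, (2**20, 2**20))

    with tempfile.TemporaryDirectory() as sandbox_dir:
        # The child starts inside the sandbox directory with an empty
        # environment, so it inherits no secrets or credentials.
        proc = subprocess.run(
            command,
            cwd=sandbox_dir,             # the "walled area" the program plays in
            env={},
            preexec_fn=limit_resources,  # apply limits just before exec (POSIX-only)
            timeout=timeout_s,
        )
    return proc.returncode
```

The specific numbers don’t matter; the architecture does: the untrusted program starts with nothing, inside a walled-off area, so whatever goes wrong stays contained.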
It’s very hard to add security after the fact to a digital system that’s not been built for it. And there is a lot of what’s called “technical debt” all around us. These are programs that work but were written quickly or sometimes decades ago. We don’t touch these rickety layers because it would be very expensive and difficult to do, and messing with them could cause everything else to crumble.
From an article I wrote in 2015:
TECHNICAL DEBT: A lot of new code is written very, very fast, because that’s what the intersection of the current wave of software development (and the angel investor/venture capital model of funding) in Silicon Valley compels people to do. Funders want companies to scale up, quickly, and become monopolies in their space, if they can, through network effects — a system in which the more people use a platform, the more valuable it is. Software engineers do what they can, as fast as they can. Essentially, there is a lot of the equivalent of “duct tape” in the code, holding things together. If done right, that code will eventually be fixed, commented (explanations written up so the next programmer knows what the heck is up) and ported to systems built for the right scale — before there is a crisis. How often does that get done? I wager that many wait to see if the system comes crashing down, necessitating the fix. By then, you are probably too big to go down for too long, so there’s the temptation for more duct tape. And so on.
Plus, the global network we ended up with isn’t built for this at all. This is from a 2018 piece:
The early Internet was intended to connect people who already trusted one another, like academic researchers or military networks. It never had the robust security that today’s global network needs. As the Internet went from a few thousand users to more than three billion, attempts to strengthen security were stymied because of cost, shortsightedness and competing interests. Connecting everyday objects to this shaky, insecure base will create the Internet of Hacked Things. This is irresponsible and potentially catastrophic.
Plus, forget about expensive and difficult technology: we haven’t even stopped many of our ordinary devices from getting shipped with passwords that are essentially drawn from a pre-existing list that includes such very-hard-to-crack specials as “password,” “1234,” and “default.” Here’s a 2019 piece I wrote about giant zombie baby-monitor chains being used to cripple infrastructure (like bringing down cell communication in Liberia) or to censor journalists:
The problem is painfully simple and terribly thorny, and it is as much about globalization, law and liability as it is about technology. Most of our gizmos rely on generic hardware, much of it produced in China, used in consumer products worldwide. To do their work, these devices run software and have user profiles that can be logged into to configure them. Unfortunately, a sizable number of manufacturers have chosen to allow simple and already widely known passwords like “password,” “pass,” “1234,” “admin,” “default” or “guest” to access the device.
In a simple but devastating attack, someone put together a list of 61 such user name/password combinations and wrote a program that scans the Internet for products that use them. Once in, the software promptly installs itself and, in a devious twist, scans the device for other well-known malware and erases it, so that it can be the sole parasite. The malicious program, dubbed Mirai, then chains millions of these vulnerable devices together into a botnet—a network of infected computers. When giant hordes of zombie baby monitors, printers and cameras simultaneously ping their victim, the targeted site becomes overwhelmed and thus inaccessible unless it employs expensive protections.
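To show just how low the bar is, here is a hedged sketch of the kind of check a device auditor performs (and, in mirror image, what an attacker automates). The credential list below is a short illustrative sample, not the actual 61-entry dictionary the attack used, and `try_login` is a stand-in for whatever login mechanism a device you are authorized to test exposes.

```python
# A defensive sketch of the weakness Mirai exploited: devices that still
# accept one of a short, publicly known list of factory credentials.
# This list is an illustrative sample, not Mirai's actual dictionary.
DEFAULT_CREDENTIALS = [
    ("admin", "password"),
    ("admin", "1234"),
    ("root", "default"),
    ("guest", "guest"),
]

def uses_factory_credentials(try_login) -> bool:
    """Return True if any well-known username/password pair gets in.

    `try_login(username, password)` is a caller-supplied function that
    attempts a login against a device you are authorized to test.
    """
    return any(try_login(user, pw) for user, pw in DEFAULT_CREDENTIALS)
```

That is the entire “exploit”: a loop over a short list. No cryptography is broken; the door was simply shipped unlocked.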
How can all this be? Why isn’t this problem fixed? Well, a key issue is what economists call negative externalities: essentially, it’s free to ship software or devices like this, and expensive to fix any issues that come up. There is no immediate reward for taking the more expensive route. It’s like telling factories that they can pollute as much as they want, dumping their waste into the air or the nearby river, or they can choose to install costly filtering systems, in a setup where the pollution isn’t quickly visible through smell or appearance. Well, guess what happens? The companies don’t worry about it, because they don’t have to; that is, they externalize the cost. This is from a piece in 2017:
As a reminder of what is at stake, ambulances carrying sick children were diverted and heart patients turned away from surgery in Britain by the ransomware attack. Those hospitals may never get their data back. The last big worm like this, Conficker, infected millions of computers in almost 200 countries in 2008. We are much more dependent on software for critical functions today, and there is no guarantee there will be a kill switch next time.
It is time to consider whether the current regulatory setup, which allows all software vendors to externalize the costs of all defects and problems to their customers with zero liability, needs re-examination. It is also past time for the very profitable software industry, the institutions that depend on its products, and the government agencies entrusted with keeping their citizens secure and their infrastructure functioning to step up and act decisively.
The better question is: why haven’t digital hacks and ransomware happened more often, if the problem is so widespread? In one sense, they have happened a lot. There has been hack after hack, theft of profitable data (like the Equifax hack), and devices being chained together for denial-of-service attacks like those explained in the zombie baby-monitor piece. Sadly, there has been little to no accountability at the scale that mattered, either.
Plus, of course, just like the pandemic, the root of the digital vulnerability is a connected network with coupled vulnerabilities: biological viruses can travel when we do (and before the pandemic, we traveled more than ever), and malware and software viruses can travel through interconnected networks (which are now everywhere, as software eats the world). Coupled essentially means that when one thing goes wrong at one level, it usually ends up dragging other things down with it. It’s well understood that tightly coupled systems are prone to cascading failures, where one failure essentially triggers an avalanche. (More on that in future writing as well.)
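As a toy illustration of that avalanche dynamic, here is a sketch of a standard threshold-cascade model; the graph, threshold, and node names are all made up for illustration. Each node fails once enough of its neighbors have failed, so in a tightly coupled network a single seeded failure can take down everything.

```python
def cascade(neighbors: dict[str, list[str]], seed: str, threshold: float = 0.5) -> set[str]:
    """Return the set of failed nodes once the cascade settles."""
    failed = {seed}
    changed = True
    while changed:
        changed = False
        for node, nbrs in neighbors.items():
            if node in failed or not nbrs:
                continue
            # A node fails once the share of failed neighbors reaches the threshold.
            if sum(n in failed for n in nbrs) / len(nbrs) >= threshold:
                failed.add(node)
                changed = True
    return failed

# A small, tightly coupled ring: every node depends on its two neighbors.
ring = {"A": ["B", "E"], "B": ["A", "C"], "C": ["B", "D"],
        "D": ["C", "E"], "E": ["D", "A"]}
print(cascade(ring, seed="A"))  # the single seeded failure takes down all five nodes
```

Loosen the coupling (fewer dependencies per node, or a higher failure threshold) and the same seed fizzles out instead of spreading; that, in miniature, is the argument for air gaps and redundancy.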
But one thing that had been missing before was an easy or obvious way to monetize all of this digital malfeasance. Unlike with software, the financial sector is fairly heavily regulated, globally. Despite the possibility of transferring money here and there, it’s really not that easy to get money out of the global financial system if the regulators at a few chokepoints are dead set against it. Those chokepoints include the SWIFT money transfer system, the United States Treasury and its OFAC sanctions program, and the U.S. Attorney for the Southern District of New York, where Wall Street is located. The chokepoints are surprisingly few, and most of them lead back to the United States.
Enter Bitcoin. It’s still not as easy as people might think to use Bitcoin to move truly large amounts of money out of the system—to buy things with it, or turn it into cash. Small amounts, sure. The kind of sums that would make large-scale fraud attractive? Not really. (A longer piece for the future on why this is; if anything, it’s probably made much harder by the blockchain infrastructure.) But Bitcoin sure makes it more tempting to try, and a lot of ransomware attempts aren’t for huge sums. Thus, Bitcoin and the crypto coin ecology have given ransomware a scalable business model, at least in the minds of its “entrepreneurs”: attacks become worth trying, even for small sums, because trying is so easy.
So where do we go from here?
Is it hopeless? No. But let’s be honest: this is a very costly problem to fix, and it’s not going to solve itself. A solution would require our government to shift its priorities, a regulatory environment that encourages (and forces) different practices, real resources devoted to the issue, and more. This is from a different 2015 piece I wrote in the New York Times:
It isn’t hopeless. We can make programs more reliable and databases more secure. Critical functions on Internet-connected objects should be isolated and external audits mandated to catch problems early. But this will require an initial investment to forestall future problems — the exact opposite of the current corporate impulse. It also may be that not everything needs to be networked, and that the trade-off in vulnerability isn’t worth it.
Fixing the crisis of digital insecurity would entail focusing on the financial side. That would be the easiest fix. But that raises the thorny question of what to do, finally, about the speculative asset bubble around “crypto” instruments. The crypto issue is related to the thorny problem of massive global inequality, the vast amounts of cash sloshing around our financial system, and its concentration in a few hands.
Addressing digital insecurity would also entail better regulation up and down the technical stack, so that the negative externalities become internalized by the companies, making them responsible for solving the problems they create. The companies would have to at least do the easy things they can do before they ship products. We would have to make massive investments to redo parts of our infrastructure, including isolating key parts of it from vulnerable networks, building redundancies and safeguards, and even rewriting parts of it. (Yes, you can stop laughing or crying, or both.)
The more likely scenario is that there will be some moves on the financial side (making it harder to get large sums out) and on the state-sector side (you can mutually disincentivize another government from hacking your infrastructure, but it’s much harder to do that to independent players). There may also be efforts to “make an example” of a few high-profile attempts: tracking down the people and handing down massive sentences. This isn’t as difficult as it sounds, but it requires resources. And if ransomware attempts proliferate, punishment becomes a weaker deterrent, because with so many people trying, most will never be caught. This essentially sets up a catastrophe lottery for the ransomware folks: most of them probably will not get caught, but the few who are will be crushed.
But as long as it is this easy to cause digital havoc and hope to profit from it, and as long as getting small sums out without getting caught or punished is plausible, it will be tried again and again. Decentralized threats are difficult to address even when they are not very profitable, despite their potentially terrible consequences.
This is a bit like what the pandemic had been for me before 2020: we knew a major threat was afoot, and that our infrastructure was lacking. We had SARS in 2003, we had the Ebola crisis in 2014–2016, and we had the HIV/AIDS catastrophe starting in the 1980s. Did we move to fix it all? We did not. So here we are again. Meanwhile, my Honda Civic has half a tank of gas, so I’ll be fine, for now. I’m not so sure about the future of our networked world.
I have begun to wonder whether the economist perspective - identifying "externalities" - should be "inverted." The basic problem is the "default" perspective that markets work except when they don't, and then you have to intervene to fix it. But maybe it's more accurate for the "default" to be markets DON'T work, except when they do, and that it actually takes a lot of effort to get them to work.
Another analogy is perhaps public health versus medicine in a capitalist society: money is made by treating disease, not by preventing it, even though from an aggregate point of view prevention is far better. Indeed, from a profit-making perspective, it’s better to “churn”: make money off the excessive food and behaviors that make people unhealthy, then make more money off treating the resulting diseases.
Maybe all of these play on our tendency (individually and collectively) to be short-sighted, which is now further exacerbated algorithmically through media (social and otherwise), as well as on our lack of imagination about the importance of risks that haven’t yet happened (e.g., 9/11), and in many cases even those that have.
I’ve lived with the reality of tech debt for a long time. Y2K brought daylight to a lot of it. But when I was a state employee, I learned that much of the state’s critical financial infrastructure ran on Sperry mainframes from the 1960s. IBM mainframe knowledge has waned; Sperry mainframe knowledge, let alone coding abilities, is 100 times worse. As you rightly point out, cost is a critical factor, along with “if it ain’t broke, don’t fix it.”
With the rise of Ethernet and TCP/IP, the problems really took off. Where mainframe networking protocols like VTAM were very much based on identity and authentication, the new protocols were inherently anonymous. The issue became how to bolt an identity management system onto an anonymous protocol. If your web page retrieves data from a database, whose identity does it use? Yours, as the requester, or the web page’s?
Bitcoin (and blockchain protocols in general) capitalizes on that anonymity and tries to compensate for it with encryption puzzles. Little wonder that IBM’s investment in blockchain has reintroduced identity at the core, making it more secure and less openly available, but contrary to the initial design.
I think you’re pointing out rightly that regulation can go only so far. When national entities and individuals from anyplace in the world can hack for profit, regulation becomes increasingly diaphanous. I don’t yet see a clear way forward out of the mess.