Networked vulnerabilities are thorny
I have begun to wonder whether the economist perspective - identifying "externalities" - should be "inverted." The basic problem is the "default" perspective that markets work except when they don't, and then you have to intervene to fix it. But maybe it's more accurate for the "default" to be markets DON'T work, except when they do, and that it actually takes a lot of effort to get them to work.
Another analogy is perhaps public health versus medicine in a capitalist society -- money is made by treating disease, not by preventing it, even though from an aggregate point of view prevention is far better. Indeed, from a profit-making perspective, it's better to "churn" -- make money off excessive food/behaviors that make you unhealthy, then make more money off treating the resulting diseases.
Maybe all of these work on our tendency (individually and collectively) to be short-sighted, which is now further exacerbated algorithmically through media (social and otherwise), as well as on our lack of imagination about the importance of risks that haven't happened yet (e.g., 9/11), and in many cases even those that have.
I’ve lived with the reality of tech debt for a long time. Y2K brought daylight to a lot of it. But when I was a state employee, I learned that much of the state’s critical financial infrastructure ran on Sperry mainframes from the 1960s. IBM mainframe knowledge has waned; Sperry mainframe knowledge, let alone coding abilities, is 100 times worse. As you rightly point out, cost is a critical factor, along with “if it ain’t broke, don’t fix it.”
With the rise of Ethernet and TCP/IP, problems really took off. Where mainframe networking protocols like VTAM were very much based on identity and authentication, the new protocols were inherently anonymous. The issue became how to bolt an identity management system onto an anonymous protocol. If your web page retrieves data from a database, whose identity does it use? Yours, as the requestor, or the web page's?
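To illustrate that identity gap, here is a minimal Python sketch, assuming the common pattern where the web app connects with a single service account; `sqlite3` and the `orders` table are hypothetical stand-ins for a real credentialed database:

```python
import sqlite3

# Toy database. In a real deployment the app connects with one service
# account, so the database only ever sees the app's identity, not yours.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (user_id INTEGER, item TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "widget"), (2, "gadget")])

def fetch_orders(requesting_user: int) -> list:
    # Because the database cannot distinguish end users, authorization is
    # bolted on here, in application code; forget this WHERE clause and
    # every user sees every row.
    rows = conn.execute(
        "SELECT item FROM orders WHERE user_id = ?", (requesting_user,)
    ).fetchall()
    return [item for (item,) in rows]

print(fetch_orders(1))  # ['widget']
```

The database itself cannot enforce per-user access because it never learns who the user is; that enforcement lives (or fails to live) in every application on top of it.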
Bitcoin (and blockchain protocols in general) capitalizes on that anonymity and tries to compensate for it with cryptographic puzzles. Little wonder that IBM’s investment in blockchain has reintroduced identity at the core, making it more secure and less openly available, but contrary to the initial designs.
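Those puzzles are hashing puzzles rather than encryption: proof-of-work searches for a nonce that makes a hash start with enough zeros, so anyone can verify the work was done without knowing who did it. A toy Python sketch (the difficulty and encoding here are illustrative, not Bitcoin's actual parameters):

```python
import hashlib

def mine(block: bytes, difficulty: int = 3) -> int:
    # Brute-force a nonce whose SHA-256 digest has `difficulty` leading
    # hex zeros -- costly to find, but trivial for anyone to verify.
    nonce = 0
    while True:
        digest = hashlib.sha256(block + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = mine(b"example block")
# One hash call is enough for anyone to check the proof:
proof = hashlib.sha256(b"example block" + str(nonce).encode()).hexdigest()
print(proof[:12])  # begins with "000"
```

The asymmetry (expensive to produce, cheap to check) is what substitutes for trust in an identity.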
I think you’re pointing out rightly that regulation can go only so far. When national entities and individuals from anyplace in the world can hack for profit, regulation becomes increasingly diaphanous. I don’t yet see a clear way forward out of the mess.
About 20 years ago my spouse co-wrote a book on cybersecurity for the Swedish Ministry of Defense. The chapter we both mention to each other most often is the one on the cyber-weaknesses in the U.S.'s electrical grid, similar to the pipeline deficiencies. I think he kind of gave up trying to get policy people to understand that this is really, really important (though he still works in this field); sometimes it's only when a system fails that people understand how important it was to protect it proactively. Like when a bridge collapses due to poor building materials or lack of maintenance.
Reading some of the comments below, I'm reminded of my current research into the commons and land ownership, and how often I end up looking at Kate Raworth's recent Doughnut Economics model. There are ways that the negative externalities of digital technology relate to her idea of ecological limits that I hadn't thought of before.
Do you expect regulation to be of much use here? All I have seen so far has been building new floors of administration onto the crumbling tower of Babel. GDPR has made a bunch of my friends in Europe afraid of running personal websites. FERPA probably got us the Canvas/Moodle/Blackboard oligarchy in teaching. HIPAA has done wonders for the health of compliance officers; not so much visible improvement for anyone else. All of these were perfectly well-intentioned; all were meant to solve real problems. Yet here we are. With ransomware crews switching from lockware to leakware, I'm wondering if we aren't helping them by penalizing companies for leaking customer data -- doesn't it add yet another incentive to pay up?
My only hope is for the tech community to untangle some of the complexity when financial incentives point the way (it has to be financial ones, as regulatory ones don't push in the right direction). Very few cases of this happening so far. Maciej Ceglowski has been trying to get web developers to stop loading their sites with megabytes of insecure crap (which also slows them down significantly on mobile); no success (other than everyone retweeting his slides because they are gorgeous https://idlewords.com/talks/website_obesity.htm ). Systemd still rules most popular Linux distros. The move from C to Rust is giving me some hope (or maybe Rust is just better at hiding its dirty underbelly); another is the advent of Raspberry Pi tinkering (although it's a far cry from the heyday of hacking around 2000). How do we make *these* things happen?
I'm now running a mental comparison between this piece, and the other pieces of yours you link to, and several Bruce Schneier articles over the years that say very similar things. I wonder if a joint statement by the two of you might in some way help raise awareness about this further. In particular, you've both emphasized the idea that companies which ship egregiously insecure products that create a public nuisance ought to face liability for that nuisance-- what you call the negative externality effect.
One point of different emphasis, IIRC, is that Schneier has placed more importance on shifting the focus of US agencies like the NSA from offense to defense, so that the full resources of the federal government can be devoted to help industry become more secure, rather than exploiting security vulnerabilities for government spying purposes. I'd be interested to know your view on this-- is it something you deemphasized here out of a belief that it's politically even less feasible than the rest of the plausible reforms?
Thanks for the deeply insightful analysis, as usual. I particularly appreciate the formulation of the "catastrophe lottery" concept as a way of capturing why accountability, though necessary, will never be sufficient. But before-the-fact mechanisms are hard, even if we can muster the will to prioritize them. One possible model for how to think about it is the way the payment card industry has managed to create a system through which large players and small can confidently transact with unknown parties anywhere in the world. (It's an astounding accomplishment!) The principles include standards and process regulation for merchants and financial institutions, treating large and small players differently (large players are regulated to an extent that would not be viable for small ones), serious thinking about what types of risks require what kind of mitigation, and constant heuristic monitoring. Digital security generally is an even more complex challenge, of course, but I suspect many of the same principles would apply...
> The early Internet was intended to connect people who already trusted one another, like academic researchers or military networks
For people who want to go deep on this history, David Clark's "Designing an Internet" (MIT Press, 2018) is a great academic read, as well as a tour of a few imagined alternative paths for what our technical infrastructure could have looked like given different incentives or design pressures.
Well said and spot on (btw, really really like the discourse here on a variety of topics). But while raising alarm bells is necessary, I think we need to look at this in the “historical” context of the evolution of any technology: Created -> Tested (by early adopters) -> Exponentially Used and Abused -> Vital Life... and so on. And for all practical purposes we are in the Exponential Use/Abuse phase with data and “information” technology. I would compare this with aviation in the '70s/'80s (yes, old enough to remember). We’d walk to the tarmac to show our luggage, we’d undergo only a high-level search of bodies and hand luggage... we would fly with the cockpit door open, watching pilots go about their work, and so on. Then came hijackings and bombings, and flying became hairy... So the industry players stepped in: manufacturers and airline companies and airport operators and government and security vendors and... Fast enough? No. Effectively? Yes! Now we have, for all practical purposes, one of the safest systems one can think of. And an extremely complex one. And we have awareness among the travelling public. But it took decades...
The current proliferation of data and “information” technology is unprecedented in speed and reach, but that makes it no less immature on the curve of technology evolution. Will “we” catch up: vendors, governments, as well as the general public? Very likely. Should we press for a better regulatory environment of all types and levels? Yes, and hard. Should we invest in educating the public (really the key)? Absolutely (how many people know how to log in and change the password on their router???). Should we panic? No. I think we should count our blessings that we live in this endlessly exciting time of technological and social discontinuity that undoubtedly creates risks but also endless opportunities. I choose to see the glass half full (and for transparency's sake, yes, I live off data and a deep understanding of human and consumer behaviour, both good and bad).
I wasn't hip to so much of this technical stuff despite being a computer buyer and user for over thirty years, so I'm grateful that Zeynep, who is familiar with a lot more than just the technical angle, made the effort to explain the danger in plain language, which is all I understand, these days, and for a lot longer than that, actually.
She says it's going to be difficult and expensive to fix. No doubt, alas.
When national security is threatened, no matter how you define national security, I expect that DARPA might want to see about helping before we're crippled by fraudsters, domestic and foreign.
And then we might see some action.
Or hope that some kid invents something that nobody has imagined before.
We have a way of turning technical advances into ways of taking advantage of the next guy, from clubs, to arrows, to TNT, to airplanes, to the latest forms of tech, don't we?
Next time we invent the wheel, we must also invent brakes.
So, how come we haven't taken care of this little security problem earlier?
Um, I think she already explained that; I'd better review her article.
Do you think it makes sense to have something parallel to the FDA that approves any networked device marketed in the US? If your device isn't certified to have sufficient security to prevent it from being roped into one of these botnets, it can't be sold in the United States (presumably written with a several year lead time so that the agency has time to approve all the next generation devices, rather than trying to retroactively fix the things that are already on market).
As always, thank you for the insights and facts that enable a person to choose to take action, political action or some other form of being part of change for the better.
I wonder if it is necessary and effective to make the Battlestar Galactica observations? BG is not a widely shared narrative. I mean, if Zippy The Pinhead had pointed to something that seemed to shadow or symbolize some aspect of this tech misuse issue, would it be worth mentioning it?
My point or question: when we look at an issue or opportunity or challenge through the lenses of a set of symbols, how much distortion and obstruction and blurring occurs, and how much agency and collective opportunity is diminished?
Try the essay without any reference to BG; see, it is still a really useful piece!!! Thank you, again, for it.
I wish I could find it, but years ago, Unix Review ran a short story in which the protagonist was a computer that ran Unix, which stores time in a 32-bit signed counter (time_t) that will "run out" in 2038 -- the Year 2038 problem. It's like a Y2K problem, except with no solution, because of the technical debt. That date was approaching -- meaning that his mind was going to go blank at midnight. "What can I do to help?," his girlfriend asked as the hour approached. "You could hold my hand, if I had one," was his memorable reply.
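For the record, the counter in question is the signed 32-bit time_t, and a quick Python check puts its rollover at January 19, 2038:

```python
from datetime import datetime, timezone

# The largest value a signed 32-bit time_t can hold:
max_time_t = 2**31 - 1
rollover = datetime.fromtimestamp(max_time_t, tz=timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00
```

One second later, a signed 32-bit counter wraps to a negative number, i.e., a date in 1901.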
Nice, if bleak, overview. A possible analogy here with Covid-19: if the "infection" lingers long and large in the "population," new variants will make things even harder to manage.
Another question is, how to profit from a vague, serious, somewhat likely digital security crisis?
This is quite unfair to the errors of the pandemic. It did not require massive investments upfront to be prepared to massively test, trace, and isolate. And research on new vaccine technology HAD been going on. The failures all came after the first cases appeared and it was clear that there was aerosol transmission via asymptomatic individuals.
Re cyber attacks: There has been too little law enforcement and counterchecking resources directed at perpetrators.
But seeing how much effort it takes to get Congress to appropriate funds for popular programs, how much harder will it be to get it to spend on nerdy future-harm avoidance?