42 Comments

I have begun to wonder whether the economist perspective - identifying "externalities" - should be "inverted." The basic problem is the "default" perspective that markets work except when they don't, and then you have to intervene to fix it. But maybe it's more accurate for the "default" to be markets DON'T work, except when they do, and that it actually takes a lot of effort to get them to work.

Another analogy is perhaps public health versus medicine in a capitalist society -- money is made by treating disease, not by preventing it, even though from an aggregate point of view prevention is far better. Indeed, from a profit-making perspective, it's better to "churn" -- make money off excessive food/behaviors that make you unhealthy, then make more money off treating the resulting diseases.

Maybe all of these work on our tendency (individually and collectively) to be short-sighted, which is now further exacerbated algorithmically through media (social and otherwise), as well as on our lack of imagination as to the importance of risks that haven't happened (e.g., 9/11) and, in many cases, even those that have.

author

Yes, there are many things for which markets do not work well, by themselves, and in fact, the history of functioning markets is us identifying those aspects and fixing them so that markets do work (for example: anti-trust law).

"markets...by themselves [i.e., 'markets by themselves']...functioning markets...anti-trust law [in re 'trusts', as refreshingly clearly discussed in https://stratechery.com/2020/anti-monopoly-vs-antitrust/ ]..." These are not entities within the context of real externalities in our world; but, by thinking about particular social activities and doing these activities, we have described these socializing activities by these terms. Is that something we can agree on?

What else? So, referring again to these agreed social activities, do we want to identify what we mean when we say 'functioning'? Would this do: a functional social activity is one that simply and clearly outwardly characterizes a collectively agreed on means-ends, a something achieved by means each person says is best to obtain something everyone wants?

In other words, something that is an individual, internalized valued achievement with a clear valued way of acting to achieve it becomes a collectively agreed on, or externalized or conventionalized, protocol and activity? Mutually desirable, mutually lauded, mutually obtainable?

Because if we agree on these, then the original questions to answer -- in RE networked cyber systems, a slinky, and the current infectious disease organism transmission -- may have some partial and functional (yep!) answer. Is the cyber network secure? Well, what is in it for each and every person to secure the network and respect all users' securities? What is in it for each and every person to contribute to 'least transmission' and most self-care and public care activities? Unlike the slinky, people in society need to consciously construct and use mutual respect and mutual trust to agree on common rules of engagement (in our system, rule of law), whereby we, in our American society, can contribute to functional non-violent civil mutually beneficial activities.

Alternatives?

This sounds right, although you could argue that same perspective has already been filtered ad nauseam through various blanket critiques of late capitalism (a market-based system is inherently flawed or even pathological, and can only lead to ruin). But maybe inverting "default" adds a new wrinkle? It would probably be necessary to similarly reconceptualize what it means for a market to "work" (the time frame, what criterion is used to measure success). Is the criterion now that it successfully averts a national or global disaster? Certainly when the malicious actors are the ones creating the market, we specifically want it to fail. I have only the barest understanding of cybersecurity issues, but my impression has always been that the problem goes far beyond market rationality, even if this might exacerbate the situation. There also seems to have been an underestimation of the sheer complexity baked into the system, and especially, how our current "is" is constrained by the old infrastructure of "was."

I like the idea of inverting the default market perspective. That could be really interesting.

I’ve lived with the reality of tech debt for a long time. Y2K brought daylight to a lot of it. But when I was a state employee, I learned that much of the state’s critical financial infrastructure ran on Sperry mainframes from the 1960s. IBM mainframe knowledge has waned; Sperry mainframe knowledge, let alone coding abilities, is 100 times worse. As you rightly point out, cost is a critical factor, along with “if it ain’t broke, don’t fix it.”

With the rise of Ethernet and TCP/IP, problems really took off. Where mainframe networking protocols like VTAM were very much based on identity and authentication, the new protocols were inherently anonymous. The issue became how to bolt an identity management system onto an anonymous protocol. If your web page retrieves data from a database, whose identity does it use? Yours, as the requestor, or the web page's?
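The identity question at the end can be sketched in a few lines of Python. This is a hypothetical toy, with made-up names like `webapp_svc`: the database only ever sees the application's shared service account, so the end user's identity is lost unless it is explicitly threaded through.

```python
# Toy sketch (hypothetical names): the web tier authenticates each end
# user, but opens its database connection as one shared service account.
class Database:
    def __init__(self):
        self.audit_log = []

    def query(self, connected_as, sql):
        # The database can only record the identity that connected.
        self.audit_log.append((connected_as, sql))

def handle_request(db, end_user):
    # `end_user` is known here, but never reaches the database layer.
    db.query(connected_as="webapp_svc", sql="SELECT * FROM orders")

db = Database()
handle_request(db, end_user="alice")
handle_request(db, end_user="bob")
# Both requests are indistinguishable in the audit log:
print(db.audit_log)
# [('webapp_svc', 'SELECT * FROM orders'), ('webapp_svc', 'SELECT * FROM orders')]
```

Bolting identity back on means propagating the requestor through every layer (tokens, impersonation, per-user connections), which is exactly the retrofit described above.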

Bitcoin (and blockchain protocol in general) capitalize on that anonymity, and try to compensate for it with encryption puzzles. Little wonder that IBM’s investment in blockchain has reintroduced identity at the core, making it more secure, less openly available, but contrary to the initial designs.
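The "encryption puzzles" here are, more precisely, proof-of-work hash puzzles. A toy stand-in (not Bitcoin's actual scheme, which uses double SHA-256 and a far harder target) looks like this:

```python
import hashlib

def solve_puzzle(block_data, difficulty=4):
    """Find a nonce so that SHA-256(block_data + nonce) begins with
    `difficulty` hex zeros -- a toy version of proof-of-work."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = solve_puzzle("example block")
# Finding the nonce takes many hash attempts; verifying it takes one.
print(nonce, digest[:12])
```

That asymmetry (expensive to solve, cheap to verify) is what lets anonymous parties prove effort without proving identity.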

I think you’re pointing out rightly that regulation can go only so far. When national entities and individuals from anyplace in the world can hack for profit, regulation becomes increasingly diaphanous. I don’t yet see a clear way forward out of the mess.

author

Also, people dismiss Y2K because we averted it. But it was SO MUCH WORK and resources. It's like SARS: a near-miss made us feel complacent rather than learn from it.

And so way more money was made by making it infinitely bigger than necessary. Not that it wasn't a problem; it was, but it was controllable and manageable, and it required resources. Then vendors stepped in, and media, and the legal system, and...

and that money did not go to solving the issue of tech debt, only to the purchase of more and yet more elaborate band aids, thus lining the pockets of many in perpetuity as the underlying infection continued to fester.

May 13, 2021 · Liked by zeynep

Totally agree. As long as the economic model we live in is the way it is, things will not change. (I am neither for nor against change; those are just the practicalities of the "rules" currently in effect.)

author

Yep

Y2K is a good example, but it seems so relatively simple compared with the issues around cybersecurity. My transit agency did most of its own IT work, including Y2K preparation. I'm not sure what the external mandates were that compelled the agency's priorities and resource commitment, but the Y2K project was well organized and closely managed on a reasonable timeline. We all had a generic checklist that we had to apply to our own applications. (With amused irony a few months into the new millennium, I discovered a minor Y2K bug that I had missed, in spite of my best efforts.)
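For readers who never saw one, the classic two-digit-year bug and the common checklist-era patch ("windowing" around a pivot year) fit in a few lines; the pivot value below is illustrative:

```python
def expand_two_digit_year(yy, pivot=50):
    """Y2K 'windowing' patch: two-digit years below the pivot are read
    as 20xx, the rest as 19xx. A stopgap, not a fix -- the window itself
    eventually expires, which is part of how tech debt lingers."""
    return 2000 + yy if yy < pivot else 1900 + yy

# Naive code assumed 19xx everywhere, so year "00" sorted before "99".
# With windowing, "00" lands after "99" as intended:
assert expand_two_digit_year(0) > expand_two_digit_year(99)
print(expand_two_digit_year(0), expand_two_digit_year(99))  # 2000 1999
```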

The other comparable area is PCI (Payment Card Industry) compliance: again something I had to work on in the same application. These were the days when you had to build your own interface to the bank's credit card process. Crucial to the process was our network administrator, retired from the Army, who took things very seriously. The external mandate for PCI was that the bank would quit releasing your funds to you if you weren't compliant. (I never check the "Save credit card for future use" box and delete from Amazon and Apple after each purchase clears. And nope, nothing financial on my phone.)

Cybersecurity is much, much bigger. It'll be interesting to see how things develop. In the meantime, I'm trying to figure out how to incorporate it into my personal disaster planning (mostly earthquake-based here in Cascadia.)

Fellow Cascadian! I agree totally that the high-level problems needing a solution were way simpler for Y2K than for the whole of cybersecurity.

If you worked from a generic checklist, you’ve pinpointed part of the issue. While the needed date algorithms and date entry fields can be checked and verified, how many of the applications reviewed depended on obsolescing technology like COBOL and DB2? If there were none, great. But many that I’ve run into had strong allegiances. Given the chance to address Y2K while also updating both the needed hardware and the software development issues, the answer was nearly always “just Y2K.” So the immediate problem was addressed, but the overall tech debt was not.

The complexity of opening up long-deprecated code on long-deprecated hardware and long-deprecated middleware is humongous. Inasmuch as it’s been “fixed” by creating hybrid solutions using snazzy frontends for a dinosaur backend, these old faithfuls have now joined the Internet and become susceptible to modern hacking techniques. Y2K, as it was primarily executed, has increased the range of susceptibility today because it failed to address the tech debt then.

On the other hand, updating projects I was involved with inevitably failed because the sponsors of the “new way” didn’t understand the business side well enough, and failed to persuade those holding the purse strings that such an investment was worth it.

About 20 years ago my spouse co-wrote a book on cybersecurity for the Swedish Ministry of Defense. The chapter we both mention to each other most often is the one on the cyber-weaknesses in the U.S.'s electrical grid, similar to the pipeline deficiencies. I think he kind of gave up trying to lead policy people to understand that this is really, really important, though he still works in this field, but sometimes it's only when a system fails that people understand how important it was to protect it proactively. Like when a bridge collapses due to poor building materials or lack of maintenance.

Reading some of the comments below, I'm reminded of my current research into the commons and land ownership, and how often I end up looking at Kate Raworth's recent Doughnut Economics model. There are ways that the negative externalities of digital technology relate to her idea of ecological limits that I hadn't thought of before.

Do you expect regulation to be of much use here? All I have seen so far has been building new floors of administration onto the crumbling tower of Babel. GDPR has made a bunch of my friends in Europe afraid of running personal websites. FERPA probably got us the Canvas/Moodle/Blackboard oligarchy in teaching. HIPAA has done wonders for the health of compliance officers; not so much visible improvement for anyone else. All of these were perfectly well-intentioned; all were meant to solve real problems. Yet here we are. With ransomware crews switching from lockware to leakware, I'm wondering if we aren't helping them by penalizing companies for leaking customer data -- doesn't it add yet another incentive to pay up?

My only hope is for the tech community to untangle some of the complexity when financial incentives point the way (it has to be financial ones, as regulatory ones don't push in the right direction). Very few cases of this happening so far. Maciej Ceglowski has been trying to get web developers to stop loading their sites with megabytes of insecure crap (which also slows them down significantly on mobile); no success (other than everyone retweeting his slides because they are gorgeous https://idlewords.com/talks/website_obesity.htm ). Systemd still rules most popular Linux distros. The move from C to Rust is giving me some hope (or maybe Rust is just better at hiding its dirty underbelly); another is the advent of Raspberry Pi tinkering (although it's a far cry from the heydays of hacking around 2000). How do we make *these* things happen?

I'm now running a mental comparison between this piece, and the other pieces of yours you link to, and several Bruce Schneier articles over the years that say very similar things. I wonder if a joint statement by the two of you might in some way help raise awareness about this further. In particular, you've both emphasized the idea that companies which ship egregiously insecure products that create a public nuisance ought to face liability for that nuisance-- what you call the negative externality effect.

One point of different emphasis, IIRC, is that Schneier has placed more importance on shifting the focus of US agencies like the NSA from offense to defense, so that the full resources of the federal government can be devoted to help industry become more secure, rather than exploiting security vulnerabilities for government spying purposes. I'd be interested to know your view on this-- is it something you deemphasized here out of a belief that it's politically even less feasible than the rest of the plausible reforms?

author

Yes, one would hope that we would work on this before a full catastrophe happened.

I write the same thing about shifting NSA to defense as well in this piece. Agree with Bruce there. https://www.nytimes.com/2017/05/13/opinion/the-world-is-getting-hacked-why-dont-we-do-more-to-stop-it.html

"It is past time that the N.S.A. shifted to a defensive posture and the United States government focused on protecting its citizens and companies from malware, hacking and ransomware — rather than focusing so much on spying. This isn’t just about disclosing vulnerabilities, a hot-button topic that often distracts from deeper issues. It also means helping develop standards for higher security — something an agency devoted to finding weaknesses is very well suited to do — as well as identifying systemic cybersecurity risks and then helping fix them, rather than using them offensively, to spy on others."

Thanks for the deeply insightful analysis, as usual. I particularly appreciate the formulation of the "catastrophe lottery" concept as a way of capturing why accountability, though necessary, will never be sufficient. But before-the-fact mechanisms are hard, even if we can muster the will to prioritize them. One possible model for how to think about it is the way the payment card industry has managed to create a system through which large players and small can confidently transact with unknown parties anywhere in the world. (It's an astounding accomplishment!) The principles include standards and process regulation for merchants and financial institutions, treating large and small players differently (large players are regulated to an extent that would not be viable for small ones), serious thinking about what types of risks require what kind of mitigation, and constant heuristic monitoring. Digital security generally is an even more complex challenge, of course, but I suspect many of the same principles would apply...

> The early Internet was intended to connect people who already trusted one another, like academic researchers or military networks

For people who want to go deep on this history, the book "Designing an Internet" by David Clark (MIT Press, 2018) is a great academic read on this history, as well as a few imagined alternative paths for what our technical infrastructure could have looked like given different incentives or design pressures.

Well said and spot on (btw, really really like the discourse here on a variety of topics). But while raising alarm bells is necessary, I think we need to look at this in the “historical” context of the evolution of any technology. Created -> Tested (by early adopters) -> Exponentially Used and Abused -> Vital Life...and so on. And for all practical purposes we are in the Exponential Use/Abuse phase with data and “information” technology. I would compare this with aviation in the 70s/80s (yes, old enough to remember). We’d walk to the tarmac to show our luggage, we’d undergo only a high-level search of bodies and hand luggage... we would fly with an open cockpit door, watching pilots go about their work, and so on. Then came hijackings and bombings and flying became hairy... So, the industry players stepped in: manufacturers and airline companies and airport operators and government and security vendors and... Fast enough? No. Effectively? Yes! Now we have, for all practical purposes, one of the safest systems one can think of. And an extremely complex one. And we have awareness by the travelling public. But it took decades...

The current proliferation of data and “information” technology is unprecedented in speed and reach, but that makes it no less immature on the curve of technology evolution. Will “we” catch up: vendors, governments, as well as the general public? Very likely. Should we press for a better regulatory environment of all types and levels? Yes, and hard. Should we invest in educating the public (really the key)? Absolutely (how many people know how to log in and change the password on their router???). Should we panic? No. I think we should count our blessings that we live in this endlessly exciting time of technological and social discontinuity that undoubtedly creates risks but also endless opportunities. I choose to see the glass half full (and for transparency's sake, yes, I live off data and a deep understanding of human and consumer behaviour, both good and bad).

author

Yes, that is the curve, plus more and more of it is "coupled" together or networked (depending on which disciplinary terminology we are using) and that just really amplifies the threat of cascading failures.
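That amplification can be illustrated with a toy dependency network (the service names and the all-or-nothing failure rule are illustrative assumptions): a single root failure propagates to everything downstream of it.

```python
# Toy cascade model: a node fails if anything it depends on has failed.
depends_on = {
    "database": [],
    "auth": ["database"],
    "billing": ["database"],
    "website": ["auth", "billing"],
}

def cascade(initial_failures):
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for node, deps in depends_on.items():
            if node not in failed and any(d in failed for d in deps):
                failed.add(node)
                changed = True
    return failed

print(sorted(cascade({"database"})))  # ['auth', 'billing', 'database', 'website']
print(sorted(cascade({"billing"})))   # ['billing', 'website']
```

The more densely coupled the graph, the larger the set of systems a single root failure takes down.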

Which makes it all the more powerful, challenging and hence exciting! Just hope this time "we" do not need to send the fleet flying into the sun to break the cycle.

I wasn't hip to so much of this technical stuff despite being a computer buyer and user for over thirty years, so I'm grateful that Zeynep, who is familiar with a lot more than just the technical angle, made the effort to explain the danger in plain language, which is all I understand, these days, and for a lot longer than that, actually.

She says it's going to be difficult and expensive to fix. No doubt, alas.

When national security is threatened, no matter how you define national security, I expect that DARPA might want to see about helping before we're crippled by fraudsters, domestic and foreign.

And then we might see some action.

Or hope that some kid invents something that nobody has imagined before.

We have a way of turning technical advances into ways of taking advantage of the next guy, from clubs, to arrows, to TNT, to airplanes, to the latest forms of tech, don't we?

Next time we invent the wheel, we must also invent brakes.

Fire?

Fire extinguishers.

So, how come we haven't taken care of this little security problem earlier?

Um, I think she already explained that; I'd better review her article.

"Negative externalities:" No wonder I couldn't recall...

author

Not the easiest term but it really does explain so much!

Do you think it makes sense to have something parallel to the FDA that approves any networked device marketed in the US? If your device isn't certified to have sufficient security to prevent it from being roped into one of these botnets, it can't be sold in the United States (presumably written with a several year lead time so that the agency has time to approve all the next generation devices, rather than trying to retroactively fix the things that are already on market).

author

That sounds like a minimum to me, to be honest. Right now, there is little incentive not to produce things, going into everything we use, that ship with "admin/password" as the username and password, and usually there isn't even a way to fix it after the fact.
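A certification regime along the lines proposed above could start with checks as simple as refusing factory-default credentials; a sketch (the credential list is illustrative, not any real standard):

```python
# Hypothetical pre-market check: reject devices shipping with well-known
# factory credentials, the kind the Mirai botnet scanned for.
KNOWN_DEFAULTS = {("admin", "password"), ("admin", "admin"), ("root", "root")}

def passes_credential_check(username, password):
    return (username, password) not in KNOWN_DEFAULTS

assert not passes_credential_check("admin", "password")   # fails certification
assert passes_credential_check("admin", "Zr9-unique-per-device")
```

Per-device random passwords and a mandatory first-boot password change would be the obvious next rungs up.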

As always, thank you for the insights and facts that enable a person to choose to take action, political action or some other form of being part of change for the better.

I wonder if it is necessary and effective to make the Battlestar Galactica observations? BG is not a widely shared narrative. I mean, if Zippy The Pinhead had pointed to something that seemed to shadow or symbolize some aspect of this tech misuse issue, would it be worth mentioning it?

My point or question: when we look at an issue or opportunity or challenge through the lenses of a set of symbols, how much distortion and obstruction and blurring occurs, and how much agency and collective opportunity is diminished?

Try the essay without any reference to BG; see, it is still a really useful piece!!! Thank you again for it.

author

I hear you! But I can't always resist a bit of geek culture or science fiction. :-D

Thanks for this reply and for admitting that powerful symbols can be powerful tools. The latter is the experiential support for my comment. My choice is, simply, to focus on building a case for something using factual -- and not metaphorical -- experience. The power of collectively recognized fact, and the willingness to collectively construct agreed-on alternative choices of action from fact, seem to me to be the bases for bringing into being a successful organizing effort that endures over time, through many challenges and with the comings and goings of members or supporters. Such effectiveness and endurance are characteristics of organized collective action which you -- very legitimately, I will gratefully say, based on my public interest projects group experiences -- point to as essential to, but often missing from, issue advocacy campaigns performed through social media discussion and so on.

Metaphor is a powerful tool. Part of its power is to act as a foil to the common experience of fact. And yet, it is an abbreviation of the real, of the immediate; and, it is able to impart the surreal aspect of experience, the aspect of the 'liminal', which seems inseparable from perception itself.

I suggest that, for any secular organized group effort to grow and make itself functional and adaptable, i.e., present, nimble and perilously effective, the collective aim must be to cultivate collective fact and collective fact-based agreements for collective action.

I really admire your work and the way you persevere, Ms. Tufekci. Very grateful, indeed.

PS (I thank John Milton for creating rich environments for being metaphorical and for participating in metaphorical expression.)

I wish I could find it, but years ago, Unix Review ran a short story in which the protagonist was a computer that ran Unix, which has a hard-coded variable called ctime, which will "run out" in 2036 or something (Unix geeks can correct me on the date). It's like a Y2K problem, except with no solution, because of the technical debt. That date was approaching -- meaning that his mind was going to go blank at midnight. "What can I do to help?," his girlfriend asked as the hour approached. "You could hold my hand, if I had one," was his memorable reply.

author

It's this! https://www.wired.com/2006/07/the-unix-end-ti The Unix time variable (time_t) is 32-bit, so it runs out *checks notes* on January 19, 2038, at 3:14 a.m. UTC.
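The cutoff is easy to verify (a Python sketch; `time_t` itself is a C type, so this just does the arithmetic):

```python
from datetime import datetime, timezone

# A signed 32-bit time_t counts seconds since 1970-01-01 UTC and tops
# out at 2**31 - 1 = 2,147,483,647.
last_second = 2**31 - 1
moment = datetime.fromtimestamp(last_second, tz=timezone.utc)
print(moment.isoformat())  # 2038-01-19T03:14:07+00:00
```

One tick later, the signed counter wraps around, which is the "mind going blank" in the story.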

Yes, that's it! I wish I could find the story, but so far, no luck. Old trade magazines from the pre-Innerwebs era are mostly not online - they're in a never-never land of "stale nonfiction," which doesn't get archived. A few years back, I wanted to get some back issues of Digital Review, a DEC magazine, which was my first job out of college. It took years of searching -- I finally found some on eBay, courtesy of a librarian who was unloading the contents of a library from a Catholic school that was closing. Unix Review probably met the same fate.

Nice, if bleak, overview. A possible analogy here with Covid-19: if the "infection" lingers long and large in the "population," new variants will make things even harder to manage.

Another question is, how to profit from a vague, serious, somewhat likely digital security crisis?

author

I wish we made it profitable to help *fix* it before it got worse.

Just saw this on twitter:

"Crazy idea you all will hate for solving ransomware:

Make ransomware legal except you can only charge individuals $100 and organizations $5000.

Legality induces entry into the ransomware industry so that we quickly find all the obvious security holes without catastrophic cost."

https://twitter.com/elidourado/status/1392984926300123140

In jest, of course, but hey, sometimes these humorous sorts of things can lead to something useful, idk.

Yes, of course, I don't mean to be opportunistic and cynical. But if the comparison to the pandemic is apt, one would bet on a serious incident occurring at some point and causing a fair amount of panic. Impossible to know what the market reaction would be, but it's hard not to wonder.

author

Agree!

The pessimist in me says such an incident would be another opportunity for tech corporations to consolidate even more power, as they'll have the resources and sway to be the savior.

Actually, what have been the consequences of Microsoft's Exchange vulnerabilities?

Something to look into...

Maybe, I'm not sure though. I've been wondering if it would push even more money into crypto.

This is quite unfair to the errors of the pandemic. It did not require massive investments upfront to be prepared to massively test, trace, and isolate. And research in new vaccine technology HAD been going on. The failures all came after the first cases appeared and it was clear that there was aerosol transmission via asymptomatic individuals.

Re cyber attacks: too few law enforcement and counterchecking resources have been directed at perpetrators.

But seeing how much effort it takes to get Congress to appropriate funds for popular programs, how much harder will it be to get it to spend on nerdy future harm avoidance?
