39 Comments

I was wondering about the "heckler's veto" and "information glut" -- maybe part of the problem is the sheer velocity of discourse, and maybe we need a way to "slow down" and add some "friction" in the social media realm. The analogy I'm thinking of is high-frequency trading (HFT) on Wall Street -- firms were literally buying space closer to the NYSE servers so they could shave microseconds off their "latency" and increase their profits in a way that has no real economic rationale and that adds to market volatility. In finance, even a minuscule financial transaction tax would reduce HFT substantially... Is there some way to add a "virality tax" that could act as a sort of circuit-breaker on the (destructive) business model for social media? Would that help?

Friction, in my view, always helps. It helps people get a handle on what's going on. For the longest time, most design was geared toward removing friction. We finally started seeing some friction-design with WhatsApp, I think (limiting forwarding, etc.). The idea that we always have to be faster isn't really true, because the value of being fast is relative, as you point out -- you need to buy space closer to the NYSE only if others are doing it. It's queuing rather than speed. So I totally agree, friction would be in the solution space.

What grounds are there to believe that the American public, working at the federal level, can in fact make better decisions than the tech executives on these questions-- which as you say are hard questions, new to most people and requiring unusual domain knowledge? The pitfalls of leaving things to Mark Zuckerberg are apparent and you're right to call them out. But "getting our act together as a society" may not actually be a feasible alternative in this case, and without that, the decisions political leaders would make on these issues and the laws they would pass are likely to be even worse than unaccountable Zuckerbergian rule.

Rational civic deliberation on novel, nuanced issues like these happens at the municipal level in the US sometimes-- perhaps occasionally even at the state level. It happens at the national level in other countries much smaller and less polarized than ours. There is no reason to believe that it can be made to happen anytime soon at the US federal level, regardless of the results of the next few elections. Given that reality, for all the real problems of leaving referee authority with tech executives, I'm inclined to believe it is the least bad alternative on offer.

It seems pretty important that the *kinds of decisions* that would be available to the public would be different. Whether or not you think the government would do a "better" or "worse" job, what they nearly by definition couldn't do is arbitrarily change ad policy the week before an election, or capriciously silence someone within hours of a controversy, or etc etc. The friction of a democracy is part of the point, sometimes!

Yep. And many times, accountability or a correction that comes too late is irrelevant. The ship will long have sailed.

In theory this should be true. In practice, regulators make arbitrary and capricious decisions all the time! Think of all the discretion various federal agencies have lately exercised about when to enforce laws and when not to, many of which have rightly been reported as evidence of corruption. Some of those discretionary decisions are probably illegal, but most are likely within their legal authority, even if they cut against the spirit of the relevant laws.

Whether you are a public or private governance institution, you can write a rule that looks like a bright line, but if it's in a domain where judgment calls are inevitably required, there's going to be scope for arbitrary and capricious behavior. And I don't see how you moderate online platforms without lots of tough judgment calls. There is at least one possible rule that really is a bright line, namely "no moderation," but to put it mildly it is not clear that would be an improvement.

Oh yeah, I'm sure a regulatory body would get it wrong a lot too. I'm just saying that I like my wrong decisions to be slow and part of the public record, not made instantaneously because Jack fasted for three days and listened to a cat made of eyeballs. If someone snows a regulator in a hearing, we can at least look up who they are and who's funding them, but eyeball cat is completely unaccountable.

Judgment calls are unavoidable, but there's no reason we shouldn't get to know who made the call, and why, and who was pressuring them. Which is why decisions that will instantly and dramatically impact millions of people's lives really ought to be made public with enough lead time that any swarm of op-eds about the impact can happen *before the change*, in my opinion.

hahaha but also yeah. Future history of the decline: "CEO woke up in a bad mood, you see..."

Again: in theory, sure. In practice, there are so many ways democratic accountability goes wrong that I actually think eyeball cat is a less bad option.

Also, democratic mechanisms are generally supposed to produce uniform rules, which is a good thing overall, but terrible when applied to such a heterogeneous space. For all the social power Facebook has there are plenty of us who aren't on it, and who go to other online spaces in part because they have different policies! There's no reason Facebook, LinkedIn, Ravelry, AO3, etc etc all should have the same rules for moderation, and no way I can see to tailor an FCC rule to each.

Moreover, many of the very best discourse spaces are so good because of the unaccountable, arbitrary, unfettered authority of unusually excellent moderators. I'm commenting here right now because I trust our gracious host to be one such. I comment on forums moderated by Scott Alexander and his acolytes for the same reason. Stripping them of their authority in order to rein in Zuckerberg's strikes me as a bad trade. And if you say "but those are little private spaces not de facto public spaces": good luck drawing *that* line sensibly.

The point is precisely that we CAN draw that line sensibly, simply by pointing at certain websites with enormous reach and saying that they, in particular, need to be controlled. Facebook, YouTube, Twitter, maybe Reddit, are all just categorically different things than this newsletter or the SSC comments section or whatever. It does us no good to put on our free market hats and say that all of these social media sites should be treated the same because any one of them theoretically *could* become a place where millions upon millions of advertising dollars churn in bad-faith attempts at disinformation, when we can just look at the really quite small list of sites that do work like that and say "okay, you all have different rules."

If they want to respond by artificially reducing their reach before whatever breakpoint ends up being set by such an act -- great, they handled the monopoly-busting for us! But saying that we can't tailor a separate FCC rule to Facebook vs. AO3 seems like an extraordinarily premature and cowardly concession when this market is so heavily dominated by a few players that are constantly taking an ever-higher percentage of the pie from everyone else. How is making a separate rule for the biggest players any harder than a progressive tax code, or heightened disclosure requirements for large corporations? And if companies artificially break themselves apart to game a threshold, isn't that just an unalloyed good, increasing the diversity of players and lowering the brittleness and impact of any one particular platform?

Tim Wu's "The Curse of Bigness" is part of what's informing my perspective here. We have gobs of evidence by now that a complete lack of regulation tends to lead to monopolies/duopolies, not a free and competitive market. I want a reinflation of the small-to-medium-scale independent web very badly -- it's why I'm here too! But I don't think we get there by taking it as a prima facie position that Facebook is just another website in the market of websites that needs to be treated like everybody else. Can Scott Alexander influence genocides in Southeast Asia by his action or lack thereof? No, he cannot, so he doesn't need the same oversight Mark Zuckerberg, who can and did, needs. And maybe pinning down this *exact* threshold is tricky, but it's very easy to show that the vast majority of all websites don't meet such a definition, and very easy to show there are a few that do. The fact that nebulous reality doesn't map cleanly onto discrete categories shouldn't stop us from acknowledging that the landscape is incredibly tail-dominated and it's the tails that are the problem.

So this argument has the same problem as "curse of bigness" arguments generally: namely, the government you want to empower against monopolistic companies has the same problems they do, only more so. You're trying to restrain one kind of badness by unleashing an even worse one. The fact that you and I can correctly judge that FB and SSC are qualitatively different in their social influence does not imply that a federal regulatory agency-- which is doomed by its institutional structure and incentives to be immensely stupider and more corruptible than either of us-- can be trusted to come up with a general rule which correctly treats both of them and all the myriad people in between. The problem is perhaps most comparable to that of deciding when a bank is "systemically important" to the financial system; and this is a problem which bank regulators, who are much more experienced, knowledgeable, and thoughtful than any tech regulators are going to be for the foreseeable future, have done a poor job of solving.

(Also: googling "eyeball cat")

May refer to https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html

Shorter: image recognizers are tuned for classes of images ("dog", "cat", "screwdriver"). You can feed a dog recognizer, say, a picture of a cat, ask it "what's not doglike here?" (what is the error?), and remove some of that "error" from the picture. Repeat the process until the recognizer says "that's a dog!" This tends to put dog eyeballs on your cat picture, because the recognizer "likes" dog eyeballs.
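That loop is essentially gradient ascent on the classifier's score for the target class. A minimal sketch with a toy linear "recognizer" (the weights are invented purely for illustration; real DeepDream uses a deep convolutional network plus image regularization, but the mechanism is the same):

```python
import numpy as np

# Toy "recognizer": a linear classifier over 4-pixel images.
# Row 0 scores "cat", row 1 scores "dog". Weights are made up.
W = np.array([[ 1.0, -0.5,  0.2, -1.0],   # cat detector
              [-0.8,  1.2, -0.3,  0.9]])  # dog detector

def classify(img):
    return ["cat", "dog"][np.argmax(W @ img)]

# Start from a "cat" image and repeatedly remove the "error"
# relative to dog-ness: gradient ascent on the dog score.
img = np.array([1.0, 0.0, 1.0, 0.0])
print(classify(img))            # cat

for _ in range(100):
    grad = W[1]                 # d(dog_score)/d(img) for a linear model
    img += 0.1 * grad           # nudge the pixels toward "doglike"

print(classify(img))            # dog: the picture has grown "dog features"
```

With a deep network the gradient isn't constant like `W[1]`, so the "dog features" that emerge are textured things like eyeballs and snouts rather than a uniform pixel shift.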

So did Dorsey see a cat eyeball image and it told him how to monitor Twitter posts? I'm just fascinated by the thought of what feeds into these decisions.

I've become somewhat of a fan of Elinor Ostrom's economic ideas centered around commons systems. What I like about her theory (and research in actual communities) is the focus on nesting -- decision control kept at the local community level, but nested inside larger systems with a need to reconcile them. From an ecological perspective it makes a lot of sense, since my own community can protect its water sources, for example, all it wants only to have them ruined by manufacturing or extraction upstream.

That's more on the philosophy side, but I have thought about how her work, or Kate Raworth's Doughnut Economics, would apply to the problems of tech and online platforms. The problem with the tech executives making these decisions is that their values aren't necessarily society's values (whatever we mean by that), and running a corporation obligates them to think of shareholder value first and societal harm second. On the other hand, the problem of regulatory capture is also very real, as demonstrated by U.S. forest policy and the like. If government is to make these choices (which would be my preference, as I still believe the values promoted by democratic participation are more beneficial than those promoted by an unfettered free market), at what level would it be most effective?

I am no fan of excessive government control, and how to make the elected legislators representative and responsive is a big part of the challenge. We have a slow and corrupt gerontocracy running a good part of the show. So I acknowledge that part! But again, we have to get back to some sense of democratic legitimacy because otherwise, we're toast.

I agree 100%. No matter how bad things are (love the word "gerontocracy"), I keep bumping up against the reality that if the human species can't grow up, and fast, then there won't be much left to argue about anyway.

In general, nesting does seem promising for making democratic governance work better-- which in the US context would traditionally mean spending more time on municipal/state governance and less time on federal. Hard to see how to apply that to a type of industry where the transcendence of local geographies is pretty much the point, though. Doesn't seem workable to have different sets of online moderation rules for San Francisco and Missoula and on and on.

But maybe that's where creativity would be fruitful. That is, we could ask: is there a notion of online "locality" that would balance voice and exit and allow democratic input into rules where the relevant polity is sufficiently "local" in scale and commonality of interest to do deliberation well, but doesn't have to be geographically concentrated? Man, that is totally a conversation we could have had a lot of fun with in the heady days of 1995. :)

Considering someone had to tell me what a website was that year and I barely knew how to use email, I'm not sure it would have been fruitful as applied to tech! But it would have been good to think about then as something to be applied beyond land-based geography or a particular political system.

Maybe it doesn't have to be geography, though. Maybe the geography is online and relates more to the *type* of discourse and what its effects might be in the offline world. Which would require far deeper examination of the offline effects on people of hate speech, etc.

I mean — hey there, did we go to college together?

Yes, that's me. :) Hi Nia!

Hey Nick! This is fun—we can continue conversations we had 25 years ago!

Oh wow, this is the best!

Kudos to Zeynep for creating a community space like this! Nick and I once had conversations around political philosophy, with differing viewpoints but mutual respect :)

As for the "no new ads" embargo (on Facebook, we match 89% of DEM and 92% of REP voters -- so this platform is ludicrously influential): there are so many shenanigans one can play with ads discouraging turnout that I think the public good is served by not allowing any new ads. FB will also turn ALL paid political ads off on Nov 4th. I think they're anticipating post-election interference there: ads announcing wins, inaugurations, ballot shenanigans. Thank you for saying we need to tell the legislators what to do with this stuff. Your average pol (my clients)... they barely understand the difference between an email and a text message. I'm not joking. We must do better. Thank you for this piece.

That's an amazing number. I mean, I know these things but still it's an amazing number to wrap one's mind around.

Some people enjoy our lucha libre political spectacle for its comic aspects, and some, like Nate Silver, enjoy it because a contest — any contest— interests them more than what’s actually being contested. Others, like Mitch McConnell, enjoy it because they specialize in profiting from chaos and misdirection. The rest of us, despite being deafened by the present din and idiocy, know full well that if apocalypse is coming, we’re gonna have to handle it ourselves.

Excellent statement of the problem. I understand our propensity to rush to possible answers, but I think our current situation is a complex predicament, not a complicated problem, and there will therefore be no simple answers. This is why I think we need to start having deep, probing conversations with a cross-section of informed and diverse people if we hope to surface and evolve approaches that will do more than paper over the predicament until it pops out somewhere else. I've suggested the approaches of David Bohm and Daniel Schmachtenberger for such conversations, and IMO the sooner we start the better. The concept of Citizens' Assemblies, being touted by XR and others, is also along this line. Zeynep, your idea of making these giants into ad-free public utilities owned cooperatively by $20/year subscribers is a fascinating one, and such an assembly might start by probing if/how that might work, and come to be.

Thank you. I agree that there will be no simple answers. Plus, there might not even be quick right answers. Legislation tends to focus on "getting it right" the first time because our legislative process is so broken that we can hardly fix things later. But tech companies "iterate" all the time, because that's how you can figure things out. It's really hard, but fixing all this requires fixing both how we legislate (making it more responsive, more iterative, and accepting that we sometimes need to try-and-see) and the rest at the same time. It sounds daunting, but I always think about the post-World War II period. If we can get out of that...

Did you see that new initiative Schmachtenberger and friends are working on? I'm wary of all the thought leaders running around on stages talking about all the answers they have for Humanity 2.0, but Schmachtenberger admitted straight out that they're essentially doing what good education and good journalism should be doing, and he seems to at least be looking at things at a granular level in a way that similar thinkers aren't.

What do you make of the House report's allegations of meaningful and traditional monopolistic activities (e.g., buying potential competitors and scuttling them)? It feels like that's a valid piece of the puzzle, although I'm not super convinced that would actually fix the issue.

I do think that a lot of public outrage about disinfo campaigns tries too hard to be "nonpartisan", completely ignoring that the platforms allow it because of threats from conservative legislators. From what I've read elsewhere, as a % of profits political ads aren't terribly consequential.

The part I worry about is that the overly-concentrated power is real, but it's linked to all the other issues. If you break them up without addressing the rest, we just go back to the conditions that produced them (network effects etc.) while unleashing vicious competition that may undo what restraint and sensible steps they have taken.

Have you seen Mike Masnick's article from August 2019, published on knightcolumbia.org, called "Protocols not Platforms"? I would love to hear your thoughts on it.

To go back to one of Zeynep's original points, I agree about the debates. My English in-laws were surprised when they were visiting once during the early Obama years and I wasn't interested in watching a debate (or it might have been a State of the Union). These have been stages for aimless talking points for so long it seems pointless to spend the time. It took my spouse a while longer to agree, though he still wants to watch the debates (this is also his first presidential election as a citizen, so he might have more of a vested interest now). It seems such a core problem to get people running for office to talk honestly about what they can or will do and why in a way that has any meaning.

An idea to try to fix YouTube/Facebook/Twitter is to change the law so that if an online platform promotes any content, and that content is paid for (ads, or paid for by the producer rather than the publisher in any other way), then the platform takes 100% responsibility for that content. It could be sued for copyright infringement, slander, etc., with no DMCA safe harbor. The idea is that the RIAA/MPAA would enforce the rule for us, since copyright infringement is so expensive. Facebook would then be able to run ads around content it moderates, and the crazy stuff wouldn't have ads (and so wouldn't be promoted).

Promoting content would be any sort of non-public prioritization (e.g., personalized), and strictly-by-date ordering would be exempted. I'm thinking sites like slashdot/fark.com and possibly reddit would be exempt, even if they show ads, since they are not promoting individual stories and they don't often push people down rabbit holes. But Twitter would be affected -- they would need to go back to a simpler timeline implementation. And YouTube and Facebook would have to change their business models. Sites with comments would simply need to move comments to a page with no ads. And if you subscribe to a site and it doesn't show ads, it wouldn't be affected at all.
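The proposed rule boils down to a small decision procedure. A hypothetical sketch, just to make the logic concrete (function and parameter names are invented; an actual legal test would of course be messier):

```python
def platform_liable(content_is_paid: bool, ordering: str) -> bool:
    """Sketch of the commenter's proposed liability rule (not actual law).

    A platform takes full legal responsibility (no DMCA safe harbor) for
    content it both promotes and is paid around. "Promoting" means any
    non-public prioritization; a strict by-date timeline is exempt.
    """
    promoted = ordering != "by_date"
    return promoted and content_is_paid

# A personalized feed serving ads around the content: liable.
print(platform_liable(True, "personalized"))   # True
# A strictly chronological timeline, even with ads: exempt.
print(platform_liable(True, "by_date"))        # False
# A personalized but ad-free subscription site: exempt.
print(platform_liable(False, "personalized"))  # False
```

Under this sketch the business-model pressure comes from the first branch: a platform can keep algorithmic promotion or keep ads around unmoderated content, but not both.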

I'm not certain about that, but I think this gets at an important distinction: speech vs. amplification. It's not just paid ads; amplifying content is distinctly different from just publishing it.

Section 230 and the DMCA allow individuals to post crazy stuff to get attention, with the platforms bearing no responsibility. With advertising, the platforms are incentivized to get people mad and arguing (to keep them engaged on the site and watching ads) about the crazy information individuals are posting. It's not the bad information itself that is necessarily the problem (Mein Kampf sitting on a bookshelf in a library isn't causing much damage); it's the incentives that now exist, where platforms and posters can make (some) money by burning down the world by spreading bad information. It took the market a little while to figure it out, but under current law, Facebook and YouTube are profit-maximizing when they push stuff like terrorist content and QAnon conspiracies (i.e., down the rabbit hole). Some pushback has gotten the terrorism stuff partly fixed, but that's not a general fix. I don't think we will ever be able to solve this problem by moderating the content itself, since the platforms are incentivized to find loopholes.
