Why I changed my mind on modeling electoral forecasts
As a computational modeler who specializes in acknowledging and quantifying "uncertainty" (I even recently published a Bayesian COVID model for US states), I have actually been wondering lately about the ethical implications of the entire modeling/prediction/uncertainty quantification endeavor. Part of my dilemma stems from the long-standing tactic of special interests, ranging from tobacco to the chemical industry to the fossil fuel industry, of "playing up" the uncertainty -- i.e., manufacturing "doubt" -- and wondering whether I've become unwittingly complicit. Now, as you mention, election predictions that predict a "sure thing" can actually have the opposite effect and thereby destroy their own prediction... it's like a perverse reversal of a Greek tragedy: there, trying to avoid a prediction ends up causing it; here, making the prediction can actually undo it... It seems you are suggesting that before embarking on a modeling or prediction exercise, we need to consider not only how to effectively communicate the results (which has improved over the years), but what kind of "feedback" making the predictions will have on the predictions themselves? Kind of like the quantum mechanics idea that you can't measure something without disturbing the system? But couldn't you try to model that too (just like they do in quantum mechanics)?
Don't feel too bad about getting it wrong a decade ago, you were a young, smart technophile, and the models were a shiny, smart object. I've always had a skeptical disdain for those models (yeah, the stupid NYT needle was the worst), but then again, I'm not a technophile.
The most informative polls, for me, are those that have a robust sample size (N) and report a margin of error (+/-).
I like your concept of not being a spectator or a gambler, but an acting participant. This year, I'm a little of all three. The news as of Friday morning that Harris County, TX early voting had already exceeded all voting in 2016 seemed more significant than any close poll could be. If a low-turnout state was now going to turn out big, the data in any poll matters less than the poll's assumptions about who a likely voter is.
Hearing that, I put down a friendly, and maybe aspirational, $5 bet on Texas going blue.
Great piece. I'm curious why you here take the pessimistic stance that you "don’t think the general public understands what election forecasts are, what they are not, and what they cannot ever be" and are "not sure there’s a way to overcome this problem". One of the things I've always appreciated about your work is how you denounce political cynicism, and insist on trusting the public to be able to handle complex information, when that information is presented accurately.
I'm not saying I don't share your pessimism here, but I'm curious why you don't see the misinterpretation of election forecasts as more of a media / communication problem, rather than a public ignorance problem.
There are so many important points in what you've written. Your observations about the horse race-like coverage resonate with two other phenomena I believe you've commented on: the mistaken sense Americans now have that watching (like at a horse track) -- and posting about -- something is a kind of activism or involvement (it is not), and the reality that AI further drives this kind of attention (refresh-refresh-refresh -- more eyeballs on the page, more monetization for the platforms, more anxiety for the readers).

I'd add -- and again, I'm sure you've said this -- that humans have great difficulty understanding anything that involves math (statistics, probabilities, effects like exponential growth), and this is made far worse by reporting that, instead of helping contextualize the numbers, oversimplifies or misleads about what the numbers mean (or can mean). My understanding of what works in framing communications to help people think more productively about issues is that numbers and data need to be used very carefully in order to aid productive thinking. Leading with numbers is very ineffective at priming productive thinking -- yet social media is not designed to aid thinking, but rather to produce strong reactions (anger, fear).

I am not at all interested in the polling on the election. After doing all I can, including voting, I am taking a Schrödinger's cat approach as a thought experiment for these last few days (Biden has both won and lost until we open the box), and no good comes from distorting my sense of reality by scrolling, refreshing, and triggering amygdala hijacks through this useless activity. Once again you're saying what needs saying, and I hope your piece gets significant attention, even as it competes with more 'race to the bottom of the brainstem' articles and posts.
For me, polls and forecasts are not so much about excitement as about reassurance and the lessening of anxiety. As individual voters, especially ones who do not live in swing states, we can feel powerless to affect the outcome and feel awash in the broader currents of news and culture. Yes, we can give money, attend rallies, make phone calls, etc., and all of that can help, but we ultimately want reassurance that what we are doing is having an impact. Most of all, in this cycle more than any in my lifetime, we want the nightmare to be over as soon as possible, and it is just hard to wait until November 3 to be able to feed our feelings of hope. FiveThirtyEight and other outlets that report on the progress of the race can give us feelings of hope sooner and more frequently. Of course, they can also plunge us into deeper anxiety and fear, but then we can just ignore them for a while. Liberals have been eager consumers of narratives that predict the end of the Trump era for all sorts of reasons, selfish and altruistic. Poll numbers and model predictions can be the most concrete support for our hopes. Forecasts like Nate Silver's, which are methodologically rigorous, can be the most effective antidote to the daily disappointments and disgust at what the Trump administration has carried out.
Bravo to you for explaining thoughtfully how your mind changed and the things that have brought you to your present understanding of the subject. We need to be humble and aware that our perception of ideas does not get fixed once and for all time. It would be wonderful if your example could become the norm. It is so hard to keep enough of an open mind that one can allow one's own concepts to evolve.
I hear you about how information from models can be misused, but you mention something that is a much bigger problem, which is the horse race mentality of the media. This latter problem, which models try to counterbalance, is the real culprit in misinforming the public (along with both-sides-ism). These problems are even more difficult to solve than the misuse of models, but much, much more important to the future of our democracy.
You write that "If the models and the polls they rely on contain errors and biases, those errors are likely correlated..." and "That makes modeling them even shakier than many other kinds of modeling we do." There is a way for models to account for such things: if a model predicts a 70% probability of outcome A over B, but there is additional unmodeled uncertainty, you shrink your prediction toward 50% (e.g., your final prediction would be 60%, not 70%). That is how you express uncertainty.
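The shrinkage idea above can be sketched in a few lines. This is only an illustration of the commenter's point, not anything from the article; the function name and the interpolation scheme (a simple linear pull toward 50%) are my own assumptions.

```python
def shrink_toward_even(p, extra_uncertainty):
    """Pull a win probability toward 50% to reflect extra, unmodeled
    uncertainty (e.g., correlated polling errors).

    extra_uncertainty = 0.0 leaves p unchanged;
    extra_uncertainty = 1.0 collapses the forecast to a coin flip.
    """
    return 0.5 + (p - 0.5) * (1.0 - extra_uncertainty)

# The example from the comment: a nominal 70% forecast, hedged halfway.
print(shrink_toward_even(0.70, 0.5))  # -> 0.6
```

The choice of a linear pull is the simplest option; a fully Bayesian model would instead widen the posterior over vote shares and recompute the win probability, but the qualitative effect is the same.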
You also write, "I’m saying that we have no way of knowing they are right or wrong, even after the election happens." There are ways to know if a model is "right" or "wrong": you check whether the model is properly calibrated, i.e., if a model says something will occur 70% of the time, then across all similar-probability events, you check whether those events happened with a frequency of 0.7. While presidential elections are rare, you can calibrate your model against House of Representatives outcomes, etc.
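A minimal version of that calibration check can be written directly. This is a sketch of the standard binning approach the comment describes, not code from any forecaster; the function name and toy data are hypothetical.

```python
def calibration_table(preds, outcomes, n_bins=10):
    """Group (probability, outcome) pairs into probability bins and compare
    the mean predicted probability in each bin with the observed frequency.

    preds    : forecast win probabilities in [0, 1]
    outcomes : 1 if the predicted event happened, else 0
    Returns a list of (mean_predicted, observed_frequency, count) per
    non-empty bin; a well-calibrated model has the first two close.
    """
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(preds, outcomes):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[i].append((p, y))
    table = []
    for cell in bins:
        if cell:
            mean_p = sum(p for p, _ in cell) / len(cell)
            freq = sum(y for _, y in cell) / len(cell)
            table.append((round(mean_p, 2), round(freq, 2), len(cell)))
    return table

# Toy check: ten 70% calls, seven of which came true -- well calibrated.
preds = [0.7] * 10
outcomes = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
print(calibration_table(preds, outcomes))  # -> [(0.7, 0.7, 10)]
```

With only a handful of presidential elections to score, the bins are too sparse to be meaningful, which is why the comment suggests pooling many House races instead.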
Models can be useful. They just have to be properly understood.
I view election models (and 538 in general) in the lens of anti-positivism in social research (https://en.wikipedia.org/wiki/Positivism#Criticisms). Namely, what gives them confidence that they can create a reductive statistical model about social forces at all? Like actually-good interpretivist research, any such model can suggest what *could* happen, but has very little to do with what *will* happen.
Thanks for the piece — it makes me think that the polling models are a lot like physics models of pool balls on frictionless surfaces: they’re reasonably accurate if the exogenous effects are minimal. However, in close elections the exogenous effects aren’t minimal; in addition, they change not only in each iteration (election) but also in the lead-up to each election.
Take the ex-felon voting situation in FL.
Move 1: referendum allowing ex-felons to vote
Move 2: sorry ex-felons need to pay their fines
Move 3: OK Bloomberg will pay the fines
Move 4: Sorry, no way to determine how much they owe, but if it’s not paid it’s a felony (?, not sure if it’s a felony)
I don’t think that any of the modelers would claim to have a good way of modeling who, if anyone, would vote in this situation. In a close election, in a swing state, that sub population could easily determine the outcome.
I think you’ve written about this kind of thing back when the Obama targeting tools were being touted: “next cycle the opposition party will be using an improved version”
I have believed for many years that, as you point out, "our politics has long been broken," and I have substantial doubts that we as a people and our elected representatives have the will to "unbreak" our politics. Politics is awash in money, and candidates and incumbents spend a significant proportion of their time raising money to be elected or to stay in power. Gerrymandering runs rampant. Campaigns for president last years, and...well, I could go on. I would like to live in a world in which policy discussions were at the forefront of the political process and we could be guided by rational thought. But I am afraid the sound and fury, and reporting on a horse race, have become all-pervading, and my dreams will not be realized. I live in Utah, and my vote for president, even though I cast it, will never count because the Electoral College is winner-take-all. I wonder what it may be like to live in a country with a parliamentary democracy, where election cycles are counted in weeks, not months and years, and presumably, I don't know, the wishes of the majority count in the vote.
When false rigor becomes the selling point, we should be worried indeed; there's a recent piece ( https://www.thenewatlantis.com/publications/put-not-thy-trust-in-nate-silver ) commenting on Jill Lepore's book as well.