A move, some news, and a new essay on privacy and technology in a post-Roe America
I've changed institutions! And more!
A few quick updates. I have been granted tenure at Columbia University and decided to accept the job, which means I am leaving the University of North Carolina. This is bittersweet: my experience at UNC was largely positive, and my colleagues at the School of Information and Library Science, as well as at the Center for Information, Technology, and Public Life (CITAP), have been wonderful.
I was also a Pulitzer finalist this year! That one was a surprise. I’m genuinely thrilled—it’s the first year I was ever nominated.
I’ve been doing some of my usual writing (which can be found here) on many aspects of the pandemic, but I’ve also just written an essay about the many ways in which technology, privacy and reproductive rights will continue to intersect—especially in a post-Roe America.
This new piece is centered on a basic insight, now more than a century old: new technology requires updating our laws, regulations and norms just to keep our foundational rights to liberty and privacy in place.
Over 130 years ago, a young lawyer saw an amazing new gadget and had a revolutionary vision — technology can threaten our privacy.
“Recent inventions and business methods call attention to the next step which must be taken for the protection of the person,” wrote the lawyer, Louis Brandeis, warning that laws needed to keep up with technology and new means of surveillance, or Americans would lose their “right to be left alone.”
Decades later the right to privacy discussed in that 1890 law review article and Brandeis’s opinions as a Supreme Court justice, especially in the context of new technology, would be cited as a foundational principle of the constitutional protections for many rights, including contraception, same-sex intimacy and abortion.
Now the Supreme Court seems poised to rule that there is no constitutional protection for the right to abortion. Surveillance made possible by minimally regulated digital technologies could help law enforcement track down women who might seek abortions and medical providers who perform them in places where it would become criminalized. Women are urging one another to delete phone apps like period trackers that can indicate they are pregnant.
But frantic individual efforts to swat away digital intrusions will do too little. What’s needed, for all Americans, is a full legal and political reckoning with the reckless manner in which digital technology has been allowed to invade our lives. The collection, use and manipulation of electronic data must finally be regulated and severely limited. Only then can we comfortably enjoy all the good that can come from these technologies.
I found the graphic by Ard Su that accompanied the piece to be quite striking:
One particular reason I wrote a lengthy, detailed piece is that I don’t think most people are aware of how much new technologies can do—and that none of the suggested remedies, like not using certain products, leaving phones behind, requiring anonymized datasets (they can often be de-anonymized to identify individuals!), or offering opt-out options, work well in practice. Further, this kind of surveillance isn’t just available to law enforcement: vigilantes, too, may well start hunting down women in states where abortion may soon be criminalized.
After the Supreme Court’s draft opinion that could overturn Roe was leaked, the Motherboard reporter Joseph Cox paid a company $160 to get a week’s worth of aggregate data on people who visited more than 600 Planned Parenthood facilities around the country. This data included where they came from, how long they remained and where they went afterward. The company got this location data from ordinary apps in people’s phones. Such data is also collected from the phones themselves and by cellphone carriers.
That this was aggregated, bulk data — without names attached — should be of no comfort. Researchers have repeatedly shown that even in such data sets, it is often possible to pinpoint a person’s identity — deanonymizing data — by triangulating information from different sources, like, say, matching location data on someone’s commute from home to work, or their purchases in stores. This also helps evade legal privacy protections that apply only to “personally identifiable information” — records explicitly containing identifiers like names or Social Security numbers.
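The triangulation described above can be illustrated with a toy sketch. Everything here is hypothetical—the device IDs, cell locations, and side information are made up—but it shows the core of a linkage attack: an "anonymized" dataset keyed by random identifiers can be re-identified by matching attributes, like a home/work location pair, known from another source.

```python
# Hypothetical "anonymized" location dataset: random device IDs mapped to
# each device's most frequent nighttime cell (home) and daytime cell (work).
anonymized_devices = {
    "d41f": ("cell_17", "cell_42"),
    "9bc2": ("cell_03", "cell_42"),
    "77aa": ("cell_17", "cell_99"),
}

# Side information about one specific person, obtained elsewhere
# (e.g. public records, social media, or a purchased marketing profile).
known_home, known_work = "cell_17", "cell_42"

# Linkage: keep only devices whose home AND work cells both match.
matches = [
    device
    for device, (home, work) in anonymized_devices.items()
    if home == known_home and work == known_work
]
print(matches)  # only one device satisfies both constraints: ['d41f']
```

No name ever appears in the dataset, yet the intersection of just two attributes already singles out one device—which is why "no personally identifiable information" offers little real protection.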
For example, it was recently revealed that Grindr, the world’s most popular gay dating app, was selling data about its users. A priest resigned after the Catholic publication The Pillar deanonymized his data, identified him and then outed him by tracking his visits to gay bars and a bathhouse.
Phone companies were caught selling their customers’ real-time location data, and it reportedly ended up in the hands of bounty hunters and stalkers.
I think, in particular, that the growing powers of machine learning, which can infer conclusions that aren’t explicitly in the data, are underestimated and not very well understood.
For example, algorithmic interpretations of Instagram posts can effectively predict a person’s future depressive episodes — performing better than humans assessing the same posts. Similar results have been found for predicting future manic episodes and detecting suicidal ideation, among many other examples. Such predictive systems are already in widespread use, including for hiring, sales, political targeting, education, medicine and more.
Given the many changes pregnancy engenders even before women know about it, in everything from sleep patterns to diet to fatigue to mood, it’s not surprising that an algorithm might detect which women were likely to be pregnant. (Such lists are already collected and traded.) That’s data that could be purchased by law enforcement agencies or activists intent on tracking possible abortions.
Many such algorithmic inferences are statistical, not necessarily individual, but they can narrow down the list of, well, suspects.
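A rough back-of-the-envelope sketch shows how this narrowing works. All the numbers below are invented assumptions, not real figures: even a predictor that would be unreliable as individual proof can, combined with one more independent signal, shrink a huge population to a short list.

```python
# Assumed, illustrative numbers for a population being scored by a predictor.
population = 1_000_000
base_rate = 0.01            # assumed fraction of people actually in the target group
true_positive_rate = 0.80   # assumed sensitivity of the algorithmic predictor
false_positive_rate = 0.05  # assumed false-alarm rate on everyone else

# People the predictor flags: true positives plus false positives.
flagged = (population * base_rate * true_positive_rate
           + population * (1 - base_rate) * false_positive_rate)
print(int(flagged))  # 57500 flagged out of 1,000,000

# Intersect with a second, independent filter (say, a location criterion
# that only 1 in 100 people satisfy) and the list shrinks again.
after_filter = flagged * 0.01
print(int(after_filter))  # 575 remain
```

The flagged list is mostly false positives, so it proves nothing about any individual—but as a pool of "suspects" to investigate further, going from a million people to a few hundred is exactly the kind of narrowing the essay warns about.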
How does it work? Even the researchers don’t really know; they call it a black box. How could it be regulated? Since it’s so different, it would need new thinking. As yet, few if any laws regulate most of these novel advances, even though they are as consequential to our Fourth Amendment rights as telephones and wiretaps were.
As usual, I remain an optimist that solutions exist and are achievable. I’m less optimistic that we will do the political and technical work necessary to make them a reality, but at least we can try.