It’s time for one of those posts where a few different ideas have coalesced into something for me. It’s been a while since I’ve written here–I’d originally intended to post an angry rejoinder to Steven Pinker’s infotisement in the Chronicle, but in the process of writing it, I managed to get him out of my system. There’s still some material there that I want to post, but it’s not what I’ve been thinking about lately.

The first thread I want to collect comes from several weeks back, an essay that I happened across, probably on Facebook. Dorothy Kim, over at Model View Culture, has a great piece about the ethics of social media research, particularly (but not exclusively) when it comes to questions of race. Kim draws provocative connections between recent research and “[t]he scientific and academic history of disregarding rights and ethics in relation to the bodies of minorities and especially women of color,” connections that cannot simply be waved away with recourse to the assumption that “Twitter is public.” I strongly recommend Kim’s essay–to be honest, I began reading it defensively, because as someone who doesn’t do a lot of qualitative or experimental research, my understanding of the public nature of Twitter was pretty uninformed. I’m not going to summarize her essay fully here, because I want to connect it to a couple of other things, but Kim persuaded me to take another look at my assumptions.

Kim’s argument relies on what I still find to be a pretty big “if”:

If one imagines that digital bodies are extensions of real bodies, then every person’s tweets are part of that “public” body. By harvesting digital data without consent, collaboration, discussion, or compensation, academics and researchers are digitally replicating what happened to Henrietta Lacks and her biological cells.

Earlier in the essay, she explains that “We know from #RaceSwap experiments that Twitter as a medium views digital bodies as extension of physical bodies,” for example, but there’s an elision here. I don’t quite buy the idea that “Twitter as a medium” does anything of the sort. And yet, in the examples Kim offers, she cites several cases where researchers do make that assumption. Their projects rely on particular identity claims in order to demarcate their object of study. If we rely on the same assumption, Kim argues, then “the affordances of public spaces must be respected”:

People in public spaces are allowed to protest, hang out with friends, watch and participate in public lectures and performance art events. If Twitter is Times Square, then the media, academia, and institutions (government, non-profit, etc.) must respect that Twitter users have a right to talk to their friends on the Twitter plaza without people recording them, taking pictures of them, using their digital bodies as data without permission, consent, conversation, and discussion.

This is an intriguing claim to me, and I’m not sure I have the expertise to evaluate it, but it’s one that has made me think for the past month or so. In passing, Kim mentions the Facebook emotional contagion study, and that functions as a bit of a bridge to one of the other threads I’ve been thinking about. I asked my Digital Writing students this semester to read a couple of chapters from Christian Rudder’s new book Dataclysm, as well as Evan Selinger’s LA Times review of the book. Again, an essay worth reading in its entirety, only a glimmer of which I’ll be able to articulate here.

Selinger is critical of Rudder not for poor writing, nor for reaching the wrong conclusions from the data he collects, but rather for the cavalier way that issues of privacy are ignored:

Rudder presumes that if information is shared publicly in one context, there’s no privacy interest in it being shared publicly in any other. But as law professor Woodrow Hartzog and I argue, information exists on a continuum: on one end lies information we want to disseminate to the world; on the other end is information we want to keep absolutely secret. In between is a vast middle ground of disclosures that we only want to share with selective people, our “private publics.” As a matter of both etiquette and ethics, it’s important to consider whether someone would have good reason to be upset with us moving information from one place to another, where it can be more easily seen by new audiences. Privacy scholar Helen Nissenbaum defines this as an issue of “contextual integrity.”

This idea of contextual integrity was what triggered this connection for me with Kim’s piece, reminding me that I’d already been thinking about this question. Selinger remarks earlier in his essay that Rudder’s data strikes him as a case of “an analyst’s deliberate decision to abstract data away from its intended context, and put it to what privacy scholars call ‘secondary use.’” The idea of requiring researchers or corporations to secure informed consent for secondary use is almost certainly an uphill battle, if we think about the opacity of Terms of Service agreements and the “commonsense” rejoinders that “you have nothing to worry about if you have nothing to hide” or that “if you don’t want FB to sell your data, don’t use FB.”

Part of Selinger’s point is that the choice is rarely that simple. There’s the fact that social media (and technology more broadly) aren’t simply an extraneous, optional layer laid across an otherwise perfectly functional world. That ship has sailed. Regardless of where we choose to locate agency in our description, our social and professional worlds change as our means of mediating them do. Given the potential costs of “simply” opting out, according to Selinger, “it becomes untenable to construe our options as a matter of individuals needing to make smart choices that weigh costs and benefits. The deck is stacked, and many of the available choices exhibit anti-privacy bias.”

I found myself thinking about the question of contextual integrity this past week, when I learned about the arrangement that ProQuest has with Turnitin, which is a couple of years old now, a relationship that benefits ProQuest (I assume) through ongoing payments and Turnitin through the aggregate of texts it gains access to. One can opt out of this “partnership.” But one cannot simply opt out of ProQuest–most institutions’ arrangements with them predate the establishment of institutional repositories. It’s a required step on the checklist at Syracuse for completing degree requirements, complete with the handy reminder to “have your credit card handy so that you can pay Proquest online.”

It may be that most people wouldn’t care about their work being used on Turnitin, but this is a secondary use enabled by that anti-privacy bias, one on which ProQuest is apparently double-dipping by requiring scholars to pay them for the privilege of having their work sold to a third party. That third-party usage has been determined to be “fair use,” so copyright isn’t violated, and yet there’s something about it that makes me uncomfortable. Part of it, I think, is that it’s a violation of trust, however mild, and that’s a theme that runs across all of these examples. Kim offers several high-profile examples of the ways that the academy has violated the trust of certain populations, such that researchers shouldn’t reasonably expect to be trusted without explicit consent. The idea that FB or dating sites are filtered doesn’t negate the good-faith assumption on the part of their users that this filter operates consistently–we shouldn’t have to opt out of manipulation or surveillance.

Part of my interest here is that it’s a trust erected upon a false foundation: a particular understanding of the relationship between public and private, and of the relationship among different media substrates, that no longer holds. Or, at least, it’s one that’s changing rapidly. ProQuest has a monopoly that’s based upon a much different media landscape than our current one, and they’re exploiting that. The difference in degree may be substantial when we compare these cases to the way that privacy and IP laws lagged behind genomic research in the case of Henrietta Lacks–the latter strikes me as deeply wrong in a way that these other examples do not–but I’m not sure that it’s a difference in kind.

I’m still thinking through these issues–my position on them is by no means settled. But I had a conversation yesterday where we talked a bit about technology, ethics, and networks, and it got me to thinking about an ethics of friction. Selinger argues that

This is Silicon Valley dogma: friction is bad because it slows people down and generates opportunity costs that prevent us from doing the things we really care about; minimizing friction is good because it closes the gap between intending to do something and actually doing it.

I think that this is a little reductive. That is, it’s true in a very general sense, particularly when it comes to things like ownership of data, but I think that there are examples of frictional decisions (e.g., FB’s “real name” policy, or Twitter’s decision to require mobile phone numbers in order to use its developer tools) that make friction a more complex issue than a simple yes or no. But I take Selinger’s broader point, and I think it connects with Kim’s essay as well. I do think there are ethical questions that need to be raised, particularly when our old sources of friction (like informed consent regulations) are slow to keep pace with contemporary sites and methods. I’m just not sure myself how I’d answer those questions.