Embargoes, BrouhAHAs, and the rhetoric of scale

a little comic about the AHA embargo fiasco

I am not an historian, nor a member of AHA, nor an early-stage scholar, nor a publisher, nor am I responsible for library acquisitions. But then, the same can be said of plenty of folk who have weighed in on the decision by the American Historical Association to release a statement allowing for (and by implication, perhaps, endorsing) the “embargo” of history dissertations. As Rick Anderson notes (in a Scholarly Kitchen post that provides a pretty strong overview), the AHA “smack[ed] the hornet’s nest.” I follow enough Digital Humanities and Open Access inclined historians on Twitter that this statement, and the furor that ensued, registered substantially throughout my feed. And over the past week or so, the discussion has trickled upwards to the usual suspects (and beyond!) and sideways to other disciplines. At least it has to my own, based on listserv discussions and retweets.

And it should spread, because it’s not just an issue for historians. Times for university presses and for academic libraries are tough all over, and that affects every discipline. As someone who routinely advises late-stage graduate students and untenured faculty, I think that the questions raised by the AHA statement are ones that everyone in the humanities should be thinking about, not just members of that particular organization. For a good cross-section of the various positions and issues, my best recommendation is Open History, a project that began as part of the backlash against the AHA statement, but one that I’ll be watching with interest. They’ve got a pretty thorough collection of the responses to date, and a mechanism for adding others (addressing a weakness of the link round-up posts I’ve seen). I’m thinking about using that “issue” in my DH course for next spring, incidentally.

I have a couple of thoughts, neither of which necessarily addresses the core issues at play in the Statement or the responses to it. The first is that, put simply, I think discussions like these reveal their scale-free status, or at least raise the question of scale. As I’ve been sorting through what I want to do with my network rhetorics project, I’ve been returning to some of the terms and ideas that I’ve been taking for granted, and one of those is this idea of scale. We describe random networks as having scale when you can select any node and treat it as roughly representative of the rest of the nodes in the network; in other words, the behavior of a given node “scales up” as typical of the network itself. In a scale-free network, however, you can’t generalize from the behavior of a single node to the network. Easy enough, right?

Whether we treat the system addressed by the AHA Statement as a single network, or see it as a set of overlapping networks of various stakeholders (graduate students, faculty, universities, publishers, libraries, et al.), those networks are scale-free, which makes framing any discussion problematic. Insofar as we can speak of a system here, it’s difficult for me to see any particular node or group of nodes as typical. I don’t think there are many among us in the academy who would be arrogant enough to describe their own writing process, hiring process(es), job history, departmental and college relationships, and publication record as typical of all of their colleagues, whether in academia at large, their own field, or even their own department. Part of how scale-free networks function is iterative–the current state of a given network affects its future development. So, for example, highly trafficked websites are more likely to attract additional traffic–our experience of the web isn’t random (or scaled). Each stakeholder in this conversation has a variety of factors to balance, and the ratio among them is a decision that happens locally, based on circumstance, and that ratio necessarily shifts over time.
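To make the contrast concrete, here is a minimal sketch of my own (not anything from the AHA discussion, and the use of the networkx library is simply my assumption of a convenient tool) comparing a random network with a scale-free one grown by preferential attachment:

```python
# A rough sketch: compare a "scaled" random network with a scale-free one.
# In the random graph, any node's degree is roughly typical of the whole;
# in the scale-free graph, a few hubs dwarf the typical node.
import statistics
import networkx as nx

n = 1000
random_net = nx.erdos_renyi_graph(n, p=0.01)       # scaled: degrees cluster around the mean
scale_free_net = nx.barabasi_albert_graph(n, m=5)  # scale-free: grown by preferential attachment

for name, g in [("random", random_net), ("scale-free", scale_free_net)]:
    degrees = [d for _, d in g.degree()]
    print(f"{name:11s} mean degree {statistics.mean(degrees):6.1f}  "
          f"max degree {max(degrees):4d}")
```

In the random graph the maximum degree stays close to the mean, so sampling one node tells you something about the rest; in the scale-free graph it does not, which is the sense in which no node is “typical.”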

I know that this sounds painfully self-evident (different Xs behave differently!), but you can see a certain amount of incommensurability creep into the discussion. We can’t speak in generalized terms about the policies of academic presses, so we push for specific evidence. But because those specifics don’t scale, they can never function beyond the level of anecdote, and they ultimately make it more difficult to speak in generalized terms. I work in a field where perhaps the best-selling book in its history (Cross-Talk in Comp Theory, now in its 3rd edition) is published not by a university press but by a professional organization; 95% of its contents appeared in print prior to their inclusion in that volume, and the book is almost entirely available for free to anyone who has access to JSTOR. And that tells you exactly nothing about NCTE’s own editorial policies, much less the other presses in our field, much less academic presses in general.

In the absence of representative anecdotes, what can be done? Do arguments like this get to a point where we should just throw up our hands? Not really. I do think, though, that at a certain point, networks of arguments hit a threshold where incommensurability sets in. When that happens, a different kind of argument takes place. And maybe this is my second point: lemons are to lemonade as scale-free arguments are to scale (as in “when life gives you X, you make Y”). While a number of people have responded to the AHA Statement as a particular kind of intervention (endorsing a rapidly deteriorating and increasingly obsolete model of scholarly communication), I wonder if it’s more appropriate to think of it as an attempt to introduce scale into the discussion, to establish some kind of baseline that allows us to say certain things about the typical behavior of that system. Bear with me.

It’s an imperfect analogy, to be sure, but think about professional sports, and the effects achieved through salary caps. It’s still hard to speak of the typical franchise in, say, the NBA (the Celtics and Lakers are not representative, e.g.), but a salary cap mitigates the scale-free quality of the network of franchises, allowing Oklahoma City to compete with Los Angeles, even if the two take very different approaches to team development, talent acquisition, etc. Even though teams may appear to ignore the cap (hello, Brooklyn!), the penalties for doing so are not insubstantial (increased luxury taxes, the inability to sign players for more than the league minimum, etc.), and that kind of spending can’t be maintained indefinitely. The salary cap doesn’t homogenize the league nor guarantee any sort of sustained success for a franchise, but it does level the field in terms of opportunity. There are still strategies and tactics involved, but there’s sufficient scale in the network (we might argue) to allow every team to compete.

Part of the AHA Statement, then, is about scale: all graduates should be able to choose what to do with their dissertations. In this sense, the AHA Statement isn’t so far away from guidelines offered by the MLA regarding the value of digital work:

Institutions and departments should develop written guidelines so that faculty members who create, study, and teach with digital objects; engage in collaborative work; or use technology for pedagogy can be adequately and fairly evaluated and rewarded. The written guidelines should provide clear directions for appointment, reappointment, merit increases, tenure, and promotion and should take into consideration the growing number of resources for evaluating digital scholarship and the creation of born-digital objects.

If your department or discipline falls under the MLA umbrella, then this provides a pretty clear statement about the importance and value of digital work–whether or not every single department follows this Statement to the letter, MLA provides a baseline for the responsible evaluation of tenure/promotion cases that include such work. It doesn’t say, however, that you should not be tenured unless you do such work, or that folks who don’t work digitally are less likely to be tenured. If I were forced to name the difference between these two sets of claims, perhaps I’d call them policy statements and position statements. The MLA here is setting policy, or at least offering a Statement that itself can be adopted and adapted to policy on the local level.

Where the hornet’s nest gets smacked with the AHA Statement, I think, is precisely those moments when it drifts away from policy towards position:

an increasing number of university presses are reluctant to offer a publishing contract to newly minted PhDs whose dissertations have been freely available via online sources….online dissertations that are free and immediately accessible make possible a form of distribution that publishers consider too widespread to make revised publication in book form viable…

I don’t think it’s particularly controversial to note that, if your interest is in crafting policy, the phrase “tangible threat” is not the best choice. :) If nothing else, that phrase alone functions deictically, calling attention to the specific circumstances surrounding the Statement and inviting the polarization that occurred in response. It treats those circumstances not as variable or conditional (which they surely are, if others’ responses are to be trusted), but as given. And it turns the policy into an outgrowth of a position that is conjectural (and which proved to be controversial). I don’t know if I’d go so far as to say that it’s simply a matter of tone, but it’s hard for me to imagine that there would have been nearly as much outcry if the Statement had presented the policy as a necessary update given the shifts in scholarly communication/technology. I think that the case (and the policy) could have been made without implying that OA advocates’ beliefs constitute a (tangible!) threat to their newer colleagues.

If I were to put a bow on this, I think I’m slowly articulating an idea for myself about network rhetorics: this notion of making an argument to scale is something that I find coming up again and again, one that only really emerges with clarity when you think about arguments ecologically or as networked. I don’t quite have the vocabulary for it yet, but most networks don’t occupy either the random or the scale-free end of the spectrum; they’re semi-scaled, maybe, and maybe I’m thinking about scale as a consequence of rhetoric. That may be the next puzzle piece for me as I work on my next book. I wish I had a witty remark to insert here about my failure to embargo this post, but let your imagination run wild…

That’s all.

 

 

Fat-Shaming

[I wasn't sure whether or not I really wanted to share this piece of writing. Perhaps you'll understand both why I was hesitant to do so, and why I have.  -cgb]

If you were on FB or Twitter this weekend, and are associated with academia, you probably caught a glimpse of a tweet from an evolutionary psychologist who suggested that “obese PhD applicants” should save themselves the trouble of applying for doctoral programs, since their obvious lack of willpower will keep them from being able to write a dissertation. I’m not going to link in any way to Geoffrey Miller’s work, but this Jezebel story will tell you most of what you need to know. Miller himself has progressed quickly through the life cycle of denial: he initially defended his statement, then deleted it, then apologized for it, then disavowed it, and finally, when pressed by his university, claimed that it was part of a “research project.” My guess is that Miller has managed to damage himself pretty seriously; it wouldn’t shock me to hear that his home institution will have nothing more to do with him.

Like a lot of people, my first response to that tweet was both outrage and rage. It was a shitty thing to say. The more I thought about it, though, the more layers I found. Some of them were prompted by others’ comments about Miller’s tweet, but I’ve been thinking a lot about my own embodied response as well. If you’ve never met me in person, then one thing you need to understand, first off, is that I’m obese, fat, overweight. That’s not something I talk about much, and I never write about it. I was a big kid growing up–my father played football and rugby, and I inherited his size. When I was a kid, I was pretty active: I played a lot of different sports, except the one (football) for which my body was probably best suited. All through college and into graduate school, I think that a lot of people assumed that I played football. Anyhow, as I got into college and grad school, my life became more sedentary (reading will do that for you), while my eating and exercise habits declined. While you might have charitably described me as “big” in high school, by the time I graduated from college, I was overweight. And that hasn’t really changed.

The odd thing about Miller’s remark isn’t that society treats an excess of body mass as a deficit of willpower or self-discipline; frankly, he’s saying out loud there what plenty of people believe. The odd thing is that he thinks that there’s just one kind of willpower, and that “evidence” of its absence is somehow universal. This was my experience: as a fat academic, I was thrilled to be in a field where (ostensibly) I would be judged for the quality of my mind rather than the “failures” of my body. With blind peer review, no one can see that you’re fat. And so, if I lacked self-discipline when it came to carbs, I could throw all of my effort into writing (I’m doing it right now) and be disciplined there. I wrote my 250-odd page dissertation in less than 3 months, and my lack of willpower regarding exercise and healthy eating had nothing to do with it; if anything, my willingness to focus like a laser on writing, and not worry about my body at all, helped me. If we imagine that willpower, like attention, is a networked phenomenon, spread amongst a variety of objects, then there was/is a sense in which my lack of physical willpower helped to feed my intellectual willpower. I’m sure that it’s not that simple, of course, nor should my experience somehow be generalized to “disprove” Miller’s prejudice. My own experience, more than 20 years in academia, tells me that there’s no formula here–successful academics come in all shapes and sizes. To imagine otherwise, as Miller does, seems to me to be stupid.

When I say that Miller’s just saying out loud what many people already believe, I say this because I believe it too, at least on some level. The thing that folks who aren’t overweight don’t typically understand is that our experience of the world is different from theirs, in a range of ways. I rarely fly, in part because being above average in both height and width means that airplane seats don’t fit. When I was at my heaviest, they were physically painful to wedge myself into. And don’t get me started on the number of comedic scenes and/or commercials about being condemned to sit next to the fat person on the airplane–I feel that shame every time I walk onto one. I don’t fit into smaller cars, and there was a time when, getting a ride from someone else, I had to suck in my stomach to get the seat belt to fasten. For several years, I couldn’t sit at the molded desks in the classrooms where I taught. When I went home for holidays, I had to make sure to sit in chairs without armrests, because again, they were a tight fit at best. And even when I can fit on a chair, it might not be able to support my weight without creaking, or god forbid, breaking. I couldn’t walk past someone on a tight staircase without pressing against the wall. The floors in an old house are always a little more aware of my presence than they are of anyone else’s. The world around me tells me that I’m the wrong shape and size, that I don’t fit. Faced with a constant stream of small indications that there’s something wrong with my size, I am amazed and inspired by those who are better able than I am to accept themselves. If there’s a place where I feel my own lack of willpower, it’s there.

And if there are those among you who doubt the idea of non-human rhetorics, let me introduce you to the suasive force of the clothing industry. When you are the wrong size, as a man, there’s really only one place where you can buy clothes, the big-and-tall store, usually located in a strip mall. For a long time, big-and-tall clothing was constructed according to the principle that there was only one true body shape, and that you were just taller or wider. Even when the clothes “fit,” they often didn’t. The ratios among my various measurements are not the same as those of a “normal” person, and so buying clothes to fit one part of my body often meant ignoring others. “Tall” clothes often assume basketball-player proportions, and thus a smaller waist; but tall dress shirts also often have additional length and an extra button, making them easier for a fat person to keep tucked in, so I often had to choose between shirts that were tight around my midsection and shirts that were much too large for my shoulders and arms. Both options served as a constant reminder that I was malformed, though. And fat people aren’t allowed to care about fashion–“if they really cared about how they looked…”

Big and tall clothing has improved in recent years, but decades of shame over “trying on new clothes” is hard to overcome. And I think about all these things knowing that they’re not intentional. Although I do sometimes think that it was a room full of skinny assholes who came up with the idea of the television show The Biggest Loser (“no, they’re ‘losers’ because they’ll be losing weight–ha ha ha!”), I know that the world is the world. There are millions of people out there who suffer from prejudices far more intentional and pernicious. Partly, this is the shame talking, but I do have more control over my weight than many people have over their own embodied circumstances, and so I don’t tend to think publicly about my size. Compared to what many other people go through as a result of circumstances they can’t control, claiming or emphasizing my own struggles has always felt presumptuous.

I did want to make one more point, though. While I was gratified to see the speedy, collective outrage over Miller’s tweet, it made me think back a couple of weeks to a conversation that happened on Twitter about how academics should dress. Once upon a time, I was told (quietly) that if I expected to receive tenure, I would need to dress better. The thing is, when you’re overweight and wearing clothes that aren’t tailored to your body’s shape, your body puts different stresses on those clothes. Dress clothes in particular tend to assume the “norm,” and while it’s funny to watch Chris Farley split a jacket or the rear seam on a pair of pants, imagine doing it while you’re teaching a class, bending over to retrieve a pen or a piece of chalk. And then imagine that the simple act of dressing oneself every day carries with it that extra layer of anxiety over whether today will be the day that your body betrays and humiliates you. Most ties are manufactured with certain assumptions about the size of the neck around which they will be worn; for a fat person, a regular tie often doesn’t fit. Sports jackets often assume a particular shoulder-to-waist ratio. I normally teach in jeans, because for a variety of reasons, they tend to be manufactured to handle more stress and wear than dress pants. In this Twitter conversation, however, the idea of teaching in jeans was one of the things that was considered unprofessional among faculty of a certain age. I don’t mean to call anyone out about this, but I will say that I felt no less shame seeing this conversation than I did seeing Miller’s remarks. I didn’t see all the responses to the thread, but I’m pretty sure that most people didn’t think of it as fat-shaming, or respond to it with the same outrage. It probably didn’t register with them.

I guess my point is this: my wish would be to take a small piece of the outrage, and apply it to awareness. Try to be a little more conscious of the ways that our assumptions about the world, whether it’s dress codes or the way we arrange our spaces, subtly reinforce the fat-shaming that Miller was engaging in explicitly. Even if it’s something as simple as not assuming that everyone has the same relationship to clothing as you, or understanding that not every seat in the restaurant is equally comfortable for someone who’s overweight. It can be tricky to be more interventionist without also shaming, but it’s possible to invite someone for a walk rather than a cup of coffee, or to have them over for a healthier meal than you’re likely to find at a restaurant. It should probably go without saying that we should all, myself included, try harder to catch ourselves when we make assumptions about people based on their appearance (not just their size or shape), but it’s worth reminding ourselves precisely when stuff like this happens. My gut reaction was outrage, but my second thought was to ask whether I’d been guilty of that prejudice myself.

***

There’s a lot more to say about this, I’m sure. I’ve alluded several times to the fact that I’m not as heavy as I used to be. Far from being a story about the triumph of the Collin will, the fact of the matter is that I came kind of close to dying a couple of years ago, partly because of my unhealthy ways, and partly because my shame over it kept me from getting the help I needed to get healthier. Neither of those things is easy for me to admit, and they’re what makes Miller’s tweet particularly cruel. Most of us don’t have the level of “control” over ourselves that his comment implied–I know I still don’t. It took major surgery and more than a year’s worth of recovery for me to break through even a part of my own complex of shame and guilt and habit to find a healthier place. I’m fortunate to be healthier now physically, although it’s something that I have to work at constantly.

The implication that “fat” is a problem easily solved through the application of willpower is laughable to me, though, and that’s the biggest part of what I find objectionable in that tweet. It takes a partial truth (we do have some control over our body’s health) and twists it to rationalize a prejudice that itself works against that truth through shame. And that’s pretty evil.

I think I’m done now. Time to go for a walk…

Publishing as a Graduate Student #gradpub #cwcon

So, Jim put out this call for advice this week:

 

It’s been a while since I last posted here, and Jim’s tweet got me to thinking, so I figured I might write a few thoughts down. They’re not necessarily complete, because I do think that discipline and venue matter quite a bit, as do the student’s progress, work habits, and readiness. While it might be nice if there were a simple 10-point listicle that provided us all we ever needed to know about publishing, the fact of the matter is that it’d be pretty horoscopic. I’m not sure my advice will be any better, but it’s generally worked for me.

There are a few essays that I hand out to graduate students on a semi-regular basis, pieces that I’ve found really useful to have and to revisit every so often for my own writing. In honor of the listicle, I present to you my Top 5 Must-Read Essays for the Aspiring Scholarly Writer:

* C. Wright Mills, “On Intellectual Craftsmanship” (PDF) — It’s dated, and it’s from the social sciences, but it’s worth every graduate student’s time to read and adapt Mills’ advice:

By keeping an adequate file and thus developing self-reflective habits, you learn how to keep your inner world awake. Whenever you feel strongly about events or ideas you must try not to let them pass from your mind, but instead to formulate them for your files and in so doing draw out their implications, show yourself either how foolish these feelings or ideas are, or how they might be articulated into productive shape. The file also helps you build up the habit of writing. You cannot ‘keep your hand in’ if you do not write something at least every week. In developing the file, you can experiment as a writer and thus, as they say, develop your powers of expression.

Mills’ piece is a new one for me–I picked up a used copy of The Sociological Imagination years ago, but only happened to read its appendix recently. It may seem overly simple to imagine that there is someone who doesn’t realize that writers must “write something at least every week,” but it took me a long time to figure this out. I no longer assume that it’s something that goes without saying. Publishing is the tip of a massive iceberg of writing.

* Joseph Williams, “Problems into PROBLEMS” (PDF) — This is a long read, the academic equivalent of a novella, longer than an article but shorter than a book. Again, this may seem like obvious stuff, but I assure you, it can be really helpful to use the framework that Williams supplies to look at one’s own writing. The putative topic of Williams’ text is learning how to stage introductions effectively, and that in itself is worth the price of admission. But I use this text less as a means of helping me write my introductions than I do as a way to help me crystallize the point of whatever I’m working on at the time.

…posing and solving PROBLEMS is what most of us do, but most of our students, both undergraduate and graduate, seem unaware of not just how to pose a PROBLEM, but that their first task is to find one. As a consequence, they often seem just to “write about” some topic, and when they do, we judge them to be not thinking “critically,” to be writing in ways that are at best immature (Berkenkotter, Huckin, and Ackerman), at worst incompetent. Yet many of our students who do not seem to engage with academic PROBLEM-solving, in fact, do. Their problem is that they are ignorant of the conventional ways by which they should reveal that engagement; ours is that we have no systematic way of demonstrating to them the rhetoric of doing so.

The first time I read this work (the first of many, many), I was a little resistant to the idea that everything could be “reduced” to problem-solution; I’m not sure I feel that way any longer. I don’t think that it’s always necessary to make that framework explicit in one’s writing, certainly, and I think that there are times when we invent the “problems” we are solving, particularly in the humanities. On balance, though, it has helped me to think through my work in terms of this framework. I return to Chapter 1 frequently.

* Richard McNabb, “Making the Gesture: Graduate Student Submissions and the Expectations of Referees” (PDF) — This may be the single best essay for the aspiring graduate student that you’ve never heard of. It was published in Composition Studies in 2001, and is based on a study of graduate student submissions to Rhetoric Review over the course of nearly a decade.

The typical graduate manuscripts I saw as an associate editor suggest that the success of one’s argument depends on the appropriation of the correct gestures, that is, the discursive conventions that govern the ways of arguing and evaluating that define the language of the field. As I have tried to illustrate, writing for publication goes beyond producing a coherent, effective, well-supported argument; a writer has to be able to negotiate the publishing system by making the right gestures. I have identified two such gestures present in the scholarship (22).

“Gestures to a Rhetorical Mode” draws on Goggin’s taxonomy of description, testimony, history, theory, rhetorical analysis, and research report. “Gestures to a Problem Presentation” draws on MacDonald, Swales, and others to differentiate between epistemic and non-epistemic presentations. I don’t think I’m giving away any secrets to say that McNabb sees many graduate student submissions that rely on testimony and present themselves non-epistemically. What’s interesting about this piece is that it’s a rare study of a category of submissions that isn’t defined in terms of success, a problem that we run into if we only look at published writing when we talk about how to publish–it’s instructive to see the differences.

* Carol Berkenkotter and Thomas Huckin, “Gatekeeping at an Academic Convention” (from Genre Knowledge in Disciplinary Communication) — Speaking of differences: once upon a time, our national conference made its submissions, both those that had and those that hadn’t been accepted, available to researchers. Right towards the tail end of that time (the early 90s, I think), B&H examined a fairly large random sample of CCCC abstracts pulled from three years’ worth of submissions, “in hopes of getting a more comprehensive picture of the genre” (102). As with the other pieces on this list, you can’t take too literally the results of a study of conference abstracts, one from 20 years ago at that, but at the same time

In this chapter we have illustrated at least two of the principles laid out in Chapter 1, namely those of form and content and of community ownership. The former states that “Genre knowledge embraces both form and content, including a sense of what content is appropriate to a particular purpose in a particular situation at a particular point in time” (13). It is clear from our study, we think, that the ability to write a successful CCCC abstract depends on a knowledge of what constitutes “interestingness” to an insider audience, which in turn depends on timeliness, or kairos. The principle of community ownership states that “Genre conventions signal a discourse community’s norms, epistemology, ideology, and social ontology” (21). Here, too, we think our study provides some insight into the intellectual constitution of the rhetoric and composition community (115).

Much of this book is worth reading, if for no other reason than to think carefully about what B&H call “genre knowledge,” and to learn how to recognize and to internalize it throughout one’s graduate career.

* [Insert Role Model Here]: This is not as tongue-in-cheek as you might think. When I watch a show or movie that I really like, I end up internalizing pieces of the characters, and the same goes for academic writing that I find particularly inspiring. One of the best things you can do is to locate your own role models for writing, and to read and reread them on a regular basis. I don’t do so in order to imitate them, necessarily, but part of what I find inspiring about them is the way that they write, not just what they have to say. Don’t share your models with anyone–they are yours and yours alone. As soon as you start choosing your models according to what you think others expect from you, you’re sort of missing the point.

One of the common threads among all of the pieces I’m recommending here is the idea of genre knowledge–we tend to overemphasize “originality” of content at the expense of timeliness of contribution when it comes to scholarly communication. And timeliness is not something that can be planned out ahead of time, or captured in a listicle. It requires us to engage with the conversation, to see what others have to say, to think about where we might contribute, to account for the context of the discussion, and to make it worth reading in both form and content.

___

My words here are hardly the last ones on the subject, but these are the things that I’ve found helpful in my own work. Good luck!

UPDATE: Just as you find the perfect citation only after you send that article out for review, hitting publish helped me to remember a variety of texts that I could very well have included on this list. I first taught a grad course in 2005 that was a combination of genre studies and EAP, where I used these and many other readings. Some of the other books I could have easily recommended include:

 

and so on. Please feel free to add your own recommendations in the comments–I’m aware of how partial my own list is…

 

Backwards, Bookwards, Burke Words, Brooke Works

I.

I want to wish everyone a happy Burkeday — Kenneth Burke was born on this day in 1897, making today as good a day as any to celebrate rhetoric.

KB is part of my origin story: When I returned to graduate school for my PhD, my first course wasn’t actually official. The summer before I started, I sat in on Victor Vitanza’s Kenneth Burke course. For me, it was like a homecoming, and only partly because I was glad to get back to academia. I was a fairly half-hearted rhetoric and composition person, having done a concentration in my MA program on the counsel of our graduate advisor. I’d originally gone to graduate school thinking to study Irish literature, and I was possessed of a fondness for critical theory. While I could see some connections with rhet/comp, they were weak ties at best, and it may not have been an accident that I ended up taking a couple of years off after my first attempt.

Anyhow, reading Burke was a revelation for me. It wasn’t always easy reading, nor would I say that I agree with everything he wrote, but I’ve always felt a resonance with his work. I don’t doubt that it shows up in my own writing from time to time. But reading Burke was one of the things that made me feel (finally) like I’d made the right decisions to go back to graduate school and to stick with rhetoric and composition. One of Burke’s passages that has always appealed to me comes from the Afterword to the 3rd edition of Attitudes Toward History, revised a bit for an interview he gave later on:

Remember the big traffic jam in New York when the subways stopped? That’s when I learned the word gridlock. Gridlock means you can’t go any way. The traffic is so jammed, it can’t go forward, backwards, or sideways. What I had was counter-gridlock….So, I’d write six or seven pages; then another tangent would seem needed, and I’d start over again, with the same baffling outcome. Instead of no way out, there was a clutter of ways out, each in its own way running into something that cancelled it.

Kenneth Burke, “Counter-Gridlock”

 I don’t know if other people’s minds work that way, but mine sure did. I think that’s part of what drew me to hypertext originally, and eventually to blogging and social media. Along the way, I’ve learned tricks to help tame my own counter-gridlock (cut the first 5 pages, work on multiple parts at once, etc.), but it’s always there, making it harder for me to force my ideas into the shapes that I know they need to take.

II.

There’s another piece of Burke that always appealed to me secretly. Burke was raised on the work of Mary Baker Eddy (who founded Christian Science), and while he turned away from those ideas to an extent, there is a sense that runs throughout his work that language is not simply representational but material, that the ideas we hold affect us physiologically. The idea of literature as “equipment for living” is a mild expression of this. There’s a story about him that says that one of the reasons why he never published the third volume of the Motives trilogy was that, once it was done, he would be “finished,” and not just in the intellectual sense.

You might imagine how even the hint of this would appeal to a kid who grew up reading and gaming in worlds where language did have that power. There’s a “not really…but maybe” quality to it all in my head that sometimes crosses over the line separating figurative and literal. If you were to connect this idea to a passage from an academic text like this one, published in the fall before I started back to graduate school–

After all, anyone the least bit familiar with the workings of the new era’s definitive technology, the computer, knows that it operates on a principle impracticably difficult to distinguish from the pre-Enlightenment principle of the magic word: the commands you type into a computer are a kind of speech that doesn’t so much communicate as make things happen, directly and ineluctably, the same way pulling a trigger does. They are incantations, in other words, and anyone at all attuned to the technosocial megatrends of the moment — from the growing dependence of economies on the global flow of intensely fetishized words and numbers to the burgeoning ability of bioengineers to speak the spells written in the four-letter text of DNA — knows that the logic of the incantation is rapidly permeating the fabric of our lives.

Julian Dibbell, A Rape in Cyberspace, Village Voice, December 1993 

–well, then, you might begin to tease out some of my own motives and interests. In The Philosophy of Literary Form, Burke writes, “The magical decree is implicit in all language, for the mere act of naming an object or situation decrees that it is to be singled out as such-and-such rather than as something other” (4).

III.

Here’s where it gets even less rational. Imagine that you’re a person for whom writing has never been a struggle, but who does struggle with putting it into a straight line. And imagine further that you’ve got a secret fascination with what Dibbell calls that “logic of the incantation.” You force your work into those shapes, article after article and conference papers galore, and eventually, you even manage to craft your own little snow globe, your first book.

I shouldn’t continue in 2nd person here. I started deflecting before I even realized that I’d done it. I am those things, and have done those things. The process of taking Lingua Fracta from initial manuscript to published volume, however, took almost 5 years. If you’ve read my book, you’ll know that my father passed away before he had a chance to see it published. What you may not know is that my grandfather did as well, about a year later. And my grandmother’s health at the end of her life was such that she probably only caught a glimpse.

Part of me loves my book, and part of me blames my book. It makes no sense, and even sounds silly to me as I write it down like this. But the fact of the matter is that I stopped wanting to write for a long time. The gradual fade of my first blog took place over about 3 months following my father’s death, and the loss of my grandparents sealed the deal. In my brain, I know that this is a story (events happen) that a tiny part of me has turned into a plot (events are connected!)–that’s the very definition of superstition–but sometimes all it takes is a tiny part.

IV.

The tiny thing that helped me come out of this, to the degree that I’m out of this, came last summer. Since my book was published, I’ve been thinking on and off about what I’ll do my next book on. I’ve had several possibilities in mind, but I’m fairly sure that when I hit a certain level of detail in the planning, something in me just shut down. It was too easy to turn to something else and just forget about it. I’ll spare you the long stories of my self-distraction.

Last summer, though, I realized that I don’t have to write another book. Ever. I say this fully aware that this is a luxury; I am in an incredibly privileged position to be able to say it. But I don’t mean it in the sense that I no longer have to work: I’ve been writing articles and chapters for collections, supervising students, teaching and designing courses, mentoring as best as I’ve been able–I overfill my time (sometimes) with the work that I’m obliged to do and the work that I enjoy doing. What I mean by this is that I can continue my work, my reading, my writing, my teaching, my mentoring, my participation–and none of those things have to take the particular material form of a book.

Is this distinction clear enough? Because it’s made all the difference for me. In that deep part of me that associated my book with loss and grief, the idea that it could be the book as formal obligation, rather than the specific incantation I wove to meet that obligation, shook something loose in me that’s allowed me to start relearning how to write. I know that this might sound like “Aha! It wasn’t my fault after all, but the evil institution that made me do it!” But that’s not quite right. Writing had become this thing that forced me against my inclinations and ended in heartbreak. It wasn’t a matter for me of finding someone else to blame; rather, it was working my way through to a place where “blame” didn’t quite work to capture the full range of possible relations. It’s not like it doesn’t still occupy me, but I no longer feel locked in by it.

I don’t know if this quite makes sense. It does in my head.

V.

Here’s a last little odd fact about me. When I was young, I was fascinated by writing backwards and writing upside down. To this day, I can read text upside down almost as quickly as right side up. I would practice backwards cursive with a mirror–something about inverting and reversing the shapes of letters felt like magic to me. I loved codes, non-Roman alphabets, letter substitutions, all that stuff. Our daily paper had a cryptoquote next to the crossword that I would try and solve in my head. Palindromes, ambigrams, word ladders, snowball poems, I have always been fascinated by the extravagant capacities of language. So add that fascination to my discomfort with the book as form and my fascination with the logic of incantations, mix it together with a little technology expertise, and it makes perfect sense that what I should do is to do it backwards, to announce the “publication” of my new “book.”

Believe it or not, I’m not joking.

My next project is called Rhetworks, and I’m publishing it today, even though it hasn’t been written yet. It may or may not become a book; I’ve toyed with the idea of describing it as a BOOC, a Book-Sized Open Online Colloquium. I’ve been thinking about the relationships between rhetoric and networks for close to 10 years now, and I think I’m going to start writing something big and sprawling on the subject.

Over the next 2 years, starting today, I’m going to write it online, using a PBWiki installation. That means that I’m going to write in public, which scares the heck out of me, but not nearly as much as it used to. I’m going to make mistakes and I’m going to have to trust in the generosity of my readers. At the end of two years, if I feel like I have enough material to justify publishing it as a book, I may do so. But I’m equally prepared for the possibility that I won’t. In either case, I’ll be writing under a Creative Commons License and it will stay up there, freely available to anyone who’s interested, regardless of any subsequent form it might take.

I have a hypothesis, a fairly grand one, that I want to work through, and I even have a set of keywords that may someday provide me with the chapter structure for a book. But neither of those things will drive this project. I am interested instead in giving rein to my counter-gridlock, without knowing ahead of time whether or not it will actually work. But at its most basic level, this is an experiment. It may not catch on, I may grow bored with it, other people may find it stupid or silly or self-indulgent–I can imagine a hundred different ways that this could fail. And that’s why I’m going to do it.

Oh, but there’s more. My first idea was to write Rhetworks on a private wiki, and invite people to visit it once I’d gotten “enough” of it going to feel comfortable sharing. What I’m doing instead is to invite you to participate in it from the get go, and to contribute to it as much as you’re comfortable with. For some, this may mean correcting a typo or two, asking some questions in the comments, or adding a work or two to my bibliography. And that’s fine. But that’s just the start of what’s possible. I’m willing to collaborate with you on sections. I’m willing to list you as co-author. As long as you’re comfortable, I’m willing to let you publish your own work on the site, and even in the pages of the book, if it comes to that. My only request is that you make your own work as available and editable and shareable as I’m making my own. Does that mean that I’d be willing to include a chapter written by someone else entirely in the book version of this? Or include entire sections or chapters that disagree with me? Yes. Yes, it does. I’m also open to the possibility of using the site as an invention space and breaking off pieces of it to publish collaboratively in other venues–I know that not everyone can afford to invest time and effort as open-endedly as I can.

And yes, I can imagine that this project could be derailed by edit wars, or that someone might get it into their head to try and ruin it. I’ll be restricting editing access to registered users, so that I can exercise some minimal amount of supervisory influence. But I have thought about a lot of different ways that people might make use of the site, add to it in ways that I cannot predict, and even disagree with me in fundamental ways, and I find that I’m surprisingly okay with that.

A scholarly project sits at the heart of a network by nature. The traditional model of publication, though, encourages us to mediate that network ourselves, often out of fear of what would happen if we let others see our work before it was complete. William Germano described the scholarly book as a snow globe in the Chronicle a couple of weeks ago:

Within the realm of the snow globe, every authority on the subject has been cited or pacified. Look inside and find a perfect, tidy, improbable world where no questions are asked, or invited. Scholarly books, especially first ones, are a paranoid genre—their structure assumes that someone is always watching, eager to find fault. And they take every precaution against criticism.

He asks if we dare write for readers–what I want to do here is write WITH readers, with you. I want to create a book-sized network of scholarship that itself is the product of the network. It’s not coincidental that it’s about networks, too.

VI.

I go back and forth about this. On the one hand, it feels like the next step, or maybe a leap of faith: the idea that scholarship can locate itself somewhere that’s part text, part connectivist MOOC, part community. Germano suggests that maybe “the best form a book can take—even an academic book—is as a never-ending story, a kind of radically unfinished scholarly inquiry,” and part of me believes that enough to give it a try. Maybe what I’m describing is actually a 2-year online course on networks and rhetoric, open to anyone who’s interested. (I will almost certainly use it to some degree in the digital humanities course I teach next spring, and I hope others will take it up that way, too.) It pushes the idea of public, online review even further, and maybe it will ultimately push at our ideas of what acceptable (and accessible) online scholarship can look like.

And then there are days where I imagine that I’m so crazy to even think of this that I can’t see outside of the crazy. Even if I manage to summon the effort, time, and energy to do this successfully, it feels insanely risky, when I could just sit down, open my books, fire up my browser, and bang out a publishable manuscript.

And then I think about the “clutter of ways” that I want to give voice to.

I think about how maybe if I cast my spell backwards this time, something magnificent might happen.

And I think about Clay Shirky’s incantation–publish, then filter–and how much more sense it makes to me, even if it sounds upside down.

And then, one day in early May, I publish a book that doesn’t yet exist, and invite you to write it with me. I wonder what could possibly happen next.

Bookshelf Bingo!

a picture of the bookshelves in my living room

Last night, in lieu of watching television, getting caught up on my work, or doing any number of other, more productive things around the house, I let myself get sucked into rearranging some of my bookshelves. As I mentioned on Facebook afterwards, one of the things I started thinking about was how I would arrange them if I were going to play Bookshelf Bingo, a game I invented while I was shelving.

Bookshelf Bingo is not all that different from Hipster Bingo or SXSW Bingo. My first idea was that you could play it during a keynote address at a conference, although I would think a Twitter feed might work as well (though it’d be more difficult, as I explain below). Here are the rules:

1. Each player needs to start with roughly similar shelving units. I’m a huge fan of the cube shelves myself, which have the added advantage of providing the bingo grid. Each cube holds around 15 books, give or take. The grids should be the same for each player, and the grid sizes as well, to keep it fair.

2. Each player can arrange books on and among the shelves in any way he or she sees fit.

3. Each player then takes a high resolution photo of their shelves and prints it out. This is the Bingo card.

4. Then, at a keynote address (or panel), a player gets to cross off a square each time a book in that square is cited by the speaker(s).

5. First player to complete a row, column, or diagonal wins! And is crowned the biggest Nerd in the audience! (Jumping up and yelling “Bingo!” during the talk itself is not recommended.)

Variations: It occurred to me at first that you could do this with a Twitter feed in lieu of a keynote address, probably because A1 contains a bunch of books that I’ve purchased as they’ve come across my feed in the past month (Jockers, Golbeck, Hofstadter, Morozov, et al.). That would require folks to subscribe to roughly the same feeds, although you could create a Bingo list in Twitter and share it with the other players.

Since doing the shelf version requires that the players own all these books, I thought too about just putting 1 book per square–it’d be easy to pull covers from Amazon, arrange them in a 5×5 table on a page, and do it that way. That would be the much less expensive version, and might make an interesting pedagogical exercise for a graduate course. If students had read some of a speaker’s work, and then created a book-per-square (or even an author-per-square) grid, it’d be a fun way to watch a streaming keynote, perhaps. It’d be a novel way of thinking about which thinkers and sources a particular speaker was most likely to rely upon for his or her work.
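For what it’s worth, the win condition in rule 5 is simple enough to script. Here’s a toy sketch in Python (the grid representation and function name are my own inventions, not part of the game as described above):

```python
# Toy sketch: a card is a square grid of booleans, where True means a book
# in that cube (or square) has been cited by the speaker.
from typing import List

def has_bingo(card: List[List[bool]]) -> bool:
    n = len(card)
    rows = any(all(row) for row in card)
    cols = any(all(card[r][c] for r in range(n)) for c in range(n))
    diag = all(card[i][i] for i in range(n))
    anti = all(card[i][n - 1 - i] for i in range(n))
    return rows or cols or diag or anti

# Example: a 5x5 card where the speaker has cited the entire middle row.
card = [[False] * 5 for _ in range(5)]
card[2] = [True] * 5
print(has_bingo(card))  # True
```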

So that’s Bookshelf Bingo, coming soon to an academic conference near you! :)

Stock, Flow, Field, Stream

Every once in a while, everything just seems to flow into one large conversation full of resonances and connections, and it’s like striking a tuning fork. This is a post about the challenges of graduate education, and perhaps, by extension, about academic work for those of us who identify with the digital humanities. Let me see if I can gather the threads together.

There’s a little history. Jokingly, I tell people that one of my biggest academic regrets is a paper I delivered at CCCC a few years back (2010). Our session took place in a huge ballroom (the size of our audience did not do it justice), and rather than a projector and portable screen, we had like a 30-foot monitor. It was colossal, and one of the things I regret is that, not knowing about it ahead of time, I didn’t prepare a full slide deck. Instead, I gave the only talk I’ve ever given that had just one, solitary slide. Don’t get me wrong, I was proud of that slide, and I wish that I hadn’t lost it in the Great Laptop Crash of 2011. It was a screen capture of a cover of an old issue of Field & Stream magazine, lovingly Photoshopped to reflect the topics in my talk, which was called “Writing Retooled: Loop, Channel, Layer, Stream.” Keep in mind that this was 3 years ago, when Twitter was still relatively exotic for academics, but what I was arguing was that

For those of us who engage with the field through social media, though, that engagement may seem more shallow in the short term, but it is constant and ongoing. We are setting foot in the river every day, rather than waiting for the occasional, official “event” to do so.
Think of it this way: who is more likely to shape the field? The person who sits in the audience for a presentation or reads a journal article that’s already been written, or the one who participates in weblog or Twitter conversations about that writing as it is being done? And yet, if you asked 100 people at this conference whether they’d rather publish an essay in CCC or have a couple of hundred followers on Twitter, I’m pretty sure most people would choose the first option.

A couple of hundred. Heh. Anyways, I suggested that, rather than focusing exclusively on the “field” of writing studies, we needed to be building the tools and habits necessary for dealing with the “stream.” I was arguing for “and,” not “or,” but my talk was certainly weighted towards the stream, given where the field was at the time (and, arguably, still is).

Anyhow, someone reminded me of that talk this year at CCCC, my first trip back since I gave it, so I’ve had cause in the past month or so to remember it fondly. Over the past couple of days, it’s connected for me with a few different links. First, there’s Anil Dash’s talk yesterday at the Berkman Center on “The Web We Lost.” There are a number of things in there worth thinking about, but Doug Hesse pointed out in my FB comments something that I’m not sure we’ve all really processed:

We built the Web for pages, but increasingly we’re moving from pages to streams (most recently-updated on top, generally), on our phones but also on bigger screens. Sites that were pages have become streams. E.g., YouTube and Yahoo. These streams feel like apps, not pages. Our arrogance keeps us thinking that the Web is still about pages. Nope. The percentage of time we spend online looking at streams is rapidly increasing. It is already dominant.

In Writing Studies, I think that we still think of ourselves as being in the business of writing pages. Think about all of the infrastructure we have, from page counts to citation formats, that makes this simple assumption about the “object” of our practices. Or think about how vital the PDF has been in finally getting people to accept that scholarship isn’t necessarily inferior because it’s online. (None of these are particularly thrilling examples to me.)

As part of my own stream, I just came across a tweet from Jay Rosen that provides some nice overlap as well:

[screenshot of Jay Rosen’s tweet]

Yes, that’s the same Robin Sloan who wrote Fish and Mr. Penumbra’s 24 Hour Bookstore, which I happen to be reading at the moment. :) Sloan writes about stock and flow:

But I actually think stock and flow is the master metaphor for media today. Here’s what I mean:

  • Flow is the feed. It’s the posts and the tweets. It’s the stream of daily and sub-daily updates that remind people that you exist.
  • Stock is the durable stuff. It’s the content you produce that’s as interesting in two months (or two years) as it is today. It’s what people discover via search. It’s what spreads slowly but surely, building fans over time.

I feel like flow is ascendant these days, for obvious reasons—but we neglect stock at our own peril. I mean that both in terms of the health of an audience and, like, the health of a soul. Flow is a treadmill, and you can’t spend all of your time running on the treadmill. Well, you can. But then one day you’ll get off and look around and go: Oh man. I’ve got nothing here.

If you push on, as I did, and read the Rushkoff interview, then you’ll see Sloan’s treadmill metaphor writ large, and translated into “present shock.” This is a line from the book that the interviewer quotes:

When we attempt to pack the requirements of storage into media or flow, or to reap the benefits of flow from media that locks things into storage, we end up in present shock.

I realize here that I’m making my own talk appear far more prescient (and perhaps more sophisticated) than it actually was. I was in good shape just identifying the difference between what I was calling field and stream, I suspect.

Another thing that I talked about with several people at this year’s CCCC was how I was sometimes struggling with the presentism of social media. It’s particularly acute for me as I dip into conversations around the digital humanities, as so much of that discussion seems to happen on Twitter. You could argue that this is a symptom of its relative novelty, or of its dynamic energy, or perhaps some combination of the two. Talk to me in five years, I suppose. It’s sometimes become difficult for me, though, to step back from social media and to focus instead on the page-oriented commitments that I have. The virtue of being in my position is that, if I want, I can just tone down the commitments and focus instead on more short-form work of the sort that social media energizes and provokes from me. I’m conscious that not everyone has that luxury, though.

This is not a post where I want to scold anyone. Rushkoff has a particular position that he’s promoting, to be sure, and there are hints of it in Dash and Sloan, I suppose, but my own interest is in thinking about how the balance that I was arguing for back in 2010 has so radically shifted in the other direction. But only in certain places. I’m slated to teach our Rhetoric, Composition, and Digital Humanities graduate course next spring, and already I’m thinking about how I can hack the curricular and conceptual space of my classroom to allow for a more dynamic and distributed course experience. But now I find myself in the odd position of thinking about whether that kind of course will provide enough field, enough stock, for students who (as I was arguing three years ago)

are more likely to rely on bookmarking than bookshelving. They are more likely to read an article that has well-developed keywords than one with page numbers. And they are more likely to follow citation trails than to sit still and read a paper journal cover-to-cover. They are more accustomed to managing the flows of information, sorting them, and assembling them for their own uses. In short, they are much more likely today to be what Thomas Rickert and I have described as practitioners of ambient research.

I’ve been deeply committed to making over my pedagogy in ways that help students work with flow, but as a colleague and I were talking about today, those students still have to go through a comprehensive exam process and to write a dissertation. Believe me when I say that I know all the arguments for reshaping those requirements, and that I agree with them. But I have to reconcile them with my own ethical beliefs about graduate education and whether it prepares students adequately for what follows. I’m not so full of myself as to think that a single graduate course with me will make the difference in a student’s ability to finish or not; however, years spent as a graduate director have made me keenly aware that every course is itself a blend of stock and flow, with obligations both to itself and to the ongoing curriculum that it is a part of.

So while the blogger in me celebrates the short-form and the streams, the academic in me starts to wonder if the shift away from more traditional academic practices doesn’t ultimately do my students a disservice–I think about whether or not I’m responsibly modeling the kind of balance they’re going to need in their own careers. I say that fully aware that it sounds like the first step on the road to rationalization, but it’s not. Really. I think that it means that I’ll think more carefully about how I hack my course next spring, not whether or not I’ll do so. It’s an issue that I’ll likely grapple with for some time, and this is really just the beginning of that process for me. That’s all.

(ps. If you’ve read the above and thought, “why isn’t he doing something about this in his research?” or some variation on the hack/yack question, then you’ve happened upon one of the driving forces behind my next major project. About which, more soon. :) )

 

On the un/death of Goo* Read*

It feels like a particularly dark time around the Interwebz these days–I don’t think it’s just me. I’ve been studying and working with new media for a long time now, and so you’d think that I’d be sufficiently inured to bad news. For whatever reason, though, it feels like the hits have kept coming over the last month or so. There’s plenty to feel down about, but since this song is about me, I wanted to collect some of my thoughts on the changes that are happening with two pieces of my personal media ecology: the demise of Google Reader and the recent purchase of GoodReads by Amazon.

I’ve been soaking in a lot of the commentary regarding Google Reader, ever since the announcement came while I was at CCCC, and while there’s a lot of stuff I’m not going to cite here, there were a couple of pieces in particular that struck me as notable. I thought MG Siegler’s piece in TechCrunch was good, comparing Google Reader’s role in the larger ecosystem to that of bees. Actually, scratch that. We’re the bees, I think, and maybe Reader is the hive? Anyway, my distress over the death of Reader is less about the tool itself and more about the standard (RSS/Atom) it was built to support. I understand why there are folk who turned away from RSS because it turned the web into another plain-text inbox for them to manage, but as Marco Arment (of Instapaper fame) observes, that was less a feature of RSS than a user misunderstanding. As more folks turn away from RSS, though, Arment suggests, we run the risk of “cutting off” the long tail:

In a world where RSS readers are “dead”, it would be much harder for new sites to develop and maintain an audience, and it would be much harder for readers and writers to follow a diverse pool of ideas and source material. Both sides would do themselves a great disservice by promoting, accelerating, or glorifying the death of RSS readers.

The larger problem here is that, in our Filter Bubble world of quantified selves and personalized searches, this kind of diversity is an ideal that is far more difficult to monetize (a word I can barely think without scare quotes). Most folk I know now use FB and T as their default feed readers; I’m old-fashioned, I think, in that I tend to subscribe to sites through Reader if the content I find through FB/T is interesting to me. It may be equally quaint of me to like having a space where the long tail isn’t completely overwhelmed by advertisements, image macros, and selfies. I get that.
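For the technically inclined, part of what makes the standard worth defending is how little machinery it takes to read a feed directly, with no platform in between. Here’s a minimal sketch using Python’s feedparser library; the feed URL is just a placeholder:

```python
# A minimal sketch of reading an RSS/Atom feed directly (requires: pip install feedparser).
import feedparser

# Placeholder URL; any RSS or Atom feed works the same way.
feed = feedparser.parse("https://example.com/feed.xml")

print(feed.feed.get("title", "(untitled feed)"))
for entry in feed.entries[:10]:
    # Each entry carries its own title and link; no third-party filter decides what we see.
    print("-", entry.get("title", "(no title)"), entry.get("link", ""))
```

The point isn’t the code itself; it’s that the subscription belongs to the reader rather than to whatever FB/T’s algorithms decide to surface.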

Where the Reader hullabaloo connects for me with the recent Amazonnexation of Goodreads is Ed Bott’s discussion of Google’s strategy at ZDNet–again, the point isn’t so much that Reader itself is gone, but that Google basically took over the RSS reader market, stultified it, and is now abandoning it. It’s hard not to see the echo of “embrace, extend, extinguish” operating in Amazon’s strategic purchase. The folks at Goodreads deserve all the good they’re getting, and they’re saying the right things, of course, but this is Amazon’s third crack at this, if you count Shelfari and their minority stake in LibraryThing, and it’s hard for me not to see a lot of Goodreads’ value in terms of their independence from Amazon. Amazon’s interest in Kindle has kept them from any kind of iOS innovation, and it’s almost impossible for me to imagine them keeping their hands out of the “data” represented by the reviews and shelves of Goodreads members. While Amazon’s emphasis on “community rather than content” was revolutionary once upon a time, their reviews are now riddled with problems (and that’s when they’re not being remixed as admittedly hilarious performance spaces). It was the absence of commercialism and gamification that made Goodreads a somewhat valuable source of information for me–despite their best intentions, I doubt that will last.

If I had to wager, I’d guess that within a year or so, the G icon will turn into an A, and the app will be retuned primarily to provide mobile access to the broader site. GR’s model of personal shelves will be integrated into the site alongside A’s wish lists, and people who liked that will also like 50 Shades of DaVinci Pottery. Startups will focus their energies on softer markets, as they did when Google “embraced” RSS, and we’ll probably be the poorer for it.

Back in the day, I remember feeling the full cycle when Yahoo absorbed sites like Flickr and Delicious: excitement at the news, hope that Yahoo would help them go to the next level, and disappointment as it proceeded to neglect them into has-been status. I feel like we’ve been burned often enough now to be able to move pretty much straight from the purchase announcement to the disappointment. And I still don’t begrudge these folks the rewards they’ve earned. But, I’ve got to be honest–this “do no harm” stuff sounds a lot like the squeaky hinges on the barn door closing after the horses have bolted.

Peer to Peer

I should be doing many other things, but every once in a while, there’s a bundle of ideas in my skull that gathers together and sets up a resonance field, and there’s really nothing for it but to write it out. So this is more suggestive than it would be had I the time to really write through it all.

The piece that clicked it all together for me was Jon Udell’s recent post on networks of first-class peers, which has its roots (I think) in the announcement of the demise of Google Reader, the death knell for which sounded while I was in Las Vegas at CCCC, our annual conference for all things compositional and rhetoricky. I don’t want to project my own affect onto Jon’s post, but there was a sadness there, a nostalgia for the days when the weblog was the undisputed chief of social media. Jon closes his discussion with a look back:

What some of us learned at the turn of the millenium — about how to use first-class peers called blogs, and how to converse with other first-class peers — gave us a set of understandings that remain critical to the effective and democratic colonization of the virtual realm. It’s unfinished business, and it may never be finished, but don’t let the tech pundits or anyone else convince you it doesn’t matter. It does.

He’s responding in part to the “has Google decided that blogs are dead?” portion of the hullabaloo over Reader here, I think, and also hearkening back to something that I think Kathleen Fitzpatrick was getting at in her discussion about civility a couple of months ago. I’m struck by the difference that Udell is articulating between first-class and second-class peers (although that language is freighted in ways that a more networky “first-degree” and “second-degree” might not be). Services like Twitter and Facebook, the argument might run, allow us to treat our discourse as more disposable, less our own.

This cross-blog conversational mode had an interesting property: You owned your words. Everything you wrote went into your own online space, was bound to your identity, became part of your permanent record. As a result, discourse tended to be more civil than what often transpired in Usenet newsgroups or web forums.

Jon cites Dave Winer’s mantra, “Own your words,” and that resonated for me with Ryan Cordell’s “mea culpa” entry that followed Kathleen’s post, about the ethics of conference tweeting and its effect on community. For me, Ryan’s post is a lovely example of all the best thoughts that “own your words” suggests.

There’s another thread to all this for me, one that comes directly out of some of the discussions at CCCC with respect to textbook publishers, and the role (or control) they have when it comes to our field. Because of the vagaries of the conference process, folks had committed to particular panels, papers, and topics largely before the big publicity rollout last summer about MOOCs, and as a result, other than at ATTW (whose deadlines are later), there was little formal discussion that I heard about. That’s not to say, however, that there weren’t goings-on. In particular, the spectre of MOOCs haunted (for me at least) the annual focus groups, publisher lunches, etc. I don’t participate in those events, but I’m deeply sympathetic to those who finance their trips to CCCC in part by reviewing textbooks, attending focus groups, and/or eating (let’s not call them “free”) meals provided by the publishers.

It’s hard, though, not to see such “partnerships” as something more nefarious, or to imagine that these companies aren’t basically doing preliminary research for their own MOOC experiments, whatever those end up looking like. It’s similarly difficult for me not to see these companies, whose budgets underwrite much of our annual conference, acting in the role of shepherd, as Kenneth Burke expressed it:

The shepherd, qua shepherd, acts for the good of the sheep, to protect them from discomfiture and harm. But he may be ‘identified’ with a project that is raising the sheep for market. (Rhetoric of Motives)

It is not easy to suggest to my colleagues, old and new, that they need to go about owning their words here. That sounds a lot like blaming the sheep for the variously scaled loyalties of the shepherd. It does make me think more carefully, though, about what my role and responsibilities are as a member of the discipline, the community, my department, etc. It’s a deeply complicated set of issues; all I know for sure is that I can’t really see my way out of it at this point.

In each of these scenarios, though, I’m conscious of a nostalgia in my approach to them, the “if we knew then” feeling that often accompanies the despair of inevitability for me. If only we’d fought hard to keep the blogosphere going. If only I could be more mindful in my approach to social media. If only we could operate at a disciplinary scale that didn’t require the implicit quid pro quo of the textbook companies.

I wish I had an easy answer for all of this, one that offered more promise than each of us simply needing to think our way forward. That’s what I’ll be doing, just as soon as all those other things are done.

Networked Humanities @ UKentucky (#nhuk), Spring 2013

What follows is a fairly rough approximation of the talk that I gave at the UK Networked Humanities Conference (#nhuk) in February, 2013. I don’t usually script out my talks in quite the level of detail that I have below, but this time out, I struggled to get my thoughts together, and scripting seemed to help. As usually happens with me, though, I went off-script early and often.

Also, I use a lot of slide builds to help pace myself, so I’m not providing a full slide deck here. Instead, I’m inserting slides where they feel necessary, and removing my deck cues from the script itself. I’m also interspersing some comments, based on the performance itself.

I’ll start with the panel proposal that Casey Boyle (@caseyboyle), Brian McNely (@bmcnely) and I put together:

Title: Networks as Infrastructure: Attunement, Altmetrics, Ambience

Panel Abstract: In his early 2012 discussion of the digital humanities, Stanley Fish examines a number of recent publications in the field, and arrives at the conclusion that DH is not only political but “theological”:

The vision is theological because it promises to liberate us from the confines of the linear, temporal medium in the context of which knowledge is discrete, partial and situated — knowledge at this time and this place experienced by this limited being — and deliver us into a spatial universe where knowledge is everywhere available in a full and immediate presence to which everyone has access as a node or relay in the meaning-producing system.

Fish connects this diagnosis of DH with Robert Coover’s 20-year-old call for the “End of Books,” itself once a clarion call to practitioners and theorists of hypertext. While there is perhaps an implicit promise in some digital humanities work that we will be able to move beyond and/or better our current circumstances, we would argue that Fish’s dismissal of that work is misguided at best. The digital humanities broadly, and this panel more specifically, offers a careful revaluation of our current practices, treating the social, material, and intellectual infrastructures of the academy as objects of inquiry and transformation. Rather than seeing these long-invisible networks as given or as something to be “ended” and transcended, we follow Cathy Davidson’s call to engage with them critically, “to reconsider the most cherished assumptions and structures of [our] discipline[s].”

And here’s my contribution to this panel, titled “The N-Visible College: Trading in our Citations for RTs”

I want to start today by referencing two essays that I published last year. The first is an essay called “Discipline and Publish: Reading and Writing the Scholarly Network.” It appears in a collection edited by Sidney Dobrin called Ecology, Writing Theory and New Media, published by Routledge.

The second essay is a bit shorter, and appeared on my blog in July of last year. “Cs Just Not That Into You” was a response to the acceptance notifications from 4Cs, the annual conference in rhetoric and composition. In the space of roughly 24 hours, this essay received over 500 views, 100 comments on Facebook and my blog, and at least 20 retweets and shares.

Both essays grew out of my interest in the intellectual and organizational infrastructures of disciplines, and what network studies can tell us about those structures, but when it comes to my department and my discipline, only one of these essays really counts. I don’t think I’m revealing any great secrets when I say that only one of them appeared in my annual review form this year.

As folks who were there can attest, I actually scrapped this intro on the fly, since Byron Hawk’s presentation referenced “Discipline and Publish.” So instead I went meta, talking about how I was going to open the talk, right up until the point that Byron ruined it for me by citing the very essay I was going to claim would never be cited.

I’d be lying if I didn’t admit that I find it disappointing that more people read and engaged with that blog post than will likely ever read “Discipline and Publish.” Routledge has library-priced the collection at a cool $130, meaning that it won’t be taught in courses, at least not legally. As much as I like the collection, I can hardly recommend it to friends and colleagues at that price. From the perspective of my institution, “Discipline and Publish” was probably my best piece of scholarship last year, but it’s also the one with the least value to me personally and professionally.

This question of relative value is one that has frustrated those of us who work closely in and with new media for years. There has been some progress in the form of organizational statements about the value of technology work, and some of us have pushed at the boundaries of our individual tenure and promotion requirements. But that progress has been slow, not the least reason for which is the patterned isolation that separates our disciplines, our institutions, and our departments. But I’ll get to that in a couple of minutes.

First, I want to offer one more example that raises this question of value. When Derek Mueller (@derekmueller) and I worked on the online archive for College Composition and Communication, one of the things we did was to track the internal citations of that journal. That is, whenever one essay from the journal cited another, we linked them together. So this list represents the 11 most frequently cited essays from CCC spanning roughly a 20-year period, ranked by the number of citations.
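For the curious, here’s a rough sketch of the kind of counting involved. This isn’t the code Derek and I actually used, and the CSV of citing/cited pairs is hypothetical; it’s just meant to show how simple the underlying tally is:

```python
# Toy illustration: rank essays by how often other essays in the same journal cite them.
# Assumes a hypothetical CSV where each row is "citing_essay,cited_essay".
import csv
from collections import Counter

counts = Counter()
with open("ccc_internal_citations.csv", newline="") as f:
    for citing, cited in csv.reader(f):
        counts[cited] += 1  # one more internal citation for the cited essay

# The most frequently cited essays, analogous to the list shown in the talk.
for essay, n in counts.most_common(11):
    print(f"{n:3d}  {essay}")
```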

Compare that to the list of Braddock Award winners from the same time period. The Braddock Award is the prize for the best essay published in CCC in a given year. As you can see, there is some overlap between the lists, but there are substantial differences as well.

 

[Slide: Lists of Internal Cites vs. Braddock Awards]

[beyond my scope today, but it’s interesting to think about how and why an essay might show up on one list and not the other, or on both lists]

For my purposes today, one of the most important differences between these lists is that our institutions are far more likely to recognize the Braddock Award as a sign of value than a handful of citations.

All of this is not to say that one of these lists is better than the other, but rather that we have different measures for value. If we think about this in terms of culture industries, this idea makes complete sense. There’s a substantial difference between the books that win literary prizes and those that top the New York Times bestseller lists. The top grossing movies for a given year don’t usually receive Academy Award nominations. We’re both comfortable and conversant with the different definitions of value that these lists and awards represent.

I had several slides here to slow myself down, with book covers and movie posters to help the contrast below between Best and Most. Copyright violations, every last one of them. So use your imagination. :)

This is horribly overreductive, I know, but I’ve been thinking about these two models of value as Best and Most. And much of our intellectual infrastructure in the humanities is targeted towards the idea of Best:

It operates on the principles of scarcity and selectivity, with the goal of establishing a stable, centralized core of recognizable activity. This model is a particularly strong one for fields in the early stages of disciplinarity–it creates shared values and history, as well as an institutional memory. On top of that, it’s a pretty easy model to implement.

But this model has some weaknesses as well. The larger a field becomes, the harder this model is to maintain. The center becomes conservative, slow to recognize or respond to change, and less and less representative of the majority of its members.

In some ways, what I’m calling Most runs counter to these principles. This model of value is collective rather than selective, focusing on the aggregation of abundance instead of scarcity. This model scales well with the size of a discipline–the more people and texts there are, the better the information becomes.

If there’s a weakness to this model, it’s that it requires a lot more infrastructure and information to be productive. And it’s something of a chicken/egg problem: it’s hard to demonstrate the value of this model and to advocate for it without the model already in place.

Over the past couple of years, though, we’ve seen a push towards something called altmetrics. Spearheaded by scholars in library and information studies, altmetrics is a term with several different layers. On the one hand, it refers to opening up traditional measures of value for alternative research products, like datasets, visualizations, and the like. On the other, altmetric advocates have also been pushing for measures other than the traditional (and flawed) idea of journal impact factor.

But there’s another layer to this term as well, one that gets at the questions of approved genres and recognizable metrics from a more ecological perspective. That is, altmetrics is only partly about reforming our current system; it’s also encouraging us to rethink the system itself. One of the leading voices in altmetrics, Jason Priem (@jasonpriem), gave a talk at Purdue last year about something called the Decoupled Journal.

I’m embarrassed to admit that, running out of time, I swiped three of the diagrams from Jason Priem’s slideshow. I’m hoping that the links here serve as sufficient apology. I’d hoped to adapt them to my own purposes and disciplinary circumstances, but I was putting the final movement together on the road, and took the shortcut. I also quoted (and attributed) the line that we’re currently working with the best possible system of scholarly communication given 17th century technology, which got both a chuckle from the room and a fair share of retweets.

[Diagram: the traditional journal system, where each journal is responsible for each stage of the publication process]

This traditional process is a masterpiece of patterned isolation — very little interaction from journal to journal — except when journals are owned by the same corporation. There’s tremendous duplication of effort, and that effort is stretched over a long period of time in the case of most traditional journals.

Thanks to blind peer review, most reviewers are isolated from each other as well, as are the contributors. It isn’t until you reach the top of the pyramid, with the journal editor, that there’s any kind of conversation or cross-pollination of ideas and research. It’s a top-down model of scholarly communication.

I’m pretty sure I was completely off-script through here. I did remember to emphasize the idea that our current model is a “masterpiece of patterned isolation,” but I’m pretty sure that I forgot to mention at any point that PI is a phrase I’ve pulled from Gerald Graff, who borrows it from Laurence Veysey. Pretty sure I never defined it, either.

What Priem suggests is that we think about the various layers and practices independently of how they’ve traditionally been distributed. Social media in particular have opened up a number of possibilities for engaging with and assessing scholarship; rather than waiting for our journals to adopt these practices, altmetricians would have us begin exploring and applying them ourselves.

Considering that many journals treat promotion and search as somehow accomplished by their subscriber list and a conference booth or two, there’s a great deal of room for innovation. Priem describes this decoupled model as publishing a la carte. He offers an example of a writer who might place an essay in a journal, but choose a number of alternative methods for promoting it, making it findable, having it reviewed, etc.

Cheryl Ball (@s2ceball) called me out in Q&A for being too cavalier about the shift in my talk from the “Decoupled Journal” to DH Now, and she was absolutely right. What I was trying to get at here was the way that DH Now starts from a different set of questions, and thus “couples” the process in a way that accomplishes both “Most” and “Best” without buying into the traditional model of what a journal should look like. This is also where my script simply turns into talking points.

DH Now

  • Aggregation of tweets, calls for papers, job announcements, and resources
  • Monitor the network for spikes in activity (a rough sketch of this step follows the list)
  • Cross-post and RT the things that folks are paying attention to
  • Added layer of publication: Editors’ Choice
  • EC texts are solicited for the quarterly Journal of Digital Humanities
  • Publication is the final step of a fairly simple, but widely distributed process
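To make the “spikes in activity” step a bit more concrete, here’s a toy sketch. It is emphatically not DH Now’s actual pipeline; the (timestamp, url) mention events are hypothetical, and all it does is flag items whose mentions over the past day outrun their recent baseline:

```python
# Toy spike detector over an aggregated stream of mentions (tweets, posts, etc.).
# Input is a hypothetical list of (timestamp, url) pairs.
from collections import Counter
from datetime import datetime, timedelta

def find_spikes(mentions, now, window=timedelta(days=1),
                baseline=timedelta(days=7), ratio=3.0, minimum=5):
    """Return (url, count) pairs whose mentions in the last `window` exceed
    `ratio` times their average daily rate over the preceding baseline period."""
    recent, earlier = Counter(), Counter()
    for ts, url in mentions:
        if ts > now - window:
            recent[url] += 1
        elif ts > now - baseline:
            earlier[url] += 1

    baseline_days = (baseline - window).days or 1
    spikes = []
    for url, count in recent.items():
        avg_per_day = earlier[url] / baseline_days
        if count >= minimum and count > ratio * max(avg_per_day, 1):
            spikes.append((url, count))
    return sorted(spikes, key=lambda pair: pair[1], reverse=True)

# Made-up example: a post mentioned a dozen times in the past twelve hours.
now = datetime(2013, 2, 1)
mentions = [(now - timedelta(hours=h), "http://example.com/post") for h in range(12)]
print(find_spikes(mentions, now))  # -> [('http://example.com/post', 12)]
```

Whatever crosses a threshold like this becomes a candidate for cross-posting or RTing, with the Editors’ Choice layer adding human judgment back into the process.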

Why it’s useful as a model

  • Process produces a much closer relationship between the map and the territory – DH Now vs DH 2 years ago
  • Driven by engagement and usefulness to the community rather than obscure standards administered anonymously
  • That engagement occurs much more quickly, and at all stages of the process — publication becomes evidence of engagement rather than a precondition for it

Conclusion

 

Deleted Scenes

You might notice that there’s no reference at all here to my original title. As I was planning the talk, I’d intended to frame it with a three-part movement from invisible college to discipline to what I was calling the “n-visible college” (network-visible), a model of scholarly communication that preserved disciplinary memory but not at the cost of innovation and circulation. My CCCC talk from several years ago (this link is to a QuickTime movie of the talk) gestures towards that model, but to my mind Priem’s work in particular gets at it much more concretely.

Also, I didn’t emphasize enough the degree to which our traditional model has as a default the silos or smokestacks that separate us all (journals, disciplines, individuals) from each other. This may be part of my underemphasis on patterned isolation. Our current model keeps us silo’ed by default–to my mind, open access isn’t just a matter of taking down those walls, but of reimagining the system that defaults to silo in the first place. Nate Kreuter (@lawnsports) talked a little bit about this the following day in terms of considering both material and cognitive accessibility. We can make our work open materially, but without an infrastructure that makes it accessible cognitively, we’re not doing as much as we should be.

I spent way too much of my inventional time trying to figure out how to incorporate an essay from Noah Wardrip-Fruin on interface effects. It fit with what I wanted to say, which is that the “interface” of the Braddock essays fails us in that it gives very little sense of the complexity of the field that it represents. Love this line from NWF: “Just as play will unmask a simple process with more complex pretensions, so play with a fascinating system will lack all fascination if the system’s operations are too well hidden from the audience.” My point is that, at a certain disciplinary scale, holding to notions of the “Best” ends up occluding more of our scholarly activity than it reveals. It’s a point I like, but I just couldn’t get there from here. And it may be too intricate for the kind of talk that I give.

Finally, Nathaniel Rivers (@sophist_monster) asked a good question in Q&A that went something like “what happens if you succeed?” In other words, have we really thought through what it would mean if Facebook likes or retweets acquired currency? My answer was that it was less about replacing our current system with a better one than it was about opening the system up to the competing models of value that are currently neglected.

And that’s really all that I can remember. Like I said, this is a fair approximation, although not likely a perfect one. If you’ve read this far, thanks! And thanks to all those who chatted with me before, during, and after.

 

Will blog for tenure?

It’s been a while, but I need to get back into my blogspace. I’ve got some thoughts to share about the brand new semester, but I thought I’d start the year by retrieving an old post from the first incarnation of my blog. I shared a link on Twitter this morning to an entry where I posted a draft of the statement I included in my tenure portfolio, about the role that blogging played in my academic work and why it should be considered in my tenure case. While it made me a bit anxious to think of my colleagues across the college reading some of my sillier and/or snarkier posts, I certainly believed what I wrote about the value of academic blogging. In the wake of the push towards the digital humanities, it only makes more sense now.

For what it’s worth, I did indeed receive tenure, and the committee took this statement seriously. What follows was a draft of what I included in my portfolio–I did revise it a bit, in part according to comments that I received both in the entry and privately.

__

“We’ll see how this flies”
Collin vs Blog, 26 August 2006

I’ve spent the past few days finishing up the overview document for my tenure case, known affectionately across the campus as “Form A.” The form closes by asking for “additional information” that might be helpful in evaluating one’s work. Here’s what I put:

In a conversation with one of the members of the search committee that recommended my appointment at Syracuse, after I arrived in the Writing Program, I learned that this particular committee member had three criteria for each of the candidates. This person explained that each candidate was expected to make technology their primary area of scholarly inquiry, to be able to apply it in and to their pedagogy, and, just as importantly, to be a practicing user of technologies. While I believe that this form documents my achievements in the first two areas, I want to discuss that third area briefly.

In the field of rhetoric and composition, a field devoted to the study and teaching of writing, there is a sense in which we are practitioners of that which we study. But for those of us who choose to specialize further in the study of information and communication technologies as they impact writing, practice is not only essential, but it brings added pressures as well. In addition to staying abreast of developments in our field, we are obligated to remain familiar with developments outside of academia, to be practicing technologists as well as scholars, pedagogues, and colleagues. However, the criteria by which tenure and promotion are determined do not easily admit this fourth category, partly because it is a difficult one to measure. The proficient use of technologies does not fit into any of the three categories, but it is not entirely separable from them, either. I have spent hours learning software in order to write multimedia essays, familiarized myself with various research and productivity tools in order to help students become more proficient at online research, and drawn on my understanding of spreadsheets, databases, and web design in order to improve the performance of the graduate office. But I also engage in activities that cannot easily be reduced to scholarship, teaching, and service.

It is in this context that I wish to call attention to my activity as an academic blogger. I started a weblog (Collin vs. Blog) in August of 2003, and in the three years I have spent writing and maintaining it, it has become an integral part of my academic practice. I use it as a place to work through ideas that will eventually be turned into published scholarship, to reflect upon teaching practices, and to connect with colleagues both local and distant. In roughly 20 months of tracking site traffic, my site has received close to 75,000 unique visits and over 100,000 pageviews, averaging 144 visits and 199 views daily since January of 2005. In the summer of 2005, I received my discipline’s award for Best Academic Weblog. In short, maintaining a weblog has raised my profile, both within my discipline and beyond it, far more than any course I might teach or article I might publish. And in doing so, it raises the profile of Syracuse and of the Writing Program in a fashion that I believe to be positive.

In recent years, there have been high-profile tenure cases where applicants have offered their technological work in lieu of activity more easily categorized in traditional terms; that is not my intent here. I feel that my scholarship, teaching, and service stand on their own. But in a year when Syracuse is actively pursuing and promoting the idea of “scholarship in action,” it strikes me as particularly important to include this form of public writing as part of my activity as a member of the Syracuse University faculty. At a time when much of the discussion surrounding academic weblogs focuses on the risks of representing oneself publicly as anything more than the sum total of items on a vita, I feel that it’s important to acknowledge the positive, productive impact that blogging has had upon my academic career. My weblog is not a strictly academic space, any more than my life is consumed with purely academic concerns. But it adds a dimension to my contributions here at Syracuse, both as a writer and as someone who studies technology, that would be difficult to duplicate within the categories articulated in this form.

—–

I’ll be sure to let you know how it goes.
