Archives: if i were in charge

In-Comprehensive-le

So I’ve got a question, but it’ll take me a while to get to it.

A few years back, here at Syracuse, we overhauled our graduate curriculum, paying particular attention to the core curriculum and comprehensive examinations. When I arrived at Syracuse, our comps process was a fairly traditional major-minor-minor setup. The exams themselves were varied, in a well-intentioned attempt to account for a variety of writing processes, but the idea of generating 3 lists, writing, and being examined over the results is fairly standard practice. There are virtues to this model, not the least of which is that it gives students experience assembling bibliographies, time to read, and ideally, a more focused mastery over the field than perhaps can be achieved in coursework.

Nevertheless, there were problems. The written examination is a genre that only bears tangential resemblance to the writing that we do in our careers. If anything, it intensifies the event model of writing that seminar papers habituate in us (read, read, read, read, then write in a very short burst of eventfulness), rather than moving students towards a more integrated process of research/writing. We also found that, as an unfamiliar genre, the exams themselves produced no small amount of anxiety, and often took up more time than they perhaps deserved in a program where funding was limited. Finally, those kinds of exams are difficult to articulate with coursework–what exactly is being “tested” if the lists are student-generated and potentially have little to do with their prior 2 years of study?

We made some changes, and you can visit the CCR webpage on the topic if you’d like to see more detail than I’ll provide here. We shifted our comprehensives to more of a portfolio model, where the students complete a 2-part major exam on a shared list, produce an article that is publication-ready, and assemble a minor list’s worth of texts into an annotated bibliography that will ideally form the core of their dissertation’s bibliography. The faculty agreed that, when they are scheduled to teach core courses, they will include at least a few of the works from the shared list in their syllabi, so that the exams cover readings and ideas that students have been grappling with during their coursework. The exam questions are written and approved by the graduate faculty, students take the major exam just prior to the beginning of their third year, and their exams are read by a committee of those faculty from whom they’ve taken their core courses.

Speaking solely for myself, and as someone who played a role in this revision, I’m pretty happy with our new model, for a number of reasons. My question isn’t about the model itself, however, just one piece of it. You see, the first iteration of the shared “major” list was assembled pretty hastily. Students receive the list when they start the program, so that they know ahead of time what will be covered when they take their exams. Our overhaul took place in the early summer, and so we needed a list quickly, in time for that year’s cohort of incoming students. Since portions of the list would partially dictate the content of our six core courses, we simply asked the six faculty who had most recently taught those courses to suggest 7-8 works each (3-4 articles/chapters = 1 book = 1 work). It was not an ideal approach, but necessity was in this case invention’s parent. A few years later, having seen first-hand some of the weaknesses with this approach, we’re going to be revising our list (a link to which is available at the page linked above). But it’s really a tricky proposition. To wit:

Hypothesis 1: The shared, “major” list should provide graduate students with a representation of the discipline.

Hypothesis 1a: The major list should reflect the field, but especially the varied areas of expertise of the faculty who will be rotating through the core courses, and evaluating the examinations themselves.

Hypothesis 2: The major list should reflect the work students undertake in their courses, and needs to do so insofar as faculty draw from the list for those course readings (Our current core: Composition, Ancient Rhetoric, Contemporary Rhetoric, Methods, Technology, Pedagogy).

Hypothesis 3: Works on the major list should lend themselves to intertextual conversations, insofar as most exam questions ask students to discuss multiple texts in their answers.

Hypothesis 4: Even as we acknowledge that the “categories” represented in the core are neither permanent nor stable, works on the list should help students draw connections among those categories and to areas of inquiry that aren’t explicitly covered in the core.

Hypothesis 5: The list should strive towards a balance of the canonical and the contemporary, and undergo regular revision as those categories shift. It should be responsive to the discipline.

Taken individually, each of these hypotheses seems eminently reasonable to me. Each operates under certain constraints, but I don’t look at any of these and say to myself that they don’t make sense in the context of our current model. We tried quite hard to reflect program standards as well as the immediate and long-term needs of our students, and to work towards an exam process that carefully managed the transition from coursework to dissertation.

Taken as a group, though, these hypotheses are making my brain hurt.

Intellectually, I know that the multiple functions of our exam process operate according to different sets of criteria, and that those criteria are not consistent with one another. The list is necessarily a reduction of a reduction (the core courses), but it’s not simply a question of representation because the list is also a tool for the thinking that we ask our students to do. And more abstractly, it’s a site for shaping (and for revisiting) the values of our program in the form of debates over its contents. So it shouldn’t be easy.

My question isn’t really about what belongs on this list. Instead, I’m perplexed about how we can even begin. Seriously. When we designed the original list, it was divided into a set of much more constrained questions. My own task at the time was to provide a short list of works that would be appropriate for our course in contemporary rhetoric, texts from which I could teach that course. I didn’t think about anything other than the specific task. Easy, breezy. Approaching it as a global question, though, is a real conundrum to me.

What the heck should we do?

I’ll tell you what my semi-serious suggestion was: I suggested a variation on the desert island question, a variation that I used to use with my FYC courses. I would have them list what they thought were the five most important events to happen in their lifetimes, and then I would ask them to make a second list of the five most important events in their lives. It’s kind of a fun exercise that raises questions of personal vs public significance. So I said that we should ask all the faculty and graduate students to make 2 lists: first, list five texts that you assume just about everyone in the field has read; second, list five works that have been absolutely crucial for your own thinking. Then, aggregate those lists, and see where you’re at. I have no earthly idea what the next step would be, but we’d have something to work with and perhaps build on.
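(For what it’s worth, the aggregation step is the easy part. Here’s a minimal sketch, in Python and with invented names and titles, of what tallying those two lists across respondents might look like; the hard part is deciding what the counts actually mean.)

```python
from collections import Counter

# Hypothetical submissions (names and titles invented): each person lists
# five texts they assume "everyone in the field has read" and five that
# have been crucial to their own thinking.
submissions = {
    "faculty_a": {
        "everyone": ["Text A", "Text B", "Text C", "Text D", "Text E"],
        "crucial":  ["Text B", "Text F", "Text G", "Text H", "Text I"],
    },
    "student_x": {
        "everyone": ["Text A", "Text C", "Text J", "Text K", "Text L"],
        "crucial":  ["Text G", "Text M", "Text N", "Text O", "Text P"],
    },
}

everyone_counts = Counter()
crucial_counts = Counter()
for lists in submissions.values():
    everyone_counts.update(lists["everyone"])
    crucial_counts.update(lists["crucial"])

# Rank by combined frequency, but keep the two tallies visible so the
# "public" and "personal" signals can be compared rather than collapsed.
combined = everyone_counts + crucial_counts
for title, total in combined.most_common(20):
    print(f"{title}: {total} (everyone={everyone_counts[title]}, crucial={crucial_counts[title]})")
```

Where the two tallies diverge is probably where the interesting conversations would start.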

There’s something that feels a little wonky to me about proceeding this way, but at the same time, I’m struggling to come up with a better strategy.

 

Dossiers (sigh)

Winter is coming, and around the country, a host of applicants are diligently applying their talents to the assembly of materials that they hope will demonstrate their suitability for what few tenure-track positions are available. That means that anxiety is on the rise, as are the blog posts that castigate the academy for its inconsistencies, its capriciousness, and above all, its indifference.

The problem: partly because of their relative rarity, partly because of the varied institutional expectations, and partly because they are conducted by volunteers who themselves have other full-time positions, academic job searches are often more complicated than they need to be. To be honest, I think the single most relevant reason for this, however, is not cruelty but inexperience. Every year, I’d guess that there are a lot of programs (and search committee chairs) who basically find themselves reinventing the wheel. They rely upon their own (often limited) experience, local history, and whatever advice they’re able to track down. And the result is that it can seem like every single job ad comes with slightly different expectations, particularly at the dossier stage, from every other one.

Each of us doubtless has a story about that weird 1-2 page document that we had to write for one school, answering a question that no one else in the country was asking. And because the other applications we have to assemble all blur together, we remember the outliers, and tend to think of them as representative of a broken system. And that’s true to a degree, I suppose, especially if the others each have their one weird document. But really, search committees are just doing what they think they need to in order to find the best person. It’s not a massive plot to make our holiday conversations more awkward.

Rather than waiting for every department in the country to become, spontaneously and simultaneously, just like every other one in its expectations, it seems to me that we ought to be moving up in scale to our professional organizations. Said organizations could easily put together a committee composed of experienced faculty from a range of institutions, whose job it would be to survey the field, then develop and publish a guide for job searches. That guide could be a valuable resource for inexperienced search committees in a couple of ways.

First, it might establish a consensus, baseline expectation for things like dossiers, and what it’s appropriate to ask for at the different stages of a search. Second, it would provide documentation that could be deployed if and when outside forces (from well-meaning colleagues to HR departments) think to insist on non-standard expectations. It wouldn’t have binding force, certainly, but sometimes all that’s needed is documentation of “field standards” to place the burden of proof on those who would complicate the process unnecessarily. Third, it would provide some needed structure/guidance for applicants, those people for whom the process both has the highest stakes and is least familiar.

The two recommendations that I’d make are fairly simple ones. First, the whole idea of the dossier as a stage in the application process developed at a time when our most efficient communications network was the US Postal Service. It made a great deal of sense to ask for everything at once, rather than having candidates submit material piecemeal. Hell, my first search, I had no access to a central service–I had to send stacks of stamped envelopes to my recommenders along with a sheet of pre-populated mailing labels. Ugh. But my point here is that a great deal of what goes into the dossier (writing samples, sample syllabi, research and teaching statements, e.g.) can be stored online, and downloaded directly by those institutions interested in additional materials. If a candidate is set up on WordPress, she might easily create Pages for each institution that link specifically and solely to those materials they request. Nothing could be easier (or less expensive). While the confidentiality of recommendation letters would still require a service like Interfolio, adding that service as an additional layer (of potential mishap) for something that candidates could attach to an email or for which they could just as easily provide a simple URL makes little sense to me.

The ease with which most of these documents can be submitted is connected to my second suggestion, and that’s that we take a close look at the dossier and restrict its domain to those items that are necessary for us to do our work. For every tenure-track search I’ve ever been a part of, the dossier is a tool with which the committee moves from its initial shortlist to the shorterlist of candidates whom they will interview. That’s it. Rather than imagining the dossier as “everything you could possibly ask for,” we should be asking, “what information do we need to decide whom to interview?” For me, that list is actually pretty short: I feel like I can get a pretty good sense of most candidates from a writing sample, 1-2 sample syllabi (perhaps one writing course, one graduate-level content course), a research statement, a teaching philosophy statement, and their recommendation letters. (I already have their letters and vitae.) I would want to know more before I hired somebody, but that’s what interviews, additional potential materials, followup contacts with references, and campus visits are for. And that’s a lot of additional contact, interaction, and information. I’ve never heard of someone being hired directly from a dossier, so I’m not sure why it’s gotten to the point where it’s this sprawling, no-two-the-same mountain of make-work. The searches I’ve been a part of have all had 4 stages (application, additional materials, interview, campus visits), and three of those stages are pretty standardized. I think it’s well past time the 2nd one was as well. It just needs a little leadership.

One final note: I can only speak to my own experience here, and I work in a field that’s not been hit as hard by the decline in tenure-track positions. When I was prioritizing my applications, though, I was much less likely to apply to jobs whose requirements involved uniquely specific documents; when I had to choose between working on a research statement that was going to be a part of 20 applications and crafting a 2-3 page snowflake that was only required by one application, I chose to optimize what time I had. Not everyone has that luxury, I understand that. But those kinds of requirements are just as likely to chase off qualified applicants as they are to unearth some deeply hidden jewel of insight that couldn’t have been gleaned otherwise.

Image Credit: “Rusty’s Files” by Flickr user Kate Haskell (CC)

Reading Notes

There’s a phrase that I got from Laurence Veysey, via Gerald Graff (although it appears in other places as well): “patterned isolation.” Veysey uses the phrase to explain the growth of the modern university and the way that disciplines grew without engaging each other, but I tend to apply it on a more “micro” scale. That is, there are many things we do as teachers and scholars in patterned isolation from our colleagues, tasks that call upon us to reinvent wheels over and over in isolation from one another. Fortunately, with the Internet and all, much of that is changing, as folks share syllabi, bibliographies, and the like online.

But I’m constantly on the lookout for ways to short circuit patterned isolation. For me, reading notes are one of those sites. I’m not a great note-taker and never have been–I’m too reliant on visual/spatial memory and marginalia. Disconnecting my own notes from the physical artifacts that I was processing didn’t make sense. Now, of course, I’m lucky sometimes if I remember having read a book, much less what I scrawled in its margins, so I wish that I’d been better about taking notes and I admire those people who have already internalized the lesson that it took me 20+ years to figure out. So one of the things that I like to do in my graduate courses is to aggregate the note-taking process. Rather than asking or expecting each student to take a full set of reading notes for the course, I rotate the function among them. There’s nothing to stop a student from taking more detailed notes on his or her own, but I want all my students, when they leave the class, to be able to take with them a full set of notes that they can refer back to later.

For me, the trick to this is making the notes relatively uniform–there’s a part of me that resists this, because different folks/strokes and all that, but I think it’s important to make the notes themselves scannable and consistent. Also, I think that the process needs to be sustainable–part of the challenge of reading notes is that they tend to shrink or expand based on available time, and they shouldn’t. The notes should be brief enough that one can execute them quickly (when time is short) but elaborate enough that they’re useful 5 or 10 years down the road when the text has left one’s memory. For me, this means keeping them to about a page, and doing them in a format that should take no more than about 15 minutes for an article or chapter. So here’s what I’ll be asking my students this semester to do for their reading notes:

  • Lastname, Firstname. Title.
  • Full MLA citation of article/chapter (something that can be copy/pasted into a bibliography)
  • Abstract (50-75 words, copy/pasted if the original already has an abstract)
  • Keywords/tags (important terminology, methodology, materials, theoretical underpinnings)
  • 2-3 “key cites” – whose thoughts/texts does this article rely upon
  • 2-3 “crucial quotes” – copy/paste or retype the 2-3 most important passages from the essay
  • 1-2 questions – either questions that are answered by the text, or questions it raises for further exploration

And that’s it. I’ve futzed around with different categories, but these are the ones that have stuck with me through multiple iterations of this assignment. The notes aren’t meant to be a substitute for reading, but they should provide the basic info about the article as well as some indication of how it links to others. And it’s meant to be quick.
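(And because the whole point is aggregation, it wouldn’t take much to keep the class’s notes machine-sortable. Here’s a minimal sketch in Python, with an invented entry, of how the fields above might be stored and then filtered by keyword; nothing about the assignment depends on it, but it shows how little structure is actually required.)

```python
from dataclasses import dataclass, field

@dataclass
class ReadingNote:
    # Fields mirror the assignment: citation, abstract, keywords,
    # key cites, crucial quotes, and questions.
    citation: str
    abstract: str
    keywords: list = field(default_factory=list)
    key_cites: list = field(default_factory=list)
    crucial_quotes: list = field(default_factory=list)
    questions: list = field(default_factory=list)

# A made-up entry, just to show the shape of the thing.
notes = [
    ReadingNote(
        citation='Lastname, Firstname. "Article Title." Journal 1.1 (2013): 1-20.',
        abstract="Fifty to seventy-five words summarizing the argument.",
        keywords=["methodology", "networks"],
        key_cites=["Author One", "Author Two"],
        crucial_quotes=["The one passage you will want to requote later."],
        questions=["What would this argument look like at a different scale?"],
    ),
]

def by_keyword(notes, term):
    """Pull every note in the shared set that is tagged with a given keyword."""
    return [n for n in notes if term in n.keywords]

print([n.citation for n in by_keyword(notes, "networks")])
```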

This seems really obvious to me now but I can tell you that, when I was in graduate school, it wouldn’t have been. I can’t tell you how happy it would make me now to have taken notes like these on everything I’ve read since grad school. Especially for all those things that I’ve forgotten I’ve even read.

Getting Wet

I posted to FB yesterday about the MLA’s proposed ‘redistricting,’ to be found at An Open Discussion of MLA Group Structure, observing that the tactical nature of my relationship to MLA made it impossible for me to comment there. Partly because the discussion was picked up in a number of other places, and partly because it’s hard for me to snark without putting at least equal effort into more constructive ends, I’ll say a few words here, and post it for MLA folk to see.

The title for this post came to me as I was thinking about the viability of MLA as an “umbrella organization,” which of course sent me to song lyrics about umbrellas. While we might indeed say that it’s raining now more than ever, I eventually settled on a classic, from ‘Every Little Thing She Does is Magic’ by the Police:

Do I have to tell the story
Of a thousand rainy days since we first met
It’s a big enough umbrella
But it’s always me that ends up getting wet

I always liked that song. Anyway, the TLDR here is that the proposed change to group structure for MLA has not gone as far as it might to “reflect changes in curricula and scholarly commitments among our members.” Despite a disproportionate number of jobs, some 90+ PhD programs, and dozens of journals, rhetoric and composition (not to mention a range of other areas both coordinate and subordinate) remains buried at the lower levels of the redesigned groups, a sub-speciality among many (smaller) others, without any acknowledgment of its own conceptual breadth. It is tucked there under “Language Studies,” a category that could easily cover the entire rest of the groups, alongside several categories of linguistics work.

The problem is something of a chicken-egg situation: does MLA valuation of R/C reflect its membership? Or does the tactical membership of R/C folk (many of whom, like myself, only member up when we absolutely must) produce or result in that valuation? Perhaps a little of both. We can say for certain, though, that this map sorely underestimates the “scholarly commitments” represented annually in the job list. Over the past decade, according to the MLA reports on the job list, some 50% of the available positions every year come in Rhetoric and Composition, Business and Technical Writing, and Creative Writing. R/C floats at around 30%, give or take. And the number of panels relevant to those fields at MLA probably runs at about 1-2%. As a consequence, as someone in R/C, MLA is where I must go to interview but little else. We might also cite the percentage of MLA books about R/C, or the number of articles appearing in PMLA on R/C topics, etc.

For some of us, I suspect that this doesn’t matter much. And really, for the most part, it has little effect on my day-to-day life. For our last search, we bypassed MLA altogether, moving from Skype interviews to campus visits–it will not shock me if that soon becomes the norm. I’ve been interested in and grateful for the MLA’s embrace of digital humanities lately, but we’re rapidly reaching the end of the “captive audience” era for MLA. Once R/C job applicants and faculty no longer have to attend for job-related functions, MLA will slide off a lot of people’s radar. As someone in a freestanding writing program, that doesn’t really affect me. But I do understand the frustration of those who are effectively required to attend a conference, at great personal expense, that is willing to charge them money without making any sort of effort to represent their scholarly commitments. It’s a big enough umbrella.

If MLA still has those umbrella aspirations, then when stuff like this comes up, they need to range a bit. It’s hard not to look at the members of that committee that “redesigned” the groups and see the continued neglect of R/C as anything other than self-fulfilling prophecy. Here are the where’s and when’s: Cornell ’91, Yale ’90, Yale ’74, Yale ’84, Brown ’75, Stanford ’75, Columbia ’09, and Yale ’80. I’m sure that the task was daunting, and the work thankless, but really, most of us could have predicted what a “map” drawn by this committee was going to look like. And in putting that kind of committee together, MLA should not be surprised at the many, many of us who don’t see ourselves on it.

 

 

On the un/death of Goo* Read*

It feels like a particularly dark time around the Interwebz these days–I don’t think it’s just me. I’ve been studying and working with new media for a long time now, and so you’d think that I’d be sufficiently inured to bad news. For whatever reason, though, it feels like the hits have kept coming over the last month or so. There’s plenty to feel down about, but since this song is about me, I wanted to collect some of my thoughts on the changes that are happening with two pieces of my personal media ecology: the demise of Google Reader and the recent purchase of GoodReads by Amazon.

I’ve been soaking in a lot of the commentary regarding Google Reader, ever since it happened while I was at CCCC, and while there’s a lot of stuff I’m not going to cite here, there were a couple of pieces in particular that struck me as notable. I thought MG Siegler’s piece in TechCrunch was good, comparing Google Reader’s role in the larger ecosystem to that of bees. Actually, scratch that. We’re the bees, I think, and maybe Reader is the hive? Anyway, my distress over the death of Reader is less about the tool itself and more about the standard (RSS/Atom) it was built to support. I understand why there are folk who turned away from RSS because it turned the web into another plain-text inbox for them to manage, but as Marco Arment (of Instapaper fame) observes, that was less a feature than user misunderstanding. As more folks turn away from RSS, though, Arment suggests, we run the risk of “cutting off” the long tail:

In a world where RSS readers are “dead”, it would be much harder for new sites to develop and maintain an audience, and it would be much harder for readers and writers to follow a diverse pool of ideas and source material. Both sides would do themselves a great disservice by promoting, accelerating, or glorifying the death of RSS readers.

The larger problem here is that, in our Filter Bubble world of quantified selves and personalized searches, this kind of diversity is an ideal that is far more difficult to monetize (a word I can barely think without scare quotes). Most folk I know now use FB and T as their default feed readers; I’m old-fashioned, I think, in that I tend to subscribe to sites through Reader if the content I find through FB/T is interesting to me. It may be equally quaint of me to like having a space where the long tail isn’t completely overwhelmed by advertisements, image macros, and selfies. I get that.

Where the Reader hullabaloo connects for me with the recent Amazonnexation of GoodReads is Ed Bott’s discussion of Google’s strategy at ZDNet–again, the point isn’t so much that Reader itself is gone, but that Google basically took over the RSS reader market, stultified it, and is now abandoning it. It’s hard not to see the echo of “embrace, extend, extinguish” operating in Amazon’s strategic purchase. The folks at Goodreads deserve all the good they’re getting, and they’re saying the right things, of course, but this is Amazon’s third crack at this, if you count Shelfari and their minority stake in LibraryThing, and it’s hard for me not to see a lot of Goodreads value in terms of their independence from Amazon. Amazon’s interest in Kindle has kept them from any kind of iOS innovation, and it’s almost impossible for me to imagine them keeping their hands out of the “data” represented by the reviews and shelves of Goodreads members. While Amazon’s emphasis on “community rather than content” was revolutionary once upon a time, their reviews are now riddled with problems (and that’s when they’re not being remixed as admittedly hilarious performance spaces). It was the absence of commercialism and gamification that made Goodreads a somewhat valuable source of information for me–despite their best intentions, I doubt that will last.

If I had to wager, I’d guess that within a year or so, the G icon will turn into an A, and the app will be retuned to provide mobile access to the broader site primarily. GR’s model of personal shelves will be integrated onto the site with A’s wish lists, and people who liked that will also like 50 Shades of DaVinci Pottery. Startups will focus their energies on softer markets, as they did when Google “embraced” RSS, and we’ll probably be the poorer for it.

Back in the day, I remember feeling the full cycle of excitement when Yahoo absorbed sites like Flickr and Delicious, hope that they would help them go to the next level, and disappointment as they proceeded to neglect them into has-been status. I feel like we’ve been burned often enough now to be able to move pretty much straight from the purchase announcement to the disappointment. And I still don’t begrudge these folks the rewards they’ve earned. But, I’ve got to be honest–this “do no harm” stuff sounds a lot like the squeaky hinges on the barn door closing after the horses have bolted.

Networked Humanities @ UKentucky (#nhuk), Spring 2013

What follows is a fairly rough approximation of the talk that I gave at the UK Networked Humanities Conference (#nhuk) in February, 2013. I don’t usually script out my talks in quite the level of detail that I have below, but this time out, I struggled to get my thoughts together, and scripting seemed to help. As usually happens with me, though, I went off-script early and often.

Also, I use a lot of slide builds to help pace myself, so I’m not providing a full slide deck here. Instead, I’m inserting slides where they feel necessary, and removing my deck cues from the script itself. I’m also interspersing some comments, based on the performance itself.

I’ll start with the panel proposal that Casey Boyle (@caseyboyle), Brian McNely (@bmcnely) and I put together:

Title: Networks as Infrastructure: Attunement, Altmetrics, Ambience

Panel Abstract:  In his early 2012 discussion of the digital humanities, Stanley Fish examines a number of recent publications in the field, and arrives at the conclusion that DH is not only political but “theological:”

The vision is theological because it promises to liberate us from the confines of the linear, temporal medium in the context of which knowledge is discrete, partial and situated — knowledge at this time and this place experienced by this limited being — and deliver us into a spatial universe where knowledge is everywhere available in a full and immediate presence to which everyone has access as a node or relay in the meaning-producing system.

Fish connects this diagnosis of DH with Robert Coover’s 20-year-old call for the “End of Books,” itself once a clarion call to practitioners and theorists of hypertext. While there is perhaps an implicit promise in some digital humanities work that we will be able to move beyond and/or better our current circumstances, we would argue that Fish’s dismissal of that work is misguided at best. The digital humanities broadly, and this panel more specifically, offers a careful revaluation of our current practices, treating the social, material, and intellectual infrastructures of the academy as objects of inquiry and transformation. Rather than seeing these long-invisible networks as given or as something to be “ended” and transcended, we follow Cathy Davidson’s call to engage with them critically, “to reconsider the most cherished assumptions and structures of [our] discipline[s].”

And here’s my contribution to this panel, titled “The N-Visible College: Trading in our Citations for RTs”

I want to start today by referencing two essays that I published last year. The first is an essay called “Discipline and Publish: Reading and Writing the Scholarly Network.” It appears in a collection edited by Sidney Dobrin called Ecology, Writing Theory and New Media, published by Routledge.

The second essay is a bit shorter, and appeared on my blog in July of last year. “Cs Just Not That Into You” was a response to the acceptance notifications from 4Cs, the annual conference in rhetoric and composition. In the space of roughly 24 hours, this essay received over 500 views, 100 comments on Facebook and my blog, and at least 20 retweets and shares.

Both essays grew out of my interest in the intellectual and organizational infrastructures of disciplines, and what network studies can tell us about those structures, but when it comes to my department and my discipline, only one of these essays really counts. I don’t think I’m revealing any great secrets when I say that only one of them appeared in my annual review form this year.

As folks who were there can attest, I actually scrapped this intro on the fly, since Byron Hawk’s presentation actually referenced “Discipline and Publish.” So instead I went meta, talking about how I was going to open the talk, right up until the point that Byron ruined it for me by citing the very essay I was going to claim would never be cited.

I’d be lying if I didn’t admit that I find disappointing the fact that more people read and engaged with that blog post than will likely ever read “Discipline and Publish.” Routledge has library-priced the collection at a cool $130, meaning that it won’t be taught in courses, at least not legally. As much as I like the collection, I can hardly recommend it to friends and colleagues at that price. From the perspective of my institution, “Discipline and Publish” was probably my best piece of scholarship last year, but it’s also the one with the least value to me personally and professionally.

This question of relative value is one that has frustrated those of us who work closely in and with new media for years. There has been some progress in the form of organizational statements about the value of technology work, and some of us have pushed at the boundaries of our individual tenure and promotion requirements. But that progress has been slow, not the least reason for which is the patterned isolation that separates our disciplines, our institutions, and our departments. But I’ll get to that in a couple of minutes.

First, I want to offer one more example that raises this question of value. When Derek Mueller (@derekmueller) and I worked on the online archive for College Composition and Communication, one of the things we did was to track the internal citations of that journal. That is, whenever one essay from the journal cited another, we linked them together. So this list represents the 11 most frequently cited essays from CCC spanning roughly a 20-year period, ranked by the number of citations.
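(Aside: the tracking itself was nothing fancy. A minimal sketch, with invented essay titles rather than our actual CCC data, captures the idea well enough.)

```python
from collections import Counter

# Hypothetical internal-citation pairs: (citing essay, cited essay).
# Titles are invented; in the archive, each pair records one CCC essay
# citing another CCC essay in its works cited.
citations = [
    ("Essay D", "Essay A"),
    ("Essay E", "Essay A"),
    ("Essay E", "Essay B"),
    ("Essay F", "Essay A"),
    ("Essay F", "Essay B"),
    ("Essay G", "Essay C"),
]

cited_counts = Counter(cited for _, cited in citations)

# The "most frequently cited" list is just the top of this tally.
for essay, count in cited_counts.most_common(11):
    print(f"{essay}: cited {count} times within the journal")
```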

Compare that to the list of Braddock Award winners from the same time period. The Braddock Award is the prize for the best essay published in CCC in a given year. As you can see, there is some overlap between the lists, but there are substantial differences as well.

 

Lists of Internal Cites vs Braddock Awards

[beyond my scope today, but it’s interesting to think about how and why an essay might show up on one list and not the other, or on both lists]

For my purposes today, one of the most important differences between these lists is that our institutions are far more likely to recognize the Braddock Award as a sign of value than a handful of citations.

All of this is not to say that one of these lists is better than the other, but rather that we have different measures for value. If we think about this in terms of culture industries, this idea makes complete sense. There’s a substantial difference between the books that win literary prizes and those that top the New York Times bestseller lists. The top grossing movies for a given year don’t usually receive Academy Award nominations. We’re both comfortable and conversant with the different definitions of value that these lists and awards represent.

I had several slides here to slow myself down, with book covers and movie posters to help the contrast below between Best and Most. Copyright violations, every last one of them. So use your imagination. :)

This is horribly overreductive, I know, but I’ve been thinking about these two models of value as Best and Most. And much of our intellectual infrastructure in the humanities is targeted towards the idea of Best:

It operates on the principles of scarcity and selectivity, with the goal of establishing a stable, centralized core of recognizable activity. This model is a particularly strong one for fields in the early stages of disciplinarity–it creates shared values and history, as well as an institutional memory. On top of that, it’s a pretty easy model to implement.

But this model has some weaknesses as well. The larger a field becomes, the harder this model is to maintain. The center becomes conservative, slow to recognize or respond to change, and less and less representative of the majority of its members.

In some ways, what I’m calling Most runs counter to these principles. This model of value is collective rather than selective, focusing on the aggregation of abundance instead of scarcity. This model scales well with the size of a discipline–the more people and texts there are, the better the information becomes.

If there’s a weakness to this model, it’s that it requires a lot more infrastructure and information to be productive. And it’s something of a chicken/egg problem: it’s hard to demonstrate the value of this model and to advocate for it without the model already in place.

Over the past couple of years though, we’ve seen a push towards something called altmetrics. Spearheaded by scholars in library and information studies, altmetrics is a term with several different layers. On the one hand, it refers to opening up traditional measures of value for alternative research products, like datasets, visualizations, and the like. On the other, altmetric advocates have also been pushing for measures other than the traditional (and flawed) idea of journal impact factor.

But there’s another layer to this term as well, one that gets at the questions of approved genres and recognizable metrics from a more ecological perspective. That is, altmetrics is only partly about reforming our current system; it’s also encouraging us to rethink the system itself. One of the leading voices in altmetrics, Jason Priem (@jasonpriem), gave a talk at Purdue last year where he talked about something called the Decoupled Journal.

I’m embarrassed to admit that, running out of time, I swiped three of the diagrams from Jason Priem’s slideshow. I’m hoping that the links here serve as sufficient apology. I’d hoped to adapt them to my own purposes and disciplinary circumstances, but I was putting the final movement together on the road, and took the shortcut. I also quoted (and attributed) the line that we’re currently working with the best possible system of scholarly communication given 17th century technology, which got both a chuckle from the room and a fair share of retweets.

Traditional journal system, where each journal is responsible for each stage of the publication process.

This traditional process is a masterpiece of patterned isolation — very little interaction from journal to journal — except when journals are owned by the same corporation. There’s tremendous duplication of effort, and that effort is stretched over a long period of time in the case of most traditional journals.

Thanks to blind peer review, most reviewers are isolated from each other as well, as are the contributors. It isn’t until you reach the top of the pyramid, with the journal editor, that there’s any kind of conversation or cross-pollination of ideas and research. It’s a top-down model of scholarly communication.

I’m pretty sure I was completely off-script through here. I did remember to emphasize the idea that our current model is a “masterpiece of patterned isolation,” but I’m pretty sure that I forgot to mention at any point that PI is a phrase I’ve pulled from Gerald Graff, who borrows it from Laurence Veysey. Pretty sure I never defined it, either.

What Priem suggests is that we think about the various layers and practices independently of how they’ve traditionally been distributed. Social media in particular have opened up a number of possibilities for engaging with and assessing scholarship; rather than waiting for our journals to adopt these practices, altmetricians would have us begin exploring and applying them ourselves.

Considering that many journals treat promotion and search as somehow accomplished by their subscriber list and a conference booth or two, there’s a great deal of room for innovation. Priem describes this decoupled model as publishing a la carte. He offers an example of a writer who might place an essay in a journal, but choose a number of alternative methods for promoting it, making it findable, having it reviewed, etc.

Cheryl Ball (@s2ceball) called me out in Q&A for being too cavalier about the shift in my talk from the “Decoupled Journal” to DH Now, and she was absolutely right. What I was trying to get at here was the way that DHNow starts from a different set of questions, and thus “couples” the process in a way that accomplishes both “Most” and “Best” without buying in to the traditional model of what a journal should look like. This is also where my script simply turns into talking points.

DH Now

  • Aggregation of tweets, calls for papers, job announcements, and resources
  • Monitor the network for spikes in activity
  • Cross-post and RT the things that folks are paying attention to
  • Added layer of publication: Editors’ Choice
  • EC texts are solicited for the quarterly Journal of Digital Humanities
  • Publication is the final step of a fairly simple, but widely distributed process

Why it’s useful as a model

  • Process produces much closer relationship between the map and the territory – DH Now vs DH 2 years ago
  • Driven by engagement and usefulness to the community rather than obscure standards administered anonymously
  • That engagement occurs much more quickly, and at all stages of the process — publication becomes evidence of engagement rather than a precondition for it

Conclusion

 

Deleted Scenes

You might notice that there’s no reference at all here to my original title. As I was planning the talk, I’d intended to frame it with a three-part movement from invisible college to discipline to what I was calling the “n-visible college” (network-visible), a model of scholarly communication that preserved disciplinary memory but not at the cost of innovation and circulation. My CCCC talk from several years ago (this link is to a QT movie of the talk) gestures towards that model, but to my mind Priem’s work in particular gets at it much more concretely.

Also, I didn’t emphasize enough the degree to which our traditional model has as a default the silos or smokestacks that separate us all (journals, disciplines, individuals) from each other. This may be part of my underemphasis on patterned isolation. Our current model keeps us silo’ed by default–to my mind, open access isn’t just a matter of taking down those walls, but in reimagining the system that defaults to silo in the first place. Nate Kreuter (@lawnsports) talked a little bit about this the following day in terms of considering both material and cognitive accessibility. We can make our work open materially, but without an infrastructure that makes it accessible cognitively, we’re not doing as much as we should be.

I spent way too much of my inventional time trying to figure out how to incorporate an essay from Noah Wardrip-Fruin on interface effects. It fit with what I wanted to say, which is that the “interface” of the Braddock essays fails us in that it gives very little sense of the complexity of the field that it represents. Love this line from NWF: “Just as play will unmask a simple process with more complex pretensions, so play with a fascinating system will lack all fascination if the system’s operations are too well hidden from the audience.” My point is that, at a certain disciplinary scale, holding to notions of the “Best” ends up occluding more of our scholarly activity than it reveals. It’s a point I like, but I just couldn’t get there from here. And it may be too intricate for the kind of talk that I give.

Finally, Nathaniel Rivers (@sophist_monster) asked a good question in Q&A that went something like “what happens if you succeed?” In other words, have we really thought through what it would mean if Facebook likes or retweets acquired currency? My answer was that it was less about replacing our current system with a better one than it was about opening the system up to the competing models of value that are currently neglected.

And that’s really all that I can remember. Like I said, this is a fair approximation, although not likely a perfect one. If you’ve read this far, thanks! And thanks to all those who chatted with me before, during, and after.

 

Walking the walk

You may have caught the news over Facebook or Twitter late this past week: I accepted an offer from the Rhetoric Society of America to become their inaugural Director of Electronic Resources (DER). First, I want to thank everyone who congratulated me over email, FB, Twitter, etc. I appreciate all the positive feedback and good wishes.

By the time we got to the point of the offer, it wasn’t a difficult decision at all. I really like how RSA has framed the position, imagining it as on an organizational par with a journal editorship, an ex officio board membership, etc. And there were several things, I think, that recommended me for the position–my familiarity with most things digital as well as the fact that I’m tenured/established, the support that SU was willing to provide me upon accepting the gig, and the fact that I have a good working relationship with the person who’ll be running the conference in 2014. All of those will make the position a manageable one for me.

I did spend some time this summer really thinking carefully about whether or not I wanted to dip my toe back into the pool of organizational service, though. After last year’s surgery, my health and energy levels are still somewhat precarious. I feel pretty solid now, but it takes less to incapacitate me than it used to, and taking on extra duties was something I really had to think about. I’m also in the process of getting my next major project off the ground, so that was another factor I had to take into consideration. I’m not always the greatest at long-term planning, as I have the bad habit of extrapolating my short-term free time unrealistically, without remembering the other commitments I’ve made.

I think one of the things that pushed me into applying and accepting the position, though, was that I still believe that there are a lot of changes that could take place within our professional organizations, many of which are related to the work I do in rhetoric and technology. I’m more than happy to write about those changes on my blog, and do with some frequency, but there’s also a degree to which I feel like I should be doing what I can to implement them. Some of them are small and gradual, while others will be fairly major.

I like the fact that I’ll have a chance to do this work in a context like RSA, an organization with a richer history than I think most of us truly know but one that may be a little more agile than the Big 2 (NCTE/CCCC & MLA). RSA’s been growing quite a bit over the past few years, in terms of the size of the conference, the creation of local affiliates, and efforts at internationalization (did you know that there’s an RSE?). It’s also in a fairly unique position as a well-established organization with both leadership and membership drawn from English and Communications, a relationship that varies a lot from institution to institution. All of those factors put RSA in an interesting position, one that (to me, at least) is ripe for the kind of work that the DER is meant to accomplish.

There are a lot of possible first steps, but I’m starting right now with social media (follow RSA at @rhetsoc!), with the idea that those structures will take some time to develop and take hold. They’ll also function as a loose framework for a lot of the other development that we’ll end up doing, I suspect. And I expect that I’ll be posting updates here periodically, asking for feedback on site features, wishlisting, etc.

So yeah, prepare to be directed, you electronic resources.

Migrating the MLA JIL from list to service

I left a comment over at Dave’s excellent discussion of the MLA Job Information List, and part of it was picked up at Alex’s equally worthwhile followup, so I thought I’d expand on it here. Here’s the comment I left:

I was going to make the same point that Alex makes vis a vis the costs of the JIL vs. the costs of the conference itself for both interviewers and interviewees, especially all those years that we were forced to compete with holiday travelers for both plane seats and hotel rooms. The list is the tip of a very lucrative iceberg that has supported the MLA for a long time.

I wanted to second your comments about opening up the job list database, which for all intents & purposes is the same (inc. the crappy interface) that they used in the mid-90s. A much richer set of metadata about the jobs could be gathered by MLA (and made available to searchers) if the arbitrary scarcity of the print list is set aside and MLA were to take their curatorial obligation seriously.

There’s been no small amount of buzz lately surrounding the MLA JIL, our fields’ annual posting of open academic positions. For a long time, that list has been proprietary to MLA. At one time, institutions paid to have their positions appear in the list, and prospective applicants paid for a print copy of it. I don’t remember the exact year that the online database version of the JIL went live, but I’m tempted to place it at around 1996 or 1997. At that time, database access was granted to those who’d purchased the print version. It’s never been an especially elegant solution, as some of us would gladly have paid less money and forgone the print version. And as many people have observed in this discussion, the JIL is a monetary imposition on those who are least able to bear it, (often debt-ridden) graduate students and/or contingent faculty, at a time when the costs of application (mailing, copying, dressing, traveling, boarding, eating) are already substantial.

MLA appears to be taking steps to mitigate some of these costs (open JIL, Interfolio), which is good. But for me, there’s an additional layer that needs to be addressed. One of the arguments that MLA made in opening up access to their publications is that they provide curation, such that allowing authors to make copies of individual articles available doesn’t damage the brand. I don’t disagree with this. But I would love to see them extend the same logic to the JIL, and see them take a more active role in curating that information.

For several years now, there have been other, free alternatives to the JIL (departmental & institutional sites, disciplinary listserves, word-of-mouth, et al.). The MLA JIL is not, strictly speaking, necessary any longer, and as more programs move to Skype or phone interviews, the monopoly the organization once held on the interviewing process may also be in danger of fading. Frankly, I don’t think that’s a bad thing, although it’s going to be a long time before those changes affect the overall process. There’s little difference in cost if you’re attending MLA for one interview or for twenty, but once applicants can get away with not attending at all, that will change things. (I’m interested to see if offering a Skype interview as an alternative to MLA will someday soon become an institutional mandate along the lines of equal opportunity/access.)

Anyways. If I were in charge (a phrase I think so often that I’ve made it a new category on my blog) of the JIL, the first thing I would do would be to update its horrific interface, which hasn’t changed substantially since the days of its first appearance.

I can’t imagine the cringes that this interface elicits from my colleagues in “Technical and business writing,” particularly insofar as they have experience with usability testing and/or identify with terminology a little more recent than “business writing.” There are some basic tips for searching at the end of the help link, but really, there’s so little to work with that those tips are obvious. (want more results? check more boxes.)

Okay, I lied. The problem with this interface is actually just symptomatic of the curation problem. If the JIL is just a text file, then really, all you can do is text matching. And if you charge departments by the word (which they used to do, iirc), then you’re actually incentivizing a lack of information, and crippling your service. And that’s the point that I want to make here: the MLA should be thinking of this as a service rather than a list, a service they provide (and can and should improve) rather than a list that they sell in both directions (pay to get on it, pay to get it).

Part of what MLA should be doing is standardizing the categories of information that institutions provide to prospective applicants, and encouraging members to supply that information. I don’t have a full list to hand, but I think it’d be helpful for any number of people to know things like institutional type, salary range, replacement vs new hire, teaching load (both numbers and distribution), administrative expectations, course caps, service duties, startup costs, travel/research funding, sabbaticals, mentoring/advising expectations, and to have specializations separated out into primary, secondary, tertiary, and so forth. I don’t know that we would all agree on this list, and institutions would certainly be able to opt out, but imagine being able to sort the JIL data by all the variables that inevitably go into our hiring decisions (from either side).
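To make that a little more concrete, here’s a minimal sketch (the field names and values are invented, not any actual MLA schema) of what a posting-as-record might look like, and of the kind of filtering it would make possible:

```python
# Invented example records (not an actual MLA schema) showing what a
# structured posting might carry beyond a blob of ad copy.
postings = [
    {
        "institution_type": "research university",
        "rank": "assistant professor",
        "hire_type": "new line",
        "teaching_load": "2/2",
        "primary_specialization": "rhetoric and composition",
        "secondary_specializations": ["digital humanities", "methods"],
        "travel_funding": True,
    },
    {
        "institution_type": "liberal arts college",
        "rank": "assistant professor",
        "hire_type": "replacement",
        "teaching_load": "3/3",
        "primary_specialization": "literature",
        "secondary_specializations": ["rhetoric and composition"],
        "travel_funding": False,
    },
]

# The payoff: filtering by the variables that actually drive decisions,
# rather than text-matching against six pages of ten ads apiece.
matches = [
    p for p in postings
    if p["primary_specialization"] == "rhetoric and composition"
    and p["teaching_load"] in ("2/2", "2/3")
]
print(len(matches), "postings match")
```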

On the search side of things, the ebb and flow of our various fields are a lot more complicated than the small list on the JIL database lets on. When we hire someone here (a freestanding writing program with a doctoral program), it goes without saying that our primary area for the position is rhetoric and composition. Often, we also have particular subcategories in mind, without which an application wouldn’t be considered. But then, we also have a wish list of areas not currently covered by our faculty–those might not be as high a priority as others, but they are still potentially valuable for applicants to have. Here’s an example. Several years ago, we were concerned about whether or not our program was preparing students enough in terms of methodology. No one hires a “methods person” per se, but when we hired in a different area, one of the questions we asked all of our candidates was how they would approach our graduate course in methods. I think it very likely that digital humanities will be an area like that over the next several years. Departments may prioritize that in pools of applicants that list the content areas they “need.” There should be a way to communicate that kind of nuance, through some form of tagging, multi-faceted categories, etc. As Alex points out, “Of the 56 results that return when searching for rhet/comp and asst prof, 10 are not rhet/comp specialist jobs of any kind, but maybe want someone who has some rhet/comp training.” For me, that’s a pretty obvious (and ongoing) failure of the interface. (I should also mention that the only way currently to find this out is to read through 6 separate pages of 10 entries each, since there is no way to filter results or choose how many entries per page to display. Ugh.)

Some schools already provide information like this (and like the litany above) in their ads, cost and space be damned. And others go to great lengths elsewhere to provide context for their positions on listservs or their own pages. The existence of these workarounds (which have been common for years now) should be a sign to MLA that the “service” they currently provide is little more than a text file and the Find command from the Edit menu of a word processor. It could be so much more: Dave’s ideas of an open API and a Google Maps mashup are brilliant, and they’re the kind of things that I’d love to see MLA take some lead on. At the very least, though, it’d be great to see them take some responsibility for the service that so many of us have paid for, over and over and over.

UPDATE: fwiw, the first job ad I saw after I wrote this post contained the sentence, “Interviews will take place at the MLA convention in Boston, though video conferencing is an option for those not attending the conference.” I’d be willing to bet that, within 3-4 years, this will become standard practice, as hiring schools continue to look for ways to cut costs. Not sure when it’ll tip to the point where it’s safe to pass on MLA as an applicant, but that day is sooner than most of us think.

UPDATE2: following a suggestion from Bill Hart-Davidson, Jim Ridolfo did a quick map/mash of the JIL data at http://rhetmap.org/.

 
