
Relearning to Write

my new writing set-up

I would describe myself as a deep writer, not in the sense that what I have to say is any more profound than anyone else’s thoughts, but in the “deep sleeper” sense. That is, when I write and it’s going well, I’m pretty able to shut the rest of the world out and focus on little else. For most of my life, this has included my body itself. I haven’t had to think about posture, arm angle, or things like that, unless they happen to impinge upon my ability to focus.

That changed with my back surgery last fall. I’ve discovered, to my dismay, that there are certain seats in my house that are worse for my back than others, and chief among the offenders is my desk chair, or maybe my desk more broadly, since I’ve tried multiple arrangements and chairs there. Among other things, I’ve learned that when I focus to write, I have the bad habit of wrapping my ankle around a table leg, or wedging my left arm a certain way, and when I do that for more than about 15 minutes, I pay the price in the form of hours, if not days, of subsequent pain. And woe betide me if I find myself in a position where I have no choice but to push through and do desk work despite that pain. I spent most of RSA using a cane to recover from the time I had to spend putting the online program together.

I’m perfectly able to read normally now; my habits include shifting from one area of the house to another (or to a coffeeshop) on a fairly regular schedule, and that seems to do the trick. But writing has still been a challenge. After yet another summer where I didn’t feel like I got the writing done that I wanted to, I decided that I was going to try something a little more radical. The picture above is the result: I ordered myself a GelPro standing mat, and a couple of 2×1 cube shelving units.

I don’t know if it’ll work. I already have a bad habit of wandering off and doing a lap of my office in between sentences, and I don’t feel especially comfortable yet, but this is my first blog post written entirely from a standing position. And I hope that, this semester, I can learn to write again in a way that doesn’t make my back or my knees feel like someone has been smacking tennis balls at them. We’ll see how it goes.

Metadata, Procedurality, and Works Slighted

Whenever I put together a course, I like to imagine that there’s some sort of narrative thread running through, whereby early topics and readings lead to the ones that follow. Sometimes that thread is brute chronology, but most often, it’s thematic, and I suspect that more often than not, the thread is one that only I can see, although I do try to suggest it at various points during the semester. In the case of RCDH, this has been a little tricky, not least because DH is still emergent, somewhat interdisciplinary, and my own field’s engagement with it is uneven. In my head, though, after we’d gotten an obligatory week of definitions out of the way, the first “unit” of the course was a trio of weeks gathered under the headings of database, archive, and metadata. (Here’s the schedule, if you haven’t seen it.)

We’re turning now to a week that didn’t necessarily fit that well as I was originally putting the course together, a week that combines Stephen Ramsay’s Reading Machines, some work on procedural literacies, and a few pieces/performances of algorithm. It’s an ambitious little week in its own way, but as we were working our way through a discussion of metadata last night, it got me to thinking about the transition between this week and next. Some of this I raised in class somewhat tentatively, but I wanted to write through it a bit today, partly for my own memory, and also because my guess is that there are others who have written about these ideas in more detail, whose work I may not have come across. So if any of this happens to resonate with other texts, please leave a note/suggestion in that regard.

If I have a hypothesis here, it’s that there’s an important connection between metadata and proceduracy (or procedural literacy) that I haven’t thought through as carefully as I want to. My own background/familiarity with the scholarship on metadata is a little spotty, so I worry that this is obvious to everyone other than me. I’ve read some of the classics, like Sorting Things Out, and I came to the topic through the Web 2.0 discussions (like Everything is Miscellaneous), enough so that when I was in charge of the CCC Online Archive, we billed ourselves as an archive of journal metadata with a mix of approaches (established classification schemes alongside emergent tagging, etc.). If I had to pin down a couple of dominant themes in the literature I’ve read, there was a focus in the mid-2000s on taxonomy vs. folksonomy, and I think that’s an ongoing conversation. More recently, given mass digitization efforts, the quantified self movement, revelations of deep surveillance, and the proliferation of online archives, my sense is that there’s also been a turn toward the more basic question of accuracy, both in terms of getting it wrong and getting it too right. For example, last night we discussed Geoffrey Nunberg’s critique of Google’s Book Search, which chronicles some of its (many) egregious metadata failures, and Jessica Reyman’s piece in College English on user data and intellectual property (PDF). Nunberg’s is a pretty straightforward critique of the consequences of getting metadata wrong, and Reyman’s (following Eli Pariser’s Filter Bubble) might be described as an exploration of the hazy line between data and metadata. Reyman closes by explaining that

The danger presented is that the contributions by everyday users will potentially be transformed into increasingly exclusive forms of proprietary data, available to the few for use on the many.

FB and others have become so adept at collecting and analyzing metadata that “privacy settings” are increasingly an empty gesture, a point that’s also illustrated charmingly by Kieran Healy’s “Using Metadata to Find Paul Revere,” another of last night’s readings. The connective tissue for me here is referentiality–the degree to which metadata presents us with an accurate representation of the data it is purportedly about. Referentiality is also one of the goals (among others) of various metadata standards, like the work that Cheryl Ball and the Kairos folk are doing.
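Healy’s exercise is easy to sketch, for what it’s worth: given nothing but membership metadata (who belongs to which organization), you can derive a person-to-person network just by counting shared memberships. The names and groups below are a toy subset for illustration, not Healy’s actual dataset.

```python
# Toy version of the "Using Metadata to Find Paul Revere" exercise:
# derive person-to-person ties from membership metadata alone.
# The people and groups here are an illustrative subset, not Healy's data.

memberships = {
    "Revere": {"North Caucus", "London Enemies"},
    "Warren": {"North Caucus", "Long Room Club"},
    "Church": {"London Enemies", "Long Room Club"},
    "Adams":  {"Long Room Club"},
}

def co_membership(memberships):
    """Count shared group memberships for every pair of people --
    the person-by-person matrix you'd get by multiplying the
    person-by-group membership matrix by its transpose."""
    people = sorted(memberships)
    ties = {}
    for i, a in enumerate(people):
        for b in people[i + 1:]:
            shared = memberships[a] & memberships[b]
            if shared:
                ties[(a, b)] = len(shared)
    return ties

print(co_membership(memberships))
```

No individual ever has to say who they know; the metadata says it for them, which is roughly what makes bulk metadata collection so revealing.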

One of the themes that emerged for me, though, during the discussion was the degree to which there’s a tension between (to borrow the subtitle of Sorting Things Out) classification and its consequences. That’s certainly a theme in Tarez Graban’s “From Location(s) to Locatability: Mapping Feminist Recovery and Archival Activity through Metadata,” (no link, sorry) which works from the assumption that metadata makes certain activities more visible and others less so, and that recovery work can proceed from questioning and complicating the categories that we often internalize about academic work.

I think, though, that I want to push that tension into the heart of metadata itself, something more like what Curtis Hisayasu gets at in one of his contributions to “Standards in the Making“:

What becomes absolutely transparent in these intersections is the actual work of historical knowledge-making which involves not simply “digging up” artifacts and placing them accordingly in the container of linear time, but making self-conscious decisions about how that artifact is to be organized alongside others to produce a narrative and an argument about the present.  “Tagging,” in instances such as these does not so much “describe” digital artifacts as much as it appropriates them and composes them in the name of more general logics of history and identity.

I don’t think that I’m after the claims, made variously throughout our readings last night, that metadata is necessarily rhetorical, ideological, and/or political, although I do think that it is all those things. Because metadata is, by nature, description, all of those qualities follow for me. Maybe I’m pushing at something that is obvious or implicit in those adjectives, but I feel like there’s also space to think about procedurality in ways that might not be immediately apparent. Over the past few days, a couple of Twitter “events” helped this coalesce for me. First was Anil Dash’s essay about his “experiment” in retweeting, where he made the decision to retweet only women, and the second was Sara Ahmed’s discussion of gender and citation (in my head, I think of this as an essay on “Works Slighted”):

Sara Ahmed on the gender politics of citation

Ahmed’s not the first to point this out, nor are academic bibliographies the only venue where this (important) discussion is happening, nor is this solely a gendered concern, but her post(s) happened to coincide for me with this sense that metadata contains, at its heart, a ratio between description and procedurality. That is, there is no description degree zero, no purely descriptive metadata, and I think that sometimes we fall into the trap of imagining that there is. It’s not enough to simply say “it’s both,” though–I think that claim is easy enough to accept. The procedural literacy of metadata lies perhaps in figuring out that variable ratio and tuning it to the task at hand.

The exercise with which we began class last night was to take a couple of pages from an old MLA job list and to develop a set of metadata categories based upon the entries, and one of the things that crystallized for me was the degree to which this ratio varied from term to term. We were also able to lean on the ratio a bit, such that a seemingly descriptive phrase like “residential campus” could be read as a subtle (seductive) means of distinguishing an institution from others (vs. commuter campuses, online delivery, et al.). And on Sunday, I’d already taken an otherwise procedural term like “postmark” and used it as a descriptor for the impact of technology on the process. The deeper into the exercise we got, the more we were thinking both in terms of what metadata represent and how they function.

That same ratio lurks at the core of the bibliography, which is both descriptive (here are the works I have cited) and procedural (presented in an agreed-upon format that allows readers to locate them), but Ahmed, Dash, and many others call attention to the procedural consequences of the bibliography. What we know now about network effects and filter bubbles should attune us to those consequences, even if we haven’t personally run afoul of the disciplinary fatalism that Ahmed describes in her followup to the Twitter conversation.

With respect to bibliographies, there’s an additional, material, fatalism–the reason we have traditionally constrained bibliographies to the works directly consulted for a given piece of writing comes down to the limited space of the printed page. With the notable exception of the bibliographic essay, whose works cited sometimes also concerns itself with field coverage, there is an assumption that the bibliography must be primarily descriptive–these are the texts named directly here. But that list of citational attachments is, of course, implicitly preferential. As Ahmed puts it:

There is a ‘good will’ assumption that things have just fallen like that, the way a book might fall open at a page, and that it could just as easily fall another way, on another occasion. Of course the example of the book is instructive; a book will tend to fall open on pages that have been most read.

Maybe what I want to say is that there’s a similar assumption at play within metadata, or more precisely, within the way I’ve traditionally thought about metadata. As our work migrates slowly away from the printed page, it might open up opportunities for tuning the ratio away from description, for embracing the compositional implications of tools like bibliographies. Not that there aren’t important issues of preservation and persistence if we were to replace static bibliographies with links to a public Zotero folder, for instance, but I find myself thinking more and more about experimenting this way.

Geez. There’s plenty more to say, but this is long enough as it is. I don’t suspect I’ll have the time, energy, or inclination to do this every week. And please, if you’ve made it this far, and have suggestions in mind for other things I might stir into this mix, feel free to add them below…

Mining the JIL

This week, in Rhetoric, Composition, and Digital Humanities, we’re reading a series of essays about metadata, so that’s where my mind has been as of late. And one of the things that I’m asking my students to do each week is to imagine projects that they might do based upon the readings and resources for that week. So I spent a little time this afternoon messing around with the fabulous dataset that Jim Ridolfo has shared, the OCRed archive of MLA Job Information Lists.

I wanted to do something that had some kind of hypothesis, but also that I could do fairly quickly, without too much technological overhead. I settled on the question of how the job search process has changed over the past 10-15 years with respect to technology. When I was on the market for the first time, in 1997 (!!), I don’t recall whether the online version of the JIL had been introduced yet. But certainly the job seeker’s experience, even allowing for the online JIL, was predominantly paper-based.

I don’t think it’s particularly earth-shattering to suggest that this has changed. But can we find that change reflected in the JIL itself? I tried a couple of angles. First, I tracked all mentions of the word “postmark” (including postmarked). I was working mostly with basic page counts, and while I tried to be good about eliminating those few occasions where it appeared twice in one ad, I almost certainly missed some of them. I also combined the October and December JILs, so if an ad appeared in both with the word, I counted it twice, I’m afraid. This is a blogpost.
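A minimal sketch of this kind of counting, assuming the OCRed JILs are plain-text files with form-feed characters between pages (both assumptions on my part about the files, not a description of Jim’s archive):

```python
# Count, for each OCRed JIL file, the number of pages mentioning "postmark".
# File layout and the "\f" page delimiter are assumptions for illustration.
import re
from pathlib import Path

def pages_mentioning(term, text, page_delim="\f"):
    """Count pages on which `term` appears at least once.
    Counting by page (not raw hits) keeps an ad that repeats
    the word within a single page from being counted twice."""
    pattern = re.compile(term, re.IGNORECASE)
    return sum(1 for page in text.split(page_delim) if pattern.search(page))

def postmark_counts(folder):
    """Map each JIL text file in `folder` to its 'postmark' page count."""
    return {
        path.name: pages_mentioning(r"postmark", path.read_text(errors="ignore"))
        for path in sorted(Path(folder).glob("*.txt"))
    }
```

The case-insensitive search also picks up “Postmarked”; it would not, of course, catch the October/December duplication mentioned above.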

A line graph charting the use of the word ‘postmark’ in MLA JIL ads, 2000-2012

The only surprising thing about this graph to me is the little uptick in 2012, after references to postmarking had dropped into the single digits rather quickly from 2009-2011. Otherwise, though, this was about what I’d expect. As more institutions set up online HR forms and began to accept materials delivered electronically, the idea of indicating a separate postmark (as opposed to a simple deadline) has been on the decline.

The second approach I took was to look into how many job ads made reference to URLs, either in the form of departmental homepages or online (HR) submission forms. Given the steep climb from 1998 to 1999, you might forgive me for deciding not to count after a certain point. Instead, what I’ve done here is work in percentages. The JIL has front matter that stays pretty constant, so there are no URLs on those pages, making the maximum somewhere in the 90% range. I did a basic search count for “http,” then divided the number of page hits by the total number of pages in the JIL to arrive at these percentages. I also went backwards until I found a year (1994) with zero.
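The percentage calculation itself is just page hits over total pages; a sketch, again assuming form-feed page breaks in the OCRed text:

```python
# Rough measure described above: the percentage of a JIL's pages that
# contain at least one URL, approximated by searching for "http".
# The "\f" page delimiter is an assumption about the OCRed files.

def url_page_percentage(text, page_delim="\f"):
    """Percentage of pages containing 'http' (case-insensitive)."""
    pages = text.split(page_delim)
    hits = sum(1 for page in pages if "http" in page.lower())
    return 100.0 * hits / len(pages)
```

Because front matter counts toward the denominator but can never contain a URL, the ceiling sits below 100%, which matches the “somewhere in the 90% range” maximum noted above.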


A line graph charting the percentage of JIL pages with at least one URL on them, 1994-2003.

This graph is a little misleading, though, going as it does from around one-third of the JIL pages in 1998 to nearly all of them in 1999. If I remember correctly, 1999 was the first year where participating departments were prompted to supply a homepage for their program and so, even if the ad copy doesn’t reference any URLs, the ad itself contains one. Were I to strip out those headings, the graph might be more gradual, and more accurate in its representation of the diffusion of the web into the JIL.

I know you’re curious. The five institutions that supplied URLs in 1995 were Clemson, Georgetown, Oregon State, Western Michigan, and an organization in Austria called the International Institute for Applied Systems Analysis.

Neither of these graphs necessarily leads to grand claims about the effect of the internet on the job search process. The second does suggest a point at which it was taken for granted that programs would have some rudimentary web presence. The first suggests that we’ve seen a gradual tailing-off of certain print- and postal-based assumptions about how materials circulate. And both suggest that if I wanted to claim anything more specific, I’d need to do much finer-grained work first.

That’s all.

 

Reading Notes

There’s a phrase that I got from Laurence Veysey, via Gerald Graff (although it appears in other places as well): “patterned isolation.” Veysey uses the phrase to explain the growth of the modern university and the way that disciplines grew without engaging each other, but I tend to apply it on a more “micro” scale. That is, there are many things we do as teachers and scholars in patterned isolation from our colleagues, tasks that call upon us to reinvent wheels over and over in isolation from one another. Fortunately, with the Internet and all, much of that is changing, as folks share syllabi, bibliographies, and the like online.

But I’m constantly on the lookout for ways to short-circuit patterned isolation. For me, reading notes are one of those sites. I’m not a great note-taker and never have been–I’m too reliant on visual/spatial memory and marginalia, and disconnecting my own notes from the physical artifacts I was processing never made sense to me. Now, of course, I’m lucky sometimes if I remember having read a book, much less what I scrawled in its margins, so I wish that I’d been better about taking notes, and I admire those people who have already internalized the lesson that it took me 20+ years to figure out. So one of the things that I like to do in my graduate courses is to aggregate the note-taking process. Rather than asking or expecting each student to take a full set of reading notes for the course, I rotate the function among them. There’s nothing to stop a student from taking more detailed notes on his or her own, but I want all my students, when they leave the class, to be able to take with them a full set of notes that they can refer back to later.

For me, the trick to this is making the notes relatively uniform–there’s a part of me that resists this, because different folks/strokes and all that, but I think it’s important to make the notes themselves scannable and consistent. Also, I think that the process needs to be sustainable–part of the challenge of reading notes is that they tend to shrink or expand based on available time, and they shouldn’t. The notes should be brief enough that one can execute them quickly (when time is short) but elaborate enough that they’re useful 5 or 10 years down the road when the text has left one’s memory. For me, this means keeping them to about a page, and doing them in a format that should take no more than about 15 minutes for an article or chapter. So here’s what I’ll be asking my students this semester to do for their reading notes:

  • Lastname, Firstname. Title.
  • Full MLA citation of article/chapter (something that can be copy/pasted into a bibliography)
  • Abstract (50-75 words, copy/pasted if the original already has an abstract)
  • Keywords/tags (important terminology, methodology, materials, theoretical underpinnings)
  • 2-3 “key cites” – whose thoughts/texts does this article rely upon
  • 2-3 “crucial quotes” – copy/paste or retype the 2-3 most important passages from the essay
  • 1-2 questions – either questions that are answered by the text, or questions it raises for further exploration

And that’s it. I’ve futzed around with different categories, but these are the ones that have stuck with me through multiple iterations of this assignment. The notes aren’t meant to be a substitute for reading, but they should provide the basic info about the article as well as some indication of how it links to others. And it’s meant to be quick.

This seems really obvious to me now but I can tell you that, when I was in graduate school, it wouldn’t have been. I can’t tell you how happy it would make me now to have taken notes like these on everything I’ve read since grad school. Especially for all those things that I’ve forgotten I’ve even read.

Telescopic Text

So I’ve been slowly reading S by Abrams and Dorst, and slowly expanding my Twitter horizons with respect to bots, and today, I came across a really interesting app/tool that crossed the streams, so to speak.

It’s called Telescopic Text. Not unlike Tapestry, it’s an application that lets you write and store texts. Those texts, though, are like that word game where you create a ladder of words by adding a letter at a time (a, an, pan, plan, plane, planet, etc.). You start with a tweet-length sentence and highlight particular words, which then “unfold” as they’re clicked. It’s like drilling down into a text to find more and more details.
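As a thought experiment (this is my own illustrative model, not how Telescopic Text is actually implemented), you could store such a text as a nested structure in which each expandable word carries its own expansion:

```python
# Illustrative model of a telescopic text: each node is either a plain
# string or a (collapsed_word, expansion) pair, where the expansion is
# itself a list of nodes that can expand further.

def render(nodes, depth):
    """Render the text with expandable words 'unfolded' `depth` levels deep."""
    out = []
    for node in nodes:
        if isinstance(node, str):
            out.append(node)
        else:
            collapsed, expansion = node
            if depth > 0:
                out.append(render(expansion, depth - 1))
            else:
                out.append(collapsed)
    return " ".join(out)

text = ["I", ("made", ["slowly", ("brewed", ["steeped", "and", "poured"])]), "tea."]

print(render(text, 0))  # I made tea.
print(render(text, 1))  # I slowly brewed tea.
print(render(text, 2))  # I slowly steeped and poured tea.
```

Each click in the app corresponds to raising the depth at one node; the sentence stays grammatical at every level, which is the trick that makes these texts fun to write.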

The TT site itself starts with an example:
http://www.telescopictext.com/

The tools for building one, and saving it, are at:
http://www.telescopictext.org/
(registering for an account is free, which you’ll need to do if you want to save your efforts)

I ended up finding the site from a link to Tully Hansen’s “Writing,” which is located here (you’ll need to scroll down):
http://overland.org.au/previous-issues/electronic-overland/

It reminds me too of Jon Udell’s classic screencast about the Wikipedia entry for the heavy metal umlaut: http://jonudell.net/udell/gems/umlaut/umlaut.html

I’m not entirely sure how I’ll be using this, but it’s been a lot of fun to play with this afternoon…

(x-posted from Facebook)

On the un/death of Goo* Read*

It feels like a particularly dark time around the Interwebz these days–I don’t think it’s just me. I’ve been studying and working with new media for a long time now, and so you’d think that I’d be sufficiently inured to bad news. For whatever reason, though, it feels like the hits have kept coming over the last month or so. There’s plenty to feel down about, but since this song is about me, I wanted to collect some of my thoughts on the changes that are happening with two pieces of my personal media ecology: the demise of Google Reader and the recent purchase of GoodReads by Amazon.

I’ve been soaking in a lot of the commentary regarding Google Reader, ever since it happened while I was at CCCC, and while there’s a lot of stuff I’m not going to cite here, there were a couple of pieces in particular that struck me as notable. I thought MG Siegler’s piece in TechCrunch was good, comparing Google Reader’s role in the larger ecosystem to that of bees. Actually, scratch that. We’re the bees, I think, and maybe Reader is the hive? Anyway, my distress over the death of Reader is less about the tool itself and more about the standard (RSS/Atom) it was built to support. I understand why there are folk who turned away from RSS because it turned the web into another plain-text inbox for them to manage, but as Marco Arment (of Instapaper fame) observes, that was less a feature than user misunderstanding. As more folks turn away from RSS, though, Arment suggests, we run the risk of “cutting off” the long tail:

In a world where RSS readers are “dead”, it would be much harder for new sites to develop and maintain an audience, and it would be much harder for readers and writers to follow a diverse pool of ideas and source material. Both sides would do themselves a great disservice by promoting, accelerating, or glorifying the death of RSS readers.

The larger problem here is that, in our Filter Bubble world of quantified selves and personalized searches, this kind of diversity is an ideal that is far more difficult to monetize (a word I can barely think without scare quotes). Most folk I know now use FB and T as their default feed readers; I’m old-fashioned, I think, in that I tend to subscribe to sites through Reader if the content I find through FB/T is interesting to me. It may be equally quaint of me to like having a space where the long tail isn’t completely overwhelmed by advertisements, image macros, and selfies. I get that.

Where the Reader hullabaloo connects for me with the recent Amazonnexation of GoodReads is Ed Bott’s discussion of Google’s strategy at ZDNet–again, the point isn’t so much that Reader itself is gone, but that Google basically took over the RSS reader market, stultified it, and is now abandoning it. It’s hard not to see the echo of “embrace, extend, extinguish” operating in Amazon’s strategic purchase. The folks at Goodreads deserve all the good they’re getting, and they’re saying the right things, of course, but this is Amazon’s third crack at this, if you count Shelfari and their minority stake in LibraryThing, and it’s hard for me not to see a lot of Goodreads’ value in its independence from Amazon. Amazon’s interest in Kindle has kept them from any kind of iOS innovation, and it’s almost impossible for me to imagine them keeping their hands out of the “data” represented by the reviews and shelves of Goodreads members. While Amazon’s emphasis on “community rather than content” was revolutionary once upon a time, their reviews are now riddled with problems (and that’s when they’re not being remixed as admittedly hilarious performance spaces). It was the absence of commercialism and gamification that made Goodreads a somewhat valuable source of information for me–despite their best intentions, I doubt that will last.

If I had to wager, I’d guess that within a year or so, the G icon will turn into an A, and the app will be retuned to provide mobile access to the broader site primarily. GR’s model of personal shelves will be integrated onto the site with A’s wish lists, and people who liked that will also like 50 Shades of DaVinci Pottery. Startups will focus their energies on softer markets, as they did when Google “embraced” RSS, and we’ll probably be the poorer for it.

Back in the day, I remember feeling the full cycle of excitement when Yahoo absorbed sites like Flickr and Delicious, hope that they would help them go to the next level, and disappointment as they proceeded to neglect them into has-been status. I feel like we’ve been burned often enough now to be able to move pretty much straight from the purchase announcement to the disappointment. And I still don’t begrudge these folks the rewards they’ve earned. But, I’ve got to be honest–this “do no harm” stuff sounds a lot like the squeaky hinges on the barn door closing after the horses have bolted.

Migrating the MLA JIL from list to service

I left a comment over at Dave’s excellent discussion of the MLA Job Information List, and part of it was picked up at Alex’s equally worthwhile followup, so I thought I’d expand on it here. Here’s the comment I left:

I was going to make the same point that Alex makes vis a vis the costs of the JIL vs. the costs of the conference itself for both interviewers and interviewees, especially all those years that we were forced to compete with holiday travelers for both plane seats and hotel rooms. The list is the tip of a very lucrative iceberg that has supported the MLA for a long time.

I wanted to second your comments about opening up the job list database, which for all intents & purposes is the same (inc. the crappy interface) that they used in the mid-90s. A much richer set of metadata about the jobs could be gathered by MLA (and made available to searchers) if the arbitrary scarcity of the print list were set aside and MLA were to take its curatorial obligation seriously.

There’s been no small amount of buzz lately surrounding the MLA JIL, our fields’ annual posting of open academic positions. For a long time, that list has been proprietary to MLA. At one time, institutions paid to have their positions appear in the list, and prospective applicants paid for a print copy of it. I don’t remember the exact year that the online database version of the JIL went live, but I’m tempted to place it at around 1996 or 1997. At that time, database access was granted to those who’d purchased the print version. It’s never been an especially elegant solution, as some of us would gladly have paid less money and forgone the print version. And as many people have observed in this discussion, the JIL is a monetary imposition on those who are least able to bear it, (often debt-ridden) graduate students and/or contingent faculty, at a time when the costs of application (mailing, copying, dressing, traveling, boarding, eating) are already substantial.

MLA appears to be taking steps to mitigate some of these costs (open JIL, Interfolio), which is good. But for me, there’s an additional layer that needs to be addressed. One of the arguments that MLA made in opening up access to their publications is that they provide curation, such that allowing authors to make copies of individual articles available doesn’t damage the brand. I don’t disagree with this. But I would love to see them extend the same logic to the JIL, and see them take a more active role in curating that information.

For several years now, there have been other, free alternatives to the JIL (departmental & institutional sites, disciplinary listservs, word-of-mouth, et al.). The MLA JIL is not, strictly speaking, necessary any longer, and as more programs move to Skype or phone interviews, the monopoly the organization once held on the interviewing process may also be in danger of fading. Frankly, I don’t think that’s a bad thing, although it’s going to be a long time before those changes affect the overall process. There’s little difference in cost if you’re attending MLA for one interview or for twenty, but once applicants can get away with not attending at all, that will change things. (I’m interested to see if offering a Skype interview as an alternative to MLA will someday soon become an institutional mandate along the lines of equal opportunity/access.)

Anyways. If I were in charge (a phrase I think so often that I’ve made it a new category on my blog) of the JIL, the first thing I would do would be to update its horrific interface, which hasn’t changed substantially since the days of its first appearance.

I can’t imagine the cringes that this interface elicits from my colleagues in “Technical and business writing,” particularly insofar as they have experience with usability testing and/or identify with terminology a little more recent than “business writing.” There are some basic tips for searching at the end of the help link, but really, there’s so little to work with that those tips are obvious. (want more results? check more boxes.)

Okay, I lied. The problem with this interface is actually just symptomatic of the curation problem. If the JIL is just a text file, then really, all you can do is text matching. And if you charge departments by the word (which they used to do, iirc), then you’re actually incentivizing a lack of information, and crippling your service. And that’s the point that I want to make here: the MLA should be thinking of this as a service rather than a list, a service they provide (and can and should improve) rather than a list that they sell in both directions (pay to get on it, pay to get it).

Part of what MLA should be doing is standardizing the categories of information that institutions provide to prospective applicants, and encouraging members to supply that information. I don’t have a full list to hand, but I think it’d be helpful for any number of people to know things like institutional type, salary range, replacement vs new hire, teaching load (both numbers and distribution), administrative expectations, course caps, service duties, startup costs, travel/research funding, sabbaticals, mentoring/advising expectations, and to have specializations separated out into primary, secondary, tertiary, and so forth. I don’t know that we would all agree on this list, and institutions would certainly be able to opt out, but imagine being able to sort the JIL data by all the variables that inevitably go into our hiring decisions (from either side).
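To make that concrete, here’s what a structured JIL record along those lines might look like. Every field name and value below is my own invention, purely hypothetical, not anything MLA actually collects:

```python
# A hypothetical structured record for a single JIL entry, sketching the
# categories listed above. Field names and values are invented for
# illustration; nothing here reflects MLA's actual data model.

job_ad = {
    "institution": "Example State University",   # hypothetical
    "institution_type": "public, residential",
    "position": "Assistant Professor",
    "line": "new hire",                          # vs. "replacement"
    "salary_range": [55000, 65000],
    "teaching_load": {"courses_per_year": 4, "distribution": "2/2"},
    "course_caps": {"undergraduate": 25, "graduate": 15},
    "administrative_expectations": "WPA rotation after tenure",
    "travel_research_funding": True,
    "specializations": {
        "primary": ["rhetoric and composition"],
        "secondary": ["digital humanities"],
        "tertiary": ["methodology"],
    },
}
```

The point isn’t this particular schema; it’s that once the information is fielded rather than buried in ad copy, every one of those variables becomes sortable and searchable.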

On the search side of things, the ebb and flow of our various fields is a lot more complicated than the small list on the JIL database lets on. When we hire someone here (a freestanding writing program with a doctoral program), it goes without saying that our primary area for the position is rhetoric and composition. Often, we also have particular subcategories in mind, without which an application wouldn't be considered. But then, we also have a wish list of areas not currently covered by our faculty–those might not be as high a priority as others, but they are still potentially valuable for applicants to have. Here's an example. Several years ago, we were concerned about whether or not our program was preparing students enough in terms of methodology. No one hires a "methods person" per se, but when we hired in a different area, one of the questions we asked all of our candidates was how they would approach our graduate course in methods. I think it very likely that digital humanities will be an area like that over the next several years. Departments may prioritize it in applicant pools even when it isn't among the content areas they "need." There should be a way to communicate that kind of nuance, through some form of tagging, multi-faceted categories, etc. As Alex points out, "Of the 56 results that return when searching for rhet/comp and asst prof, 10 are not rhet/comp specialist jobs of any kind, but maybe want someone who has some rhet/comp training." For me, that's a pretty obvious (and ongoing) failure of the interface. (I should also mention that the only way currently to find this out is to read through 6 separate pages of 10 entries each, since there is no way to filter results or choose how many entries per page to display. Ugh.)
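The faceted tagging I have in mind could be as simple as letting each listing rank its areas, so that a search can tell "rhet/comp is the job" apart from "rhet/comp would be a plus." A minimal sketch, with all schools and tags invented:

```python
# Hypothetical faceted search: listings tag specializations by priority
# (primary/secondary/tertiary), so searches can respect that nuance.
tagged = [
    {"school": "X", "areas": {"primary": "rhet/comp", "secondary": "digital humanities"}},
    {"school": "Y", "areas": {"primary": "Victorian lit", "tertiary": "rhet/comp"}},
    {"school": "Z", "areas": {"primary": "medieval lit"}},
]

def search(listings, area, primary_only=False):
    """Return schools listing the area at any priority, or only as primary."""
    if primary_only:
        return [l["school"] for l in listings if l["areas"].get("primary") == area]
    return [l["school"] for l in listings if area in l["areas"].values()]
```

An applicant could then run the broad search to find every job where rhet/comp training counts for something, and the narrow one to find the specialist positions–exactly the distinction Alex's 56-versus-10 result shows the current interface cannot make.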

Some schools already provide information like this (and like the litany above) in their ads, cost and space be damned. And others go to great lengths elsewhere to provide context for their positions on listservs or their own pages. The existence of these workarounds (which have been common for years now) should be a sign to MLA that the “service” they currently provide is little more than a text file and the Find command from the Edit menu of a word processor. It could be so much more: Dave’s ideas of an open API and a Google Maps mashup are brilliant, and they’re the kind of things that I’d love to see MLA take some lead on. At the very least, though, it’d be great to see them take some responsibility for the service that so many of us have paid for, over and over and over.

UPDATE: fwiw, the first job ad I saw after I wrote this post contained the sentence, “Interviews will take place at the MLA convention in Boston, though video conferencing is an option for those not attending the conference.” I’d be willing to bet that, within 3-4 years, this will become standard practice, as hiring schools continue to look for ways to cut costs. Not sure when it’ll tip to the point where it’s safe to pass on MLA as an applicant, but that day is sooner than most of us think.

UPDATE2: following a suggestion from Bill Hart-Davidson, Jim Ridolfo did a quick map/mash of the JIL data at http://rhetmap.org/.


Add technology and stir!

Apropos of little other than some background reading that I’ve been doing this week, I think I’m just about finished with the arguments about how we need to be careful about “not letting technology drive our X.” Right now, the arguments I’ve been scanning are about pedagogy, but this would just as easily apply to any other set of practices.

Actually, maybe it’s a little apropos of the whole kerfuffle happening at UVA right now. I was certainly struck by the astounding media illiteracy of the UVA Board of Visitors, whose sense of kairos needs an upgrade from its current early-90s version. The idea of pulling out the old “news cycle” trick–making major decisions/announcements after 5pm on a Friday and hope no one notices until next week–was some pretty dynamite strategery on their part.

One of the undercurrents of the blowback has been their apparent infatuation with other elite institutions’ efforts at staking out MOOC turf, and the dismissiveness with which that infatuation has been described. So maybe it’s that attitude that has been resonating with some of the reading I’ve been doing lately. (And while I’m taking issue with that dismissiveness here, that’s not to say that I condone any part of what happened at UVA–I think the tech stuff is a symptom of a much deeper problem.)

And yeah, I get it. The correct answer is that technology should be adopted cautiously, with sound pedagogical reasons behind that adoption rather than novelty or change for its own sake. What bugs me about the correct answer, I guess, is that it assumes some degree zero world of pedagogy, as though our current practices aren't themselves implicated, imbricated, and often enough, driven by the available technologies. Once you teach a course online, with all of the ups and downs that that brings, you realize that face-to-face conversation is itself a technology with certain affordances.

Print technologies had to go through the same phases of novelty, resistance, adoption, diffusion, invention, and experimentation; that experimentation just happened 4 or 5 centuries ago. The difference is a matter of naturalization and invisibility, not an issue of technology vs an atechnological pedagogy.

Maybe I’m personally coming around to a position that’s actually in favor of an “add technology and stir” approach. I can name a number of occasions where that was basically what I did in the classroom. I would rather experiment with the technology (and explicitly invite my students to do so along with me) than wait for the “tool” that matches my goal. So, a couple of years ago, I asked students to use Tumblr to do an annotated bibliography project (a lot like what Ryan Hoover describes, only he used Prezi, which I wish I’d thought of!), with no other real motive than wanting to see whether and how Tumblr might contribute to my classes. Sometimes it works and sometimes it doesn’t. Just like every other course and every other assignment I’ve ever designed, technology or no.

The fact is that what pedagogical expertise and commitment I have is going to be present regardless of whether I’ve slowly and carefully adopted something or decided to do it on a whim. Just as there’s no degree zero of technology when it comes to pedagogy, there’s no degree zero of pedagogy when it comes to adopting technology for the classroom. Someone, somewhere, has to do the stirring, and I’m kind of tired of the idea that it shouldn’t be me.

Mecology 2012

It occurs to me that, if I start trying to toss out manifestos on a daily basis, I’m going to burn myself out pretty quickly. So I thought I’d reflect today on some of the changes brewing that led me to resuscitate my weblog.

I really have no idea if the term is mine, but a long while back, I started using the word “mecology” for a couple of different purposes. It compresses the phrase “media ecology,” but also crams together “me” and “ecology,” which put me in mind of Ulmer’s mystory (among other puns). I’ll have you know that mycology is taken–insert “fun guy” pun here. :)

Anyhow, for about 7 or 8 years, I’ve been occasionally asking students in my (technology-oriented) courses to do what I call either mecology or T+1 assignments. I ask them to take stock of the various platforms, media, tools, software, et al., that they use to process information. It’s something like a literacy autobiography, I suppose, but one that is more focused on their present-day habits and usually their engagement with contemporary ICTs. It’s a nice opening assignment, one that gets them thinking about how they structure their activity, and it’s a great way for me to get a sense of what they’re ready to do in my courses and where we might focus our attention. The T+1 assignment (where T is their current distribution/structure of activities) asks them to choose one application, add it to their mecology for the semester, make a good faith attempt to really integrate it, and then write up a brief reflection at the end of the term. It’s a nice low-stakes way to have them try something new, and it’s an assignment that’s worked well for students along the full spectrum of tech expertise.

When I designed the assignment(s), my own mecology was pretty well established and consistent–this was back in the early days of the first iteration of my blog, and options were far more limited than they are now. My homepage was my base of operations, although it split time with my blog for that purpose. Everything else fed back to one of those two static locations: Flickr, Delicious (back when we had to remember where the dots went!), Bloglines, et al., with a healthy dose of email for keeping track of my activity, participating on discussion lists, and so on. It was probably less simple than I’m making it sound, but I really had the sense of a stable location, from which everything else flowed or attached. And it lasted for a solid four or five years like that.

Cut to today, and things have gotten a lot more fluid and messy. I sat down yesterday to try and actually visualize my mecology a bit, and quickly gave up. Right now, I work from 4 primary devices: phone, ipad, and home & office desktops, with a fifth (my laptop) that’s pretty much used for specialty work (conferences, meetings with specific software needs). My ipad’s become a lot more central, a shift that happened in no small part because of my convalescence. It’s now my primary email device–and the difficulty of typing on it without a separate keyboard has made me less responsive (and less inclined to hoard). Some applications are cross-device for me, while others that could be tend to be associated in my mind with one device or the other (I generally prefer FB on my desktop and Twitter on my ipad, for example). My “home” feels distributed to me among this site, FB, and Twitter, and so I’ve been trying out different arrangements to see how they fit. Right now, I have the sense that they’re working at different scales for me. I’m not as worried about the audiences for each, although that may change. The next circle out would probably include a lot of plug-in style apps: Instapaper, Google Reader, Scribd, Slideshare, Academia.edu, LinkedIn, GoodReads, etc. Some of them are more central than others, but they’re all defined for me in terms of specific tasks, and they generally feed back into the center. There’s another circle outward, where I’m still thinking about what to do regarding social bookmarks–a habit I fell out of (and regret losing)–and a few other apps that I’m trying out somewhat haphazardly.

I don’t really have any grand conclusion here, except to note that it’s about damn time I took my own advice/assignment, and rethought how everything is fitting together. Also, it’s nice to question that fit every once in a while. The past couple of weeks for me have been about rediscovering a sustainable structure and flow that can help me write regularly and be more present to myself.

The Tweetability Index

wherein I consider the hows, whats, whys of Twitter at academic conferences

I am decidedly pro-Twitter, so I’m not going to spend a lot of time apologizing for it or even necessarily advocating for its use. Though if you push me, I will. I think that Twitter in particular (and FB to a lesser extent) provides an extra social layer of activity for conference goers, much better access for folks who aren’t there, and a crowdsourced guide to the area (making the academic conf less of a non-place a la Augé). And honestly, for those who aren’t interested in using it, there’s no real loss in either direction. It’s not everyone’s cup of tea, but it doesn’t need to be.

RSA is kind of an odd bird in our field, conference-wise, which is part of what’s got me thinking about this:


RSA, for those of us on the comp side of things, is the one conference that steadily and selectively publishes conference proceedings. As a result, I think that many people write the “publishable” version of their talks (and subsequently read them aloud), rather than versioning them out. I have to admit, the last thing I have time to do when I’m prepping for a conference is to write a whole separate version. I’m at a place where I simply do the presentation version, without worrying about the published volume. I still have my slides from 2010, for example.

All of this is by way of explaining why I think there are some presentations that are significantly more difficult to tweet than others. And part of that has to do with what I would describe as a “close writing” style: dense, careful, accretive, etc. I like close writing as a term, because I think that it’s possible to write in a specific register for an audience of close readers. And in fact, that’s not an unusual style to find in print. It’s not good or bad; rather, it’s a style tailored to a particular medium and audience. Conference presentations are a different medium, of course, and I won’t rehearse the kajillion critiques of the humanities’ (lack of) expertise when it comes to delivery (“How can you know so much about rhetoric, but not be able to…?!” blah blah blah). In a closely written essay, skipping words or flipping sentences matters. It undermines your credibility, makes your ideas more difficult to follow, and for those of us with serious introversion and/or stage fright, it piles on the anxiety.

And yet. There are advantages to writing for presentation a little differently. One of the things I started doing, even before I finally made my personal shift to speaking from notes rather than reading, was to place an upper limit on the number of words in any sentence. I started paying really close attention to the intricacy and order of my clauses. A lot of closely written essays are like mysteries, whose final payoff doesn’t arrive until the end. But when you listen to such an essay, it’s really hard to hold all the moving pieces in the right places in your head as you’re getting there. You don’t get to go back and read it a second time to make sure you understand. So in writing for listeners (v readers), I started trying to go with a much more inverted-pyramid approach–even if it was something as simple as laying out the basic structure of my talk at the beginning, and tossing in a signpost or two. Here’s what I’ll talk about today, here’s where I’m going next, here’s what I’ve told you, etc.

It seems to me that there’s an added challenge when it comes to Twitter. Imagine a talk where the speaker is providing a blockquote from another scholar, and someone in the audience livetweets that the speaker is drawing on Theorist X regarding Topics A, B, and C, and sends it out there. Whereupon our speaker finishes the quote, and then announces that it’s the opposite of what s/he wants to focus on today. That kind of move is innocuous in print; on Twitter, it can misrepresent the speaker’s position, and any number of consequences might flow from that mistake. One of the things I try to be careful about as I’m tweeting is just that sort of issue–I try to be careful to make my tweets genuinely representative of the talk, but also informative for someone who might not be there. It’s tricky stuff, and close writing can make it trickier.

I do think that writing with more signposts is one easy solution, as are shorter sentences, more direct sentence constructions, and less reliance on the epiphany model of print scholarship. In a presentation, the audience is there (barring the occasional rudeness). Instead of worrying about holding something back until the end so they’ll keep reading, it’s important to provide them with some help so that they don’t get lost and stop listening.

This is where reading your script head-down, staring at the floor, doesn’t make a lot of sense to me. I think that a closely written essay can be made a lot more accessible to an audience with a slide deck that has nothing in it but words from the presentation itself. I don’t know that folks would want to do the full-on Lessig style presentations, but honestly, even a few slides with outline or section headings and citations (h/t @johnmjones) can provide signposting even if the essay doesn’t. But I’d go one step further, and actually drop in thesis statements onto slides–we already have a model for this in the form of “pull quotes.” I’m a Presentation Zen guy, so I do this already (with big splashy pix), but a deck of 8-10 pull quotes, sized appropriately for tweeting? That’s a matter of 10-15 minutes of copy/pasting from an essay, and all of a sudden, you’ve made your essay easier to follow and much more shareable. And a co-panelist can run the deck from your script, if you don’t want to divide your attention. (You can do something similar with handouts, yes, but then you’ve got to worry about logistics (copying, distribution, last-minute edits, etc.).)

For me, the shareability is a big point. I’ve gotten copies of 5 or 6 presentations (either scripts or vids) over the last few months from people whose work I heard about through conference twitter streams. In a couple of cases, they were from complete strangers at conferences I’ve never attended. On a couple of different occasions at RSA this week, I had people contact me privately to ask for a copy of one of the presentations I tweeted–those are connections that simply aren’t going to happen if we have to wait until the published volume comes out (and only then, if a paper is selected).

So I guess, technically, I’m not arguing here that everyone should be using Twitter so much as I’m saying that we should all understand what it entails to present to an audience where Twitter is used. A slightly different point, but worth making, I think. And something I’m thinking about for the talk I’ll be giving in June.

© 2014 Collin Gifford Brooke - A MeanThemes Theme