Capitalism, Innovation, and Labor in the Neo-Liberal University


This really should be my last SIF post, since it returns to the theme with which we began our fellowships: talking with Justin and Brennan last August about the meaning of “innovation.” I have been thinking about that lately in relation to my blog post from last month, and today those thoughts were stirred by an unexpected source, President Becker.
I should say right at the outset that I have been and remain suspicious of the word “innovation,” which strikes me as a kind of capitalist buzzword for the new, the shiny, the expensive, and the soon-to-be obsolete. This is especially true in relation to technology and education, which has a track record of chasing “innovations” that often end up looking rather faddish and only occasionally seem to accomplish much beyond shifting a lot of money from the public and from students into the hands of for-profit companies.
That, of course, is a cynical and probably unfair characterization of a lot of what universities mean when they talk about innovation. But it is a way of looking at “innovation” within the context of the neo-liberalization of the university, a broad tendency that is clearly at work across higher education these days and one that I think is, on the whole, a serious threat to universities, or at least to the project of the humanities on which they have been built.
At this point, it may look like I am gearing up for a rant, but that is not really where I am headed.
Today, I was at a meeting of the committee overseeing the implementation of the consolidation of GSU and GPC at which, to my surprise, the main topic of the day was the nature of innovation, with specific reference to both higher ed and technology. Among the participants in the little mini-debate that broke out on that topic was President Becker, who made the comment that technology itself is not innovation. To illustrate this, he pointed to the strides GSU has made in lowering its number of dropouts and in helping students get their degrees in a shorter period of time*, strides based in part on purchasing a piece of software that allowed the university to track students, to use advanced metrics to identify students who were falling behind, and to improve advising. But, as he pointed out, other colleges and universities that purchased the same software had not seen the results from it that GSU had. As Becker put it, this is because innovation is the “marriage of process and technology.” He credited GSU’s success less to the software, important as that was, than to building a process to use the technology in efficient ways.
Now, President Becker knows more about higher ed than I ever will. But I would like to amend his statement just a little, to emphasize that inside the idea of “process” is the idea of labor. Process, if you will, is a fetish in the Marxist sense: it “hides” within it labor-value, human labor.

So my definition of innovation would extend Becker’s just a little bit and emphasize that technology + process + skilled human labor = innovation.
And this brings me back to the SIF. The projects we have been working on often involve fancy technology; the 3D Atlanta project or the Drones are great examples of this. Sometimes they involve “innovative” software that seems likely to go the way of the MOOC rather quickly (I am thinking of you, Captivate!), and sometimes they involve rather basic and boring technology that doesn’t rise much beyond the level of Microsoft Word or the basic HTML it takes to put together a website. But this is not what makes the SIF project go. Instead, what is “innovative” about the SIF is its relatively large-scale investment in creating skilled human labor out of grad students and honors undergrads and putting us to work.
This by no means exempts the SIF from examination as being itself, in many ways, part of the neoliberal transformation of universities. But I think it provides one answer to the question with which the program began: what is innovation? We are.
*And thus with less debt. Perhaps my favorite line of the whole meeting came from Becker, who commented that the worst student outcome GSU could be responsible for was a student who had nothing to show for college except debt. The college loan economy is intimately tied to the neo-liberalization of the university, but that is a topic for another time.

 

The SIF and graduate education

Today in the exchange, I overheard Ryan mention to Justin that he often thinks about ways to “market” the SIF, a program which has, as best I can tell, very little in the way of reputation even at GSU. This is not surprising, since the program is less than a year old and much of the work that we do is in an ancillary role and/or is long-term work that hasn’t yet shown up in the classroom. My work for the hybrid American history survey is a good example of this. It is trickling into the classroom, but any students who encounter it will have no reason to associate it with the SIF program, and my conversations with professors in the department lead me to think that, for the most part, few are aware of the SIF involvement in the development of content for the course.

Assuming that the SIF funding is renewed (and I hope that it is, as we have done a ton of good work for the university), time should take care of some of this. Hopefully in years to come more faculty will know about the chance to intersect with our labor and expertise, more students will seek positions in the program, and the general profile of the SIF will increase within the GSU community. Which is not to say that Ryan is wrong: some marketing and brand development would be worthwhile too.

I have also been thinking about the public profile of the SIF of late from a slightly different perspective. As I imagine it, the SIF is on its way to maturing into a kind of mad-cap mash-up of a makerspace, a development vehicle for innovation in instruction, a source of skilled labor that makes large-scale pedagogical projects possible, and a pipeline for producing graduate students with very specialized skillsets that will equip them to succeed in a higher-education landscape and marketplace increasingly oriented towards those who can combine content mastery with technical dexterity.

Right now, the SIF seems to be primarily “marketed” as a means to an end, that end being the education of undergraduates. That is a laudable and important goal, and there is a lot of room for the SIF to make a reputation as a resource in that area. But most of the labor of the SIF, and the specialized knowledge that makes it possible, comes from graduate students like myself who have been using the SIF either to use and develop discipline-based skills (e.g., computer programming students getting to build databases) or to build skill sets that are not ordinarily part of their disciplinary training: historians, for example, are not normally trained in XML/TEI, and marketing majors are not normally taught how to produce video or work with Tableau.

The chance to learn these skills, to work across disciplinary boundaries on common and complex projects, and to think at the intersection of technology and knowledge is an exciting part of the SIF and a rare opportunity for graduate students. My imagination of what the SIF could become is admittedly colored by my abiding interest in graduate education and the development of GSU as a research institution, but I can imagine the SIF coming to serve as a major recruiting tool for graduate studies at GSU.

Think about it. Right now the SIF includes grad students from the natural sciences, the humanities and fine arts, the social sciences, the law school, and the business school. All of us are getting an education that complements our disciplinary work while stretching it in new and exciting ways. This is a program, I think, that a lot of graduate students — especially perhaps those who look at the conventional academic market and recognize the need to distinguish oneself from the herd and to prepare for “alt-ac” jobs — would find very interesting and that departments could be pushing as part of their recruitment of students.

In the longer term, the SIF could be a seedbed for all kinds of new programs at GSU. It could help orient individual departments towards the future of academic work. Speaking strictly for the humanities here, this is something that could raise the national profile of GSU’s programs, as there is a growing recognition of the need to reorganize graduate studies (especially the Ph.D.) in the humanities, but not yet a big movement towards doing something about it. The SIF is an asset on this path, and could not only help market programs but also give departments a firm basis on which to start thinking about more substantive and widespread changes in their curricula and in what it means to train graduate students.

The SIF could also serve as the embryo of a host of even larger initiatives. Already, it helps (albeit mostly through its undergraduate honors component) power the CURVE. It could also serve as a platform for the development of a digital humanities center at GSU, for new media centers, for interdisciplinary centers, and so on. In theory, these kinds of centers and working groups could transform GSU into a school with a national reputation for training Ph.D.s to be ready for the teaching and research of the 21st century. This might be especially helpful in the humanities, which are arguably the disciplines with the most reinventing to do, and thus the biggest opportunity for a university that can find a way to do it.

As the consolidation with GPC makes clear, GSU is doubling down on its commitment to undergraduate and, now, associate-level education. I would imagine that the “new” GSU is going to be even more interested in technical resources as it contends with what will soon be one of the largest undergraduate populations in the country. But GSU is also making strides towards a national reputation as a graduate school and research university. The SIF program is well positioned, with savvy growth and a good marketing plan, to benefit from, and facilitate, both of these commitments.

A very first, very naive attempt to play around with data visualization

As the Hoccleve project nears our first major milestone, the digital publication of an edition of Hoccleve’s holograph poems, we are beginning to ask questions about how to transform our XML into an HTML display. Thus, we are embarking on a graphic design/display phase of our work. One of the things we have been discussing is creating data visualizations of the poems as an ornament to the edition. Most likely, these will be simple: word clouds, for instance. I have been asked to explore some options for this.

This is not something that I have done before, but it is something that I have been curious about as a tool for my own work. Because the plain text versions of the poems weren’t quite ready, I decided to take a little time to begin exploring what might be possible, from a historical perspective, with data visualization tools.

I also figured it would make an interesting first blog post of the semester, even if at this point my foray into data visualization and data mining is completely amateurish. Even so, I am reporting on some early experiments using Voyant, a free web-based tool for textual analysis. I wanted to see how it works with early modern texts and with some of the documents I am using for my dissertation. This post is also offered in the spirit of a simple review of the software.

My dissertation is a study of relations of power between the English and Native Americans in colonial Virginia. For this reason, I decided to run William Hening’s old edition of the Colonial Statutes of Virginia through Voyant. Hening’s book prints most of the surviving documents from the legislative branch of Virginia’s colonial government, so this is essentially a book of laws. I was curious to see what textual/data analysis might be able to show about Indians as the object of legal proclamations.
Step one was quite simple. I went to the Internet Archive and copied the URL of the full text version of Vol. 1 of Hening’s Statutes (which covers the period between 1607 and 1660). In Voyant terms, this is a very small sample: only 233,168 total words, and about 21,000 different words. (I have put in several million words while playing around, without any problem at all, so Voyant seems capable of handling large samples of text.)
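For anyone who wants to reproduce this kind of rough count outside of Voyant, a minimal sketch in Python follows. It assumes the Internet Archive’s plain-text file has already been saved locally under a placeholder filename, and its crude tokenization will not match Voyant’s numbers exactly.

```python
import re
from collections import Counter

# Illustrative only: a rough, Voyant-style word count for a locally saved
# plain-text edition. The filename is a placeholder, not the actual
# Internet Archive URL mentioned above, and the crude tokenization
# (runs of letters only) will not match Voyant's counts exactly.
with open("hening_statutes_vol1.txt", encoding="utf-8") as f:
    text = f.read().lower()

tokens = re.findall(r"[a-z]+", text)
counts = Counter(tokens)

print(f"Total words: {len(tokens):,}")
print(f"Distinct words: {len(counts):,}")
print("Most frequent:", counts.most_common(10))
```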

If you are using a modern book, with standardized spelling, these numbers should be essentially complete and usable. But the first issue with early modern texts in Voyant is that they were written before standardized spelling. Voyant counts “she” and “shee,” or “laws” and “lawes,” as different words, which immediately makes all the data less than 100% accurate.

For those of you not familiar with texts of early modern documents, look at this, for example. It is a randomly chosen page from The Records of the Virginia Company of London, which was published in an essentially diplomatic transcription (one that keeps as close as print will allow to the manuscript sources on which it is based).

[Image: photograph of a page from The Records of the Virginia Company of London]

You can see, first off, how difficult it would be to OCR this. Hening’s text is a much more user-friendly semi-diplomatic transcription, which is part of why I am using it as the main text for this blog post, but even it has some irregularities in the OCR full text generated by the Internet Archive, because of its irregular textual conventions. But Hening preserves original spelling, and this poses issues for running it through Voyant.

You can see this, visually as it were, on the word clouds. Here, for example, is a word cloud for Volume 1 of the Records of the Virginia Company of London. In order to screen out very common, but generally uninteresting words, I have turned on Voyant’s so-called “stop words” (words which are screened out from the word cloud).

As you can see, even with those turned on, the cloud is still rather cluttered with basic words, repeated words, and other oddities, because of variations in spelling:

[Word cloud: odd word spellings in the Records of the Virginia Company, even with stop words on]

Now, you can manually add stop words to Voyant’s list and slowly purge the word cloud of oddities. That might work for something simple like making a word cloud, but it’s not going to solve the larger problems that early modern spelling presents for data mining/analysis. One possible long-term solution is another piece of software, called VARD 2, which I obtained this week but have not yet had the chance to learn. VARD 2 is designed to generate normalized spellings for early modern texts, making data mining more possible and more accurate. Even with this tool, however, a lot of text preparation work is required in order to end up with a “clean” text.
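To make the problem concrete, here is a toy sketch, in Python rather than VARD 2, of what spelling normalization plus extra stop words looks like in principle. The variant table, the stop-word list, and the filename are all invented examples, not part of any real workflow I have used.

```python
import re
from collections import Counter

# A crude stand-in for what VARD 2 does properly: collapse a handful of known
# early modern variants before counting, and filter out extra stop words.
# The variant table and stop-word list are tiny, hand-made examples; a real
# workflow would lean on VARD 2 or a much larger normalization dictionary.
VARIANTS = {"shee": "she", "lawes": "laws", "doe": "do", "bee": "be"}
EXTRA_STOP_WORDS = {"said", "shall", "whereas"}  # examples only

def normalize(token: str) -> str:
    return VARIANTS.get(token, token)

with open("hening_statutes_vol1.txt", encoding="utf-8") as f:  # placeholder filename
    tokens = [normalize(t) for t in re.findall(r"[a-z]+", f.read().lower())]

counts = Counter(t for t in tokens if t not in EXTRA_STOP_WORDS)
print(counts.most_common(20))
```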

And that is the first big lesson about data mining/visualization/analysis on early modern texts – they present issues about the ‘text’ that do not arise with modern typefaces, spellings, etc.

For now, though, since I’m just playing, I’m hoping to work around these problems rather than solve them. So I go ahead and do something very basic by asking Voyant to track all the times the word “Indians” (in its modern, and most common early modern, spelling) appears in the text. By asking Voyant to show me the results as a trend line, I can see the relative frequency of the word rise and fall over the course of Virginia’s statutes. Because the statutes are published in chronological order, they are also, conveniently, a timeline.

With the full text of Hening’s, a trend line of references to “Indians” looks like this –

[Trend line: “Indians” in the full Hening vol. 1, raw text]

Note that references to Indians spike at the end of the text, which in this graph is divided into 10 segments of equal length. This spike is a reflection of another problem with the source text: the index and the rest of the prefatory material that surround the actual statutes themselves, which are the part of the book that I am interested in.

The only way around this is cutting and pasting, a lot of it. So I hand-delete everything in the text that is obscuring my sample. Without the index and introduction, the text drops from 233,000 words to 183,000.

Using this text, I run the trend line again, and the change is clear. Most notably, the “index” spike no longer appears.

[Trend line: “Indians” (with variants) in the trimmed Hening vol. 1]
The relative frequency of the word, however, is largely unchanged. In the ‘full’ version, the word Indians appears 9.74 times per 10,000 words; in my ‘new’ version, 9.96.
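If you wanted to check the arithmetic behind these numbers outside of Voyant, the calculation is simple enough to sketch. The Python below is illustrative only: the filename is a placeholder for a locally saved, trimmed copy of the text, and its counts will not line up exactly with Voyant’s because of tokenization differences.

```python
import re

# A minimal sketch of the kind of trend line Voyant draws: split the trimmed
# text into 10 equal segments, count a target word in each, and report the
# overall rate per 10,000 words. The filename and target word are placeholders
# standing in for the trimmed Hening text and the search term used above.
TARGET = "indians"

with open("hening_statutes_vol1_trimmed.txt", encoding="utf-8") as f:
    tokens = re.findall(r"[a-z]+", f.read().lower())

n_segments = 10
seg_len = len(tokens) // n_segments
for i in range(n_segments):
    segment = tokens[i * seg_len:(i + 1) * seg_len]
    print(f"Segment {i + 1}: {segment.count(TARGET)} occurrences")

rate = 10_000 * tokens.count(TARGET) / len(tokens)
print(f"Overall: {rate:.2f} occurrences per 10,000 words")
```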
What do these charts show? Not a whole lot in isolation. What is more interesting is what happens when you begin to make this a comparative project by importing the second volume of Hening’s Statutes, which covers the period 1660-1682. This period is an important one for my dissertation for historiographical reasons. Most scholarship on Indians in Virginia has focused heavily on the period between 1607 and 1646, the “contact” era, when settlers and the Powhatan chiefdom were each aggressively trying to establish power over the other. These battles were largely decided by 1646, and with the decline of the Powhatan as a major power, historians of colonial Virginia have generally begun to rapidly lose interest in Indians, who go from central to bit players in narratives of Virginia’s history. This is partially because historians become, around mid-century, interested in telling the story of the rise of slavery in Virginia, but partially because there is a sense that “Indian” history in Virginia is largely over by that time.

Yet references to Indians in Virginia’s statutes increase in Volume 2, which covers 1660-1682. They now appear 13.53 times per 10,000 words: less often than words like “tobacco” (25.97) and “court” (24.12), but more often than “burgesses” (10.14), king/kings/king’s (combined 10.23), and servant/servants (combined 11.00).

This is potentially interesting because it doesn’t fit with the impression left by conventional historiography. Indeed, it suggests that Indians’ importance, at least in the eyes and words of the law, was increasing. By themselves, of course, the numbers don’t really mean anything; that’s where deep reading, as opposed to computer-assisted shallow reading, comes into play. But they are an interesting numerical representation of an intuitive observation that helped shape my dissertation, which was initially prompted by a sense that historians had been too quick to turn their attention away from Indians as a subject in Virginia history. This little exercise in Voyant doesn’t itself mean much, but it does allow me to quantify that sense in a relatively easy way, by pointing out that Indians become more prominent in Volume 2 of Hening’s Statutes than in Volume 1.
It would be absolutely fascinating to see if this trend continues after 1682. But that thought leads me to the next challenge I ran into in my early experiment in data analysis: finding source texts to analyze. As wonderful as the Internet Archive and Google Books are, neither seems to have a copy of volume 3 of Hening’s Statutes. So my curiosity about whether this trend line would continue is stymied by the lack of access to a text to import.

My aim in this post has been pretty modest. Mostly, it is a simple report of a couple of hours playing around with what for me is a new tool. It also points to the continuing resistance of early modern texts to data mining techniques, and to the reliance of people like myself, who are not going to be readily able to generate text files of huge books for analysis, on the vagaries of even extensive resources like the Internet Archive.

Will any of this factor into my dissertation? Who knows: these kinds of tools are still, I think, more commonly used by literary scholars than historians, and as you can see from this post, they are not within the purview of my disciplinary training. At this point, I’m not even sure I’ve learned anything from my several hours of exploration, except that it was fun and that I’d like to spend more time with it, thinking about how to make it do more interesting work.

Have any SIF’s worked with these kinds of tools before and if so, with what kinds of results?

Microfilm, TEI headers and bibliographic metadata

One of the biggest and most urgent issues facing historical scholarship in the next several decades involves the transition to digital archival work, and with it the question of how the materiality of archival sources can be preserved, respected, and communicated in that translation. The importance of archival sources as material objects has become a vital branch of study in the last few decades, as historians and literary critics have begun asking detailed questions about the signification of paper, binding, type, etc., to the meaning of texts. This has occurred partially in response to the mania for textuality that was associated with the critical theory boom of the 1980’s and 1990’s, but it has also coincided with the advent of increasing digitization of the archives.

On the one hand, as anybody who has ever struggled with microfilm can tell you, digitized archives, even very simple ones that just display high-quality digital images, can be a major step forward for scholars asking materially oriented questions. Most of the time on microfilm, bindings go unreproduced, paper and watermarks are washed out in the harsh whites of the film, and any sense of the materiality of the original document is lost in the always-present awareness of the materiality of the film through which you are viewing it. Some digital archives, such as the EEBO database and a database I’ve been using a ton this semester, the Virginia Company Archives, are simply digitized images of microfilm, and thus perpetuate rather than alleviate the limitations of the microfilm era (which, it’s worth noting, made archives transportable in important ways: how else could we access significant manuscripts from the British Library at GSU?). Good digital imaging can really help with this. When you look, for example, at a high-end database such as State Papers Online or the Cecil Papers, it is at least possible to see what types of paper are being used, and watermarks, chain lines, and other information occasionally bleed through. On really powerful imaging sites such as the Folger’s Luna database, even more can be accomplished.

But it is inevitable that questions about the materiality of archival sources will ultimately need to be answered with physical rather than virtual sources. Even so, there are ways to make sure that as much of the physical information about a source as possible is preserved in its digital form. This is something that TEI, a specialized set of XML tags, makes possible. At the Hoccleve Archives, I have been working on building TEI headers that contain significant bibliographic and material information about the manuscripts we are using to build our database.

By using TEI, I am able to record the provenance of the manuscript, make detailed notes about its binding, clasps, the parchment on which it was written, the numerous inks and pencil marks that have been added to it over time, etc.

This is the part of the post where I had intended to show you the TEI header that I have built. But WordPress keeps converting it from code into a display copy that strips out most of the data I wanted to show. Do any SIFs out there know how to paste code into WordPress and have it displayed as plain text?
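In lieu of the real thing, here is a rough, purely illustrative sketch of the kinds of material details a teiHeader can carry, embedded in a small Python script that simply checks the sketch is well-formed and prints it back. The element names follow TEI conventions, but the values are invented and the structure is not schema-validated; this is not the actual Hoccleve header.

```python
import xml.etree.ElementTree as ET

# Invented values throughout; element names follow TEI conventions, but this
# sketch is illustrative only and has not been validated against a TEI schema.
SKETCH = """
<teiHeader>
  <fileDesc>
    <titleStmt><title>Holograph poems (illustrative header only)</title></titleStmt>
    <sourceDesc>
      <msDesc>
        <physDesc>
          <objectDesc>
            <supportDesc>
              <support><material>parchment</material></support>
            </supportDesc>
          </objectDesc>
          <bindingDesc><binding><p>Later binding with metal clasps.</p></binding></bindingDesc>
          <additions><p>Ink and pencil marks added over time.</p></additions>
        </physDesc>
        <history>
          <provenance><p>Ownership history would be recorded here.</p></provenance>
        </history>
      </msDesc>
    </sourceDesc>
  </fileDesc>
</teiHeader>
"""

# Parsing confirms the sketch is at least well-formed XML, then prints it back.
element = ET.fromstring(SKETCH)
print(ET.tostring(element, encoding="unicode"))
```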

 

15th Century Poetry in Buckhead: The Hoccleve Archive hits SAMLA

Last Friday night, the SIF Hoccleve team presented our work at the South Atlantic Modern Language Association conference, held this year in Buckhead. The conference theme this year was sustainability, and our poster highlighted the way that the Hoccleve Archive Project sustains a corpus of texts and functions as a pedagogical site for the sustenance of textual scholarship skills. The poster session was very well attended, and we got a lot of people interested in our project.

Spreading the word

Besides the poster, we displayed a slideshow documenting the work we have done transforming the old HOCCLEX files into .TXT and XML formats.

Having cracked the nut of opening the HOCCLEX files, we are now moving on to putting up a TEI-enhanced digital edition of the poems of the holograph manuscripts.

The SIFs of the Hoccleve Archive

Collaborative work in the humanities

This weekend, the South Atlantic Modern Language Association is coming to Buckhead, and the Hoccleve Archive team will be there. The last couple of weeks have been spent getting ready for what, for us at least, is the first public roll-out of our work.

For me, this has meant a lot of time doing graphic design work, getting our poster and PowerPoint ready for display. One of the things I have learned in the process is that the Hoccleve project is larger and more institutionally diffuse than I previously knew. I learned earlier this semester that the University of Texas was involved, as the host institution of our digital repository and the home of the general editor of the Hoccleve Archives project, Elon Lang. Robin Wharton has established a hub for the project here at GSU, and as best I can tell, GSU is currently the most active institution involved in the project, largely due to the considerable investment the SIF program has made in it.

But while working with Robin on the poster, I learned that the project also has branches at two Canadian universities, the University of Manitoba and Concordia University. At Manitoba, a professor in the English department is seeking funding from what I gather is the Canadian equivalent of the NEH to help digitize the Hoccleve Archive’s large collection of microfilmed manuscripts and to acquire microfilmed copies of the few manuscripts we do not yet have. At Concordia, another professor is using Hoccleve Archive materials to develop an editorial and collating web tool.

It’s pretty exciting to be involved in a project with such widespread roots, especially since Elon and Robin are at work trying to broaden them even more. It sometimes feels a little funny, coming from a discipline that is overwhelmingly focused on individual work, to think of myself as working on a common project with people I may never meet. Yet this is one of the most interesting aspects of my work with the Hoccleve project. On a broader scale, it seems to me that this is a style of work that the humanities at large are going to need to increasingly integrate into their training of graduate students and into the conceptualization of research agendas. Collaborative work is central to a great many fields in higher ed, and maybe some of the isolation of the humanities comes from the steadfastness with which they have held onto models of what scholarship looks like (the monograph, the lone reader in the rare book room) that place them on the margins of the organization of higher ed. Not that there isn’t a lot to be said for being the loner in the rare book room!

Consumption vs. Production in the Hybrid 2110

With a lot of help from Ameer, I am finally reaching the point where I can make videos more or less on my own. As I have been making them, I have been thinking about how educationally useful the experience of making the videos is. The countless decisions about what to edit out, how to write captions that add to the content of the talking heads, and how to select images that enrich the storyline are really useful exercises in critical thought. The making of a video is more stimulating and engaging than the experience of watching one. Maybe this is simply a reflection of my still-modest chops as a filmmaker, but I think fundamentally it has to do with the difference between consumption and production

[Image: a couple looking at a TV screen, Museu da Imagem, Braga, 2011]

– or, to use Halverson’s terms, between the kinds of content-based technologies educational institutions have often been drawn towards and the learning technologies that have proliferated on the internet.

So I have been thinking about how to use the hybrid 2110 and its video component in ways that try to capture something of the experience of making the films. I know of at least one person at GSU, a VL named Nicole Tilford over in Religious Studies, who has been teaching students in upper-division classes to make videos as an assignment. You can see some of the results of her students’ work here. Doing this in upper-division courses, however, carries several significant advantages for projects of this nature: she has smaller classes, and self-selecting students with deeper background knowledge and an actual interest in the course material. Any attempt to capture something of the experience of making films in the hybrid 2110 would need to be built around a recognition of the unique challenges of large classes and substantial numbers of students with little personal interest in the class.

The simplest idea, at least conceptually, is to have a final project that involves students making their own historical videos. Perhaps you could allow students to choose either to write a final paper or to produce a short video. From an educational perspective, this idea has lots of merit: it’s a very active learning project that at least some students might enjoy. It might also give people an incentive to watch the videos for class with a more careful eye, since in addition to being ‘content’ for the course, they would suddenly also be models for an assignment. On the other hand, it would also require the instructor to commit to teaching video editing. Now, there must be easier software out there than Premiere Pro, but this is still a substantial additional obligation to add to a course that is already absolutely packed, weighed down by large class sizes, the laudable decision by the university that 2110 needs to be taught as a “writing-intensive” course (which means grading long written pieces, and quite often exercises aimed at developing writing skills), and the much less laudable decision by the university that 500 years of U.S. history can be taught in one semester.

Maybe some of you know software that students could learn quickly enough to make this viable?

I have also been thinking about more middle-ground solutions that would get students interacting with video production in less depth than making their own videos, but in greater depth than merely watching them. One idea I have been kicking around is an assignment that asks students to write captions for videos. The idea would be to show them a short video (one made specifically with the assignment in mind) that has no captions, and ask them, either individually or in small groups, to write captions for it. This strikes me as a useful exercise, especially if you had a WAC or TA in the classroom who could help students develop the ability to write meaningful captions that add something to the video, or that demonstrate the ability to synthesize from it. This is something you could do either on its own, or as a laddered assignment that builds towards student-generated video.

Any thoughts on other ways to realistically get students engaged with video rather than simply consuming it?

Tweaking the SIF

This afternoon I went to a very interesting talk by Rich Halverson of the University of Wisconsin, which raised two major issues, one about the SIF program in general, and the other about one of my SIF projects, the American History Video project. To keep the post to a manageable length, I’ll save the History video project for a later post and take the SIF-wide issue first.

Let me preface all this by saying that I am enjoying SIF immensely, and have learned a ton. That said, I think the program as it is being run now has a significant, but fixable, flaw, and Halverson’s talk was just the kind of event that I think could fix it. Rather than a narrow technical training, say, how to use a specific piece of software (as many of the ‘normal’ training opportunities available to us are), this was a talk that was simultaneously practical and actionable, yet mostly concerned with big pictures and with deep and broad questions about the role of technology and innovation in higher ed. My time in the SIF has been quite useful so far, well worth the not inconsiderable investment of time that it has required. However, my experience in general has been that because we (maybe I should say ‘I’) are so deep in the details of specific projects and in acquiring the skills required to do them, SIF has been a relatively poor forum for thinking about these big questions.

Obviously, SIF is not a graduate seminar, so it is both unsurprising and appropriate that we are engaged primarily in project-based work. And, as Halverson pointed out today, real learning is most likely to come about while engaged in concrete problem solving. With that said, my sense of the program to date is that it has yet to develop its potential as a place for thought, for brainstorming, and for giving its fellows an opportunity to think about, talk about, and study the larger questions about technological and pedagogical change that interested many of us in the program.

Ultimately, finding a way to do this will enrich the program, making it a bit less of an apprenticeship, internship, client-work program and more of what the best fellowship programs achieve: the opportunity for intellectual engagement with both the details and the broad frameworks of our community of interests. One way to achieve this might be in the small groups that Brennan, Joe, and Justin are organizing and which are meeting this week. Another might be to actually assign us regular weekly hours that can be spent in intellectual pursuits. Even a couple of hours a week that we could devote to learning, to reading some of the intriguing books that Halverson mentioned today, for example, could go a long way towards achieving this. Perhaps we could collectively meet on occasion to discuss shared reading.

Doing this would mean taking some of our hourly allotments away from project work. This is unfortunate, as many of the SIF projects need huge amounts of labor. However, I think that at its best, the SIF can be more than a team-based makerspace and can blossom into a more well-rounded place where people think and do. Moreover, I suspect that it would make the work being done on SIF projects better in the long run, because it would give us a chance to engage in thought, research, reflection, and conversation. And it would help fulfill one of the goals Justin mentioned at orientation, about the program being a kind of incubation tank for cross-disciplinary conversations and for innovation.

Updates from the Hoccleve Archives

There has been a lot of activity on the Hoccleve Archives project over the last few weeks, mostly relating to a series of computer files known as the HOCCLEX files. These files, which date from the 1980’s, were originally developed by a team of researchers, led by D.C. Greetham, working on a critical edition of Hoccleve’s magnum opus, the Regiment of Princes. They are careful transcriptions of three holograph manuscripts that contain about three dozen poems. Holograph manuscripts are those written by their author, and one of the things that makes Hoccleve so interesting is these three holograph manuscripts, because very few examples of works actually written out by their authors survive from this period (most extant manuscripts were produced by scribes; Hoccleve, however, was himself a scribe, so he produced his own manuscripts). The HOCCLEX files took the holograph manuscripts and used an early, and now mysterious, computer language to mark the transcripts for grammar and spelling. The original idea was that the HOCCLEX files would provide a lexicon of Hoccleve’s usage, so that editors of the Regiment, which survives in many manuscripts but none by Hoccleve himself, could use the files to make editorial decisions about spelling variants and similar discrepancies between manuscripts. Unfortunately, Greetham’s proposed edition never materialized, though the files were used by Charles Blyth in his 1999 edition of the Regiment.

Since that time, the HOCCLEX files, and the treasure trove of information they contain about Hoccleve’s Middle English, have not been easily accessible to scholars. Not only were they privately stored, but more importantly, they were developed using a now-lost and unknown piece of software, making them difficult to use in their original format. My computing-oriented SIF colleagues have built a custom script that allows the HOCCLEX files to be translated into .TXT and XML formats (a simplified sketch of that kind of conversion appears after the list below). In this new format, the files will serve as the basis for several forthcoming substantive additions to the Hoccleve Archives website.
1. Once a database and HTML display are created, the HOCCLEX files will populate the Hoccleve Lexicon, a fully searchable and browsable guide to Hoccleve’s orthography and diction. In this form, they will serve as a robust and public version of what they were originally designed to be: an invaluable source of primary material for users of the crowd-sourced editorial tools we will develop to create a digital critical variorum edition of the Regiment of Princes. Moreover, because the Lexicon will be public and searchable in ways beyond those possible when the HOCCLEX files were created, it will be open to researchers asking a broad variety of questions about Hoccleve’s texts and the development of the English language and its poetry.
2. With the addition of a more detailed XML mark-up and a stylesheet, the HOCCLEX files will allow us to host a digital edition of the poems in the holograph manuscripts.
3. The HOCCLEX files are now an important primary source in their own right, evidence of an early moment in the history of the digital humanities. By making them available, we will be documenting that history, an important step in our larger aim to use the Hoccleve Archive as a hub for preserving and sustaining the history and practice of scholarly editing in the digital humanities.
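For anyone curious what that kind of conversion involves, here is a deliberately simplified sketch. The real HOCCLEX format is not documented in this post, so the sketch invents a hypothetical “word|lemma|gloss” line format purely for illustration; the script the SIF team actually wrote is necessarily different and more involved.

```python
import xml.etree.ElementTree as ET

# Purely illustrative: the real HOCCLEX format is not documented in this post,
# so a made-up "word|lemma|gloss" line format stands in for it here, just to
# show the general shape of a legacy-file conversion: read old records, then
# emit both a plain-text stream and a minimally tagged XML document.
def convert(lines):
    root = ET.Element("lexicon")
    plain_words = []
    for line in lines:
        if not line.strip():
            continue  # skip blank records
        word, lemma, gloss = (field.strip() for field in line.split("|"))
        entry = ET.SubElement(root, "entry", lemma=lemma)
        ET.SubElement(entry, "form").text = word
        ET.SubElement(entry, "gloss").text = gloss
        plain_words.append(word)
    return " ".join(plain_words), ET.tostring(root, encoding="unicode")

# Hypothetical sample records, not actual HOCCLEX data.
sample = ["lawes|law|written law", "shee|she|third person pronoun"]
txt, xml = convert(sample)
print(txt)
print(xml)
```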

Update on the US History Survey

Late last week, I observed a session of the hybrid U.S. history survey devoted to the topic of secession. It made me more than a little nostalgic for the classroom, in all its gritty and chaotic glory. And it reminded me of the madness that is the one-semester U.S. history survey: it is the first week of October, and you have already reached the mid-point of the survey, the Civil War. Absolutely crazy, and a strong argument for GSU to adopt the standard version of the U.S. history survey, which is generally broken into two semester-length classes. But that is another topic entirely.

If my last post focused on the risks of the hybrid course, my experience watching the class served to remind me of the potential rewards of the hybrid structure. I have taught the survey several times, and the biggest problem I always face is the necessity of providing context. This is, in part, a coverage issue common to any classroom, but I think in history it is particularly important because as history teachers, the main skill we are trying to teach our students is to place events into historical contexts, to see things through the lens of the past. This is really only possible if you know enough about the past to create a context for it. For this reason, depth and breadth are super important to historical understanding. When I teach, say, Tom Paine or Harriet Jacobs, I am less interested in the kinds of questions students can readily discuss (was Paine right, was slavery bad) than in demonstrating how awareness of context can create historical interpretations. What I want out of a discussion, for example, is for students to be able to see the play of reason and passion in Paine’s work as a product of Enlightenment thinking, to catch the references he makes to the wrongs of George III, and to see how he is manipulating those wrongs. Doing this requires context, and context is usually provided via the lecture. I know that when I teach, my class is in perpetual tension between my need to provide context, which requires (or at least seems to require) me talking, and my real belief that ultimately my students will learn more if they are talking more and I am talking less.

This is why the video component of a hybrid course seems so promising, and worth the risks that accompany its roll-out. By helping provide context outside the classroom, the videos open up very precious in-class time for discussion, small-group work, and other kinds of more student-centered learning. If it works, the potential payoff is huge. As I watched class last week, I saw the professor attempting to do just this. However, I also saw the kink in the system. It seemed clear that few students had watched the videos, leaving a lopsided discussion with only a few participants, and an instructor forced back into talking more than he had hoped.

So, I saw the bad as well as the good, but I left reminded of the vast potential this type of approach could offer.