A problem: some preliminary remarks
I’ve been thinking tonight about Joris van Zundert’s post in Humanist 28.1, in which he asks whether (and how much) humanities-oriented academia values the work of software builders. Van Zundert’s post responds to a previous question by list moderator Willard McCarty about how the increasing availability of build-it-yourself coding frameworks is changing the nature of the relationship between digital humanists and the people who build the kind of software that digital humanists use — or, as McCarty puts it, whether “the boundary between scholar and technical builder is moving.”
Van Zundert replies that it is not: “in the reality of projects I see the traditional scholar battling his turf to the bitter end. Anything as long as he does not have to seriously look at new technologies and methodologies.” As a very junior scholar and someone just beginning to dip my toes into … well, into the shadow of the big-DH tent … I want to avoid taking a stand on this particular issue: my own department, the Department of English at UC Santa Barbara, has a specific emphasis on digital humanities, and I’m not yet sufficiently well-established in my field to say that I have a fair perspective on the field as a whole. Rather, what I want to do tonight is to comment on a specific feature of van Zundert’s argument, because I take it to be rather prevalent in the digital humanities and in other fields that have an ambiguous relation to coding. So I think it’s worth quoting and summarizing some parts of van Zundert’s argument so that we can look closely at its features. Van Zundert’s post tells a story (“not my story,” he says explicitly)
about a man who […] is a builder of software. […] Our hero sees a concrete problem in the scientific workflow of the [humanities] scholars. His experience and expertise tell him he can solve it. The solving will involve true research to guarantee the sophistication and validity of the solution.
Once the software is developed,
a working solution is presented that not just solves the problem, but also identifies some aspects of the problem that clearly demarcate the boundaries between what a solvable problem of this type in humanities is and what remains as grounds for interpretation, yielding scholars much information about the current limits of formalization of their epistemics. A concrete problem is solved, effort for a labor-intensive task can be decimated. What used to take weeks, months, can be put forth in mere milliseconds. Only the willfully blind would not recognize the thus created potential to reallocate resources to scholarly research by eradicating an error prone and dull, yet scholarly skilled task.
I take this to be a fair description of the way that humanities scholars have traditionally viewed the development of software that assists them in performing the kinds of tasks that van Zundert has identified. After all, humanities scholars engage in detail-oriented work on massive scales all the time. Looking to my own discipline for examples of this phenomenon, I can find numerous places where massive automated data collection can benefit more traditional textual scholarship:
- Close readings of texts can be contextualized by automated analysis.
- Hypotheses based on nuanced readings of selected texts can be tested against automated analyses of more texts than any scholar could possibly read.
- Problems that are difficult for human scholars, not because those scholars lack diligence or training but because the tasks themselves are extremely detail-oriented, boring, time-consuming, and numerically oriented, can often yield to machine-based analyses.
I think, for instance, of Ryan Heuser and Long Le-Khac’s automated analysis of 2,958 British novels published during the long 19th century, and their exploration of their starting premise, that “one promise of digital humanities is leveraging scale to move beyond the anecdotal” (4).
But van Zundert’s story does not display the optimism of Heuser and Le-Khac’s analysis: his unnamed protagonist discovers that
[i]f it is hard to value the labor involved with curating information as a scientific task, it is even harder for them to see how automation of such basic tasks would constitute research. Yes, it is important it should happen, no it is not research that we recognize. […] Our respectful scholars are not able to recognize the scholarly merit and quality of the software that our protagonist puts forward. Yes the great effort needed for a scholarly task is sincerely reduced. We see that, they say. But we can not see the work this man has done. We can not establish its scholarly correctness. And besides, this is a primitive task in the greater scholarly work. This man has not given us any synthesis, no broader scholarly perspective, no reasoning and argument on paper in a humanities journal.
Again, I take this to be a fair description of current difficulties involved in evaluating the scholarly value of coding work from the perspective of humanities institutions: How is a humanities department to allocate funding for this type of work? How does it contribute to the coder-scholar’s rise on the academic ladder? How is a department composed largely of people who do not have this specific, highly specialized skill able to evaluate the scholarly correctness of the automation of a basic scholarly task? Do our institutional values support the development of tools that automate the difficult and boring parts of our work so as to free us for the engaging work of analysis, synthesis, interpretation?
These are hard questions, and I do not propose to answer them here, though I do think we need answers. What I would like to examine is a presumption in the post that I take to be typical of those people who walk the line between coding, tool use, and more traditionally oriented humanities scholarship: the assumption that the building of a tool is necessarily a service that unequivocally works for the good of the profession as a whole.
As van Zundert puts it, “we expect our hero to be celebrated, respected, recognized for his scientific interdisciplinary achievement.” I would like to suggest that, for many pieces of software currently enjoying cultural cachet under the big tent that describes itself as “digital humanities,” this is quite a rose-tinted view of the actual accomplishments of software built for the purpose of aiding digital research in the humanities, and it overlooks some important political questions.
This particular way of glancing at software through rose-colored glasses is hardly unique to van Zundert; the digital humanities community describes many of the software packages it uses in similar terms. The website for Gephi, for instance, begins its description of the network-visualization software by saying that it “is a tool for people that have to explore and understand graphs,” that it allows its user to “profit from the fastest graph visualization engine to speed-up understanding and pattern discovery in large graphs,” and that it is “[u]ser-centric.” Gephi also asks for donations by saying that those who donate “[h]elp us to innovate and empower the community.” Similarly, in discussing the MALLET software for topic modeling, Andrew Goldstone and Ted Underwood write, “we argue that the mere counting of words can redress important blind spots in the history of literary scholarship, highlighting long-term changes […] that were not consciously thematized by scholars” (3).
There are plenty of other examples of software making grandiose claims for its own utility and importance, and even more examples of enthusiastic users making grandiose claims about the software that they use. And I would like to say up front that I believe that both Gephi and MALLET are, in their way (and based on my lamentably limited interactions with them), useful and important pieces of software that have a lot to contribute to digital approaches to literary scholarship. But this laudatory approach to software description — “we have built this gift to the world” — elides some important questions: Who are the people whom Gephi helps to understand and explore graphs? Who profits from the fastest graph visualizations? Around which users is Gephi’s “[u]ser-centric” architecture centered? Who is able to use the software that engages in the “mere counting of words”? Who are these “willfully blind” people who do not recognize the implied value of van Zundert’s coder’s contribution? And who belongs to the “all of us” in the implicit assertion, underlying so many of these claims, that these works “benefit all of us”?
Ted Underwood begins to address this problem in a blog post, in which he writes:
The models I’ve been running, with roughly 2,000 volumes, are getting near the edge of what can be done on an average desktop machine, and commonly take a day. To go any further with this, I’m going to have to beg for computing time. That’s not a problem for me here at Urbana-Champaign (you may recall that we invented HAL), but it will become a problem for humanists at other kinds of institutions. (3)
But I don’t think that this acknowledgment goes far enough toward recognizing the kinds of access problems that accompany DH-oriented software. Yes, the practical difficulty of running topic modeling on a corpus of 2,000 texts makes it hard for humanities scholars at other institutions to replicate the massive-scale textual experiments that Underwood envisions; but there are also a host of other places where access is a problem.
An example: Gephi, a Java-based program
One of these: digital humanities software is quite often difficult to install, configure, and use, and this is often the result of specific coding decisions that developers make. Even more significant, I think, is the way that DH-oriented software often involves particularly troubling trade-offs—of computer security and of computer resources—in order to run at all.
I’d like to take Gephi as a particular instance of this set of problems, because it illustrates a major subset of them particularly nicely. I’ve written at my personal blog about trying to get Gephi working on my computer, so I’ll just summarize here what I’ve described in more detail there: getting Gephi working under Linux is a mess. I haven’t tried to get it working under other platforms, but a quick search for “java” in just the “installation” section of the Gephi forums returns, at the time of this writing, 369 hits across that forum’s 96 posts, many of which mention either Windows or OS X. That is to say, quite a few people had enough trouble installing Gephi to come to the forums and post a request for help that included the word “java” in the problem description or that included the word “Java” as a proposed solution. A small barrier to access is already apparent: some of these users almost certainly had to take the extra step of registering for an account on the Gephi forums before they could post. (In fact, it’s likely that most of them did—people who are trying to install a piece of software are among the least likely to already have an account on that software’s support forums.) And I would suggest that many more people than these 369 are likely to be having the problem: there will be those who didn’t post a new topic but just wrote something along the lines of “this affects me too” in an existing thread; there will be those who couldn’t install the software and gave up without asking for help; there will be those who searched the forums, plowed through the posts already made, and found a solution. There are algorithmic ways to approach some of these questions—say, scraping the results that turn up when a search is conducted and counting the unique users who post in those topics (a rough sketch of that approach follows)—but what I would like to suggest at this point is that this problem affects many people.
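To give a sense of what that kind of algorithmic counting might look like, here is a minimal sketch in Java (staying in the language under discussion). Everything specific in it is an assumption for illustration: the search URL, the query parameters, and the markup pattern used to pick out usernames would all need to be checked against the forum’s actual pages, and a real scrape should respect the forum’s terms of use.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ForumPosterCount {
    public static void main(String[] args) throws Exception {
        // Hypothetical search URL; the real forum's query parameters will differ.
        URL searchUrl = new URL("https://forum.gephi.org/search.php?keywords=java");

        // Read the results page into a single string.
        StringBuilder html = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(searchUrl.openStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                html.append(line).append('\n');
            }
        }

        // Assumed markup: phpBB-style profile links with a "username" class.
        Pattern poster = Pattern.compile("class=\"username\"[^>]*>([^<]+)<");
        Matcher m = poster.matcher(html);
        Set<String> uniquePosters = new HashSet<String>();
        while (m.find()) {
            uniquePosters.add(m.group(1).trim());
        }
        System.out.println("Unique posters on this page: " + uniquePosters.size());
        // A fuller version would follow the pagination links and also count
        // "this affects me too" replies inside each thread.
    }
}
```

However the counting is done, the number is clearly not small.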
Which means that it’s a real problem. Which means that it is a real barrier to software usage. This is a conclusion that can be reached fairly easily even without digging into the details of the problem reports, but those details are worth looking at, too. Here are some recently active posts that I take to be symptomatic of broader problems:
- Here, someone complains that reinstalling Java broke Gephi entirely. Their solution is to downgrade to an earlier version of Java, but another user reading the thread in hopes of finding a solution complains that he cannot downgrade his Java version, because other applications with which he works depend on a later version of Java.
- Here, another user is unable to get Gephi running. The forums provide her with enough information to get the program running, but another user says that the solution did not work for her. No further solutions are provided in that thread.
- Here, a teacher is unable to use Gephi in his/her course because the local IT folks won’t install a version of Java old enough to get Gephi to run, because this would introduce security problems into the lab computers.
- Here is a Linux user trying to run Gephi. Nothing happens. No support is provided. There is no indication that the problem is resolved.
There are lots and lots and lots of other posts about other Java-related issues. But even from this set of data, a number of conclusions can be drawn.
For one thing, Java problems seem to be a real barrier to entry for those using Gephi. I will go so far as to say that Java is a bad choice of development environment for this reason alone: the necessity to install and maintain an interpreted-code environment (or, as modern Java implementations instead provide, a just-in-time compilation environment) adds an extra maintenance burden for the user. Even when configuring specific versions of Java isn’t particularly challenging, it sucks up user time (even in small amounts), and downloading and installing the initial environment, along with whatever other software packages are necessary, adds a small additional burden to the maintenance of the user’s system. This environment also takes up space on the user’s hard drive, and downloading its updates takes up the user’s bandwidth. Neither bandwidth nor hard drive space is likely to be particularly tight for a contemporary user in (what I take to be) the program’s likely target group — academics and other professionals running Gephi on laptop and desktop computers in industrialized countries — but building the software on this assumption implicitly restricts the software’s usability, and I think that this is itself a reason for reconsidering the common belief that free and open-source software is a “gift to the world” in a general sense.
After all, Java introduces a real overhead—recompilation time, extra processor resources required to recompile or interpret Java bytecode, extra hard drive space, bandwidth required to install and update the environment that allows the Java program to execute. Again, this is not likely to impact (what I take to be) the intended target group: hard drive space is probably not going to be at a premium for upper-class and upper-middle-class professionals, just as bandwidth is unlikely to be restricted in an inconvenient way, either in terms of total transfer during a billing period or in terms of very low maximum possible transfer rates. But I’d like to think about people who don’t fit into these categories: what about people who want to perform network analysis with Gephi who aren’t upper-middle-class professionals at higher-education institutions in industrialized countries? What are the opportunity costs for them?
I’m thinking specifically here of the applications that Gephi might have for secondary education, and of the degree to which technology-purchasing funds are scarce to nonexistent for schools in low-income neighborhoods. While I am not suggesting that Gephi should be developed in such a way that the Apple II, for instance, is a target platform, I do want to ask: what about the Raspberry Pi, the computer that has been touted as an inexpensive solution for computer education in schools on a limited budget? It’s not that it’s theoretically impossible to run Gephi on the Raspberry Pi—but hardware limitations do restrict what can be done with the software, and the Java-based overhead required by the developers’ design decisions makes the limitations of the hardware environment even more restrictive. For instance, the extra disk space required to install a Java environment may not be inconvenient for me, with my 2-terabyte internal laptop hard drive, but I suspect that this is a different story for schools in poor neighborhoods running Raspbian Linux from an SD card whose capacity is measured in single- or double-digit numbers of gigabytes; in that case, installing a recent Java runtime environment may be a deal-breaker. Similarly, running Gephi on a Raspberry Pi with 128 MB of RAM restricts the program to operating on networks of no more than 1000 nodes and edges, which makes the program, practically speaking, merely a toy in these environments, incapable of performing many kinds of serious tasks.
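To make the hardware constraint a little more concrete, here is a small sketch of how a Java program can at least inspect the heap it has been given before attempting to load a large graph. The per-element figure in the comment is an assumption for illustration only, not a measurement of Gephi’s actual memory use.

```java
public class HeapCheck {
    public static void main(String[] args) {
        // Maximum heap the JVM will ever grow to (-Xmx, or the platform default).
        long maxHeapBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Maximum heap: " + (maxHeapBytes / (1024 * 1024)) + " MB");

        // Illustrative assumption only: suppose each node or edge, with its
        // attributes and rendering state, costs on the order of a kilobyte of
        // heap once object overhead is counted. The real figure depends on the
        // application and on the data.
        long assumedBytesPerElement = 1024;
        long roughCapacity = maxHeapBytes / assumedBytesPerElement;
        System.out.println("Rough element budget under that assumption: " + roughCapacity);
    }
}
```

Run on a desktop machine, the reported budget is enormous; run on a board with 128 MB of RAM, it is not.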
Similarly, what about the possibility of using Gephi in less-developed countries, where recent hardware is less easily available? Notably, Gephi’s system requirements state that
Gephi uses an OpenGL 3D engine to speed up graph visualization. However a compatible graphic card is required. If your graphic card is older than 5 years, or if your laptop doesn’t have a dedicated graphic card, you may have to upgrade your hardware to run Gephi.
It’s worth saying here that this rules out the use of Gephi on older hardware past a certain point: the developers have made the particular decision to favor increased processing speed for users who have newer hardware, at the expense of making Gephi entirely unusable for those using older hardware. This is a particular political choice, favoring the economically privileged in the global economy: it is a choice to make Gephi more convenient for the privileged at the cost of making it entirely unusable for (some of) the underprivileged.
It’s also worth thinking about how Internet service works in other countries: though always-on Internet access with comparatively high data transfer rates and no overall bandwidth caps has become the norm in developed countries, there are still places where dial-up is a common way to get onto the Internet, where Internet access cannot be assumed to be constantly available, where exceeding a bandwidth cap will result in large overage charges, or where other infrastructural challenges are a necessary part of connectivity. In these cases, scheduling and executing the necessary updates to Gephi and its required Java environment are substantial impediments to using Gephi.
Some conclusions
My point here is not to pick on Gephi, though my own experience with it has convinced me that it’s a particularly egregious violator of what I think of as good software-development practices; my point is to take it as a case study of the unconsidered implications of coding practices. (A smaller example might have been built from the way that MALLET, a text-analysis toolkit, expects data files to live in the same directory as the program files themselves, which is also a bad practice: users should be able to organize their data as they see fit, and software should respect that choice. Notably, on many operating systems, programs are installed in locations where unprivileged users don’t have write access, and so cannot put their data files there at all.)
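The alternative is not technically difficult. Here is a minimal sketch, in Java, of how a tool might resolve a user-controlled data directory instead of assuming that its own installation directory is writable; the command-line argument, the MYTOOL_DATA_DIR variable, and the default folder name are all hypothetical, chosen only to illustrate the pattern.

```java
import java.io.File;

public class DataDirExample {
    /**
     * Resolve the directory where user data lives, in order of preference:
     * an explicit command-line argument, a hypothetical MYTOOL_DATA_DIR
     * environment variable, and finally a default under the user's home.
     * The install directory is never assumed to be writable.
     */
    static File resolveDataDir(String[] args) {
        String path;
        if (args.length > 0) {
            path = args[0];
        } else if (System.getenv("MYTOOL_DATA_DIR") != null) {
            path = System.getenv("MYTOOL_DATA_DIR");
        } else {
            path = System.getProperty("user.home") + File.separator + "mytool-data";
        }
        File dir = new File(path);
        dir.mkdirs();  // create it if it does not already exist
        return dir;
    }

    public static void main(String[] args) {
        System.out.println("Using data directory: " + resolveDataDir(args));
    }
}
```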
Nor is it the case that I don’t understand the attraction of Java as a development platform: I understand that Java is intended to let developers write code that can be run under (more or less) any combination of hardware and operating system. (Though this is becoming less true as more and more users move away from traditional desktop and laptop computers and toward mobile devices, many of which cannot run Java programs; notably, iOS does not run Java at all.) Circumventing the problem that application code normally has to be written and compiled for a particular operating system on a particular hardware platform, Java aims to offer developers a Write Once, Run Anywhere experience and to ameliorate the burden of adapting program code to different environments. But Java’s designers have made some bad decisions over its nearly 20 years as a major language. A number of these are summed up by Eric S. Raymond in his book The Art of Unix Programming (which is freely available online), when he provides a general evaluation of Java as a general-purpose programming language. In part, Raymond says:
Against Java, we can say that (compared to, say, Python) some parts of it appear over-complex and others deficient. Java’s class-visibility and implicit-scoping rules are baroque. The interface facility avoids complex problems with multiple inheritance at the cost of being only slightly less difficult to understand and use in itself. […] While Java’s I/O facilities are very powerful, simple reading of text files is not simple.
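Raymond’s point about text files is easy to illustrate. The following sketch shows the classic Java idiom for reading a file line by line alongside the somewhat terser java.nio.file version introduced in Java 7; neither is taken from any particular project, and both are correct, but the amount of ceremony involved is exactly what he has in mind.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class ReadLines {
    // The classic idiom: a reader wrapped in a buffer, an explicit loop,
    // and checked exceptions to handle or declare.
    static void classicRead(String path) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            reader.close();
        }
    }

    // The same task with java.nio.file (Java 7 and later): noticeably less
    // ceremony, though still more than a scripting language's one-liner.
    static void nioRead(String path) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get(path), StandardCharsets.UTF_8);
        for (String line : lines) {
            System.out.println(line);
        }
    }

    public static void main(String[] args) throws IOException {
        classicRead(args[0]);
        nioRead(args[0]);
    }
}
```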
It’s worth pointing out that this trade-off—platform independence bought at the price of complex coding requirements—is precisely the trap that Java development tends to fall into, and it is the problem that Java-based projects often run afoul of once their codebase grows large enough that abandoning Java as the development environment is no longer practical: the language’s opacity has made debugging difficult, and the coders have tied themselves to a development environment whose standards are always gradually evolving, and evolving under the aegis of organizations that don’t necessarily take the needs of those particular projects into account. Java changes a fair amount from version to version (here is a list of incompatibilities between Java 7 and Java 6; those who want a sample of the kinds of arcane problems that Java developers need to deal with may want to dig through Vladimir Roubtsov’s What Version Is Your Java Code?), and end users of Java are encouraged to update to newer versions of the Java Virtual Machine, the program that runs Java programs, as soon as possible (for many users this is a more or less automatic process), because doing so also fixes security problems. But these security fixes drag incompatibilities along with them, requiring that existing Java-based applications be updated in order to remain usable. Java developers are thus caught in a trap: they are constantly required to update their code to meet an evolving set of requirements, even though they have little to no input on what those requirements are. (It might be worth noting that many operating systems are more careful than Java itself is about ensuring that applications continue to run when the operating system is updated; but the need for such compatibility guarantees is itself an indication of what kind of choice developing in Java is.)
Raymond continues:
There is a particularly invidious problem, resembling Windows DLL hell, with libraries. Java has no method to manage different library versions. This can create huge problems in environments like application servers, where the server might come equipped with one version of (say) an XML library, but the application ships with a different (usually newer) version. The only handle on such problems is the CLASSPATH environment variable, a source of chronic deployment problems.
This is, of course, precisely the problem with Gephi: it requires that the user’s installation fall within a narrow range of version options, and the workarounds for this problem involve setting the CLASSPATH environment variable. This is (to put it mildly) an imperfect solution: it requires that users who have updated to a newer version of Java (perhaps because they use other Java-based applications requiring different Java versions) continue to maintain an installation of an older version. This exacerbates the problems already identified—downloading and installing updates for multiple versions takes more time and bandwidth, maintaining multiple environments takes more storage space, and end users have to hand-hold an arcane technical process—and it requires both that the user engage in comparatively complex system configuration by hand and that earlier versions of Java remain installed, along with whatever security vulnerabilities they may include. Java has a spotty security history: Ars Technica has said that “plugins for Oracle’s Java software framework have emerged as one of the chief targets for drive-by attacks,” and uninstalling Java entirely has been recommended by Ars Technica, by Twitter, and by the Computer Emergency Response Team at the U.S. Department of Homeland Security; Apple blacklisted Java twice in three weeks in January 2013 in response to multiple security threats. (An argument has also been made that Java’s security model is itself fundamentally flawed.) Though Gephi may be immune to these vulnerabilities itself, its use requires that a vulnerable software environment be installed on the user’s computer, introducing vulnerabilities (which the user may not understand, or even be aware of) even when Gephi is not running.
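For a user trying to work out which Java installation a given application is actually running under, and which libraries it is loading, the standard system properties at least make the situation inspectable. The sketch below is not Gephi’s own tooling; it simply prints what any Java runtime will report.

```java
public class JavaEnvReport {
    public static void main(String[] args) {
        // Which runtime is actually executing this program?
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.home    = " + System.getProperty("java.home"));
        System.out.println("java.vendor  = " + System.getProperty("java.vendor"));

        // Which libraries is it loading? This is the CLASSPATH (or -cp flag)
        // that Raymond identifies as a source of chronic deployment problems.
        System.out.println("java.class.path = " + System.getProperty("java.class.path"));
    }
}
```

None of this solves the underlying versioning problem, but it does show how much of the burden of diagnosis falls on the end user.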
All of which is to say…
… that it’s worth thinking about what we offer when we offer software, and that users should think about what the trade-offs are when they install software. But this is simplistic: users often have a poor understanding of the technical trade-offs involved in making installation choices, and computers, despite the assumptions that some developers seem to make, are not just for coders — they should be usable for everyone. This implies a number of things. That security problems and other trade-offs should be proactively disclosed, for one (particularly substantial) thing. But there are other real implications:
- That the needs of the underprivileged should be taken into account in designing software, and that open-source software should in fact be a gift to the world, not merely to those who already experience privilege.
- That applications should allow users to structure their data in ways that they find sensible, rather than demanding that users structure their data storage in ways that the application can deal with easily.
- That configuration options and requirements should be well documented rather than relegated to user-run support forums.
- That coding decisions for open-source software should be made in ways that allow future development efforts, including efforts that result in forks of the project, to proceed easily and intuitively, with a minimum of fuss.
Underlying all of these recommendations is a belief — my own strong belief — that software should not deform the underlying system to fit its needs, because the underlying system belongs to the user, not to the software. This is precisely the expectation that is violated by malware, which (so often) exploits the underlying system, turning it into a revenue-generating resource in ways that go unacknowledged at installation; but it is also the expectation that is violated by poorly thought-out but sincere open-source applications that represent themselves as gifts to the outside world.
Once again, Eric S. Raymond has anticipated me in the spirit of this demand: at the conclusion of 2003’s The Art of Unix Programming, discussing the challenges that POSIX programmers faced in developing software for a future that allowed for genuinely populist computer use, he wrote,
The problem is that we increasingly face challenges that demand a more inclusive view. Most of the computers in the world don’t live in server rooms, but rather in the hands of those end users. In early Unix days, before personal computers, our culture defined itself partly as a revolt against the priesthood of the mainframes, the keepers of the big iron. Later, we absorbed the power-to-the-people idealism of the early microcomputer enthusiasts. But today we are the priesthood; we are the people who run the networks and the big iron. And our implicit demand is that if you want to use our software, you must learn to think like us.
In 2003, there is a deep ambivalence in our attitude — a tension between elitism and missionary populism. We want to reach and convert the 92% of the world for whom computing means games and multimedia and glossy GUI interfaces and (at their most technical) light email and word processing and spreadsheets. We are spending major effort on projects like GNOME and KDE designed to give Unix a pretty face. But we are still elitists at heart, deeply reluctant and in many cases unable to identify with or listen to the needs of the Aunt Tillies of the world.
As the open-source and Unix programming communities have grown together and learned from each other since 2003, Raymond’s question for Unix programmers has become a central question for all open-source programmers: who owns the computers, and for whom are they used?
(Selected) (Print) References
Heuser, Ryan, and Long Le-Khac. “A Quantitative Literary History of 2,958 Nineteenth-Century British Novels: The Semantic Cohort Method.” May 2012. <http://litlab.stanford.edu/LiteraryLabPamphlet4.pdf>
While I understand your frustration with the Java solution this developer has selected, I think you’re misunderstanding some of the challenges, both from the perspective of a developer and from that of an underprivileged user.
For a developer writing a program with limited time and resources, Java has the largest audience of potential scholars, because it can run on both PCs and Macs. Consider the alternative to writing the program in Java. If the developer targets either PC or Mac alone, they eliminate a significant fraction of their market. Macintosh machines are expensive, and while they have significant penetration in academia, that penetration generally correlates with privilege and money. On the other hand, if they target PCs, many important and influential academics will never use the program, because they only have Macs or iDevices and don’t have the technical skills to run a Windows application on a Mac through virtualization. Developing for both systems is a real challenge, both because of the time and resources it would take to develop two copies of the program and because of the difficulty of keeping two full-featured versions in sync.
For the underprivileged: if a developer really wanted to reach that audience first, they’d develop a Windows application that ran well on Windows XP (I really need a citation here for the prevalence of operating systems in the developing world, but a quick Google search didn’t find one), since the majority of personal computers in the developing world run pirated copies of XP, or perhaps Windows 7. Yes, Linux is available for free in these countries and has some penetration, but nothing like XP’s. Likewise, such a developer would probably produce a text-based program with limited visualization features, since those are the features that require the most computational work. Of course, that would have a significant impact on uptake by low-skill users and by users who want or need visualization.
If you want an example of a success in research software development, I think the best one right now is R. R is text-based and runs easily on Windows, OS X, and Linux thanks to its low graphical and system requirements. Graphical and ease-of-use features are maintained separately from the core application, which lets users at all levels get benefit from R, even while users with gobs of computing power can run front-ends like RStudio.