Game-Based Learning: Interview with Maura Smale


Maura A. Smale is an associate professor and head of library instruction at New York City College of Technology, CUNY, where she leads the library’s information literacy efforts. Her academic background includes both a Ph.D. in anthropology (New York University) and a Masters of library and information science (Pratt Institute). She is a member of the Steering Committee of the CUNY Games Network, a multidisciplinary group of faculty and staff interested in exploring the use of games and simulations for teaching and learning. She is also involved in a multi-year study of how, where, and when undergraduates at CUNY do their academic work. Her other research interests include open access publishing and new models of scholarly communication, critical information literacy, and emerging instructional technologies.

This interview was conducted by Mike Roy, editorial board member of the Academic Commons. A founding editor of the Academic Commons with long-standing interest in the impact of new technology on the meaning and practice of liberal education, he is currently the dean of library and information services and chief information officer at Middlebury College.


Q: Thanks for taking the time to talk with us about games and education. I’d like to start with a clarifying question: There are many different types of games in the world. What do you mean by game? And of the various sorts of games that are available, which ones seem to lend themselves best to educational applications?

Maura Smale: I’ve always found “game” difficult to define. A playful contest involving competition and a winner is one definition, though there are lots of games that are more cooperative than competitive or that don’t clearly end with a winner. Similarly, the line between game and simulation can be blurry; I’m not sure there’s a difference between playing the “learn to dissect a frog” game and dissecting a simulated frog. I think most games do involve play, provided we accept that play doesn’t always entail ease or happiness. There are plenty of good games that are sometimes difficult or make the player unhappy (and sometimes that’s the point of the game).

Any type of game could potentially be used in education, as long as the learning objectives for the topic, class, or course aren’t superseded by the game. That is, the game must be in service of what we’d like our students to learn, not the other way around. And that’s what I’ve found most tricky about using games in teaching—figuring out the best game mechanics to use to teach a concept that will result in an engaging experience for students in which they learn the course material.

A focus on learning outcomes leaves the field wide open for the kinds of games to use in teaching. Quiz games like Jeopardy or Trivial Pursuit perhaps have a natural affinity for the classroom—they can be a public, low-stakes form of the assessments that many educators already use. If the content of the game matches the course, as in many historical or military games, the game can be incorporated into the course as source material alongside relevant readings. Students can also play a game and then react or respond to themes in it; the growing number of games that address serious topics like privacy, immigration, and poverty might be appropriate here, as might a discussion of gender issues in commercial video games.

Q: There are ways in which the content of many courses is really a vehicle for teaching broader, more intangible things often referred to as critical thinking skills that have little to do with the actual subject matter being studied. Can you speak to examples of how games have been used to promote this sort of liberal education?

Smale: I think that many, perhaps even most, games incorporate the goals of liberal education that you describe. All games require players to figure things out: from the rules at the outset (sometimes via reading the instructions but also, in more recent video games, by playing through the first, training level of the game) to the strategy required to have the best chance of winning. Every time a player takes her turn she engages in critical thinking, using all of the information she’s gained in the game to evaluate and complete the best move possible. Games can also provide an opportunity for students to practice solving problems until they arrive at the right answer—often referred to as failing forward (a term that I love). That resiliency in the face of a challenge—the ability to pause, reconsider your actions, and come up with creative solutions to a problem—is another strength of liberal education that games can teach and encourage in students.

I’m a faculty member in the library, so the games I most often create and use address information literacy competencies, another of the broader goals of liberal education. Critical thinking is inherent in information literacy, of course, and to me information literacy is a natural fit for game-based learning. Research, information-seeking, and evaluating information before using it are key components of many games, and indeed a wide range of information literacy and library instruction games are in use at academic libraries.

Another possibility for using games in education is to involve students in making games in a course. I’ve had less experience with this process, as most of the instruction in my library is of the single-session variety, but I have been thinking about ways to incorporate game creation into the workshops that I teach. Asking students to make games draws on all of the goals of liberal education noted above and then some, because students must go beyond playing the game to construct a successful game-playing experience. As they do when playing a good educational game, students ultimately must learn both course content and critical thinking skills well in order to create a game for others to play.

Q: Could you imagine an entire curriculum constructed out of making and playing games?

Smale: Yes, definitely. There are two examples that I can think of off the top of my head (and I’m sure there are more), though both are primarily at the secondary level rather than higher education. One is the New York City middle and high school called Quest to Learn, which takes a game-based learning approach to the curriculum in all subjects; another Quest school opened recently in Chicago. Both are public schools co-run by the Institute of Play, a nonprofit organization that promotes game-based learning. The other is a Latin curriculum called Operation LAPIS, developed by The Pericles Group. It’s an alternate reality game that teaches a two-year course of Latin, designed for middle school through college students.

I would imagine that it would take a fair amount of work to adapt a curriculum designed for a more traditional lecture- or discussion-based pedagogy into one that used games for teaching and learning. But I think it could certainly be done, probably most thoughtfully by a group of faculty collaborating on the redesign of a program. I have occasionally encountered resistance when asking students to play games, which might be a concern for an entire course or program of study based on games. Involving students in making games as well as playing them might help overcome the hurdle of the occasional student who is less interested in games.

Q: Can you speak a bit more about the resistance to using games in education, and what might be done to overcome such objections?

Smale: Sure. Resistance can come from two groups: from educators who may consider using games for teaching and learning to be frivolous edutainment, and from students who are asked to play or make games in classes. In some ways addressing the concerns of the former is easier. There’s a large (and growing) body of qualitative and quantitative research that demonstrates the effectiveness of game-based learning at all educational levels and for many different disciplines.

Overcoming student objections to using games in education is potentially trickier. In my experience some college students are resistant to any form of active learning, and using games is an active learning strategy. They may be accustomed to a predominantly lecture-based curriculum from their K-12 education, which may shape their expectations for college. And some students also resist active learning in courses that they are not especially invested in, perhaps core or General Education requirements. As a librarian I work with many introductory composition courses and sometimes encounter this form of resistance from the students I meet.

Making sure that educational games are tightly linked to the course or lesson’s learning objectives is one strategy for trying to prevent student resistance to game-based learning. I think students may resist a pedagogical strategy when they are unable to determine whether the work they’re engaged in is meaningful in the context of the course. Many students may be concerned about whether gameplay counts towards their final grade, perhaps the opposite of what we as educators are hoping for: the opportunity that games provide for students to fail forward and learn from their mistakes. Taking the time to thoughtfully integrate playing and making games into the coursework, and ensuring that students know why we’re using games in a course, can help overcome student resistance.

Q: Final thoughts?

Smale: I’ve been delighted to read about many compelling examples of game-based learning over the past several years; it’s clear that using games in higher educational contexts is on the increase. Games can provide opportunities for customizing the student learning experience, peer collaboration, and increasing student engagement, all of which can help students achieve their academic goals. I’m optimistic about the possibilities for the future of games in education, from playing to modding to creating, and look forward to continuing to incorporate games into my teaching.

Distributed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

This interview is part of a special issue of Transformations on games in education, published on September 30, 2013. The issue was developed by Mike Roy (Middlebury College), guest editor and editorial board member of the Academic Commons, and Todd Bryant (Dickinson College), who was instrumental in organizing and developing the special issue.

Games with a Purpose: Interview with Anastasia Salter

Posted on July 24, 2014

Anastasia Salter is an assistant professor at the University of Baltimore in the Department of Science, Information Arts, and Technologies. Her primary research is on digital narratives with a continuing focus on virtual worlds, gender and cyberspace, games as literature, educational games, and fan production. She holds a doctorate in communications design from the University of Baltimore and an M.F.A. in children’s literature from Hollins University.

This interview was conducted by Mike Roy, editorial board member of the Academic Commons. A founding editor of the Academic Commons with long-standing interest in the impact of new technology on the meaning and practice of liberal education, he is currently the dean of library and information services and chief information officer at Middlebury College.

Q: There are at least two ways of thinking about games in education. On the one hand, games are a form of culture that is increasingly important, and worthy of study in the same way that TV and Film have found their way into the curriculum. But they also have an instrumental value, as vehicles for helping to teach and learn about traditional subjects. What are the most interesting and useful examples you can think of where games are being used in the curriculum to facilitate learning?

Salter: Games with a purpose can be powerful both as classroom experiences and as design challenges: some of my favorite examples of games in the curriculum are student-designed games related to course topics. Games offer agency to students whether they are players or designers. Experiential games, including alternate reality games such as the Arcane Gallery of Gadgetry, Adeline Koh’s Trading Races, and The Pericles Group’s Operation LAPIS demonstrate the possibilities of play with or without technology. Commercial games like World of Warcraft and Civilization are also being integrated into the curriculum: all games can promote learning of some kind.

Q: As a professor who teaches using games, to what extent do you think that games are “just” effective vehicles for learning that could be achieved through other means, and to what extent do you think that integrating games into the classroom promotes new types of learning that can be achieved no other way?

Salter: To some extent, everything is “just” a vehicle for learning–but why trivialize that? Games provide environments where we learn from our failures safely. They bring experiential learning into the classroom, and provide models for thinking about problems where there’s not only one right solution. The dynamics of games help distinguish them from learning environments where knowledge is dictated, lectured, presented or otherwise placed in front of a student. Games have the power to change how we think about the classroom, and while they may not be the only means to that end, they are invaluable for re-imagining learning as play.

Q: So using games in education fits into a broader movement to re-think education as something other than “placing knowledge in front of a student.” I’m going to play devil’s advocate here, and ask: how do you strike the right balance between transmitting the ‘facts in the head’ needed to work within any given domain of knowledge, and the imperative to support students’ development where ‘content’ is really a means to a greater developmental end?

Salter: Well, I’d say that transmitting those “facts in the head” is only ever really successful when learners can place facts in a meaningful context. When people are given a reason to learn, they tend to learn: just look at the instant recall of Pokemon strengths and weaknesses by young gamers, the endless application of tactics and rotations by World of Warcraft players, or the hazard memorization of competitive Call of Duty and Halo players. Acquiring knowledge is always a first step towards application, but traditional learning tends to isolate the facts and leave the learner without clear motivation.

Q: As you point out, it is clear that a person playing a game is extremely motivated to learn what she needs to know in order to succeed at the game. The promise of games in education is that this same motivation and excitement can be leveraged to learn more traditional subject areas. However, most educators run their classrooms without using games. What do you see as the barriers to broader adoption of using games in the classroom?

Salter: Classroom education has always struggled with the isolation of the learning environment from the real world. Games can bridge that gap, but first they have to be seen as acceptable and not just a waste of time. In K-12, most educators are stuck with way too many administrative restrictions on their teaching to get away with something as apparently radical as teaching with games. In higher ed, we have a different problem: most faculty aren’t actually trained to teach so much as they are trained in research, so if games aren’t on their personal radar they are unlikely even to encounter the possibility. In that sense, our current education system is very self-perpetuating: teachers are products of the current system, and it’s easy to reproduce what they experienced.

Q. Short of completely dismantling our entire educational system, what can we do to address the challenges you identify as standing in the way of broader adoption?

Salter: Well, I can’t say I have any problem with the idea of dismantling and rethinking the entire educational system! There are a lot of ways to address these challenges. Bringing teachers into gaming is a great first step, particularly when there are opportunities to demonstrate that gaming isn’t all Grand Theft Auto and Call of Duty. Empowering teachers and students as collaborative designers of their learning experience is the next step, and the one I work on a lot: any teacher can bring a playful approach into the classroom through creative activity, and making a game can be a great way to express ideas and probe at a system of knowledge. It doesn’t even matter if the final game is any good–the act of making something, and playing with it, and even failing is essential.

Q: Final thoughts?

Salter: Even if every educator doesn’t bring actual games into the classroom, there are lots of ideas we can take from the way learning happens in games. Games offer a sliding difficulty, and a space where failure is part of the learning experience, not an end outcome. Furthermore, games are inherently collaborative and often offer multiple ways to master something. Just like in life, if not the traditional classroom, there’s rarely only one “right” solution.

Distributed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

This article is part of a special issue of Transformations on games in education, published on September 30, 2013. An earlier version was circulated for open peer review via Media Commons Press. The “Games in Education” issue was developed by Mike Roy (Middlebury College), guest editor and editorial board member of the Academic Commons, and Todd Bryant (Dickinson College), who was instrumental in organizing and developing the special issue.

Cyberinfrastructure = Hardware + Software + Bandwidth + People

by Michael Roy, Middlebury College


A report on the NERCOMP SIG workshop “Let No Good Deed Go Unpunished: Setting Up Centralized Computational Research Support,” 10/25/06

Back to the Future of Research Computing

As Clifford Lynch pointed out at a recent CNI taskforce meeting, the roots of academic computing are in research. The formation of computing centers on our campuses was originally driven by faculty and students who needed access to computer systems in order to tackle research questions. It was only years later that the idea of computers being useful in teaching came into play. And once that idea took hold, it seemed that we forgot about the research origins of academic computing.

Lynch argues that the pendulum is swinging back again, as campuses nationwide report an increased interest in having libraries and computer centers provide meaningful, sustainable and programmatic support for the research enterprise across a wide range of disciplines.

At the October 27, 2005 NERCOMP meeting entitled “Let No Good Deed Go Unpunished,” Leo Hill, Leslie Hitch and Glenn Pierce from Northeastern University gave a presentation about how they planned for and implemented a university computer cluster that serves the research agendas of a wide array of Northeastern’s faculty.

The talks provided good information about the technology planning, the politics and the policy questions that arose, and placed the entire project within an economic model that is useful for analyzing a broad range of academic initiatives taking place on our campuses.

Key Questions:

  1. To what extent should support for research computing be centralized?
  2. If one runs a centralized research computing facility, how does one secure funding for it?
  3. What are some technology strategies for keeping these costs to a minimum?
  4. How can one justify the establishment of a centralized research facility in language that makes sense to academic administrators?
  5. How can this impulse be explained in terms of current trends in computation in particular and research in general?
  6. How do you allocate resources to map to institutional priorities?

Part One
On the Ground: Technical Considerations

Speaker: Leo Hill, Academic and Research Technology Consultant, Northeastern University

Slides available at

How do you support research and high performance computing?
As a way into explaining why Northeastern took on the project of building a centralized computer cluster, Hill began his talk by making the claim that faculty are not experts in many of the technologies required to provide a robust cluster computing environment (operating systems, patching, security, networking). He also shared his impression that the National Science Foundation and other funding agencies increasingly look for centralized support as part of the overhead that they pay to universities.

In addition, a major benefit to a centralized facility is that a university can enjoy volume discounts for hardware and software, as well as for the considerable costs associated with creating a space to house a large cluster. These costs primarily revolve around power and air conditioning.

How did the process of designing this space work?
A Research Computing steering committee was created. The group’s job was to understand the needs of the greater community. They conducted interviews about present and future research projects of the Northeastern faculty, as a way to understand what sort of software and computational horsepower they would need. In analyzing the results of these interviews, they asked: Are there consistent terms? How can we resolve conflicting concepts? How do we translate these various desires into a viable service?

Their solution was to build a cluster that had the following features:

  1. Job management (queues)
  2. Ability to interactively run jobs (required for some users)
  3. Ability to support large files
  4. Ability to efficiently support large data sets (in excess of 4 GB)

As is true of all centrally-managed computational facilities, they had to factor in (and make trade-offs between) processing power and very large file storage. The list of software that the cluster would be supporting (see slides) was large but did not seem to exceed what most schools support on a range of devices on their network.
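The queueing and interactive-use requirements above map directly onto the batch schedulers common on clusters of this era. As a hypothetical illustration only (the report does not say which scheduler Northeastern deployed), a minimal PBS/Torque-style job script might look like this; the job name, queue name, and program are invented for the example:

```shell
#!/bin/bash
# Hypothetical PBS/Torque job script; directives, queue name, and
# program are illustrative, not taken from the Northeastern deployment.
#PBS -N sample-job          # job name shown in the queue
#PBS -q batch               # submit to a queue named "batch"
#PBS -l nodes=4:ppn=2       # request 4 nodes, 2 processors per node
#PBS -l walltime=02:00:00   # terminate the job after 2 hours

cd "$PBS_O_WORKDIR"         # run from the directory of submission
mpirun -np 8 ./simulate --input data/large-dataset.bin
```

A script like this would be submitted with `qsub job.pbs`, while the interactive sessions some users required would instead be started with `qsub -I`.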

Once they had the hardware and software requirements in place, the team chose to develop an RFP (request for proposal) in order to collect bids from multiple vendors. Hill used a web-based service offered by HP for both developing and distributing the RFP. As cluster computing has matured into a commodity that one can buy, vendors have begun to provide data on the impact of their systems on air conditioning and power, allowing a better understanding of the overall set-up costs of a data center.

One of the more alarming aspects of the requirements of this project was that it all had to be accomplished with no new staff. This drove the team to look for a vendor-supported turnkey solution (they ended up choosing Dell with ROCKS as the platform). With no new staff, there has been some impact on existing services. The helpdesk now needs to be able to respond to new types of questions. System administration is accomplished by two existing staff who collectively dedicate roughly four hours per week to this service. They also needed to develop service level agreements around node downtime. How quickly should they respond if a single node goes down? What if the head end of the system is no longer functioning? Implicit in making software available is the support for that software, which has meant that they have also reinstated a dormant training program to explain how to work in this environment, and to provide some support for particular applications.

While the cluster is presently offered as a free service, the work on developing the cluster has triggered interest in and the development of other services at Northeastern. This includes selling rackspace in the datacenter, advanced programming support, and increased (and welcome) consultation on grant writing and equipment specifications.

Part Two
Campus Politics and Process
Speaker: Leslie Hitch, Director of Academic Technology, Northeastern University

Slides available at

While Hill’s presentation provided useful insights into the actual process by which the particular hardware and software were selected, installed and managed, Hitch’s talk focused on the institutional framework in which the project was carried out. Northeastern’s issues should be quite familiar to anyone working in higher ed today. The University’s academic plan calls for an increase in the quantity and quality of faculty research, and the project responds nicely to that area. It also calls for increased undergraduate involvement in research, which can be linked to this project as well. Advocates also linked the project to a possible boost in NEU’s rankings in US News & World Report, suggesting that ignoring research computing is something that one did only at one’s peril.

While the project was driven partially by actual faculty demand, it also anticipated growth in need in the social sciences and humanities, which do not have the traditional funding streams that the scientists enjoy. (For more information, see the draft report on Cyberinfrastructure for the Humanities and Social Sciences, recently published by the American Council of Learned Societies.)

In order to design the system, Hitch’s team set out to find what is common among various scientists and social scientists—a perfectly fine question, and one that those wanting to document the complex working relationships among their various faculties would be well-advised to consider. The act of asking people about what they do with technology, and what they would like to do with technology, almost always reveals useful insights into the nature and structure of their disciplines.

While the list of differences (software, memory requirements, gui v. command line, support requirements) in this case was framed as a means of specifying a particular system, the differences can also be understood in terms of what is called “scholarly infrastructure,” based on Dan Atkins’s recent work for the NSF in describing “cyberinfrastructure.” The slide below—from Indiana University’s recent Educause presentation—suggests a useful way of visualizing what particular disciplines have in common, and how they differ.

Source: “Centralize Research Computing to Drive Innovation, Really,” a presentation by Thomas J. Hacker and Bradley C. Wheeler, Indiana University.

Of course, with increased bandwidth among our schools, the act of centralization need not necessarily stay within the campus. Couldn’t our faculty share infrastructure by discipline in multi-institutional facilities staffed by domain experts who can help with the domain-specific applications? What of the various national supercomputer centers? Why should we build campus-specific clusters if the NSF and others will provide for us through national centers?

One answer to this question lies in governance. For such centers to be sustainable, there needs to be a funding model in place, and a fair and agreed-upon system for allocating computational cycles and provisioning support. (Hitch provides the charge to their user group in her slides.)

Northeastern’s funding model, not yet fully articulated, is to be determined by its users. Northeastern has also decided to allow the users of the system to develop their own policy about the allocation of computational cycles. Since there is no new FTE attached to this project, they do not have to worry about how to allocate the provision of support!

One funding model under discussion links awareness of IT to sponsored research. How can IT be used to bring in more money for research? Is providing this service something that should be part of overhead? If so, how do you go about securing a portion of overhead to dedicate to this sort of facility?

If one believes that the future of academic research lies in the increased use of such facilities, the question of staffing these facilities becomes critical. Is it enough to fund centralized facilities just to avoid the costs of lots of little clusters and to promote outside funding, allowing faculty to raise more money? One needs to more fully understand the support needs of such a transformed enterprise. In the discussion, hard questions arose about who would be providing this sort of support. Who pays for these people? To whom do they report? Even harder, where do they come from? How do you find people who can do this kind of work with/for the faculty? Does shifting the research associate from the department to the central IT universe reduce the amount of freedom, control, and experimental play? How can one build into the structure of these new types of support positions the ability to raise funds, to do research, to stay engaged in the field?

Part Three
Academic Research Process and IT Services
Speaker: Glenn Pierce, Director, IS Strategy and Research, College of Criminal Justice, Northeastern University

The next session moved from the particulars of Northeastern’s technical and political environment to a broader reflection on the implications of centralized research computing support for the academic enterprise. Pierce began by using the history of other enterprises (most notably, banking) to suggest that there are profound changes underway that could (for many disciplines) completely transform their way of conducting research, and eventually affect what happens in the classroom.

Using language more familiar to a business school than to the usual IT conference, Pierce described the research process as a value/supply chain heavily dependent on IT investments and support. In this model, any break in the chain disrupts the process and slows the rate at which faculty can produce research, while new efficiencies (faster hardware, better software, training of faculty, hands-on IT support) can improve the process.


Source: Weill, Peter and Marianne Broadbent. Leveraging the New Infrastructure: How Market Leaders Capitalize on IT. Boston: Harvard Business School Press, 1998.

A slide reminiscent of the scholarly cyberinfrastructure slide Hitch used posed the core question of the day: where is the cut-off for central services? At fast-changing local applications? Shared standard IT applications? Shared IT services? For Pierce, central IT should aim to go as high up the pyramid as possible.

While Pierce acknowledges that it is a real challenge to imagine a world in which centralized IT has intimate knowledge about domain-specific applications, he also challenges colleges and universities to re-think what is meant by productivity, and to ask not what it costs to provide central IT support for research computing, but instead to ask what it costs NOT to provide it. He argues that faculty doing their own IT represents a loss in productivity and a lost opportunity, and that traditional measures of academic productivity (like traditional measures of productivity in other industries) do not capture the fact that entire industries can be changed, created, or eliminated altogether through the revolution afforded by the powers of computing.

One concrete example Pierce offers is Softricity, an application (like Citrix) that allows one to run applications locally, taking advantage of local computer resources, without installing the application directly on the local machine. This fundamental change in how software can be distributed would require major changes both organizationally and financially. Pierce argues that the predominant model, in which all budgets across an institution rise and fall at the same rate, gets in the way of fundamental change. In the case of Softricity, meeting increased demand for applications and data requires more money up front, and such arguments rarely succeed in an academic culture that approaches change incrementally. It is therefore difficult, if not impossible, to fundamentally re-tool to take advantage of the power and increased productivity enabled by centralized IT services.

If one accepts the argument that investing in central IT makes good business sense, and one is looking for other parts of the academic enterprise where one can point to increased productivity, Pierce suggests that the same productivity gains enjoyed by centrally-supported research initiatives can be (hypothetically) found in student education outcomes. This tantalizing claim, not backed up by examples, certainly seems worthy of further investigation.

So what keeps us from all changing overnight from our distributed model back to something that looks, to many, an awful lot like the old mainframe centralized model? Pierce identifies four major barriers to extending centrally-supported IT for research:

  1. The existing perception of IT service (many researchers simply do not believe central IT is up to the task)
  2. Current funding models that
    1. are balkanized
    2. measure costs rather than productivity
    3. make it difficult to measure or even see the cost of lost opportunities
  3. Current planning models that suffer from the same problems as our funding models
  4. Anxiety over the loss of local control

Using the scholarly infrastructure model, Pierce makes the point that the further one moves away from technical issues of hardware, operating systems, and networking, and into the domain of discipline-specific software, the more involved faculty need to be in the planning process. He also makes the point that the sort of financial re-organization required to support this shift toward a centralized model requires a genuine partnership between the IT leadership and academic leadership. All of this is possible only if the campus really and truly believes that IT-supported research can fundamentally change for the better how we conduct research and eventually how we educate our students.

Possible Futures & Implications

What follows is a list of possible changes in the daily operations on campuses that embrace the idea of investing in the support of IT-supported research, and a few ideas for collaboration between campuses (or business opportunities):

  1. Change the way you distribute software to allow more ubiquitous access to software, using technologies such as Softricity or Citrix.
  2. Fund centralized resources, such as clusters, more aggressively.
  3. Hire discipline-aware IT support staff who can work with faculty on research problems.

As our campuses become increasingly connected by high-speed networks, one can ask questions such as:

  1. Can we negotiate licenses with vendors that would allow us to consortially provide access to software?
  2. Can we create local clusters that multiple campuses can fund and support?
  3. Can discipline-specific support be organized consortially to allow, for example, an economist at School A in need of help with SAS to get that help from a SAS expert at School B?

What do cluster and research computing have to do with liberal arts education?
One can imagine protests about shifting institutional resources into IT-supported research computing. For some this will be seen as an unwelcome return to the early days of campus computing, when a disproportionate share of the support went to a few faculty from the handful of fields that had discovered how to use computers to facilitate their research. As in the first generation of campus computing, however, this trend may be a harbinger of demands that will arise across campus and across disciplines. If one takes seriously the propositions put forth in the recent American Council of Learned Societies report on cyberinfrastructure for the humanities and social sciences, this re-alignment of resources in support of changing requirements for scientific and quantitative research is very likely one of the re-alignments that will be required to support teaching, research, and scholarly communications in ALL disciplines.

Further Readings

Educause Resource Library on Cyberinfrastructure

“The new model for supporting research at Purdue University,” ECAR Publication (requires subscription)

Beyond Productivity, National Academy of Sciences
William J. Mitchell, Alan S. Inouye, and Marjory S. Blumenthal, Editors, Committee on Information Technology and Creativity, National Research Council, 2003.

Speaker Contact Information

Leo Hill, Academic and Research Technology Consultant, Northeastern University

Leslie Hitch, Ed.D. Director of Academic Technology, Northeastern University

Glenn Pierce, Ph.D., Director, IS Strategy & Research, College of Criminal Justice, Northeastern University

Robert Bechtle Retrospective & the Pachyderm Project

by Michael Roy, Middlebury College


The San Francisco Museum of Modern Art's retrospective on the work of Robert Bechtle explores Bechtle's life and work through videos of the artist working in his studio, as well as photographs, letters, newspaper clippings, and other primary source materials from his personal archive. A zoom-enabled gallery of artworks shows highlights from the artist's 40-year career. Accompanying the show is a nifty web application that provides access to a wide range of multimedia materials. This application serves as a preview of some of the new features that will be available in version 2.0 of the Pachyderm Project, managed by the NMC, which brings together software development teams and digital library experts from six NMC universities with counterparts from five major museums to create a new, open source authoring environment for creators of web-based and multimedia learning experiences. Pachyderm should be of particular interest to small schools that do not have professional multimedia development shops.

Technology & the Pseudo-Intimacy of the Classroom: an interview with Jerry Graff

by Michael Roy, Middlebury College


Gerald Graff is a professor of English at the University of Illinois at Chicago. His recent work has centered on how, for most students and members of the general population, academia in general and literary studies in particular are obscure and opaque, a theme taken up in his Clueless in Academe: How Schooling Obscures the Life of the Mind (Yale University Press, April 2003).

Academic Commons caught up with Graff to explore his thoughts about technology and the future of liberal education.

Academic Commons: Is our country’s commitment to the ideals of liberal education really in crisis?
Graff: Probably, but one constant seems to survive the crises of every generation: a small percentage of college students “get” the intellectual culture of academia and do well in college while the majority remain more or less oblivious to that culture and pass through it largely unchanged. Changing these conditions, creating a truly democratic higher education system that liberally educates more than a small minority, has always been and still is the main challenge of liberal education.

Much has been made of the neo-Millennials (also known as the Net Generation) who are presently enrolled on our campuses, and how they learn differently from past generations. Do you see this description as accurate or useful when thinking about how educators need to change their teaching strategies?
I have always been skeptical of claims about learning differences between generations. Formerly, it was the ‘60s that purportedly made the adolescent mind non-linear, more visual, and so forth. Now pixels and megabytes supposedly produce a new kind of non-linear consciousness, or one wired into simultaneity, or whatever.

How is technology helping higher education?
Probably only in rather narrowly technical ways, so far, e.g. making registration processes more efficient. Communication across campus has been made much easier, but this benefit may have been negated by the overload problem: we now get information much more readily, but it comes in such excessive volume that the chances of our recognizing the information that is really relevant and useful to us are correspondingly lessened.

How is technology hurting higher education?
Aside from the overload problem just mentioned, I think there has been a failure to recognize and exploit the potential that technology offers for improving and transforming day-to-day instruction.

Let me give one example.

I have long thought that there is something infantilizing about the standard classroom situation, where the very face-to-face intimacy that is so valued actually encourages sloppy and imprecise habits of communication. That is, the intimate classroom is very different from–and therefore poor training for–the most powerful kinds of real-world communication, where we are constantly trying to reach and influence audiences we do not know and will probably never meet. We should be using online technologies to go beyond the cozy pseudo-intimacy of the classroom, to put students in situations that force them to communicate at a distance and therefore learn the more demanding rhetorical habits of constructing and reaching an anonymous audience. We have begun to do this to some extent, but our habit of idealizing presence and “being there,” the face-to-face encounter between teachers and students, blinds us to the educational advantages of the very impersonality and distancing of online communication. Indeed, online communication makes it possible for schools and colleges to create real intellectual communities rather than the fragmented and disconnected simulation of such communities that “the classroom” produces.

Can you point to examples of such communities?
I meant possible intellectual communities rather than actually existing ones. I do not know any campus in America that has what I would call a real intellectual community, online or otherwise, in the sense of everyone–or almost everyone–on campus engaged in a continuous conversation about ideas all the time (as occurred for a brief time during the campus protest era in the ‘60s and early ‘70s). I think online technology makes something like such a community of discussion possible even without a crisis like the Vietnam War, but I do not know of any campus that has come close to creating such a potential community. Of course there may be many things going on that I do not know about.

How do you use technology in your own teaching?
I love using e-mail for writing instruction. I can get right inside my students’ sentences and paragraphs, stop them and ask them “can you see a problem with this phrase?” or “can you think of an alternative to this formulation?” or “please improve on this sentence,” with an immediacy and turn-around speed that handing papers back with comments cannot begin to match.

I have also used class listservs, which seem to me to have great potential. The big benefit for me is the creation of a common space of class discussion that everyone can (and in my case must) contribute to, a space that prolongs the in-class discussion and enables us to pursue issues that had gotten short shrift in class. I wish these listserv discussions were more controlled and focused than they have been in my classes, and I think they can be when and if I learn better how to structure them.

One interesting thing I have learned from listservs is that most students see electronic communication as an extension of informal oral discourse, whereas I see it (when used in a class anyway) as properly an extension of formal writing. When I chastised one class for writing sloppy, prolix, and often unreadable blather on the class listserv, they objected that I was trying to shut down the liberating spontaneity and informality that is inherent in electronic media. I think this was a rationalization, but one that has to be anticipated.

In recent years it has become increasingly easy for non-technical people to produce extravagant multimedia productions on their desktop computers. Certain faculty mourn this as the final nail in the coffin of literacy and literature, while others celebrate the possibilities afforded by this new multimedia literacy. Who is right?
Neither group seems worth taking seriously. I do not mean to denigrate multimedia assignments or the way in which they can produce new kinds of learning. I just do not accept the claim that such multimedia creativity is either the final nail in the literacy coffin or a revolutionary breakthrough. If I had to choose, though, I would be more sympathetic to the latter view, or at least be interested in hearing more about multimedia assignments. I am not technologically adept enough to have tried any myself.

Open Access to Scholarship: An Interview with Ray English

by Michael Roy, Middlebury College


What is the open access movement?
Open access is online access to scholarly research that’s free of any charge to libraries or to end-users, and also free of most copyright and licensing restrictions. In other words, it’s scholarly research that is openly available on the Internet. Open access primarily relates to the scholarly research journal literature–works that have no royalties and that authors freely give away for publication without any expectation of monetary reward.

The open access movement is international in scope, and includes faculty and other researchers, librarians, IT specialists, and publishers. There has been especially strong interest from faculty in scientific fields, but open access applies to all disciplines. The movement has gained great impetus in recent years through proclamations on open access, endorsements from major research funding agencies, the advent of new major open access publishers, and through the growth of author self-archiving and author control of copyright.

Are there different forms of open access?
Open access journals and author self-archiving are the two fundamental strategies of the open access movement. Open access journals make their full content available on the Internet without access restrictions. They cover publication costs through various business models, but what they have in common is that they generate revenue and other support prior to the process of publication. Open access journals are generally peer-reviewed and they are, by definition, published online in digital form, though in some instances they may also produce print versions. Author self-archiving involves an author making his or her work openly accessible on the Internet. That could be on a personal website, but a preferable way is in a digital repository maintained by a university or in a subject repository for a discipline. I should point out that author self-archiving is fully in harmony both with copyright and with the peer review process. It involves the author retaining the right to make an article openly accessible. Authors clearly have that right for their preprints (the version that is first submitted to a journal) – but they also can retain that right for post-prints (the version that has undergone peer review and editing).

Do journals generally allow authors to archive their work in that way?
A very large percentage of both commercial and non-profit journals do allow authors to make post-prints of their works openly accessible in institutional or disciplinary archives. There tend to be more restrictions on the final published versions (usually the final pdf file), but many publishers allow that as well. An interesting site that keeps track of that is SHERPA in the United Kingdom.

Why is open access important for higher education?
Open access is one strategy – and actually the most successful strategy so far – for addressing dysfunctions in the system of scholarly communication. That system is in serious trouble. High rates of price increase for scholarly journals (particularly in scientific fields), stagnant library budgets, journal cancellations, declining library monograph acquisitions, university presses in serious economic trouble, and increasing corporate control of journal publishing by a small number of international conglomerates that have grown in size through repeated mergers and acquisitions – those are all symptoms of the problem. Scholars have lost control of a system that was meant to serve their needs; more importantly, they are also losing access to research. Open access has extraordinary potential for overcoming the fundamental problem of access to scholarship. It is a means of reasserting control by the academy over the scholarship that it produces and of making that scholarship openly available to everyone – at anytime and from virtually any place on the globe.

Why does open access matter to liberal arts colleges in particular?
It is especially important for liberal arts colleges because of the access issue. Liberal arts college libraries have smaller budgets, compared to the research universities. While even the major research libraries cannot afford all of the journals that they need, the lack of access is an even bigger problem in the liberal arts college realm. Faculty at many liberal arts colleges are expected to be active researchers and independent study is also a hallmark of a liberal arts college undergraduate education. So the lack of access to journal literature can be even more problematic in the liberal arts college framework than it is for the research universities.

Are there other benefits to open access?
There are many benefits, but the main one that I would point out relates to the growing body of research that demonstrates how open access increases research impact. A number of studies have shown that articles that are made openly accessible have a research impact that is several times larger than that of articles that are not openly accessible. Authors will get larger readership and more citations to their work if they make it openly accessible.

And what about disadvantages?
Well, one of the main objections to open access journals relates to the fact that most of them are new and don’t have the prestige factor of older established journals. So, younger faculty who are working for tenure may not want to publish in open access journals, particularly if they can publish in traditional subscription journals that are high in prestige and impact. That’s not as much of a concern for tenured faculty, though, and some open access journals are becoming especially successful and prestigious. Titles published by the Public Library of Science are a great example of that. Prestige and tenure considerations don’t come into play for self-archiving. All authors can exert control over copyright and can make their work openly accessible in a repository, and that will definitely benefit both themselves and the research process generally.

What about the business viability of open access journals?
As I mentioned, there are a variety of business models that support open access publishing. They include institutional sponsorship, charging authors' fees, and generating revenue from advertising or related services. Business models will differ, depending upon the discipline and the particular circumstances of a journal. In the sciences, where there is a tradition of grant support, charging authors' fees is very feasible. Both the Public Library of Science (the most prominent nonprofit open access publisher) and BioMed Central (the most prominent commercial open access publisher) are great examples of that. In humanities fields, by contrast, there is very little grant support for research, but publishing is also less costly, so open access there is likely to be fostered primarily through institutional sponsorship. Open access publishing is inherently less expensive than traditional subscription or print publishing. There are virtually no distribution costs and no costs related to maintaining subscriptions, licensing, or IP access control. There are also a number of open source publishing software systems that support the creation of new open access journals. I'm amazed by how many new peer-reviewed open access journals are appearing all the time. One way to get a sense of that is to go from time to time to the Directory of Open Access Journals. As of right now there are almost 2,000 titles listed. Just six months ago there were about 1,450.

Are there over 500 new titles in the last six months, or are there 1,000 new titles, and 500 titles that went out of business? Should faculty who don’t have tenure worry about publishing in journals that might no longer exist when they come up for tenure?
I’m not aware of any conclusive data on the failure rate for open access journals (or new subscription journals, for that matter). A new study that will be published in January indicates that about 9% of the titles listed in the Directory of Open Access Journals have published no articles since 2003. Those titles are still available online, so it’s hard to say if the journals have actually ceased. In addition, a small percentage of titles in the directory (about 3%) were inaccessible during the study. The reasons for those titles being offline are not clear; some may have failed, but some may just be inaccessible temporarily. A significant percentage of open access journals are from well-established publishers and some individual titles have been in existence for a decade or longer. At the same time, a large majority of open access titles are from smaller, more independent contexts – they are produced by non-profit organizations, academic departments, or leaders in a field. Since they are relatively new, their viability isn’t proven yet. So it could be advantageous for untenured faculty to publish in some open access journals, but that may not be the case for a lot of open access titles.

What’s the hottest current issue related to open access?
I think it’s the issue of taxpayer-funded research. Both in this country and abroad there is increasing interest in making publicly-funded scientific research openly accessible. We saw the beginnings of that with the National Institutes of Health policy that was instituted last year and I think we will soon see a broad national debate about the advisability of this for all U.S. government agencies. The United Kingdom is moving toward a comprehensive policy of mandating open access to all government-funded research.

What is your role in the open access movement?
I have been a member of the steering committee of SPARC (the Scholarly Publishing and Academic Resources Coalition) since its inception. SPARC, which is a coalition of academic and research libraries, has been a prominent advocate for open access. I have also played a leading role in the scholarly communications program of the Association of College & Research Libraries. I chaired a task force that recommended the ACRL scholarly communications initiative and I have been chair of the ACRL Scholarly Communications Committee since it was established. Being involved with both SPARC and ACRL has put me in the middle of a number of these issues for the past several years.

How does open access fit into your role as library director at Oberlin?
We have been doing ongoing work at the campus level to build faculty awareness of scholarly communications issues and also to support open access in concrete ways. We have taken out institutional memberships to major open access journals and we’ve encouraged faculty to publish in open access journals in instances where that made sense for them. I have also been involved as a steering committee member with the creation of a statewide institutional repository that OhioLINK is developing. When that repository system is implemented we will be working very actively with our faculty on the question of author control of copyright and self-archiving.

What are some concrete things that faculty, librarians, and other stakeholders can do to help?
Faculty have great power in the system of scholarly communication (as editors, reviewers, and authors), so they are in the best position to bring about change. They can assert control over their copyright, archive their research openly, and publish in open access journals, among other things. The role of librarians and IT staff necessarily needs to be more educational in nature. They can become informed about these issues and then work with faculty and other researchers to bring about fundamental change. There is a good summary of a lot of these issues, along with concrete suggestions for what faculty, librarians, and others can do, in the ACRL Scholarly Communications Tool Kit.

The Create Change website is another great resource.

Other than Academic Commons, what is your personal favorite open access publication?
My favorite one, for obvious professional reasons, is D-Lib Magazine. It publishes a variety of content – articles, news, commentary, conference reports – related to the development of digital libraries. They’ve had a number of important pieces on open access and scholarly communications issues.