Game-Based Learning: Interview with Maura Smale

Maura A. Smale is an associate professor and head of library instruction at New York City College of Technology, CUNY, where she leads the library’s information literacy efforts. Her academic background includes both a Ph.D. in anthropology (New York University) and a master’s degree in library and information science (Pratt Institute). She is a member of the Steering Committee of the CUNY Games Network, a multidisciplinary group of faculty and staff interested in exploring the use of games and simulations for teaching and learning. She is also involved in a multi-year study of how, where, and when undergraduates at CUNY do their academic work. Her other research interests include open access publishing and new models of scholarly communication, critical information literacy, and emerging instructional technologies.

This interview was conducted by Mike Roy, editorial board member of the Academic Commons. A founding editor of the Academic Commons with long-standing interest in the impact of new technology on the meaning and practice of liberal education, he is currently the dean of library and information services and chief information officer at Middlebury College.

*****

Q: Thanks for taking the time to talk with us about games and education. I’d like to start with a clarifying question: There are many different types of games in the world. What do you mean by game? And of the various sorts of games that are available, which ones seem to lend themselves best to educational applications?

Maura Smale: I’ve always found “game” to be difficult to define. A playful contest involving competition and a winner is one definition, though there are lots of games that are more cooperative than competitive or that don’t clearly end with a winner. Similarly, the line between game and simulation can be blurry; I’m not sure there’s a difference between playing the “learn to dissect a frog game” and dissecting a simulated frog. I think most games do involve play, though only if play as an activity does not always require ease or happiness. There are plenty of good games that are sometimes difficult or make the player unhappy (and sometimes that’s the point of the game).

Any type of game could potentially be used in education, as long as the learning objectives for the topic, class, or course aren’t superseded by the game. That is, the game must be in service of what we’d like our students to learn, not the other way around. And that’s what I’ve found most tricky about using games in teaching—figuring out the best game mechanics to use to teach a concept that will result in an engaging experience for students in which they learn the course material.

A focus on learning outcomes leaves the field wide open for the kinds of games to use in teaching. Quiz games like Jeopardy or Trivial Pursuit perhaps have a natural affinity for the classroom—they can be a public, low-stakes form of the assessments that many educators already use. If the content of the game matches the course, as in many historical or military games, the game could be incorporated into the course as source material along with relevant readings. Students can also play a game and then react or respond to themes in it; the growing number of games that address serious topics like privacy, immigration, and poverty might be appropriate here, but so might a discussion of gender issues in commercial video games.

Q: There are ways in which the content of many courses is really a vehicle for teaching broader, more intangible things often referred to as critical thinking skills that have little to do with the actual subject matter being studied. Can you speak to examples of how games have been used to promote this sort of liberal education?

Smale: I think that many, perhaps even most, games incorporate the goals of liberal education that you describe. All games require players to figure things out: from the rules at the outset (sometimes via reading the instructions but also, in more recent video games, by playing through the first, training level of the game) to the strategy required to have the best chance of winning. Every time a player takes her turn she engages in critical thinking, using all of the information she’s gained in the game to evaluate and complete the best move possible. Games can also provide an opportunity for students to practice solving problems until they arrive at the right answer—often referred to as failing forward (a term that I love). That resiliency in the face of a challenge—the ability to pause, reconsider your actions, and come up with creative solutions to a problem—is another strength of liberal education that games can teach and encourage in students.

I’m a faculty member in the library, so the games I most often create and use address information literacy competencies, another one of the broader goals of liberal education. Critical thinking is inherent in information literacy, of course, and to me information literacy is a natural fit for game-based learning. Research, information seeking, and evaluating information before using it are key components of many games, and indeed a wide range of information literacy and library instruction games are in use at academic libraries.

Another possibility for using games in education is to involve students in making games in a course. I’ve had less experience with this process, as most of the instruction in my library is of the single-session variety, but I have been thinking about ways to incorporate game creation into the workshops that I teach. Asking students to make games draws on all of the goals of liberal education noted above and then some, because students must go beyond playing the game to construct a successful game-playing experience. As they do when playing a good educational game, students ultimately must learn both course content and critical thinking skills well in order to create a game for others to play.

Q: Could you imagine an entire curriculum constructed out of making and playing games?

Smale: Yes, definitely. There are two examples that I can think of off the top of my head (and I’m sure there are more), though both are primarily at the secondary level rather than higher education. One is the New York City middle and high school called Quest to Learn, which takes a game-based learning approach to the curriculum in all subjects; another Quest school opened recently in Chicago. Both are public schools co-run by the Institute of Play, a nonprofit organization that promotes game-based learning. The other is a Latin curriculum called Operation LAPIS, developed by The Pericles Group. It’s an alternate reality game that teaches a two-year course of Latin, designed for middle school through college students.

I would imagine that it would take a fair amount of work to adapt a curriculum designed for a more traditional lecture- or discussion-based pedagogy into one that used games for teaching and learning. But I think it could certainly be done, probably most thoughtfully by a group of faculty collaborating on the redesign of a program. I have occasionally encountered resistance when asking students to play games, which might be a concern for an entire course or program of study based on games. Involving students in making games as well as playing them might help overcome the hurdle of the occasional student who is less interested in games.

Q: Can you speak a bit more about the resistance to using games in education, and what might be done to overcome such objections?

Smale: Sure. Resistance can come from two groups: from educators who may consider using games for teaching and learning to be frivolous edutainment, and from students who are asked to play or make games in classes. In some ways addressing the concerns of the former is easier. There’s a large (and growing) body of qualitative and quantitative research that demonstrates the effectiveness of game-based learning at all educational levels and for many different disciplines.

Overcoming student objections to using games in education is potentially trickier. In my experience some college students are resistant to any form of active learning, and using games is an active learning strategy. They may be accustomed to a predominantly lecture-based curriculum from their K-12 education, which may shape their expectations for college. And some students also resist active learning in courses that they are not especially invested in, perhaps core or General Education requirements. As a librarian I work with many introductory composition courses and sometimes encounter this form of resistance from the students I meet.

Making sure that educational games are tightly linked to the course or lesson’s learning objectives is one strategy for trying to prevent student resistance to game-based learning. I think students may resist a pedagogical strategy when they are unable to determine whether the work they’re engaged in is meaningful in the context of the course. Many students may be concerned about whether gameplay counts towards their final grade, perhaps the opposite of what we as educators are hoping for: the opportunity that games provide for students to fail forward and learn from their mistakes. Taking the time to thoughtfully integrate playing and making games into the coursework, and ensuring that students know why we’re using games in a course, can help overcome student resistance.

Q: Final thoughts?

Smale: I’ve been delighted to read about many compelling examples of game-based learning over the past several years; it’s clear that using games in higher educational contexts is on the increase. Games can provide opportunities for customizing the student learning experience, peer collaboration, and increasing student engagement, all of which can help students achieve their academic goals. I’m optimistic about the possibilities for the future of games in education, from playing to modding to creating, and look forward to continuing to incorporate games into my teaching.

Distributed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

This interview is part of a special issue of Transformations on games in education, published on September 30, 2013. The issue was developed by Mike Roy (Middlebury College), guest editor and editorial board member of the Academic Commons, and Todd Bryant (Dickinson College), who was instrumental in organizing and developing the special issue.

Games with a Purpose: Interview with Anastasia Salter

Anastasia Salter is an assistant professor at the University of Baltimore in the Department of Science, Information Arts, and Technologies. Her primary research is on digital narratives, with a continuing focus on virtual worlds, gender and cyberspace, games as literature, educational games, and fan production. She holds a doctorate in communications design from the University of Baltimore and an M.F.A. in children’s literature from Hollins University. She is on the web at http://selfloud.net.

This interview was conducted by Mike Roy, editorial board member of the Academic Commons. A founding editor of the Academic Commons with long-standing interest in the impact of new technology on the meaning and practice of liberal education, he is currently the dean of library and information services and chief information officer at Middlebury College.

Q: There are at least two ways of thinking about games in education. On the one hand, games are a form of culture that is increasingly important, and worthy of study in the same way that TV and Film have found their way into the curriculum. But they also have an instrumental value, as vehicles for helping to teach and learn about traditional subjects. What are the most interesting and useful examples you can think of where games are being used in the curriculum to facilitate learning?

Salter: Games with a purpose can be powerful both as classroom experiences and as design challenges: some of my favorite examples of games in the curriculum are student-designed games related to course topics. Games offer agency to students whether they are players or designers. Experiential games, including alternate reality games such as the Arcane Gallery of Gadgetry, Adeline Koh’s Trading Races, and The Pericles Group’s Operation LAPIS demonstrate the possibilities of play with or without technology. Commercial games like World of Warcraft and Civilization are also being integrated into the curriculum: all games can promote learning of some kind.

Q: As a professor who teaches using games, to what extent do you think that games are “just” effective vehicles for learning that could be achieved through other means, and to what extent do you think that integrating games into the classroom promotes new  types of learning that can be achieved no other way?

Salter: To some extent, everything is “just” a vehicle for learning–but why trivialize that? Games provide environments where we learn from our failures safely. They bring experiential learning into the classroom, and provide models for thinking about problems where there’s not only one right solution. The dynamics of games help distinguish them from learning environments where knowledge is dictated, lectured, presented or otherwise placed in front of a student. Games have the power to change how we think about the classroom, and while they may not be the only means to that end, they are invaluable for re-imagining learning as play.

Q: So using games in education fits into a broader movement to re-think education as something other than  “placing knowledge in front of a student.”  I’m going to play devil’s advocate here, and ask: how do you strike the right balance between transmitting the ‘facts in the head’ needed to work within any given domain of knowledge, and the imperative to support students’ development where ‘content’ is really a means to a greater developmental end?

Salter: Well, I’d say that transmitting those “facts in the head” is only ever really successful when learners can place facts in a meaningful context. When people are given a reason to learn, they tend to learn:  just look at the instant recall of Pokemon strengths and weaknesses by young gamers, the endless application of tactics and rotations by World of Warcraft players, or the hazard memorization of competitive Call of Duty and Halo players. Acquiring knowledge is always a first step towards application, but traditional learning tends to isolate the facts and leave the learner without clear motivation.

Q: As you point out, it is clear that a person playing a game is extremely motivated to learn what she needs to know in order to succeed at the game. The promise of games in education is that this same motivation and excitement can be leveraged to learn more traditional subject areas. However, most educators run their classrooms without using games. What do you see as the barriers to broader adoption of using games in the classroom?

Salter: Classroom education has always struggled with the isolation of the learning environment from the real world. Games can bridge that gap, but first they have to be seen as acceptable and not just a waste of time. In K-12, most educators are stuck with way too many administrative restrictions on their teaching to get away with something as apparently radical as teaching with games. In higher ed, we have a different problem: most faculty aren’t actually trained to teach so much as they are trained in research, so if games aren’t on their personal radar they are unlikely even to encounter the possibility. In that sense, our current education system is very much self-perpetuating: teachers are products of the current system, and it’s easy to reproduce what they experienced.

Q. Short of completely dismantling our entire educational system, what can we do to address the challenges you identify as standing in the way of broader adoption?

Salter: Well, I can’t say I have any problem with the idea of dismantling and rethinking the entire educational system! There are a lot of ways to address these challenges. Bringing teachers into gaming is a great first step, particularly when there are opportunities to demonstrate that gaming isn’t all Grand Theft Auto and Call of Duty. Empowering teachers and students as collaborative designers of their learning experience is the next step, and the one I work on a lot: any teacher can bring a playful approach into the classroom through creative activity, and making a game can be a great way to express ideas and probe at a system of knowledge. It doesn’t even matter if the final game is any good–the act of making something, and playing with it, and even failing is essential.

Q: Final thoughts?

Salter: Even if every educator doesn’t bring actual games into the classroom, there are lots of ideas we can take from the way learning happens in games. Games offer a sliding difficulty, and a space where failure is part of the learning experience, not an end outcome. Furthermore, games are inherently collaborative and often offer multiple ways to master something. Just like in life, if not the traditional classroom, there’s rarely only one “right” solution.

Distributed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.


This article is part of a special issue of Transformations on games in education, published on September 30, 2013. An earlier version was circulated for open peer review via Media Commons Press. The “Games in Education” issue was developed by Mike Roy (Middlebury College), guest editor and editorial board member of the Academic Commons, and Todd Bryant (Dickinson College), who was instrumental in organizing and developing the special issue.

Putting Study Abroad on the Map

by Jeff Howarth, Middlebury College

“Each year about 60% of the junior class at Middlebury studies abroad in more than 40 countries at more than 90 different programs and universities.”

When I read this sentence on the Middlebury College Web site, I thought to myself: that’s a dataset that my students ought to map. I knew that there had to be a dataset behind that sentence, something that the author could summarize by counting the number of different countries, programs and students. But I imagined this dataset could show us much more if we represented it spatially and visually rather than just verbally. I didn’t know exactly what it might show, but I knew that my cartography students could figure that out as long as I taught them the technical skills for handling the data and the general concepts for visualizing multivariate data. What they decided to make with this knowledge was up to them.

Increasingly, teaching cartography involves training students on specific software platforms while communicating more general principles of the craft. This presents the need to design instructional materials that connect technical skills with thematic concepts while allowing students to creatively achieve the broader educational objectives of a liberal education. As an instructor of cartography at Middlebury College, I have largely followed a project-based learning approach focused on the process of cartographic design. My learning objectives seek to link techniques and concepts in an extended creative procedure that involves data management, problem setting, problem solving and reflection. At different steps along the way, the students must make their own design decisions, applying the available means to their chosen ends. Here, I describe the case of mapping the study abroad program in order to illustrate the general approach of integrating technical and conceptual teaching through design problems.

The Project

I gave the students a very simple prompt: Design a layout for the Web that explores how the Study Abroad Program connects Middlebury students to the rest of the world. The students also received a spreadsheet, supplied by Stacey Thebodo of the Study Abroad Program, listing all students who had studied abroad between 2006 and 2010. In addition, the students received some geographic data, including country boundaries, in a common GIS format. Like all the projects in the course, this assignment provided students with an opportunity to apply topical and theoretical concepts that had been introduced in lecture and readings. For that week, the topic concerned spatial representations of multivariate data, based largely on Jacques Bertin’s theory of graphics [1]. The three learning objectives of this assignment each connected theory to technique at a different step of the creative procedure:

  1. Demonstrate data management skills for cartography, specifically how to transform a list of nominal data into a statistical map.
  2. Identify the components of information to visualize in order to satisfy a purpose for an intended audience.
  3. Solve the problem given real-world constraints (available data, software, time, and knowledge).

Data Management

The dataset came packaged as a spreadsheet with columns for semester, year, student name, major, minor, gender, program name, city, and country. The first problem was to reformat this dataset into something that could be mapped, which could be completed with two technical operations–linking the country or city names to geographic coordinates that could be plotted on a map and transforming nominal data into quantitative data.

The students were familiar with both the purpose and procedure of the first operation as it had been introduced in a previous assignment. They knew that descriptions of locations in an attribute table, like country names, could be joined to a separate file with corresponding geographic coordinates of each location in order to plot them on a map. But that alone would not get them much closer to visualizing the dataset, as they would wind up with a lot of overlapping geographic features, one for every row in the database. It would be far more preferable to format the dataset so that each row represented a different geographic feature (e.g. country) and each feature had attributes like the total number of students or the total number of programs. Then the students could make a map that showed spatial variation in these quantities.

To do this, the students needed to transform nominal data into quantitative data, which was a new problem. It introduced a new technical procedure with a theoretical concept that had been introduced in lecture that week. Technically, it involved using spreadsheet functions to ‘pivot’ data, summarizing instances of one category by instances of another category (e.g. counting the number of students per country). Conceptually, however, it involved defining the core theme that the students wanted to map, or what Bertin called the ‘invariant’ of information: the base concept that does not vary across spatial, temporal or thematic dimensions and by its invariability allows us to recognize the components of information that do vary. And this conceptual side of the problem made the task a bit more difficult than simply repeating the technical steps that I had demonstrated for ‘pivoting’ data.

The intuitive unit of the study abroad dataset was ‘student who studies abroad,’ but the dataset did not necessarily come structured in a way that let us map this. It was essentially organized by semester: for every semester between 2006 and 2010, it recorded each student studying abroad. This meant that a student who studied abroad for an entire year (two semesters) would be listed in the dataset twice, and simply counting rows during the pivot operation would overcount students. There were a number of possible fixes, but all of them required the students to balance what they could do given the dataset against what they should do to achieve their purpose and help the reader interpret the map.
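To make the two operations concrete, here is a minimal sketch in pandas of the join and pivot described above. The file and column names are hypothetical, and this is only one way to carry out the steps (the students worked with spreadsheet pivot functions and GIS attribute joins), not the course’s actual workflow.

```python
# A minimal sketch, under assumed file and column names, of the two data-management
# steps: deduplicate and pivot the nominal records, then join the counts to geography.
import pandas as pd

# Each row is one student-semester; students abroad for a full year appear twice.
records = pd.read_csv("study_abroad_2006_2010.csv")  # hypothetical file

# Drop duplicate student/year/country rows so a full-year student counts once,
# then "pivot" the nominal data into a quantitative attribute per country.
unique_stays = records.drop_duplicates(subset=["student_name", "year", "country"])
students_per_country = (
    unique_stays.groupby("country")["student_name"]
    .nunique()
    .rename("n_students")
    .reset_index()
)

# Join the counts to a country table keyed on name so the totals can be
# attached to boundary geometry and plotted as a statistical map.
countries = pd.read_csv("country_names.csv")  # hypothetical: one row per country
mappable = countries.merge(students_per_country, on="country", how="left")
print(mappable.head())
```

Deduplicating on student, year, and country is just one of the possible fixes mentioned above; which one is appropriate depends on whether the map’s unit is meant to be “students” or “semesters abroad.”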

Setting the Problem

Once the students had seen the dataset and were shown how they could manipulate it so that it could be mapped, their next problem was to decide what they wanted to show on their map. In general, the students had to consider how to apply the means available to them (the dataset and their technical skills) to one or more ends, but these ends–the goals they sought to achieve by making their map–were decisions that they had to make on their own.

Throughout the course, I asked the students to consider their audience, media and theme when deciding what kind of map they would make. What kinds of questions would the people using this map with this kind of media want to answer about this theme? In this case, what would visitors to the study abroad Web site want to know about the program that a map could help them understand?

This tested students’ understanding of how relationships between variables help a reader make inferences from a graphic. This, of course, is the underlying principle of simple graphics, like scatterplots and charts, but in this case the students had more than just two variables that they could let the reader visually compare. The dataset included attributes for region, country, program, year, semester, gender and major. In addition, the students could generate new attributes from these, such as changes in number of students over time. What combination of variables would allow the reader to answer questions they might have? Or better yet, what might provide answers to questions that the reader might not even have thought of?

Problem Solving

As students begin the process of making a map layout, the workflow becomes less linear because they must coordinate many interacting elements. During this phase, students cannot work through the problem one step at a time; rather, they must shift into a mode of reflective action, evaluating how each decision interacts with the decisions they have made or will make and constantly adapting pieces of the design to improve the quality of the whole [2]. Their work during this phase thus reflects conceptual understanding at two levels: the individual components of the map and their interaction as a whole.

In this case, students demonstrated their comprehension of lower-level concepts in two ways. First, the students needed to choose one or more visual variables (e.g. shape, size, hue, value, texture) to represent each component of the dataset, evidencing both their conceptual understanding of Bertin’s theory of visual variables and their technical ability to implement these concepts with graphic design software. Second, the students needed to provide geographic context for the symbolized data by making a base layer. This evidenced their conceptual understanding of cartographic principles, such as projection and generalization, and their technical ability to implement these concepts.

As students implement these lower-level concepts and begin to produce a design, they confront conceptual problems of design that result from the interaction of their lower-level decisions. These include concepts like figure-ground, contrast, and balance, as well as knowing when to use an inset or when to use small multiples because a single graphic is simply trying to say too much. These concepts are difficult to master by following simple rules; rather, they mature through thoughtful reflection during the process of design.

Reflection

In addition to the map layout itself, I also required students to submit a written discussion of their design process. My objective was to provide another means to distinguish between a student’s comprehension of a concept and their ability to implement the concept with technical operations. I asked students to describe the decisions that they made during the design process and to relate these decisions to concepts that were introduced in lecture and readings. The short reflective write-up provided students with an opportunity to communicate their understanding of theoretical content even if they could not apply this understanding in their layout due to technical shortcomings.

Evaluation

Throughout the course, my joy of receiving thirty uniquely creative expressions of student work at the end of each week was countered by the dilemma of pegging each to a standard evaluation scale. My main objective when grading was to recognize both the student’s conceptual and technical understanding during each phase of the project–data management, problem setting and problem solving–using both their map layout and written discussion. For this assignment, I focused on the following:

  1. Is the thematic unit clearly defined and intuitive for the intended audience?
  2. How many components are visualized and what kinds of inferences can the reader make by relating these components?
  3. Do the visual variables help the audience interpret the components?
  4. Does the base map demonstrate comprehension of cartographic principles?
  5. Does the map composition demonstrate comprehension of higher-level graphic design principles?

Each of these questions relates an aspect of the work to one or more concepts from the course.

Examples

This map, designed by Marty Schnure ’11, shows a simple message well.
Figure 1. Map by Marty Schnure ’11

Marty has simplified the information content of her layout by removing the time component and aggregating by region. She uses intuitive visual variables (width of lines to represent magnitudes, hues of polygons to represent regions). She also uses a projection that is appropriate for this spatial extent. Her map is especially pleasing because she also demonstrates higher-level concepts of graphic design: her color scheme draws from the palette of the Middlebury Web site, her layout expresses symmetrical balance and she’s using contrast to effectively distinguish figure from ground.

Like Marty, Jue Yang ’12 also used flow-lines to represent numbers of students traveling abroad, but she added another component to this information and shows this data at two levels of spatial aggregation. Her flow lines originate from Middlebury aggregated by region and then branch midway to quantify the proportion of students studying in each country.
Figure 2. Map by Jue Yang ’12.

By designing her origin as a pie chart, which she repeats at a larger scale in the upper corner, she quietly urges her readers to compare the regional pattern while also providing a very subtle legend to her color scheme. She’s also made several decisions that evidence good cartographic principles. For one thing, she’s removed Antarctica, which makes sense for a lot of reasons: no students study there, the projection distorts the poles and would have made the continent funny-looking, and it frees up space for her flow lines to Oceanic countries. She’s also hidden an artifact that can be seen on most of the other student maps. The country boundary data has more detail than necessary for mapping at this scale. This makes some coastlines, like the west coast of Canada, accumulate line weights and appear as distracting blobs rather than crisp boundaries. Jue’s creative solution to this problem was to use white boundaries for countries and white fill for her oceans. This visually simplified coastlines without any laborious data transformations.

Several students increased the information content of their graphics by representing temporal components. Thom Corrado ’11 visualized the number of students studying in each country for each year of the dataset.
Figure 3. Map by Thom Corrado ’11.

Thom developed an original scheme that used size to represent the number of students and color to represent the year. This allows the reader to infer changes over time within any single country and also to compare the numbers of students studying in different countries in any single year. His insertion of an inset map evidences his awareness of a higher-level design problem resulting from the popularity of Europe, where the circles representing the number of students each year would overlap and obscure the underlying country boundaries.

The layout developed by Katie Panhorst ’10 was one of the most ambitious efforts due to the number of components that she included. She shows two temporal components (year and semester), two spatial components (country and region), and one thematic component (program sponsor). Her design uses small multiples arranged in a grid to reveal temporal components. Her thematic component allows the reader to interpret the quantitative data in a new way by correlating the number of students to the presence of Middlebury-sponsored programs.
Figure 4. Map by Katie Panhorst ’10.

Some students chose to represent change rather than time. This involved calculating the difference between the number of students studying in different countries or regions over two consecutive years and then representing the change symbolically. Jordan Valen ’10 offered one creative solution that used proportionally-sized arrows to represent change. This allows the reader to recognize patterns of change: Latin America, Europe and Asia seem to be largely consistent over time, Africa and Oceania fluctuate from one year to the next, while the popularity of the Middle East appears to be on the rise.
Figure 5. Map by Jordan Valen ’10. 
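The derived “change” attribute that these students mapped is straightforward to compute once the data are pivoted by place and year. The sketch below, again with hypothetical column names, shows one way to build a countries-by-years table and take year-over-year differences in pandas; it illustrates the calculation, not the students’ actual spreadsheets.

```python
# A minimal sketch (hypothetical columns) of deriving the "change" attribute:
# unique student counts per country per year, then the difference between years.
import pandas as pd

# Deduplicated student-year-country rows, as in the earlier sketch.
unique_stays = pd.read_csv("study_abroad_unique_stays.csv")

per_country_year = (
    unique_stays.groupby(["country", "year"])["student_name"]
    .nunique()
    .unstack(fill_value=0)   # rows: countries, columns: years
)

# Year-over-year change; positive values indicate growing popularity.
change = per_country_year.diff(axis=1)
print(change.head())
```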

Lessons Learned

There are four key lessons that I’ve gained from this project-based approach to teaching cartographic design:

  1. Show students how to solve problems, but allow students to set the problems to be solved. While some liberal arts students may appreciate that a course allows them to list commercial software names under the skills section of their resume, simply training students how to use software falls outside the traditional scope of a liberal education. Providing students with the technical skills to solve problems while allowing them to set the problem to solve will foster student creativity and a learning environment characterized by exploration and discovery.
  2. Integrate your teaching of technique and theory, but separate your evaluation of technical skills and conceptual knowledge. The disparity of technical skills in a classroom can challenge both the evaluation of student work and the motivation of students to work. Some students will feel disadvantaged if their peers have had prior experience with a particular piece of software, while those who enter the classroom with experience won’t be challenged if they are simply shown how to push buttons they have already learned to push. Additionally, a student will feel frustrated if struggling with a complicated tool limits their opportunity to demonstrate their comprehension of concepts. The reflective write-up describing the design process is one strategy to tease apart these two kinds of knowledge, but I found that some students, even those with much to gain from a verbal description of their thinking, seemed to put less effort into this part of the assignment than into the map product itself. This may have been due to a failure on my part to communicate the importance of this part of the assignment clearly, or it may reflect a more intrinsic bias on the part of some students toward the product of design rather than the process.
  3. Design is reflective action and reflection takes time. This assignment required the students to commit a significant amount of time. In part, this stems directly from my expectation that students set their own problems. Problem-setting requires students to take the time to explore the dataset in order to discover its possibilities. Because the students decide what to make, they also have to decide when they’re done. Any student who has learned how to efficiently meet a professor’s expectations will find it difficult, if not frustrating, to decide when to stop working on their own. But independently of having students set problems, the complex nature of cartographic design, where elements interact with each other and one decision influences both past and future decisions, translates into time needed to reflect and adapt. In particular, high-level design concepts, such as contrast and balance, are not dependent on a correspondingly high level of technical knowledge that is difficult to master. Rather, they rely on students taking the time to consider and resolve them. As such, these gestalt concepts underlie the most common design flaws in student projects.
  4. Provide project topics that engage students.
    This last point is by no means novel in a liberal education but it should not be ignored when developing topics for student projects. The study abroad project provides an example of a dataset that students were drawn to explore. Many had studied abroad, so many started by looking themselves up in the dataset. This provided an opportunity to discuss key cartographic concepts, like data integrity and abstraction, as the row of fields attached to the dot on the monitor didn’t quite map to the richness of their memory. They became curious about how popular their program was and what places were less traveled. And they became interested in sharing this with other students and promoting the college program. It’s a useful case in the larger pedagogy of teaching techniques at a liberal arts college: give students problems that connect to their experience and involve both problem setting and solving. Many will recognize that visualizing quantitative data is a creative act.

References 
1. Jacques Bertin, Semiology of Graphics (Madison, WI: The University of Wisconsin Press, 1983).
2. Donald A. Schön, The Reflective Practitioner: How Professionals Think in Action (New York: Basic Books, 1983).

Simple Animations Bring Geographic Processes to Life

by Christopher L. Fastie, Middlebury College

Introduction

It seems we spend a lot of time teaching about things that we can’t easily observe, maybe because students are already familiar with processes they see operating around them, or because previous teachers have already harvested those low-hanging fruit. Processes that are obscure because they are small, large, slow, fast, or distant in time or space require more careful explanation. Some of these processes can now be revealed using digital technologies. I used Google Earth to model a very large process that took place 13,500 years ago. I used a global positioning system (GPS) receiver to map a series of glacial features in west central Vermont and transferred the results to Google Earth. I then added graphical models of the retreating Laurentide glacier and associated pro-glacial lakes and rivers which shaped the mapped features. Animated flyovers of the augmented Google Earth surface at different stages of the reconstructed glacial retreat were saved as video files and incorporated into an explanatory video. I have presented this video both before and after student field trips to the study area with good results. Subsequent upgrades to Google Earth allow animated flyovers to be recorded and played back in the free version of the program. This offers a streamlined creation process and the potential for a more interactive and collaborative experience.

Click the video link below to view: Old, Flat, and Unconsolidated: Salisbury’s Gravelly Past, from Chris Fastie on Vimeo.

Science instruction benefits greatly from graphical demonstrations of physical structures and processes. Current textbooks are elaborately illustrated, and associated Web sites sometimes include animations of important general processes, but ready-made animations of more specific processes or locally relevant examples are rarely available. Software for producing custom animations is becoming more user-friendly, but the cost and training commitment still prevent wide adoption. Google Earth is a free program that is based on animation of the earth’s surface and that includes tools sufficient for creating simple animations of many social, geographic, geologic, and ecological processes. The professional version (Google Earth Pro), which is not free, adds the capability to save these animations as video files that can be viewed separately from the program.

Geomorphology and Google Earth

Most geomorphic processes, by definition, include movement of material at the earth’s surface, and are therefore well suited for animated representations in Google Earth. Extant geomorphic features can be difficult to observe in the field because they are large, subtle, or obscured by vegetation. Google Earth is an effective way to highlight such features before they are visited in the field, or afterwards when observations can be summarized and interpreted. By animating the time course of development of such features, geomorphic processes and concepts can be effectively revealed.

Glaciers shape the landscape as they flow, but evidence of glacier advance is often obscured by more recent features produced during glacier retreat. The last part of the Laurentide ice sheet to retreat from Vermont was a lobe of ice in the Champlain Valley. As the length and thickness of this lobe diminished, great sediment-laden rivers pouring from the glacier and from the surrounding barren landscape flowed through and alongside the ice. The Champlain Valley drains to the north, and the glacier impounded a temporary body of water called Lake Vermont which rose to a level several hundred feet higher than the current Lake Champlain. Some of the water flowing south into this lake flowed alongside the glacier and built gravelly floodplains between the newly exposed valley walls and the ice. As the glacier continued its retreat, these flat surfaces were abandoned when the river found a lower course next to the ice. Remnants of these surfaces, called kame terraces, are conspicuous features of the Champlain Valley. When the glacial rivers reached the level of Lake Vermont, they built sandy deltas into the lake. These fine-grained deposits were left high and dry when Lake Vermont eventually drained as the impounding ice retreated far enough north.

Modeling Landscape Features

In 1998, I moved into a house at the eastern edge of the Champlain Valley and began to explore the neighborhood. The landscape was dominated by the steep lower slopes of the Green Mountains, but these bedrock slopes were interrupted by dozens of flat, level terraces that appeared to be built of unconsolidated material (sand, gravel, boulders, etc.), instead of solid bedrock. I am a plant ecologist by training, not a geologist, but I began to sketch the extent and location of these flat places to see if the larger pattern held clues to their origin. The sketch maps on paper were a key element of the discovery process because the pattern of the flat areas, which are spread along miles of valley edge, was difficult to see without them. Dense forest covers most of the area and the resolution of the existing topographic maps was insufficient to reveal the subtle terraces. It is possible to identify some of the larger terraces from the air or from stereo aerial photographs, but most terrace margins and their relative heights cannot be discerned well. I assumed that no one had ever mapped these terraces before, so my map would be the first opportunity to study their landscape-level pattern in detail.

The evolving paper map allowed me to begin to reconstruct the progressive positions of the glacier margin and the associated routes of the ice-marginal river that must have created the kame terraces. It required considerable imagination to visualize the massive glacier redirecting a swollen, turbulent river along a hillside that today is three hundred feet above the valley floor. The map was good data, but to explain the complex course of events that played out over many decades and affected many square miles of hillside, it was just a start.

In 2007, I acquired a consumer GPS receiver that had two crucial features: it could produce tracklogs of walking tours by recording location coordinates at ten-second intervals, and the Garmin Mapsource software it came with had a menu item called “View in Google Earth.” So I could walk the margins of a kame terrace with the GPS recording, upload the tracklog to a PC using Mapsource, and then see the tracklog in Google Earth. Google Earth allowed the terrace margins to be displayed on a recent color aerial photo stretched over the three-dimensional topographic surface of the study area. This digital landscape could be viewed from any angle and any height above the surface, and one could “fly” over the scene at will. This encouraged me to make digital tracklogs of all the terraces I had found. Without the tracklogs displayed, the terraces could not be discerned in the crude Google Earth topography, which is just a digital version of the mid-twentieth century USGS topographic maps. As the terraces accumulated in Google Earth, I realized that the animated movie of ice, rivers, deposition, and erosion that had been playing in my mind for several years might be successfully shared with others.

Google Earth incorporates simple drawing tools that allow lines and shapes to be placed on or above the digital landscape surface. Three-dimensional objects can be represented by extending lines from objects down to the ground surface. Far more elaborate 3-D objects can be created using the free program Google SketchUp, but all of the objects created for this project were done with the tools included in Google Earth. I used these tools to trace all  the terrace margins imported from Mapsource, creating horizontal polygons in the shape of each terrace. I used the option to extend lines down to the ground surface to give each terrace a solid appearance. The resulting shapes are crude representations of the actual terraces (which do not have vertical sides, and are not all perfectly level) but provide a bold display of the overall pattern formed by the terraces.
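The terraces in this project were drawn by hand with Google Earth’s built-in tools, but the same effect (a level polygon whose sides are extruded down to the ground) can also be generated programmatically. The sketch below is only an illustration of that technique: it uses the third-party simplekml Python library and invented coordinates, not the project’s actual data or workflow.

```python
# A hedged sketch of scripting an extruded, level polygon as KML. The library
# (simplekml) and the coordinates are assumptions for illustration; the author
# drew the real terrace shapes with Google Earth's own drawing tools.
import simplekml

kml = simplekml.Kml()

# Hypothetical terrace outline: (longitude, latitude, elevation in meters).
terrace_outline = [
    (-73.10, 43.95, 250.0),
    (-73.09, 43.95, 250.0),
    (-73.09, 43.96, 250.0),
    (-73.10, 43.96, 250.0),
    (-73.10, 43.95, 250.0),  # close the ring
]

pol = kml.newpolygon(name="Kame terrace (sketch)")
pol.outerboundaryis = terrace_outline
pol.altitudemode = simplekml.AltitudeMode.absolute   # keep the top surface level
pol.extrude = 1                                      # drop walls to the ground for a solid look
pol.style.polystyle.color = simplekml.Color.changealphaint(160, simplekml.Color.orange)

kml.save("terrace_sketch.kml")  # open in Google Earth to inspect the shape
```

Opened in Google Earth, a file like this should appear as a flat-topped block standing on the terrain, which is the “solid” appearance described above.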

I also used Google Earth’s drawing tools to make simple models of the glacier, Lake Vermont, other pro-glacial lakes, and meltwater rivers as I envisioned them at three different times during the formation of the terraces. This allowed the geomorphic features along a four mile stretch of hillside to be put into the context of the retreating ice margin and the associated lateral displacement of an ice-marginal river. I could now display three stages of the landscape process that had shaped my backyard 13,500 years ago.

To bring the process to life, I used the Movie Maker tool in Google Earth Pro to record flyovers of the augmented landscape at different stages in the reconstructed landform-building process. Due to the large scale of the study area there is great explanatory power when the view zooms from the regional to the local and then to a detail, for example, of a river’s route along the glacier. Google Earth allows any view of the digital landscape to be saved by “snapshotting” the view of a saved “placemark.”  The program will automatically and smoothly “fly” from one placemark view to another and these single flights formed the content of most of the video clips I produced. A few dozen of these clips were edited together using Adobe Premiere Pro. By inserting cross-fades between identical landscape views depicting different stages in the process, simple animations of the landscape development could be produced.

Presenting the Results

I first presented a draft of the video after students in my class at Middlebury College spent a January day exploring the snow-covered landforms. We made multiple stops to see several key parts of the study area and were still thawing out when we piled into my office to watch the video consisting only of  the silent flyovers from Google Earth. I think the students were able to more meaningfully synthesize their field observations after seeing the animated landscape. The reward was probably greatest for those students who had been working hard during the trip to make sense of the individually mundane features. I assume that the video allowed everyone to attach some additional geomorphological significance to the flat surfaces we had visited. During this field trip, we collected some new video of ourselves which was later incorporated into the final version of the video along with other footage and a narration.

For a subsequent class field trip to this area, I asked a new group of students to watch the video beforehand. By this time, a completed twelve-minute version of the video was available online. Viewing the video gave them a context for understanding what they later saw in the field and established a shared baseline of knowledge. I asked students a year later whether viewing the video before or after the field trip would have been more productive and the consensus was that before was better. The primary reason given was that the subject was sufficiently novel and obscure that every explanatory aid was welcome. Viewing the video first also allows a class to quickly address more complex issues such as the relationship between geomorphic origin and vegetation. However, some students recognized that the process of struggling to make sense of confusing field observations has pedagogical value. The video presents a compelling explanatory model, so it eliminates the need for students to assemble and test their own. Waiting until after the field trip to view the video has great potential for classes with the background and motivation to benefit from a puzzle-solving exercise.

In May 2009, Google Earth 5 was released with a new feature that allows flyover tours to be saved and played back within the program. The tour is not saved as video, but as a set of instructions that the program interprets in real time. While creating the tours, drawn objects (e.g., rivers or kame terraces) can be toggled on or off, creating simple animations. Photographs or videos can be displayed sequentially at designated places in the landscape. Narrations or music can be created and saved with a tour. This new feature offers an alternative method of sharing explanatory flyovers and animations.

Learning to save and distribute tours is easier than learning to save video clips and produce online videos, and it can be done with the free version of Google Earth. Without programming, tours can be embedded on Web pages where they play automatically in a window. The window is a working instance of Google Earth, so if the tour is stopped the user can interact with the digital landscape without having Google Earth installed (a free Google Earth browser plug-in is required). Tour files can also be distributed directly to users who can interact with them using Google Earth. The Keyhole Markup Language (KML) files that encode the tours are usually small and easy to distribute to others. In addition to watching the recorded tour, users with Google Earth installed can experiment by toggling features on and off or creating their own new features. This creates the opportunity for interactive and collaborative projects. An advantage of KML tours over tours saved as video files is that they provide a view of the full-resolution Google Earth landscape, not a compressed video version, and display the most current aerial photos. Soon after I completed the video about glacier features, Google Earth updated the photo coverage of Vermont with higher-quality, more recent images, instantly changing the video’s status to outdated. A primary disadvantage of distributing KML files to others is that there is less control over the viewing experience, which depends on the user’s operational knowledge of Google Earth and on settings in Google Earth (and of course, Google Earth must be installed). For examples of the tours I created, see www.fastie.net. You can also download the .kmz file for viewing in Google Earth.

Learning to view the landscape in Google Earth is fun and easy. Learning to produce and save video clips or KML tours is more of a challenge. Google’s online help and tutorials are a start, but you should plan for some trial and error if you want to produce something other than the simplest result. If there is someone on your campus who can help you get started, you might be able to climb the steepest part of the learning curve in an hour. Otherwise, plan for some additional learning time. Although the required commitment is not trivial, the models and tours you create can be used year after year to give students valuable insight into geographic patterns and processes that no one has witnessed firsthand.

Cyberinfrastructure = Hardware + Software + Bandwidth + People

by Michael Roy, Middlebury College

 

A report on the NERCOMP SIG workshop Let No Good Deed Go Unpunished: Setting Up Centralized Computational Research Support, 10/25/06

Introduction
Back to the Future of Research Computing

As Clifford Lynch pointed out at a recent CNI taskforce meeting, the roots of academic computing are in research. The formation of computing centers on our campuses was originally driven by faculty and students who needed access to computer systems in order to tackle research questions. It was only years later that the idea of computers being useful in teaching came into play. And once that idea took hold, it seemed that we forgot about the research origins of academic computing.

Lynch argues that the pendulum is swinging back again, as campuses nationwide report an increased interest in having libraries and computer centers provide meaningful, sustainable and programmatic support for the research enterprise across a wide range of disciplines.

At the October 27, 2005 NERCOMP meeting entitled “Let No Good Deed Go Unpunished,” Leo Hill, Leslie Hitch and Glenn Pierce from Northeastern University gave a presentation about how they planned for and implemented a university computer cluster that serves the research agendas of a wide array of Northeastern’s faculty.

The talks provided good information about the technology planning, the politics and the policy questions that arose, and placed the entire project within an economic model that is useful for analyzing a broad range of academic initiatives taking place on our campuses.

Key Questions:

  1. To what extent should support for research computing be centralized?
  2. If one runs a centralized research computing facility, how does one secure funding for it?
  3. What are some technology strategies for keeping these costs to a minimum?
  4. How can one justify the establishment of a centralized research facility in language that makes sense to academic administrators?
  5. How can this impulse be explained in terms of current trends in computation in particular and research in general?
  6. How do you allocate resources to map to institutional priorities?

Part One
On the Ground: Technical Considerations

Speaker: Leo Hill, Academic and Research Technology Consultant, Northeastern University

Slides available at https://myfiles.neu.edu/l.hill/deed/

How do you support research and high performance computing?
As a way into explaining why Northeastern took on the project of building a centralized computer cluster, Hill began his talk by making the claim that faculty are not experts at many of the technologies required to provide a robust cluster computing environment (operating systems, patches, security, networking). He also shared his impression that the National Science Foundation and other funding agencies increasingly look for centralized support as part of the overhead that they pay to universities.

In addition, a major benefit to a centralized facility is that a university can enjoy volume discounts for hardware and software, as well as for the considerable costs associated with creating a space to house a large cluster. These costs primarily revolve around power and air conditioning.

How did the process of designing this space work?
A Research Computing steering committee was created. The group’s job was to understand the needs of the greater community. They conducted interviews about present and future research projects of the Northeastern faculty, as a way to understand what sort of software and computational horsepower they would need. In analyzing the results of these interviews, they asked: Are there consistent terms? How can we resolve conflicting concepts? How do we translate these various desires into a viable service?

Their solution was to build a cluster that had the following features:

  1. Job management (queues)
  2. Ability to interactively run jobs (required for some users)
  3. Ability to support large files
  4. Ability to efficiently support large data sets (in excess of 4 GB)

As is true of all centrally-managed computational facilities, they had to factor in (and make trade-offs between) processing power and very large file storage. The list of software that the cluster would be supporting (see slides) was large but did not seem to exceed what most schools support on a range of devices on their network.

Once they had the hardware and software requirements in place, the team chose to develop an RFP (request for proposal) in order to collect bids from multiple vendors. Hill used a web-based service offered by HP (http://linuxhpc.org) for both developing and distributing the RFP. As cluster computing has matured into a commodity that one can buy, vendors have begun to provide data on the impact of their systems on air conditioning and power, allowing a better understanding of the overall set-up costs of a data center.

One of the more alarming aspects of the requirements of this project was that it all had to be accomplished with no new staff. This drove the team to look for a vendor-supported turnkey solution (they ended up choosing Dell with ROCKS as the platform). With no new staff, there has been some impact on existing services. The helpdesk now needs to be able to respond to new types of questions. System administration is accomplished by two existing staff who collectively dedicate roughly four hours per week to this service. They also needed to develop service level agreements around node downtime. How quickly should they respond if a single node goes down? What if the head node of the system is no longer functioning? Implicit in making software available is the support for that software, which has meant that they have also reinstated a dormant training program to explain how to work in this environment, and to provide some support for particular applications.

While the cluster is presently offered as a free service, the work of developing it has sparked interest in, and the development of, other services at Northeastern, including selling rack space in the data center, advanced programming support, and increased (and welcome) consultation on grant writing and equipment specifications.

Part Two
Campus Politics and Process
Speaker: Leslie Hitch, Director of Academic Technology, Northeastern University

Slides available at https://myfiles.neu.edu/l.hill/deed/

While Hill's presentation provided useful insights into the actual process by which the particular hardware and software were selected, installed and managed, Hitch's talk focused on the institutional framework in which the project was carried out. Northeastern's issues should be quite familiar to anyone working in higher ed today. The University's academic plan calls for an increase in the quantity and quality of faculty research, and the project responds directly to that goal. It also calls for increased undergraduate involvement in research, which can be linked to this project as well. Advocates also linked the project to a possible boost in NEU's rankings in US News & World Report, suggesting that ignoring research computing was something one did only at one's peril.

While the project was driven partially by actual faculty demand, it also anticipated growth in need in the social sciences and humanities, which do not have the traditional funding streams that the scientists enjoy. (For more information, see the draft report on Cyberinfrastructure for the Humanities and Social Sciences, recently published by the American Council of Learned Societies.)

In order to design the system, Hitch’s team set out to find what is common among various scientists and social scientists—a perfectly fine question, and one that those wanting to document the complex working relationships among their various faculties would be well-advised to consider. The act of asking people about what they do with technology, and what they would like to do with technology, almost always reveals useful insights into the nature and structure of their disciplines.

While the list of differences (software, memory requirements, gui v. command line, support requirements) in this case was framed as a means of specifying a particular system, the differences can also be understood in terms of what is called “scholarly infrastructure,” based on Dan Atkins’s recent work for the NSF in describing “cyberinfrastructure.” The slide below—from Indiana University’s recent Educause presentation—suggests a useful way of visualizing what particular disciplines have in common, and how they differ.

Source: “Centralize Research Computing to Drive Innovation, Really,” a presentation by Thomas J. Hacker and Bradley C. Wheeler, Indiana University.

Of course, with increased bandwidth among our schools, the act of centralization need not necessarily stay within the campus. Couldn’t our faculty share infrastructure by discipline in multi-institutional facilities staffed by domain experts who can help with the domain-specific applications? What of the various national supercomputer centers? Why should we build campus-specific clusters if the NSF and others will provide for us through national centers?

One answer to this question lies in governance. For such centers to be sustainable, there needs to be a funding model in place, and a fair and agreed-upon system for allocating computational cycles and provisioning support. (Hitch provides the charge to their user group in her slides.)

Northeastern’s funding model, not yet fully articulated, is to be determined by its users. Northeastern has also decided to allow the users of the system to develop their own policy about the allocation of computational cycles. Since there is no new FTE attached to this project, they do not have to worry about how to allocate the provision of support!

One funding model under discussion links awareness of IT to sponsored research. How can IT be used to bring in more money for research? Is providing this service something that should be part of overhead? If so, how do you go about securing a portion of overhead to dedicate to this sort of facility?

If one believes that the future of academic research lies in the increased use of such facilities, the question of staffing these facilities becomes critical. Is it enough to fund centralized facilities just to avoid the costs of lots of little clusters and to promote outside funding, allowing faculty to raise more money? One needs to more fully understand the support needs of such a transformed enterprise. In the discussion, hard questions arose about who would be providing this sort of support. Who pays for these people? To whom do they report? Even harder, where do they come from? How do you find people who can do this kind of work with/for the faculty? Does shifting the research associate from the department to the central IT universe reduce the amount of freedom, control, and experimental play? How can one build into the structure of these new types of support positions the ability to raise funds, to do research, to stay engaged in the field?

Part Three
Academic Research Process and IT Services
Speaker: Glenn Pierce, Director, IS Strategy and Research, College of Criminal Justice, Northeastern University

The next session moved from the particulars of Northeastern’s technical and political environment to a broader reflection on the implications of centralized research computing support for the academic enterprise. Pierce began by using the history of other enterprises (most notably, banking) to suggest that there are profound changes underway that could (for many disciplines) completely transform their way of conducting research, and eventually affect what happens in the classroom.

Using language more familiar to business school than to the usual IT conference, Pierce described the research process as a value/supply chain heavily dependent on IT investments and support. In this model, any break in the chain disrupts the process, slowing the rate at which a faculty member can produce research, while improvements along the chain (faster hardware, better software, faculty training, hands-on IT support) speed the whole process up.

 

Source: Weill, Peter and Marianne Broadbent. Leveraging the New Infrastructure: How Market Leaders Capitalize on IT. Boston: Harvard Business School Press, 1998.

A slide reminiscent of the scholarly cyberinfrastructure slide Hitch used posed the core question of the day: where should the cut-off for central services fall: at fast-changing local applications, at shared standard IT applications, or at shared IT services? For Pierce, central IT should aim to move as high up the pyramid as possible.

While Pierce acknowledges that it is a real challenge to imagine a world in which centralized IT has intimate knowledge about domain-specific applications, he also challenges colleges and universities to re-think what is meant by productivity, and to ask not what it costs to provide central IT support for research computing, but instead to ask what it costs NOT to provide it. He argues that faculty doing their own IT represents a loss in productivity and a lost opportunity, and that traditional measures of academic productivity (like traditional measures of productivity in other industries) do not capture the fact that entire industries can be changed, created, or eliminated altogether through the revolution afforded by the powers of computing.

One concrete example Pierce offers is Softricity, an application (like Citrix) that allows one to run applications locally, taking advantage of local computer resources, without installing them directly on the local machine. This fundamental change in how software can be distributed would require major changes both organizationally and financially. Pierce argues that the predominant budgeting model, in which all budgets across an institution rise and fall at the same rate, gets in the way of fundamental change. In the case of Softricity, meeting increased demand for applications and data requires more money up front, and such arguments rarely succeed in an academic culture that approaches change incrementally. It is therefore difficult, if not impossible, to fundamentally re-tool to take advantage of the power and increased productivity enabled by centralized IT services.

If one accepts the argument that investing in central IT makes good business sense, and one is looking for other parts of the academic enterprise where one can point to increased productivity, Pierce suggests that the same productivity gains enjoyed by centrally-supported research initiatives can be (hypothetically) found in student education outcomes. This tantalizing claim, not backed up by examples, certainly seems worthy of further investigation.

So what keeps us all from changing overnight from our distributed model back to something that looks, to many, an awful lot like the old centralized mainframe model? Pierce identifies four major barriers to extending centrally-supported IT for research:

  1. The existing perception of IT service (many researchers simply do not believe central IT is up to the task)
  2. Current funding models that
    1. are balkanized
    2. measure costs rather than productivity
    3. make it difficult to measure or even see cost of lost opportunities
  3. Current planning models that suffer from the same problems as our funding models
  4. Anxiety over the loss of local control

Using the scholarly infrastructure model, Pierce made the point that the further one moves away from technical issues of hardware, operating systems and networking, and into the domain of discipline-specific software, the more involved faculty need to be in the planning process. He also makes the point that the sort of financial re-organization required to support this shift toward a centralized model requires a genuine partnership between the IT leadership and academic leadership. All of this is possible only if the campus really and truly believes that IT-supported research can fundamentally change for the better how we conduct research and eventually how we educate our students.

Conclusions
Possible Futures & Implications

What follows is a list of possible changes in daily operations on campuses that embrace the idea of investing in IT-supported research, and a few ideas for collaboration between campuses (or business opportunities):

  1. Change the way you distribute software to allow more ubiquitous access to software, using technologies such as Softricity or Citrix.
  2. More aggressively fund centralized resources such as clusters.
  3. Hire discipline-aware IT support staff who can work with faculty on research problems.

As our campuses become increasingly connected by high-speed networks, one can ask questions such as:

  1. Can we negotiate licenses with vendors that would allow us to consortially provide access to software?
  2. Can we create local clusters that multiple campuses can fund and support?
  3. Can discipline-specific support be organized consortially to allow, for example, an economist at School A who needs help with SAS to get that help from a SAS expert at School B?

What do cluster and research computing have to do with liberal arts education?
One can imagine protests about shifting institutional resources into IT-supported research computing. For some this will be seen as an unwelcome return to the early days of campus computing, when a disproportionate share of the support went to a few faculty from the handful of fields that had discovered how to use computers to facilitate their research. As in the first generation of campus computing, however, this trend may be a harbinger of demands that will arise across campus and across disciplines. If one takes seriously the propositions put forth in the recent American Council of Learned Societies report on cyberinfrastructure for the humanities and social sciences, this re-alignment of resources in support of changing requirements for scientific and quantitative research is very likely one of the re-alignments that will be required to support teaching, research, and scholarly communications in ALL disciplines.

Further Readings

Educause Resource Library on Cyberinfrastructure
http://www.educause.edu/Cyberinfrastructure/645?Parent_ID=803

“The new model for supporting research at Purdue University,” ECAR Publication (requires subscription)
http://www.educause.edu/LibraryDetailPage/666?ID=ECS0507

Beyond Productivity, National Academy of Sciences
William J. Mitchell, Alan S. Inouye, and Marjory S. Blumenthal, Editors, Committee on Information Technology and Creativity, National Research Council, 2003.
http://books.nap.edu/html/beyond_productivity/

Speaker Contact Information

Leo Hill, Academic and Research Technology Consultant, Northeastern University l.hill@neu.edu

Leslie Hitch, Ed.D. Director of Academic Technology, Northeastern University l.hitch@neu.edu

Glenn Pierce, Ph.D., Director, IS Strategy & Research, College of Criminal Justice, Northeastern University g.pierce@neu.edu

Review of “Connecting Technology & Liberal Education: Theories and Case Studies,” a NERCOMP event (4/5/06)

by Shel Sax, Middlebury College

 

On April 5, at the University of Massachusetts, Amherst, NERCOMP offered a SIG event on “Connecting Technology and Liberal Education: Theories and Case Studies.” Examining the description of the event on the NERCOMP web site (http://www.nercomp.org) made two things immediately apparent: this was a workshop on a very broad topic, and all of the presenters came from academic rather than technological backgrounds.

The flow of the day went from the most general, with Jo Ellen Parker beginning the proceedings with a discussion of the various theories of liberal education and their impact and influence on institutional technology decisions, to specific case studies offered by faculty from Emerson, Hamilton, Mt. Holyoke and Hampshire Colleges.

Session 1: What’s So “Liberal” About Higher Ed?

Speaker: Jo Ellen Parker, Executive Director, National Institute for Technology and Liberal Education (NITLE)

Jo Ellen Parker’s essay on the same topic can be found on the Academic Commons website at:
http://www.academiccommons.org/commons/essay/parker-whats-so-liberal-about-higher-ed

In her talk, Jo Ellen laid out a framework for thinking about the relationship between liberal education values and issues relating to instructional technology. She noted that:

  • Resistance to technology can be simply resistance, that is, defending important educational commitments from the perceived threat of technology.
  • The discussion of the role of liberal education and technology is often tangled up in conflicting ideas as to what liberal education really is.
  • Regardless of your background, being able to frame discussions of instructional technology initiatives within the language of liberal education can make you a more effective and articulate spokesperson.

Jo Ellen presented four models or theories of liberal education, noting that some are competing and some complementary. In reality, institutions reflect combinations of these theories rather than any one exclusively.

The first theory holds that liberal education is a matter of content-based curriculum: studies liberated from the pressure of immediate application and pursued without immediate practical benefit. This thread has been and remains dominant in most small, elite liberal arts colleges. In this view, the curriculum consists of pure rather than applied disciplines. Applied studies, e.g. accounting, musical performance, or community service (for credit), are not part of a liberal arts curriculum. She noted one example in which language acquisition is given no credit, being considered simply the acquisition of a tool needed to study literature in the foreign language.

The second theory of a liberal education comes from a pedagogical perspective and focuses on the development of intellectual skills over the mastery of content. Defining characteristics of this model are practices, not disciplines: group studies, student presentations, active learning, collaboration, and paper writing rather than test taking. This view of education is supported by research from the psychology of learning and pedagogical research. It is possible that an applied discipline can be taught “liberally.” Nursing, for example, can be taught either liberally or illiberally. If nurses are taught to solve problems and work collaboratively, then they are being taught liberally. If they are required to memorize large bodies of information and assigned content, then they are taught illiberally.

To some, liberal education is about the education of citizens. This approach values the development of literacy, numeracy, scientific and statistical proficiency, history, etc. The curriculum should target what is required to produce good citizens. It tends to value ethics and socially responsible behavior and emphasizes developing the whole person. In this model, faculty will view student life as an educational opportunity, and will value service learning and community service requirements. It encourages closer relationships between students, faculty and staff. There is greater concern about extending access, welcoming more low income students and encouraging the sharing of campus resources with the greater community. This civic focus of liberal education is often based in a religious history.

The final model of liberal education is less philosophical and more economic. It associates liberal education with institutions of a specific type. In a sense, it associates the degree with liberal education: whatever these colleges do, it is liberal education. This view tends to emphasize the physical characteristics of an institution: small size, privately funded, residential. These characteristics supposedly foster the goals of a liberal education, so that any institution that does not share these physical attributes cannot deliver one. People who favor this model view the economic viability of these institutions as critical to the well-being of liberal education.

These competing theories of liberal education tend to “muddy the waters” when it comes to thinking about liberal education and instructional technology. The curricular-centric view of liberal education will regard technology as an extension of the library. The acquisition of new online scholarly resources, data sets, art objects, and the like is highly desirable and should be a priority within the technology budget. Advocates of this approach may not see value in spending technology resources on communications technology or course management systems, for example. Jo Ellen said that these folks have no understanding of Wikipedia! Those holding the curricular-centric perspective fret about the difficulty of locating quality material online and worry that students will be unable to distinguish quality materials from second-rate ones. This often leads to a demand for “literacy programs.” Technology is not valued for its potential to change the nature of teaching and learning but rather for the access it provides to primary materials.

In comparison, the second theory sees technology as a change agent that enables faculty and students to do more and different things together. The focus is on a student-centered view of IT. Here, one finds more emphasis on course/learning management systems, communications tools, group study tools and new media formats. In colleges where this view has currency, IT resources will be spent on multimedia centers, collaborative classroom spaces, and developing faculty technical skills, giving these higher priority than acquiring resources. Critics of this approach are often concerned about the role of faculty and how technology may change it for the worse. Faculty pursuing this type of student-centered learning can be intimidated by the technological fluency of both students and IT staff. They are concerned about the cost in time of acquiring IT skills at the expense of other scholarly and teaching activities. Faculty, confronted with what it means to become a learner again, may resist moving in this direction. Librarians often feel threatened in this environment, as there is more uncertainty as to exactly what their role should be.

The “citizenship” model of liberal education tries to extend institutional resources beyond the campus. This can include tutoring high school students in the local community, using GIS to help local planners, taking on oral history projects with local primary schools and libraries, electronic portfolio projects, and so forth. This view highly values technologies that support both on- and off-campus communication and that make course materials available beyond the institution. Technology in this context is evaluated by its contribution to community.

The “physical” model of liberal education sees IT as a way to overcome some of the limitations of the small size of these institutions and enables the smaller institutions to become competitive with large ones. It hopes to synthesize the virtues of small and the advantages of big. Technology may be seen as a potential cost saver and thus contribute to the economic viability of these smaller entities.

The discussion of liberal education and technology is often intertwined in the discussion of liberal education itself. Decisions about the allocation of technology resources can be most effective when the IT spokesperson has a good understanding of the different competing visions of the liberal arts institution and is able to articulate how various technologies impact these sometimes competing institutional views.

Session 2: Emerging Literacies and the Liberal Arts

Speakers:
David Bogen, Executive Director, Institute for Liberal Arts and Interdisciplinary Studies, Emerson College
Eric Gordon, Assistant Professor, Visual and Media Arts, Emerson College
James Sheldon, Associate Professor, New Media, Emerson College

While Jo Ellen Parker presented four models of liberal arts education and demonstrated how differing models can lead to different technological priorities to support the curriculum, the Emerson team re-framed the discussion in terms of focusing on the technologies and studying how they have changed the ways in which we interact with the world around us. More specifically, David Bogen noted that the process of designing curriculum is an essential foundation of the work of educators. Technology forces us to study not only the changes in content but also changes in the “mode of delivery.”

An important part of the Emerson team’s argument is that new technologies are never without cultural ramifications. They impose constraints in some ways and open new possibilities in others. As such, one must look beyond the purpose and value of technology in the liberal arts per se and study the inter-relatedness of curriculum, technology and pedagogy. While Jo Ellen Parker would argue that clarity about an institution’s vision of itself will help to clarify the technological decisions to complement that vision, the Emerson group would argue that the technology itself can and is changing the essence of the liberal arts and as such, should be placed on the “front burner” of such discussions.

David Bogen uses the concept of “emerging literacies” to refer to the combination of literacies now evolving. Aware of the ambiguity of the term, he described it as a placeholder for that combination of literacies that will ultimately transform teaching and learning. In this context, the deliberateness, traditions and methodical rate of transformation in higher education are not necessarily bad things, as they allow for careful study of the agents that can transform education and the identification of models that may well be inappropriate or counter-productive.

There is a plurality of literacies. Using Wikipedia as a source, David found over 69 different literacies. This is testimony to the elusiveness and ambiguity of the concept at this stage of technological and social transformation. These emerging literacies are not to be confused with the literacy initiatives referred to by Jo Ellen in her description of the content-oriented liberal arts institution. Rather, they include new ways of knowing: information, cultural, visual, media, multi-modal, and scientific literacies are all attempts at describing some social/technological change that creates the need for new skills and expertise.

In closing his part of the presentation, Bogen described three approaches to emerging literacies:

  • “Politics of loss” seeks to document the negative impacts of contemporary technology on the traditional liberal arts institution.
  • “Politics of scaled solutions” represents the force within education that embraces a technological utilitarianism, trying to deliver the greatest good to the greatest number.
  • “Politics of transformation” focuses not on what has been lost, but rather on the creation of a new medium of expression (the integration of visual, multimedia and digital communication) worthy of study in its own right.

David clearly favored the third approach, arguing that it offers the opportunity to study “a whole new semiotics of expression.”

David’s opening remarks were followed by James Sheldon, Associate Professor of New Media at Emerson. James’s presentation featured a “Digital Culture” online first year program developed at Emerson that was team-taught with David Bogen and included the production of electronic portfolios.

James observed that students have changed since 1996. Then, students were proficient in oral and written expression, but their ability to use a computer, navigate the web, and incorporate technology into their work was very limited. In comparison, today's students are comfortable authoring web pages, using image manipulation and editing programs like Photoshop, communicating online, and so on. Every student knows how to use Instant Messenger.

In this class everything was done electronically. Every student needed to produce an electronic portfolio and become proficient in making visual documents. While the students knew how to create material digitally, James noted, they had no idea of visual history. They did not understand how we have arrived at our current state and what the development of technology has allowed.

James then gave a slide presentation. The first slide showed two images: an early photograph and a painting clearly influenced by the photograph. The pair of images showed how technology changed the way in which an image could be produced and how the technology subsequently influenced artists. Another slide provided an example of how the slow shutter speed in a photograph influenced a landscape painting. James provided an example of early motion studies (a famous animation of a trotting horse) with the idea that to create motion, one had first to stop motion. He then described the influence of color photography in the late 1930s and 1940s and compared images from that period to contemporary images that could only be produced with today's technology, using as an example the image of a bullet traveling through an apple. All of his examples reinforced his contention that not only is contemporary art influenced by new media, but new media can remediate older media, absorbing it and minimizing discontinuities. That is, new media remains dependent on older media in both acknowledged and unacknowledged ways.

After the slide presentation, James talked about a Davis Foundation grant that Emerson had received to develop online learning communities. A key component of this project was the development of electronic portfolios. These portfolios raised a host of interesting questions: How does the instructor assess students' multimedia work? How does the student see the path of his or her development? What does it mean to be working in digital media? With the electronic portfolio, all of a student's work is in one place, facilitating faculty evaluation, helping students learn from each other, and encouraging students to think of themselves as artists.

The third presenter from Emerson was Eric Gordon, Assistant Professor of Visual and Media Arts. He demonstrated MediaBASE, a tool that he and colleagues at USC’s Institute for Media Literacy developed for creating and working with media objects. The real conceptual innovation is that this is a tool for teaching/learning about media rather than teaching/learning with media.

MediaBASE is a platform for the development of media compositions that enables users to transform, manipulate and arrange media objects according to the intent of the creator without changing the state of the original object. MediaBASE was described as a social software package for use both within archives and classrooms.

The object of MediaBASE is to enable students developing electronic portfolios to include a variety of manipulated images while maintaining the integrity of the original images, the metadata tagging of all objects, and the ability to search contextually from within the authoring environment. It is an attempt to give creators of multimedia functionality comparable to what is currently available to authors of text.
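
To picture the non-destructive model just described, here is a minimal sketch of how such a media object might be represented in code, with the original asset kept immutable and edits stored as a separate, ordered list of operations alongside searchable tags. The class and field names are hypothetical; this illustrates the general design idea, not MediaBASE's actual data model or API.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass(frozen=True)
class OriginalAsset:
    """The source media object; frozen so its state can never be altered."""
    asset_id: str
    uri: str          # location of the untouched master file
    metadata: tuple   # e.g. (("creator", "unknown"), ("year", "1972"))

@dataclass
class Composition:
    """A student's composition: references the original plus an edit history."""
    original: OriginalAsset
    operations: List[Dict] = field(default_factory=list)  # ordered, replayable edits
    tags: List[str] = field(default_factory=list)          # supports contextual search

    def apply(self, operation: Dict) -> None:
        # Record the transformation (crop, caption, juxtapose, ...) instead of
        # rewriting the master file, so the original's integrity is preserved.
        self.operations.append(operation)

def search(compositions: List[Composition], tag: str) -> List[Composition]:
    """Contextual search within the authoring environment, by tag."""
    return [c for c in compositions if tag in c.tags]

# Usage: annotate a hypothetical archival photograph without touching the master.
master = OriginalAsset("img-001", "archive/sample_photo.tif",
                       (("creator", "unknown"), ("year", "1972")))
comp = Composition(master, tags=["photorealism", "week3"])
comp.apply({"op": "crop", "box": [0, 0, 800, 600]})
comp.apply({"op": "caption", "text": "Study of light and shadow"})
print(len(search([comp], "photorealism")))  # -> 1; the master file is unchanged
```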

Overall, the Emerson presentation offered a thoughtful assessment of the current state of the liberal arts curriculum in light of sweeping technological change, an argument for contextualizing current developments within a historical understanding of the relationship between changing tools (technology) and creativity, and an exploration of a tool developed to further the articulation of concepts needed to encompass these changes within the educational lexicon.

Session 3: A Different Mission, A Different Method: Assessment of Liberal Arts Education

Speaker: Dan Chambliss, Eugene M. Tobin Distinguished Professor, Department of Sociology, Hamilton College

Liberal arts colleges do not like assessment, period! Faculty dislike assessment more than administrators do, but by and large, in the liberal arts environment, assessment is seen as outside interference. Some opponents think that assessment is essentially a business exercise, its rationale underpinned by a political antipathy to left-wing intellectuals. Further, many examples of assessment are not intellectually rigorous, and faculty see them as a “lightweight” activity.

There is no obvious reason why liberal arts colleges should like assessment. They are doing pretty well already and there is no correlation between doing assessment and being a good college. Swarthmore, for example, does assessment because it has to, but Swarthmore does very well without it. It has a huge endowment, and people are willing to pay the tuition costs to send their kids there. Swarthmore students clearly value their school and their education, donating and bequeathing cash and assets as alumni. Dan noted that very few businesses can claim that level of customer loyalty. So, it is not clear that Swarthmore needs assessment; its survivability without assessment is excellent.

Some of the hostility to assessment in the liberal arts relates to its origins in business/efficiency models that do not transfer well to colleges. The usual assessment drill includes:

  • state clear mission, goals, objectives
  • state in advance what students should get out of a course
  • provide clear links between the goals and the means (every course, program, etc. needs to explicitly state goals and actions to achieve them)
  • motivation is never considered a variable

The entire model has a “throughput” mentality – students are fed in at one end of the process and come out the other end with the requisite skills, thus fulfilling the mission, goals and objectives. While this works well for certain fields, it does not make a lot of sense for liberal education.

Seven years ago, Dan reported, Hamilton was funded by the Mellon Foundation to study the assessment of student learning outcomes at a liberal arts college. A panel study begun in the fall of 2001 drew a random sample of 100 students from the class of 2001 and has tracked them ever since. Each student is interviewed every year, and grades are tracked.

The findings, discussed below, provide some interesting insights into the uniqueness of a liberal education. Alumni in this survey responded that the specific knowledge they acquired as undergraduates was virtually irrelevant. This is not job training. Unlike the U.S. military, the liberal arts institution does not know where all of its graduates are headed. We like the fact that our graduates will do all sorts of amazing things with a huge variety of positive outcomes. We are looking for long-term results and lifelong learning. Other goals are uncertain to the point of being unknowable. While college presidents may talk about creating “citizens” or great thinkers, the goals are not mutually agreed upon. Nor is it clear that what faculty do every day has any impact on whatever goals may exist.

As a result, Dan argued, we must have a different approach to assessment for liberal education. We want to have minimal interference and not expect faculty and students to change what they are doing to accommodate the assessment. The assessment should represent methodologically sound social science and be multi-modal, since it is not clear exactly what we are looking for. The assessment must be useful – we should learn something that will help people with their work. Finally, an appropriate assessment should be true to the mission of the liberal education institution; it should be open to possibilities and serendipity.

The Study:

1. Writing samples were collected from the last year of high school through the senior year of college. Some 1,100 student papers were selected, and a team of outside evaluators assessed the writing to see whether students' writing actually improved.

2. The outside evaluators were able to rank order the writing of students from high school through the senior year, although they could not differentiate between junior and senior level writing (the sample excluded senior theses).

3. The study also revealed that first and second year advising was not as good as junior and senior advising. Freshman advising is not the same job as senior advising. Freshman advising is more about course selection and is heavily influenced by the academic calendar, more so than the relationship between the student and advisor. It is the interaction of the student’s initial interest and the offering and scheduling of courses that is the most critical relationship in first year advising.

4. Friendships and friendship networks are crucial. The people a student meets in the first semester have a big influence on the student's academic development.

5. All life is local—if students have 2 or 3 good friends and 1 or 2 good professors, students are in good shape. Students can like almost nobody on campus as long as they have a few good friends and professors. Most faculty can, as Dan described, “be awful,” as long as students take courses from the “good” teachers during the freshman year. A very few professors have a huge impact on a large number of students. At Hamilton, he said, 12 professors out of 180 would do the trick!

6. Happiness is a legitimate outcome of college—students after graduation tend to cherish their undergraduate experience, although in the assessment realm this is not considered at all. Nonetheless, students, family and parents are very aware of the value of the undergraduate experience and the importance of feeling good about life. These are important ingredients of success. Confidence, optimism and a sense of well-being are very good things with which to leave college.

Some lessons from assessment:

  • Because externalities abound, it is a mistake to look at only a single department. Assessment should not be department-based.
  • The student’s point of view is crucial:
    • They have no idea as to what has gone on in the past, so “innovative” curriculum does not register.
    • Small classes are great for the people who are in them, but not so wonderful for the people who cannot get into them, and take larger ones instead. Classes may be small because they have unpopular topics, pre-requisites, etc.
  • For a residential liberal arts college, what is sold is the “uniqueness” of the experience, the advantage of not being in a mass market.

In closing, Dan emphasized that small colleges are not businesses in the usual sense of a business; as a result, traditional assessment methods must be modified to fit the needs of the liberal arts institution. When done appropriately, assessment can provide useful insights and information to further the objectives of the liberal education.

 

Session 4: The Liberal Arts College and Technology: Who Captures Whom?

Speakers: Bryan Alexander, Director for Emerging Technologies, National Institute for Technology and Liberal Education (NITLE)
Donald Cotter, Associate Professor of Chemistry, Mt. Holyoke College
James Wald, Associate Professor of History, Hampshire College

Bryan Alexander's study of emerging technologies focuses on gaming, mobile devices, and Web 2.0 (which he described as a mixture of micro-content and social software). There are similarities between some characteristics of liberal education and emerging technologies: both are increasingly trans-disciplinary, and both require critical thinking.

In a notion similar to David Bogen's, Bryan argued that while we can speak about teaching with technology, we can develop a better understanding of the process if we study the technology itself using the traditional methodologies of the liberal arts. Thus, it is possible to apply the intellectual heritage of the liberal arts to technology. Bryan offered Robinson Crusoe as an example of someone who has no need for a liberal education but rather for specific technical skills; such informatic needs are often highly individualistic.

The liberal arts approach to technology itself tends to be a collectivist one. Echoing one of Jo Ellen Parker's themes, Bryan noted that if liberal education has a civic engagement thread, then technology is an integral part of the process. He also noted a new wrinkle: students are now both producers and consumers of technology.

Donald Cotter then described having his students engage in the XML tagging of source documents in the teaching of the history of science. Donald said that one of his motivations is to incorporate the work of professional historians of science into his course, to emphasize how technology makes us behave as cultural beings, and to provide a context within which to study both chemistry and the history of chemistry. In the same way that James Sheldon addressed the issue of students being adept with technology without understanding how we arrived at our present state, Donald Cotter felt a similar need to ground his chemistry students in the history of chemistry.

Fortuitously for Donald, Mt. Holyoke has played a substantial role in the development of science, and particularly in the development of women scientists. Science has been taught there to and by women for over 100 years, and its faculty has included a number of pre-eminent women chemists. Mt. Holyoke has rich resources in terms of primary materials related to the history of science. Using XML tags on these original documents enables the students to illuminate and annotate this content.

Refining his course over time, Donald now asks his students to develop a project based on what they find in the College's archives. They cannot begin the project until they know something about the history of chemistry. The students learn the historical context in which to understand science, and technical sessions (led by the College's archivists) teach them the mechanics of XML tagging. The technical sessions are conducted during a weekly lab period.

Donald has several objectives in mind with this project. He wants the students to develop a taste for doing primary source work and to appreciate that this is a legitimate intellectual activity. He noted that students struggle somewhat with the nebulousness of the activity (which they have never done before). He finds that advising them to tell the same story twice, first in standard text and then in the markup of the source document, helps to clarify the objectives of this activity. In closing, he noted that while the outcome of this initiative remains uncertain, he is pleased with the process.
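
For readers unfamiliar with what marking up a source document might look like, here is a small, hypothetical illustration using Python's standard library. The passage and the element names (letter, person, compound, date) are invented for the example; they are not the actual schema or archival text used in Cotter's course.

```python
import xml.etree.ElementTree as ET

# A transcribed sentence from a fictional archival letter, marked up so that
# people, chemical compounds, and dates become machine-readable annotations.
MARKUP = """
<letter archive="college-archives" item="hypothetical-001">
  <p>
    <date when="1902-03-14">March 14, 1902</date>:
    <person>Prof. E. Carr</person> reports that the analysis of
    <compound formula="NaCl">common salt</compound> is complete and that the
    results will be presented to the chemistry seminar next week.
  </p>
</letter>
"""

root = ET.fromstring(MARKUP)

# Once tagged, the document can be queried like data rather than re-read as prose.
for person in root.iter("person"):
    print("Person mentioned:", person.text)
for compound in root.iter("compound"):
    print("Compound:", compound.text, "| formula:", compound.get("formula"))
```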

The third participant in this panel was James Wald from Hampshire College. Not only is James an associate professor of History at Hampshire College, he is also the Director of the Center for the Book. In his opening remarks, he noted that he is a historian, neither a technologist nor a scientist. His research interests include the history of the book and the evolution of the book vis-à-vis technology. As he became more proficient with technology, expanding from email and word processing to the development of course web sites, he became increasingly interested in technology as it related to his research. Continuing a common thread articulated by David Bogen and Bryan Alexander, he, too, argued that one should possess not merely the technological skills but also an understanding of the underlying concepts of communication and the history of communication.

Noting that writing is a technology (even though we take it for granted and do not think of it as such), James designed his course to increase his students' understanding of the evolution of writing and technology. He invited a book artist to the class, who taught the students how to create a book by hand, using techniques from the 15th and 16th centuries. Using their own writing, they would produce a handmade book. At the same time, in a creative juxtaposition of technologies, they were required to create a web page that discussed and documented the creation of the book, covering topics such as authorship and binding. This exercise engaged the students in considering the different aspects and issues of presenting text on paper and on a computer screen. James Wald is hopeful that students taking this course now have a better understanding of how writing, printing and computers are not competing and different phenomena but rather the same act of expression using different tools.

In closing, Wald remarked that the role of the librarian in the Middle Ages (and until fairly recently, one might add) was primarily to preserve information. Now there is too much information and the focus has shifted to applying search strategies to find relevant information. He argued that the technology revolution helps one to understand the revolutions that came before it, such as the print revolution. There is benefit to studying what was written and predicted about the development of the printed book and its impact on society to see what was accurate and inaccurate. This can provide useful contextual background with which to assess what the current technological revolution will mean to us.

Session 5: Introducing The Academic Commons

Speaker: Mike Roy, Director of Academic Computing & Digital Library Projects,
Wesleyan University

The Academic Commons (http://www.academiccommons.org) is a recently launched web publication and community that brings together faculty, technologists, librarians, and other stakeholders in the academic enterprise to foster collaboration, and to critically examine the complex relationship of new technology and liberal arts education. This session provided a brief introduction to the Academic Commons, and highlighted ways in which NERCOMP members can both benefit from and contribute to this initiative.

Conclusion:

This was one of the best NERCOMP workshops that I have attended. My interest in the interaction of technology and pedagogy was well met by presentations combining strategic thinking about what constitutes and shapes a liberal arts education and examples of technology being used in the classroom in a traditionally “liberal” manner.

Bryan Alexander stated the need to study technology in an academic manner and the case studies presented by faculty reflected this approach. Both NITLE presenters effectively set the stage for the presentations that followed.

Dan Chambliss from Hamilton provided very useful insights based on data from a Mellon-funded survey and set the findings within the context of why liberal arts institutions tend to be dismissive of traditional assessment techniques coming from the business sector. Of particular interest was the survey finding that, in retrospect, alumni considered discipline-specific learning to be relatively unimportant compared to the entire undergraduate experience. This finding seemed particularly relevant when set against Jo Ellen Parker's contention that the primary model of a liberal arts institution is “content-based.”

The Emerson presentation was very thoughtful and provided excellent examples of faculty grappling with these issues within the context of “teaching media.” Complementing the morning presentation by the Emerson faculty, the Mt. Holyoke and Hampshire faculty reinforced the need for a contextual understanding of technology and how students may be involved in projects that combine the acquisition of new technical skills with a greater understanding of the evolution of such tools and their societal impact.

All in all, it was a very useful event, with high quality presentations and a strong intellectual bent. I suspect that SIGs such as this, emphasizing pedagogical and broader institutional considerations, will become increasingly important and valuable in the future.

Robert Bechtle Retrospective & the Pachyderm Project

by Michael Roy, Middlebury College


The San Francisco Museum of Modern Art's (http://www.sfmoma.org/) retrospective on the work of Robert Bechtle explores Bechtle's life and work through videos of the artist working in his studio, as well as photographs, letters, newspaper clippings, and other primary source materials from his personal archive. A gallery of artworks, zoom-enabled for closer inspection, shows highlights from the artist's 40-year career. Accompanying the show is a nifty web application that provides access to a wide range of multimedia materials. This application serves as a preview of some of the new features that will be available in version 2.0 of the Pachyderm Project (http://www.nmc.org/pachyderm/index.shtml), managed by the NMC (http://www.nmc.org), which brings together software development teams and digital library experts from six NMC universities with counterparts from five major museums to create a new, open source authoring environment for creators of web-based and multimedia learning experiences. Pachyderm should be of particular interest to small schools that do not have professional multimedia development shops.

Technology & the Pseudo-Intimacy of the Classroom: an interview with Jerry Graff

by Michael Roy, Middlebury College

 

Gerald Graff (http://tigger.uic.edu/~ggraff/) is a professor of English at the University of Illinois at Chicago. His recent work has centered on how, for most students and members of the general population, academia in general and literary studies in particular are obscure and opaque, a theme taken up in his Clueless in Academe: How Schooling Obscures the Life of the Mind (Yale University Press, April 2003).

Academic Commons caught up with Graff to explore his thoughts about technology and the future of liberal education.

Academic Commons: Is our country’s commitment to the ideals of liberal education really in crisis?
Graff: Probably, but one constant seems to survive the crises of every generation: a small percentage of college students “get” the intellectual culture of academia and do well in college while the majority remain more or less oblivious to that culture and pass through it largely unchanged. Changing these conditions, creating a truly democratic higher education system that liberally educates more than a small minority, has always been and still is the main challenge of liberal education.

Much has been made of the neo-Millennials (also known as the Net Generation) who are presently enrolled on our campuses, and how they learn differently than past generations. Do you see this description as accurate or useful when thinking about how educators need to change their teaching strategies?
I have always been skeptical of claims about learning differences between generations. Formerly, it was the ‘60s that purportedly made the adolescent mind non-linear, more visual, and so forth. Now pixels and megabytes supposedly produce a new kind of non-linear consciousness, or one wired into simultaneity, or whatever.

How is technology helping higher education?
Probably only in rather narrowly technical ways, so far, e.g. making registration processes more efficient. Communication across campus has been made much easier, but this benefit may have been negated by the overload problem: we now get information much more readily, but it comes in such excessive volume that the chances of our recognizing the information that is really relevant and useful to us are correspondingly lessened.

How is technology hurting higher education?
Aside from the overload problem just mentioned, I think there has been a failure to recognize and exploit the potential that technology offers for improving and transforming day-to-day instruction.

Let me give one example.

I have long thought that there is something infantilizing about the standard classroom situation, where the very face-to-face intimacy that is so valued actually encourages sloppy and imprecise habits of communication. That is, the intimate classroom is very different from–and therefore poor training for–the most powerful kinds of real-world communication, where we are constantly trying to reach and influence audiences we do not know and will probably never meet. We should be using online technologies to go beyond the cozy pseudo-intimacy of the classroom, to put students in situations that force them to communicate at a distance and therefore learn the more demanding rhetorical habits of constructing and reaching an anonymous audience. We have begun to do this to some extent, but our habit of idealizing presence and “being there,” the face-to-face encounter between teachers and students, blinds us to the educational advantages of the very impersonality and distancing of online communication. Indeed, online communication makes it possible for schools and colleges to create real intellectual communities rather than the fragmented and disconnected simulation of such communities that “the classroom” produces.

Can you point to examples of such communities?
I meant possible intellectual communities rather than actually existing ones. I do not know any campus in America that has what I would call a real intellectual community, online or otherwise, in the sense of everyone–or almost everyone–on campus engaged in a continuous conversation about ideas all the time (as occurred for a brief time during the campus protest era in the ‘60s and early ‘70s). I think online technology makes something like such a community of discussion possible even without a crisis like the Vietnam War, but I do not know of any campus that has come close to creating such a potential community. Of course there may be many things going on that I do not know about.

How do you use technology in your own teaching?
I love using e-mail for writing instruction. I can get right inside my students’ sentences and paragraphs, stop them and ask them “can you see a problem with this phrase?” or “can you think of an alternative to this formulation?” or “please improve on this sentence,” with an immediacy and turn-around speed that handing papers back with comments cannot begin to match.

I have also used class listservs, which seem to me to have great potential. The big benefit for me is the creation of a common space of class discussion that everyone can (and in my case must) contribute to, a space that prolongs the in-class discussion and enables us to pursue issues that had gotten short shrift in class. I wish these listserv discussions were more controlled and focused than they have been in my classes, and I think they can be when and if I learn better how to structure them.

One interesting thing I have learned from listservs is that most students see electronic communication as an extension of informal oral discourse, whereas I see it (when used in a class anyway) as properly an extension of formal writing. When I chastised one class for writing sloppy, prolix, and often unreadable blather on the class listserv, they objected that I was trying to shut down the liberating spontaneity and informality that is inherent in electronic media. I think this was a rationalization, but one that has to be anticipated.

In recent years it has become increasingly easy for non-technical people to produce extravagant multimedia productions on their desktop computers. Certain faculty mourn this as the final nail in the coffin of literacy and literature, while others celebrate the possibilities afforded by this new multimedia literacy. Who is right?
Neither group seems worth taking seriously. I do not mean to denigrate multimedia assignments or the way in which they can produce new kinds of learning. I just do not accept the claim that such multimedia creativity is either the final nail in the literacy coffin or a revolutionary breakthrough. If I had to choose, though, I would be more sympathetic to the latter view, or at least be interested in hearing more about multimedia assignments. I am not technologically adept enough to have tried any myself.

Open Access to Scholarship: An Interview with Ray English

by Michael Roy, Middlebury College

 

What is the open access movement?
Open access is online access to scholarly research that’s free of any charge to libraries or to end-users, and also free of most copyright and licensing restrictions. In other words, it’s scholarly research that is openly available on the Internet. Open access primarily relates to the scholarly research journal literature–works that have no royalties and that authors freely give away for publication without any expectation of monetary reward.

The open access movement is international in scope, and includes faculty and other researchers, librarians, IT specialists, and publishers. There has been especially strong interest from faculty in scientific fields, but open access applies to all disciplines. The movement has gained great impetus in recent years through proclamations on open access, endorsements from major research funding agencies, the advent of new major open access publishers, and through the growth of author self-archiving and author control of copyright.

Are there different forms of open access?
Open access journals and author self-archiving are the two fundamental strategies of the open access movement. Open access journals make their full content available on the Internet without access restrictions. They cover publication costs through various business models, but what they have in common is that they generate revenue and other support prior to publication. Open access journals are generally peer-reviewed and they are, by definition, published online in digital form, though in some instances they may also produce print versions.

Author self-archiving involves an author making his or her work openly accessible on the Internet. That could be on a personal website, but a preferable approach is a digital repository maintained by a university or a subject repository for a discipline. I should point out that author self-archiving is fully in harmony both with copyright and with the peer review process. It involves the author retaining the right to make an article openly accessible. Authors clearly have that right for their preprints (the version that is first submitted to a journal), but they can also retain that right for postprints (the version that has undergone peer review and editing).

Do journals generally allow authors to archive their work in that way?
A very large percentage of both commercial and non-profit journals do allow authors to make postprints of their works openly accessible in institutional or disciplinary archives. There tend to be more restrictions on the final published versions (usually the final PDF file), but many publishers allow that as well. An interesting site that keeps track of that is SHERPA in the United Kingdom.

Why is open access important for higher education?
Open access is one strategy – and actually the most successful strategy so far – for addressing dysfunctions in the system of scholarly communication. That system is in serious trouble. High rates of price increase for scholarly journals (particularly in scientific fields), stagnant library budgets, journal cancellations, declining library monograph acquisitions, university presses in serious economic trouble, and increasing corporate control of journal publishing by a small number of international conglomerates that have grown in size through repeated mergers and acquisitions – those are all symptoms of the problem. Scholars have lost control of a system that was meant to serve their needs; more importantly, they are also losing access to research. Open access has extraordinary potential for overcoming the fundamental problem of access to scholarship. It is a means of reasserting control by the academy over the scholarship that it produces and of making that scholarship openly available to everyone – at any time and from virtually any place on the globe.

Why does open access matter to liberal arts colleges in particular?
It is especially important for liberal arts colleges because of the access issue. Liberal arts college libraries have smaller budgets than research university libraries. While even the major research libraries cannot afford all of the journals that they need, the lack of access is an even bigger problem in the liberal arts college realm. Faculty at many liberal arts colleges are expected to be active researchers, and independent study is also a hallmark of a liberal arts undergraduate education. So the lack of access to journal literature can be even more problematic for liberal arts colleges than it is for the research universities.

Are there other benefits to open access?
There are many benefits, but the main one that I would point out relates to the growing body of research that demonstrates how open access increases research impact. A number of studies have shown that articles that are made openly accessible have a research impact that is several times larger than that of articles that are not openly accessible. Authors will get larger readership and more citations to their work if they make it openly accessible.

And what about disadvantages?
Well, one of the main objections to open access journals relates to the fact that most of them are new and don’t have the prestige factor of older established journals. So, younger faculty who are working for tenure may not want to publish in open access journals, particularly if they can publish in traditional subscription journals that are high in prestige and impact. That’s not as much of a concern for tenured faculty, though, and some open access journals are becoming especially successful and prestigious. Titles published by the Public Library of Science are a great example of that. Prestige and tenure considerations don’t come into play for self-archiving. All authors can exert control over copyright and can make their work openly accessible in a repository, and that will definitely benefit both themselves and the research process generally.

What about the business viability of open access journals?
As I mentioned, there are a variety of business models that support open access publishing. They include institutional sponsorship, charging author fees, and generating revenue from advertising or related services. Business models will differ, depending upon the discipline and the particular circumstances of a journal. In the sciences, where there is a tradition of grant support, charging author fees is very feasible. Both the Public Library of Science (the most prominent nonprofit open access publisher) and BioMed Central (the most prominent commercial open access publisher) are great examples of that. In humanities fields, by contrast, there is very little grant support for research, but publishing is also less costly, so open access there is likely to be fostered primarily through institutional sponsorship. Open access publishing is inherently less expensive than traditional subscription or print publishing. There are virtually no distribution costs and no costs related to maintaining subscriptions, licensing, or IP access control. There are also a number of open source publishing software systems that support the creation of new open access journals. I'm amazed by how many new peer-reviewed open access journals are appearing all the time. One way to get a sense of that is to go from time to time to the Directory of Open Access Journals. As of right now there are almost 2,000 titles listed. Just six months ago there were about 1,450.

Are there over 500 new titles in the last six months, or are there 1,000 new titles and 500 titles that went out of business? Should faculty who don't have tenure worry about publishing in journals that might no longer exist when they come up for tenure?
I’m not aware of any conclusive data on the failure rate for open access journals (or new subscription journals, for that matter). A new study that will be published in January indicates that about 9% of the titles listed in the Directory of Open Access Journals have published no articles since 2003. Those titles are still available online, so it’s hard to say whether the journals have actually ceased. In addition, a small percentage of titles in the directory (about 3%) were inaccessible during the study. The reasons for those titles being offline are not clear; some may have failed, but some may just be inaccessible temporarily. A significant percentage of open access journals are from well-established publishers, and some individual titles have been in existence for a decade or longer. At the same time, a large majority of open access titles are from smaller, more independent contexts – they are produced by non-profit organizations, academic departments, or leaders in a field. Since they are relatively new, their viability isn’t proven yet. So it could be advantageous for untenured faculty to publish in some open access journals, but that may not be the case for a lot of open access titles.

What’s the hottest current issue related to open access?
I think it’s the issue of taxpayer-funded research. Both in this country and abroad there is increasing interest in making publicly funded scientific research openly accessible. We saw the beginnings of that with the National Institutes of Health policy that was instituted last year, and I think we will soon see a broad national debate about the advisability of this for all U.S. government agencies. The United Kingdom is moving toward a comprehensive policy of mandating open access to all government-funded research.

What is your role in the open access movement?
I have been a member of the steering committee of SPARC (the Scholarly Publishing and Academic Resources Coalition) since its inception. SPARC, which is a coalition of academic and research libraries, has been a prominent advocate for open access. I have also played a leading role in the scholarly communications program of the Association of College & Research Libraries. I chaired a task force that recommended the ACRL scholarly communications initiative and I have been chair of the ACRL Scholarly Communications Committee since it was established. Being involved with both SPARC and ACRL has put me in the middle of a number of these issues for the past several years.

How does open access fit into your role as library director at Oberlin?
We have been doing ongoing work at the campus level to build faculty awareness of scholarly communications issues and also to support open access in concrete ways. We have taken out institutional memberships to major open access journals and we’ve encouraged faculty to publish in open access journals in instances where that made sense for them. I have also been involved as a steering committee member with the creation of a statewide institutional repository that OhioLINK is developing. When that repository system is implemented we will be working very actively with our faculty on the question of author control of copyright and self-archiving.

What are some concrete things that faculty, librarians, and other stakeholders can do to help?
Faculty have great power in the system of scholarly communication (as editors, reviewers, and authors), so they are in the best position to bring about change. They can assert control over their copyright, archive their research openly, and publish in open access journals, among other things. The role of librarians and IT staff necessarily needs to be more educational in nature. They can become informed about these issues and then work with faculty and other researchers to bring about fundamental change. There is a good summary of a lot of these issues, along with concrete suggestions for what faculty, librarians, and others can do, in the ACRL Scholarly Communications Tool Kit.

The Create Change website is another great resource.

Other than Academic Commons, what is your personal favorite open access publication?
My favorite one, for obvious professional reasons, is D-Lib Magazine. It publishes a variety of content – articles, news, commentary, conference reports – related to the development of digital libraries. They’ve had a number of important pieces on open access and scholarly communications issues.
