Putting Study Abroad on the Map
by Jeff Howarth, Middlebury College
“Each year about 60% of the junior class at Middlebury studies abroad in more than 40 countries at more than 90 different programs and universities.”
When I read this sentence on the Middlebury College Web site, I thought to myself: that’s a dataset that my students ought to map. I knew that there had to be a dataset behind that sentence, something that the author could summarize by counting the number of different countries, programs and students. But I imagined this dataset could show us much more if we represented it spatially and visually rather than just verbally. I didn’t know exactly what it might show, but I knew that my cartography students could figure that out as long as I taught them the technical skills for handling the data and the general concepts for visualizing multivariate data. What they decided to make with this knowledge was up to them.
Increasingly, teaching cartography involves training students on specific software platforms while communicating more general principles of the craft. This creates a need for instructional materials that connect technical skills with thematic concepts while allowing students to creatively achieve the broader educational objectives of a liberal education. As an instructor of cartography at Middlebury College, I have largely followed a project-based learning approach focused on the process of cartographic design. My learning objectives seek to link techniques and concepts in an extended creative procedure that involves data management, problem setting, problem solving and reflection. At different steps along the way, the students must make their own design decisions, applying the available means to their chosen ends. Here, I describe the case of mapping the study abroad program in order to illustrate the general approach of integrating technical and conceptual teaching through design problems.
The Project
I gave the students a very simple prompt: Design a layout for the Web that explores how the Study Abroad Program connects Middlebury students to the rest of the world. The students also received a spreadsheet, supplied by Stacey Thebodo of the Study Abroad Program, listing all students who had studied abroad between 2006 and 2010. In addition, the students received some geographic data, including country boundaries, in a common GIS format. Like all the projects in the course, this assignment provided students with an opportunity to apply topical and theoretical concepts that had been introduced in lecture and readings. For that week, the topic concerned spatial representations of multivariate data, based largely on Jacques Bertin’s theory of graphics.1 The three learning objectives of this assignment each connected theory to technique at a different step of the creative procedure:
1. demonstrate data management skills for cartography, specifically how to transform a list of nominal data into a statistical map;
2. identify the components of information to visualize in order to satisfy a purpose for an intended audience;
3. solve the problem given real-world constraints (available data, software, time and knowledge).
Data Management
The dataset came packaged as a spreadsheet with columns for semester, year, student name, major, minor, gender, program name, city, and country. The first problem was to reformat this dataset into something that could be mapped, which could be completed with two technical operations–linking the country or city names to geographic coordinates that could be plotted on a map and transforming nominal data into quantitative data.
The students were familiar with both the purpose and procedure of the first operation as it had been introduced in a previous assignment. They knew that descriptions of locations in an attribute table, like country names, could be joined to a separate file with corresponding geographic coordinates of each location in order to plot them on a map. But that alone would not get them much closer to visualizing the dataset, as they would wind up with a lot of overlapping geographic features, one for every row in the database. It would be far preferable to format the dataset so that each row represented a different geographic feature (e.g. country) and each feature had attributes like the total number of students or the total number of programs. Then the students could make a map that showed spatial variation in these quantities.
To do this, the students needed to transform nominal data into quantitative data, which was a new problem. It paired a new technical procedure with a theoretical concept from that week’s lecture. Technically, it involved using spreadsheet functions to ‘pivot’ data, summarizing instances of one category by instances of another category (e.g. counting the number of students per country). Conceptually, however, it involved defining the core theme that the students wanted to map, or what Bertin called the ‘invariant’ of information: the base concept that does not vary across spatial, temporal or thematic dimensions and by its invariability allows us to recognize the components of information that do vary. And this conceptual side of the problem made the task a bit more difficult than simply repeating the technical steps that I had demonstrated for ‘pivoting’ data.
The intuitive unit of the study abroad dataset was ‘student who studies abroad,’ but the dataset did not necessarily come structured in a way that let us map this. It was essentially organized by semester: for every semester between 2006 and 2010, it recorded each student studying abroad. This meant that if a student studied abroad for an entire year (two semesters), they would be listed in the dataset twice, and simply counting the number of students during the pivot operation would introduce error. There were a number of possible fixes for this, but they all required students to think about balancing what they could do given the dataset against what they should do to achieve their purpose and help the reader interpret their map.
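For readers who want to see these two operations concretely, here is a minimal sketch in Python with the pandas library. It is an illustration, not the workflow the students used (they worked in spreadsheet and GIS software); the file names and column names are hypothetical stand-ins, and counting each student once per year is just one of the possible fixes mentioned above.

```python
import pandas as pd

# Load the roster; columns are stand-ins for the spreadsheet described
# above (semester, year, student, major, minor, gender, program, city, country).
roster = pd.read_csv("study_abroad_2006_2010.csv")

# A student abroad for a full year appears once per semester. One possible
# fix: count each student only once per year.
unique_students = roster.drop_duplicates(subset=["student", "year"])

# 'Pivot' nominal data into quantities: the number of students per country.
students_per_country = (
    unique_students.groupby("country").size().reset_index(name="num_students")
)

# Join the counts to geographic coordinates (hypothetical file) so that
# each row is one mappable feature rather than one student-semester.
coords = pd.read_csv("country_centroids.csv")  # columns: country, lat, lon
mappable = students_per_country.merge(coords, on="country", how="left")
print(mappable.head())
```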
Setting the Problem
Once the students had seen the dataset and were shown how they could manipulate it so that it could be mapped, their next problem was to decide what they wanted to show on their map. In general, the students had to consider how to apply the means available to them (the dataset and their technical skills) to one or more ends, but these ends–the goals they sought to achieve by making their map–were decisions that they had to make on their own.
Throughout the course, I asked the students to consider their audience, media and theme when deciding what kind of map they would make. What kinds of questions would the people using this map with this kind of media want to answer about this theme? In this case, what would visitors to the study abroad Web site want to know about the program that a map could help them understand?
This tested students’ understanding of how relationships between variables help a reader make inferences from a graphic. This, of course, is the underlying principle of simple graphics, like scatterplots and charts, but in this case the students had more than just two variables that they could let the reader visually compare. The dataset included attributes for region, country, program, year, semester, gender and major. In addition, the students could generate new attributes from these, such as changes in the number of students over time. What combination of variables would allow the reader to answer questions they might have? Or better yet, what might provide answers to questions that the reader might not even have thought of?
Problem Solving
As students begin the process of making a map layout, the workflow becomes less linear, as they must coordinate many interacting elements. During this phase, students will not be able to work through the problem one step at a time, but rather must shift into a mode of reflective action, as they evaluate how each decision interacts with other decisions that they have made or will make, constantly adapting these pieces of the design to improve the quality of the whole.2 Their work during this phase thus reflects conceptual understanding at two levels: the individual components of the map and their interaction as a whole.
In this case, students demonstrated their comprehension of lower-level concepts in two ways. First, the students needed to choose one or more visual variables (e.g. shape, size, hue, value, texture) to represent each component of the dataset, evidencing both their conceptual understanding of Bertin’s theory of visual variables and their technical ability to implement these concepts with graphic design software. Second, the students needed to provide geographic context for the symbolized data by making a base layer. This evidenced their conceptual understanding of cartographic principles, such as projection and generalization, and their technical ability to implement these concepts.
As students implement these lower-level concepts and begin to produce a design, they confront conceptual problems of design that result from the interaction of their lower-level decisions. These include concepts like figure-ground, contrast, and balance, as well as knowing when to use an inset or when to use small multiples because a single graphic is simply trying to say too much. These concepts cannot be mastered by following simple rules; rather, they mature through thoughtful reflection during the process of design.
Reflection
In addition to the map layout itself, I also required students to submit a written discussion of their design process. My objective was to provide another means to distinguish between a student’s comprehension of a concept and their ability to implement the concept with technical operations. I asked students to describe the decisions that they made during the design process and to relate these decisions to concepts that were introduced in lecture and readings. The short reflective write-up provided students with an opportunity to communicate their understanding of theoretical content even if they could not apply this understanding in their layout due to technical shortcomings.
Evaluation
Throughout the course, my joy at receiving thirty uniquely creative expressions of student work at the end of each week was countered by the dilemma of pegging each to a standard evaluation scale. My main objective when grading was to recognize both the student’s conceptual and technical understanding during each phase of the project–data management, problem setting and problem solving–using both their map layout and written discussion. For this assignment, I focused on the following:
- Is the thematic unit clearly defined and intuitive for the intended audience?
- How many components are visualized and what kinds of inferences can the reader make by relating these components?
- Do the visual variables help the audience interpret the components?
- Does the base map demonstrate comprehension of cartographic principles?
- Does the map composition demonstrate comprehension of higher-level graphic design principles?
Each of these questions relates an aspect of the work to one or more concepts from the course.
Examples
This map, designed by Marty Schnure ’11, shows a simple message well.
Figure 1. Map by Marty Schnure ’11.
Marty has simplified the information content of her layout by removing the time component and aggregating by region. She uses intuitive visual variables (width of lines to represent magnitudes, hues of polygons to represent regions). She also uses a projection that is appropriate for this spatial extent. Her map is especially pleasing because she also demonstrates higher-level concepts of graphic design: her color scheme draws from the palette of the Middlebury Web site, her layout expresses symmetrical balance and she uses contrast to effectively distinguish figure from ground.
Like Marty, Jue Yang ’12 also used flow-lines to represent numbers of students traveling abroad, but she added another component to this information and showed the data at two levels of spatial aggregation. Her flow lines originate from Middlebury, aggregated by region, and then branch midway to quantify the proportion of students studying in each country.
Figure 2. Map by Jue Yang ’12.
By designing her origin as a pie chart, which she repeats at a larger scale in the upper corner, she quietly urges her readers to compare the regional pattern while also providing a very subtle legend to her color scheme. She’s also made several decisions that evidence good cartographic principles. For one thing, she’s removed Antarctica, which makes sense for a lot of reasons: no students study there, the projection distorts the poles and would have made the continent funny-looking, and it frees up space for her flow lines to Oceanic countries. She’s also hidden an artifact that can be seen on most of the other student maps. The country boundary data has more detail than necessary for mapping at this scale. This makes some coastlines, like the west coast of Canada, accumulate line weights and appear as distracting blobs rather than crisp boundaries. Jue’s creative solution to this problem was to use white boundaries for countries and white fill for her oceans. This visually simplified coastlines without any laborious data transformations.
Several students increased the information content of their graphics by representing temporal components. Thom Corrado ’11 visualized the number of students studying in each country for each year of the dataset.
Figure 3. Map by Thom Corrado ’11.
Thom developed an original scheme that used size to represent the number of students and color to represent the year. This allows the reader to infer changes over time within any single country and also to compare the numbers of students studying in different countries in any single year. His insertion of an inset map evidences his awareness of a higher-level design problem resulting from the popularity of Europe, where circles representing the number of students each year would overlap and obscure the underlying country boundaries.
The layout developed by Katie Panhorst ’10 was one of the most ambitious efforts due to the number of components that she included. She shows two temporal components (year and semester), two spatial components (country and region), and one thematic component (program sponsor). Her design uses small multiples arranged in a grid to reveal temporal components. Her thematic component allows the reader to interpret the quantitative data in a new way by correlating the number of students to the presence of Middlebury-sponsored programs.
Figure 4. Map by Katie Panhorst ’10.
Some students chose to represent change rather than time. This involved calculating the difference between the number of students studying in different countries or regions over two consecutive years and then representing the change symbolically. Jordan Valen ’10 offered one creative solution that used proportionally-sized arrows to represent change. This allows the reader to recognize patterns of change: Latin America, Europe and Asia seem to be largely consistent over time, Africa and Oceania fluctuate from one year to the next, while the popularity of the Middle East appears to be on the rise.
Figure 5. Map by Jordan Valen ’10.
Lessons Learned
There are four key lessons that I’ve gained from this project-based approach to teaching cartographic design:
- Show students how to solve problems, but allow students to set the problems to be solved. While some liberal arts students may appreciate that a course allows them to list commercial software names under the skills section of their resume, simply training students how to use software falls outside the traditional scope of a liberal education. Providing students with the technical skills to solve problems while allowing them to set the problem to solve will foster student creativity and a learning environment characterized by exploration and discovery.
- Integrate your teaching of technique and theory, but separate your evaluation of technical skills and conceptual knowledge. The disparity of technical skills in a classroom can challenge both the evaluation of student work and the motivation of students to work. Some students will feel disadvantaged if their peers have had prior experience with a particular software package, while those who enter the classroom with experience won’t be challenged if they are simply being shown how to push buttons that they have already learned to push. Additionally, a student will feel frustrated if their mastery of a complicated tool constrains their opportunity to demonstrate their comprehension of concepts. The reflective write-up describing design process is one strategy to tease apart these two kinds of knowledge, but I found that some students, even those with much to gain from a verbal description of their thinking, seemed to treat this part of the assignment with less effort than the map product itself. This may have been due to a failure on my part to clearly communicate the importance of this part of the assignment, or it may reflect a more intrinsic bias on the part of some students to focus on the product of design rather than the process.
- Design is reflective action and reflection takes time. This assignment required the students to commit a significant amount of time. In part, this stems directly from my expectation that students set their own problems. Problem-setting requires students to take the time to explore the dataset in order to discover its possibilities. Also, because the students decide what to make, they also have to decide when they’re done. Any student who has learned how to efficiently meet a professor’s expectations will find it difficult, if not frustrating, to decide when to stop working on their own. But independently of having students set problems, the complex nature of cartographic design, where elements interact with each other, and one decision influences both past and future decisions, translates into time needed to reflect and adapt. In particular, high-level design concepts, such as contrast and balance, are not dependent on a correspondingly high level of technical knowledge that is difficult to master. Rather, they rely on students taking the time to consider and resolve them. As such, these gestalt concepts underlie the most common design flaws in student projects.
- Provide project topics that engage students.
This last point is by no means novel in a liberal education but it should not be ignored when developing topics for student projects. The study abroad project provides an example of a dataset that students were drawn to explore. Many had studied abroad, so many started by looking themselves up in the dataset. This provided an opportunity to discuss key cartographic concepts, like data integrity and abstraction, as the row of fields attached to the dot on the monitor didn’t quite map to the richness of their memory. They became curious about how popular their program was and what places were less traveled. And they became interested in sharing this with other students and promoting the college program. It’s a useful case in the larger pedagogy of teaching techniques at a liberal arts college: give students problems that connect to their experience and involve both problem setting and solving. Many will recognize that visualizing quantitative data is a creative act.
References
1. Jacques Bertin, Semiology of Graphics (Madison, WI: The University of Wisconsin Press, 1983).
2. Donald A. Schön, The Reflective Practitioner: How Professionals Think in Action (New York: Basic Books, 1983).
Simple Animations Bring Geographic Processes to Life
by Christopher L. Fastie, Middlebury College
Introduction
It seems we spend a lot of time teaching about things that we can’t easily observe, maybe because students are already familiar with processes they see operating around them, or because previous teachers have already harvested those low-hanging fruit. Processes that are obscure because they are small, large, slow, fast, or distant in time or space require more careful explanation. Some of these processes can now be revealed using digital technologies. I used Google Earth to model a very large process that took place 13,500 years ago. I used a global positioning system (GPS) receiver to map a series of glacial features in west central Vermont and transferred the results to Google Earth. I then added graphical models of the retreating Laurentide glacier and associated pro-glacial lakes and rivers which shaped the mapped features. Animated flyovers of the augmented Google Earth surface at different stages of the reconstructed glacial retreat were saved as video files and incorporated into an explanatory video. I have presented this video both before and after student field trips to the study area with good results. Subsequent upgrades to Google Earth allow animated flyovers to be recorded and played back in the free version of the program. This offers a streamlined creation process and the potential for a more interactive and collaborative experience.
Click on the video link below to view: Old, Flat, and Unconsolidated: Salisbury’s Gravelly Past, from Chris Fastie on Vimeo.
Science instruction benefits greatly from graphical demonstrations of physical structures and processes. Current textbooks are elaborately illustrated and associated Web sites sometimes include animations of important general processes, but ready-made animations of more specific processes or locally relevant examples are rarely available. Software for producing custom animations is becoming more user-friendly, but the cost and training commitment still prevent wide adoption. Google Earth is a free program that is based on animation of the earth’s surface and that includes tools sufficient for creating simple animations of many social, geographic, geologic, and ecological processes. The professional version (Google Earth Pro), which is not free, adds the capability to save these animations as video files that can be viewed separately from the program.
Geomorphology and Google Earth
Most geomorphic processes, by definition, include movement of material at the earth’s surface, and are therefore well suited for animated representations in Google Earth. Extant geomorphic features can be difficult to observe in the field because they are large, subtle, or obscured by vegetation. Google Earth is an effective way to highlight such features before they are visited in the field, or afterwards when observations can be summarized and interpreted. By animating the time course of development of such features, geomorphic processes and concepts can be effectively revealed.
Glaciers shape the landscape as they flow, but evidence of glacier advance is often obscured by more recent features produced during glacier retreat. The last part of the Laurentide ice sheet to retreat from Vermont was a lobe of ice in the Champlain Valley. As the length and thickness of this lobe diminished, great sediment-laden rivers pouring from the glacier and from the surrounding barren landscape flowed through and alongside the ice. The Champlain Valley drains to the north, and the glacier impounded a temporary body of water called Lake Vermont which rose to a level several hundred feet higher than the current Lake Champlain. Some of the water flowing south into this lake flowed alongside the glacier and built gravelly floodplains between the newly exposed valley walls and the ice. As the glacier continued its retreat, these flat surfaces were abandoned when the river found a lower course next to the ice. Remnants of these surfaces, called kame terraces, are conspicuous features of the Champlain Valley. When the glacial rivers reached the level of Lake Vermont, they built sandy deltas into the lake. These fine-grained deposits were left high and dry when Lake Vermont eventually drained as the impounding ice retreated far enough north.
Modeling Landscape Features
In 1998, I moved into a house at the eastern edge of the Champlain Valley and began to explore the neighborhood. The landscape was dominated by the steep lower slopes of the Green Mountains, but these bedrock slopes were interrupted by dozens of flat, level terraces that appeared to be built of unconsolidated material (sand, gravel, boulders, etc.), instead of solid bedrock. I am a plant ecologist by training, not a geologist, but I began to sketch the extent and location of these flat places to see if the larger pattern held clues to their origin. The sketch maps on paper were a key element of the discovery process because the pattern of the flat areas, which are spread along miles of valley edge, was difficult to see without them. Dense forest covers most of the area and the resolution of the existing topographic maps was insufficient to reveal the subtle terraces. It is possible to identify some of the larger terraces from the air or from stereo aerial photographs, but most terrace margins and their relative heights cannot be discerned well. I assumed that no one had ever mapped these terraces before, so my map would be the first opportunity to study their landscape-level pattern in detail.
The evolving paper map allowed me to begin to reconstruct the progressive positions of the glacier margin and the associated routes of the ice-marginal river that must have created the kame terraces. It required considerable imagination to visualize the massive glacier redirecting a swollen, turbulent river along a hillside that today is three hundred feet above the valley floor. The map was good data, but to explain the complex course of events that played out over many decades and affected many square miles of hillside, it was just a start.
In 2007, I acquired a consumer GPS receiver which had two crucial features: it could produce tracklogs of walking tours by recording location coordinates at ten-second intervals, and the Garmin Mapsource software it came with had a menu item called “View in Google Earth.” So I could walk the margins of a kame terrace with the GPS recording, upload the tracklog to a PC using Mapsource, and then see the tracklog in Google Earth. Google Earth allowed the terrace margins to be displayed on a recent color aerial photo stretched over the three dimensional topographic surface of the study area. This digital landscape could be viewed from any angle and any height above the surface, and one could “fly” over the scene at will. This encouraged me to make digital tracklogs of all the terraces I had found. Without the tracklogs displayed, the terraces could not be discerned in the crude Google Earth topography, which is just a digital version of the mid-twentieth century USGS topographic maps. As the terraces accumulated in Google Earth, I realized that the animated movie of ice, rivers, deposition, and erosion that had been playing in my mind for several years might be successfully shared with others.
Google Earth incorporates simple drawing tools that allow lines and shapes to be placed on or above the digital landscape surface. Three-dimensional objects can be represented by extending lines from objects down to the ground surface. Far more elaborate 3-D objects can be created using the free program Google SketchUp, but all of the objects created for this project were done with the tools included in Google Earth. I used these tools to trace all the terrace margins imported from Mapsource, creating horizontal polygons in the shape of each terrace. I used the option to extend lines down to the ground surface to give each terrace a solid appearance. The resulting shapes are crude representations of the actual terraces (which do not have vertical sides, and are not all perfectly level) but provide a bold display of the overall pattern formed by the terraces.
I also used Google Earth’s drawing tools to make simple models of the glacier, Lake Vermont, other pro-glacial lakes, and meltwater rivers as I envisioned them at three different times during the formation of the terraces. This allowed the geomorphic features along a four mile stretch of hillside to be put into the context of the retreating ice margin and the associated lateral displacement of an ice-marginal river. I could now display three stages of the landscape process that had shaped my backyard 13,500 years ago.
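Behind the scenes, Google Earth stores such drawings as KML, an XML dialect. Purely for illustration, the short Python script below writes a KML file of the kind described here: a level polygon whose sides are extended down to the ground surface. The name and coordinates are invented, not taken from the actual terrace data.

```python
# Write a minimal KML file containing one extruded, level polygon.
# Coordinates are lon,lat,altitude-in-meters triplets and are invented.
KML_TERRACE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Kame terrace (illustrative)</name>
    <Polygon>
      <extrude>1</extrude>
      <altitudeMode>absolute</altitudeMode>
      <outerBoundaryIs>
        <LinearRing>
          <coordinates>
            -73.10,43.95,250 -73.09,43.95,250
            -73.09,43.96,250 -73.10,43.96,250
            -73.10,43.95,250
          </coordinates>
        </LinearRing>
      </outerBoundaryIs>
    </Polygon>
  </Placemark>
</kml>
"""

with open("terrace.kml", "w") as f:
    f.write(KML_TERRACE)  # open the resulting file in Google Earth
```

Setting extrude to 1 with an absolute altitude is what gives the flat polygon its solid, wall-sided appearance on the landscape.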
To bring the process to life, I used the Movie Maker tool in Google Earth Pro to record flyovers of the augmented landscape at different stages in the reconstructed landform-building process. Due to the large scale of the study area there is great explanatory power when the view zooms from the regional to the local and then to a detail, for example, of a river’s route along the glacier. Google Earth allows any view of the digital landscape to be saved by “snapshotting” the view of a saved “placemark.” The program will automatically and smoothly “fly” from one placemark view to another and these single flights formed the content of most of the video clips I produced. A few dozen of these clips were edited together using Adobe Premiere Pro. By inserting cross-fades between identical landscape views depicting different stages in the process, simple animations of the landscape development could be produced.
Presenting the Results
I first presented a draft of the video after students in my class at Middlebury College spent a January day exploring the snow-covered landforms. We made multiple stops to see several key parts of the study area and were still thawing out when we piled into my office to watch the video consisting only of the silent flyovers from Google Earth. I think the students were able to more meaningfully synthesize their field observations after seeing the animated landscape. The reward was probably greatest for those students who had been working hard during the trip to make sense of the individually mundane features. I assume that the video allowed everyone to attach some additional geomorphological significance to the flat surfaces we had visited. During this field trip, we collected some new video of ourselves which was later incorporated into the final version of the video along with other footage and a narration.
For a subsequent class field trip to this area, I asked a new group of students to watch the video beforehand. By this time, a completed twelve-minute version of the video was available online. Viewing the video gave them a context for understanding what they later saw in the field and established a shared baseline of knowledge. I asked students a year later whether viewing the video before or after the field trip would have been more productive and the consensus was that before was better. The primary reason given was that the subject was sufficiently novel and obscure that every explanatory aid was welcome. Viewing the video first also allows a class to quickly address more complex issues such as the relationship between geomorphic origin and vegetation. However, some students recognized that the process of struggling to make sense of confusing field observations has pedagogical value. The video presents a compelling explanatory model, so it eliminates the need for students to assemble and test their own. Waiting until after the field trip to view the video has great potential for classes with the background and motivation to benefit from a puzzle-solving exercise.
In May 2009, Google Earth 5 was released with a new feature that allows flyover tours to be saved and played back within the program. The tour is not saved as video, but as a set of instructions that the program interprets in real time. While creating the tours, drawn objects (e.g., rivers or kame terraces) can be toggled on or off, creating simple animations. Photographs or videos can be displayed sequentially at designated places in the landscape. Narrations or music can be created and saved with a tour. This new feature offers an alternative method of sharing explanatory flyovers and animations.
Learning to save and distribute tours is easier than learning to save video clips and produce online videos, and can be done with the free version of Google Earth. Without programming, tours can be embedded on Web pages where they play automatically in a window. The window is a working instance of Google Earth, so if the tour is stopped the user can interact with the digital landscape without having Google Earth installed (a free Google Earth browser plug-in is required). Tour files can also be distributed directly to users who can interact with them using Google Earth. The keyhole markup language (KML) files which encode the tours are usually small and easy to distribute to others. In addition to watching the recorded tour, users with Google Earth installed can experiment by toggling features on and off or creating their own new features. This creates the opportunity for interactive and collaborative projects. An advantage of KML tours over tours saved as video files is that they provide a view of the full-resolution Google Earth landscape, not a compressed video version, and display the most current aerial photos. Soon after I completed the video about glacier features, Google Earth updated the photo coverage of Vermont with higher quality, more recent images, instantly rendering the video outdated. A primary disadvantage of distributing KML files to others is that there is less control over the viewing experience, which depends on the user’s operational knowledge of Google Earth and on the program’s settings (and of course, Google Earth must be installed). For examples of the tours I created, see www.fastie.net. You can also download the .kmz file for viewing in Google Earth.
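To give a sense of what such a tour file contains, here is a minimal, hypothetical sketch of the structure, again written out by a short Python script: a playlist of fly-to instructions (using Google’s gx KML extension) that the program interprets in real time, rather than rendered video frames. The viewpoint values are invented.

```python
# Write a minimal KML tour: playback instructions, not saved video.
# Viewpoint numbers are invented for illustration.
KML_TOUR = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"
     xmlns:gx="http://www.google.com/kml/ext/2.2">
  <gx:Tour>
    <name>Terrace flyover (illustrative)</name>
    <gx:Playlist>
      <gx:FlyTo>
        <gx:duration>6.0</gx:duration>
        <LookAt>
          <longitude>-73.10</longitude>
          <latitude>43.95</latitude>
          <range>2000</range>
          <tilt>65</tilt>
          <heading>0</heading>
        </LookAt>
      </gx:FlyTo>
    </gx:Playlist>
  </gx:Tour>
</kml>
"""

with open("flyover_tour.kml", "w") as f:
    f.write(KML_TOUR)
```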
Learning to view the landscape in Google Earth is fun and easy. Learning to produce and save video clips or KML tours is more of a challenge. Google’s online help and tutorials are a start, but you should plan for some trial and error if you want to produce something other than the simplest result. If there is someone on your campus who can help you get started, you might be able to climb the steepest part of the learning curve in an hour. Otherwise, plan for some additional learning time. Although the required commitment is not trivial, the models and tours you create can be used year after year to give students valuable insight into geographic patterns and processes that no one has witnessed firsthand.
SmartChoices: A Geospatial Tool for Community Outreach and Educational Research
by Jack Dougherty, Trinity College
SmartChoices, a Web-based map and data sorting application, empowers parents to navigate and compare their growing number of public school options in metropolitan Hartford, Connecticut. A team of students, faculty, and academic computing staff at Trinity College developed this digital tool in collaboration with two non-profit urban school reform organizations: the Connecticut Coalition for Achievement Now (ConnCAN) and Achieve Hartford (the city’s public education foundation). While English and Spanish-speaking parents learned how to use SmartChoices through a series of hands-on workshops, my students and I simultaneously collected data to better understand the “digital divide” and factors influencing parental decision-making on school choice. Overall, our project supports two liberal arts learning goals: to deepen student interactions with members of our urban community, and to nurture student participation in creating original research for real audiences.
The idea for SmartChoices began during a conference call with community partners a few weeks before the fall 2008 semester. Marc Porter Magee from ConnCAN and I were brainstorming about a possible collaboration between his education reform group and my Cities, Suburbs, and Schools undergraduate seminar at Trinity. Building on Trinity’s long-standing Community Learning Initiative, I designed this interdisciplinary seminar as a team research workshop, where we read historical and social science studies on schooling and housing and then design local research projects to test the application of research findings to metropolitan Hartford. Our region is a land of extremes: Hartford is one of the nation’s poorest cities, located inside a belt that includes some of the wealthiest suburbs. A year earlier, while learning basic GIS skills, my students created thematic maps to explore city and suburban differences in educational resources and outcomes, using data provided by ConnCAN. We all sensed the power of maps, and sought to build on our relationship by going a step further.
Marc and I agreed that the expansion of public school choice would soon become the most pressing issue for Hartford parents, because each family’s number of options was dramatically increasing, for two reasons. First, the Sheff v O’Neill school desegregation case created more interdistrict choices. Based on a 1996 ruling, the court required Connecticut to create more magnet schools (designed to attract both city and suburban students), encourage suburban districts to accept more city student transfers, and begin counting public charter and technical school students when calculating racial integration goals. Second, the Hartford Public Schools launched a district-wide school choice program. The district replaced neighborhood school assignment with a citywide lottery, required for all students who completed their current school’s last grade level, and optional for any students who desired to change schools. Suddenly, Hartford parents who were accustomed to sending their children to the neighborhood school faced an abundance of choices, and now when their child finished elementary or middle school, they were required to submit a choice application to advance to the next grade level. Altogether, a typical Hartford parent of a child entering the 6th grade now faced over thirty different school options. Moreover, competition between interdistrict and district providers meant that there were two different major application processes–and a host of minor ones–each with its own application form and procedures. While public school choice was intended to improve educational opportunity, it quickly became overwhelming.
“Do you think you and your students could design a brochure to show Hartford parents their school choices?” Marc asked.
I explained that there was no way to create one printed document that showed parents their exact set of eligible choices. We needed a dynamic system to deliver the right school data–and only that data–for each family, based on their residence and child’s age. First, parents wanted to see only those schools that offered their child’s grade level, and these varied widely across the two hundred public schools in the metropolitan region (K-2, K-5, K-6, 3-5, 5-8, 6-8, 7-12, 9-12, and so forth). Second, the Hartford Public Schools divided the city into four zones, guaranteeing bus transportation only for students attending schools within their residential zone, provided they did not live so close that they could walk. Third, across the region, many schools were limited to enrolling students from designated attendance zones or school districts. Yet public school choice happened so fast that most Hartford parents, particularly new arrivals with limited literacy skills, had little sense of where their interdistrict and district school options were located.
“The only way we can do this is to create a Web site,” I replied, “and it needs to show parents their eligible schools on a map, in relation to where they live.” We agreed to cooperate on attempting to build a pilot version during the fall semester, with Trinity designing the technology and ConnCAN providing school data and community support. The fact that my graduate training had focused on history and sociology (not computer science), and that I had acquired only “advanced beginner” GIS and HTML skills during the past decade at Trinity, should have made me think twice before leaping. But I was fascinated by the idea of blending a much-needed community outreach project with a research tool to better understand how parents from different neighborhoods made school choices.
Prior to this conversation, I had read innovative research studies on parents, information, and school choice. In Washington, DC, Jack Buckley and Mark Schneider created a Web site where users could compare different public schools (traditional and charter), while researchers monitored mouse clicks and search patterns.1 The authors found that parents using the site displayed racial preferences: when comparing two schools with comparable achievement levels, parents were more likely to drop the school with a higher percentage of black students. Later, I became aware of a related study by Justine Hastings and Jeffrey Weinstein in Charlotte-Mecklenburg, North Carolina, where researchers experimented with providing school data to parents in different paper formats.2 They discovered that low-income parents who received a list of schools ranked by test scores were more likely to choose higher-performing ones than a control group which received an alphabetical list, without test data. Furthermore, Trinity economics professor Diane Zannoni, our undergraduate co-authors, and I published an article that analyzed how much more suburban homebuyers were willing to pay for a comparable home on the more “desirable” side of an elementary school attendance line, and connected this trend to the growing availability of school-level data on the Internet.3
Fortunately, my Trinity colleagues and students shared in the enthusiasm and hard work to create the SmartChoices Web site. When parents type in a child’s home address and grade level, the site displays all of their eligible district and interdistrict public schools on an interactive Google Map, as well as a table for sorting and comparing distance from home, racial balance, and student achievement levels. Additional links point users directly to individual school Web sites, application forms, and transportation information. David Tatem, academic computing instructional technologist, helped me to conceptualize the interactive map and school database, and provided GIS support. Undergraduate research assistants Jesse Wanzer and Nick Bacon digitized school attendance boundaries. My seminar students compiled address and demographic data for over two hundred schools in the city and nearby suburbs. Devlin Hughes concentrated on refining the user interface as a case study for her senior thesis on data visualization, with assistance from Trinity’s social science data coordinator, Rachael Barlow. Another student, Christina Seda, provided the Spanish translations. Jean-Pierre Haeberly, the college’s director of academic computing and an exceptionally talented programmer, developed the Web application. Based on Web 2.0 design principles, SmartChoices runs on a three-tier server architecture, which integrates the Web server (for the search page and interactive map) with the application and database servers. Asynchronous requests permit the user to initiate searches and view results without reloading the page, as a traditional form-based Web site would require. To encourage other regions to create similar Web sites, we are distributing the SmartChoices code as free, open-source software upon request by email <SmartChoices@trincoll.edu>.
Figure 1. Smart Choices Web interface
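The SmartChoices code itself is available by request, as noted above. Purely to illustrate the asynchronous pattern, the sketch below shows a hypothetical application-tier endpoint, written with Python’s Flask framework as a stand-in: the map page fetches JSON from a URL such as /api/schools?grade=6&zone=North and redraws its results without a page reload. The route, parameters, and school records are all invented, not taken from the actual site.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for the database tier: a few invented example schools.
SCHOOLS = [
    {"name": "Example Magnet School", "grades": range(6, 13), "zone": "North"},
    {"name": "Example Charter School", "grades": range(0, 6), "zone": "South"},
]

@app.route("/api/schools")
def eligible_schools():
    # In the real site the zone would be derived by geocoding the child's
    # home address; here it simply arrives as a query parameter.
    grade = int(request.args.get("grade", 0))
    zone = request.args.get("zone", "")
    matches = [
        {"name": s["name"], "zone": s["zone"]}
        for s in SCHOOLS
        if grade in s["grades"] and s["zone"] == zone
    ]
    return jsonify(matches)  # the browser-side map consumes this JSON

if __name__ == "__main__":
    app.run()
```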
Prior to our public launch, ConnCAN community organizer Lourdes Fonseca helped organize a series of focus groups to receive feedback from Hartford parents and administrators of different school choice programs. My seminar students designed interview guides and guided participants through the pilot site, while recording how users interacted with and interpreted school data on their screens. We made several revisions to make the site as user-friendly as possible for Hartford parents, including many who have little or no experience with computers. We also faced difficult choices when deciding which school-level data categories to feature, since we committed to developing a site that would fit on display screens no larger than 1024 pixels wide. School choice administrators sometimes requested revisions that would serve their particular program’s needs over others, or feature promotional material. Some education officials expressed concern about direct school-to-school comparisons of test scores or student racial composition. As a result, we took the position that SmartChoices would stand as an independent project, not affiliated with any school, district, or choice program. Furthermore, we committed to reporting data obtained from public sources of information, such as the state department of education or school Web sites. By providing the most comprehensive source of public school choice information, SmartChoices has filled the role of a “consumer reports” service for public education in the Hartford metropolitan region.
After our public launch in early 2009, the Achieve Hartford local education foundation joined the project to fund research and community outreach. Our primary research questions were: Who uses SmartChoices, and how does digital information influence parental decision-making? My Trinity students and I organized a series of parent training workshops to collect both qualitative and quantitative data, and ConnCAN contracted with community organizers from another Hartford organization, the Voices of Women of Color, to assist parents at public libraries and to bring laptop computers into people’s homes through school choice “house parties.” Print, radio, and television media also broadcast features about the Web site.
Who used SmartChoices, and where did they search? In our full report, we analyzed Web site statistics and found that during the five-month choice application period in 2009-10, 3,385 distinct searches were conducted on SmartChoices. Over three-quarters of these searches were conducted for addresses in the city of Hartford, while the remainder included addresses in suburban towns and outside our coverage area. The dot distribution map illustrates the geographical spread of SmartChoices usage across urban and suburban areas. The grade levels most commonly searched were Kindergarten (16 percent) and 9th grade (14 percent), which matches the most common grade-level entry points in the system.
Figure 2. Distribution of SmartChoices searches
How did people use SmartChoices, according to Web statistics? We created a sorting feature that allowed users to organize their search results in five different categories: school name, distance from home, racial balance, test scores, and test gain over the previous year. The Web site randomized how each user’s initial results were sorted, to determine which categories were most frequently selected. Among users who sorted results, the most popular categories were Distance (25 percent) and Test Goal (24 percent), with Test Gain and Racial Balance trailing behind. However, we observed that most users never sorted their results (70 percent of the 3,385 distinct searches), perhaps because they did not see the sort button or did not understand how it worked.
Rather than simply waiting for users to find and visit our site, Trinity students and I organized ten hands-on workshops (in both English and Spanish) in Hartford to train parents how to use the site, while interviewing them in depth about their decision-making process. Our sample of 93 workshop interview participants was limited to parents of children entering elementary school (grades PreK-8) in the next academic year. Each workshop participant interacted one-on-one with a trained Web site guide, in front of a computer, for about fifteen to forty minutes, and gave informed consent to be interviewed. Each guide followed a script that asked parents to list their top-choice schools (before and after using SmartChoices), and walked users through the Web site while explaining what data labels meant. About half of these interviews took place in workshops at local neighborhood schools, while the other half occurred during larger regional school choice fairs. At the neighborhood events, our most successful workshops were organized with the assistance of Hartford Public School Family Resource Aides (FRAs), who helped us arrange access to school computer labs and attract interested parents with bilingual flyers. Note that these workshops were not located at representative locations across the city (due to research design and logistical issues). Furthermore, all workshop participants were self-selected, meaning they voluntarily responded to a neighborhood workshop flyer or walked up to our regional school choice fair tables. By definition, self-selected participants are not necessarily representative of the Hartford-area population at large, limiting the interpretation of our results.
SmartChoices parent workshops. Photos by Nick Lacy.
How did the SmartChoices workshop influence participants’ decision-making? Before introducing the Web site, our interviewers asked a pre-workshop question: for one child in your family, what are your top choices for schools next fall? After hands-on Web searching and sorting, we asked it again as a post-workshop question. When we compared participants’ pre- and post-workshop responses for their top-choice schools, we found that the total sample divided into roughly equal thirds. About one-third changed their top choice, meaning the workshop experience led them to switch from one school to another. About one-third clarified their top choice, meaning they began with no response or one that was too vague for an application form (“the school near Walmart”) but eventually selected a specific school. Finally, about one-third did not change their top choice.
For the thirty-two workshop participants who changed their top choices, we compared their initial selection to their final selection, to measure the relative influence of the four key data categories in the SmartChoices search results. To compare pre/post-workshop responses across different categories, we expressed all changes in common units, based on one-third of a standard deviation of the mean difference. On this scale, Test Goal (69 percent) and Test Gain (64 percent) were the most influential categories in this sample, followed by Racial Balance (47 percent). Interestingly, Distance was the least influential category in this phase of the analysis, because roughly equal portions selected new schools that were farther from, closer to, or a similar distance from their homes.
Does this mean that school distance from home does not matter to parents? Absolutely not. When we compared how workshop participants sorted results, we found that Test Goal and Distance were virtually tied (at 23 and 22 percent, respectively), followed by the other categories. Given that parents often make trade-offs between distance and school quality factors they value, we infer that SmartChoices helped workshop participants to identify desirable schools that were located closer to, or a similar distance from, their initial top-ranked school. In other words, we suspect that the SmartChoices map and distance calculator helped workshop participants find “good schools” (however they defined them) that they were not previously aware of.
Does increased public school choice improve education for all? SmartChoices cannot answer this policy question, because this project only considers families who sought to make a choice and self-selected to try our Web site. For our next research project, Diane Zannoni and I wish to conduct a spatial analysis of who does (and does not) participate in school choice, either by submitting an application or by exiting the district. We are also deeply interested in spatial research that uncovers racial and social stratification as a result of choice.
Nevertheless, the movement for public school choice has attracted supporters from across our politically divided nation, particularly in metropolitan Hartford. Advocates of the Sheff ruling support voluntary interdistrict magnet schools and city-suburban transfers as the most viable means to racially integrate schools. At the same time, market-oriented advocates embrace public school competition as a means to empower urban parents to exit low-performing schools and enter those more likely to reduce the achievement gap. “Choice” has become such a politically popular label in metropolitan Hartford that it appears in the name of at least three distinct entities: the Open Choice city-suburban transfer program, the Regional School Choice Office, and the Hartford Public Schools’ “All-Choice” initiative.
We cannot ignore the influence that the Internet has had on consumerist activity in “shopping” for public schools. Google, the ubiquitous search engine, recently reported that the category of “school comparisons” was the leading type of public data search conducted on its Web site in November 2009.4 In their report, Google defined “school comparisons” as any search on education from PreK to higher education, such as: “Douglas County schools” or “top law schools.” Indeed, other categories might have ranked higher if Google had not broken out certain subgroups of searches, such as separating “cancer” from “health” searches in general. But the report confirms that citizen-consumers are eagerly looking to the Internet to help them make judgments about comparing the relative qualities of different educational options.
Whether or not one supports public school choice, it exists and continues to grow in our nation’s urbanized areas. To participate in these application processes, families need access to reliable information to make informed decisions about public schools. To be sure, some information flows through parents’ social networks: the opinions of trusted relatives and neighbors, conversations with principals and teachers, and personal visits to schools. But other sources of information–such as student achievement, racial balance, distance from home, and program offerings–are more readily available on the Internet.
Yet access to information, and knowledge about how to search and interpret Web sites, is not uniformly distributed. The “digital divide” was more commonly discussed a decade ago, but it has not disappeared, and remains one of the most challenging barriers in the twenty-first century knowledge-driven economy. While working on the SmartChoices project, we were struck by the difficulty of obtaining reliable, current data on the scope and size of the digital divide in the Hartford region. In 2007, the US Census Current Population Survey posed this question to a national sample: “Do you (or anyone in this household) connect to the Internet from home?” The proportion responding “Yes” who resided in the city of Hartford ranged from 34 to 55 percent, while the proportion for those living in the three-county Hartford metropolitan statistical area ranged from 75 to 92 percent. The range in estimates is due to the large number of people whose responses were omitted because they answered “No” or did not respond to the initial question, “Do you access the internet from any location?” Therefore, if we include these omitted responses, the results point to the low end of the estimated range. In addition, we still lack comprehensive data on the true scope of adult literacy–particularly computer literacy–among residents of the city of Hartford, compared to the metropolitan region or state. In our first-hand experience with the SmartChoices parent workshops, we witnessed a wide range of computer ability among adults who self-identified as new versus regular users.
As the “SmartChoices” name clearly implies, familiarity with the World Wide Web has become a necessary ingredient to be an informed consumer of public education in Greater Hartford. The rapidly expanding (and constantly changing) set of public school options, as well as differences between competing choice providers and their eligibility guidelines, made it nearly impossible for us to communicate with parents through a paper booklet or catalog. We created SmartChoices as a dynamic Web site–with an interactive map of school locations, distance-to-home calculator, and transportation links–because we could not conceive of a way to adequately present the key information that each parent needed on paper. Furthermore, beginning in January 2010, the Hartford Public School Choice Office shifted from paper-only to Web-only applications. For families in our urban setting, learning how to navigate the Internet is not an option, but a requirement.
To be sure, digital tools like SmartChoices are valuable only to people who have access to them and know how to use them. In our parent workshops, my Trinity students observed significant differences among participants, tied to their familiarity with computers and their levels of education. If school choice is expected to improve public education for all, then community outreach needs to focus on novice computer users, offering information literacy to help users understand and interpret key data categories (in English and other languages), as well as hands-on guidance on Web skills such as sorting data and following through with online applications. Liberal arts college students, staff, and faculty already enjoy most of these skills, and we can learn a great deal about our broader communities if we find meaningful ways to engage with them on these important issues.
Notes
1. Jack Buckley and Mark Schneider, Charter Schools: Hope or Hype? (Princeton, NJ: Princeton University Press, 2007). [return to text]
2. Justine S. Hastings and Jeffrey M. Weinstein, “Information, School Choice, and Academic Achievement: Evidence from Two Experiments,” Quarterly Journal of Economics 123, no. 4 (November 2008):1373-1414, posted online 15 October 2008. [return to text]
3. Jack Dougherty, Jeffrey Harrelson, Laura Maloney, Drew Murphy, Russell Smith, Michael Snow, and Diane Zannoni, “School Choice in Suburbia: Test Scores, Race, and Housing Markets,” American Journal of Education 115 (August 2009): 523-548, published online 4 June 2009. [return to text]
4. “Statistics for a Changing World: Google Public Data Explorer in Labs,” Official Google Blog (8 March 2010), http://googleblog.blogspot.com/2010/03/statistics-for-changing-world-google.html. [return to text]
Re-envisioning the Internationally Sophisticated Student: Champlain College’s Global Modules Project
by Gary Scudder and Jennifer Vincent
Introduction
In response to the demands of an increasingly interrelated world, virtually every college and university is grappling with the challenge of producing more internationally sophisticated students. To that end, Champlain College, a small baccalaureate college in Burlington, Vermont, has spent the past five years completely restructuring its core curriculum to best prepare students of the twenty-first century for their role as global citizens. A key component of this new core curriculum is the college’s innovative Global Modules (GMs) project, in which Champlain students connect with students at various international universities for short, thematic, course-embedded, online discussions. Beginning in the spring 2008 semester, Champlain positioned the Global Modules as mandatory assignments in certain key required interdisciplinary courses. The goal is to create an integrated series of progressive assignments based on global dialogue and carried throughout the university experience.
Before discussing the Global Modules project and its role in Champlain’s new core curriculum, it is worth stepping back to look at a more traditional solution to global learning: study abroad. While the advantages of studying abroad are well documented and Champlain continues to support students’ active participation in it, we feel that offering study abroad alone is not enough. Many factors, ranging from financial considerations to tightly structured degree requirements, combine to limit participation in such programs. We must also realize that study abroad experiences are often singular, isolated events that come late in the curriculum, usually in the third year, and typically exclude areas like the Middle East or Africa. The latest figures from the Institute of International Education show definite advances over the last decade, but also some limitations. In 2007-2008, over 260,000 American college students studied abroad, an increase of 150% in a decade but still only about two percent of the total university population. There are some positive signs in these figures, especially the dramatic percentage increases for destinations like China (19.0%), Costa Rica (13.2%), South Africa (15.0%), India (19.8%), Brazil (7.9%) and Russia (8.2%), which show that American students are increasingly choosing non-traditional study abroad locations. Still, an examination of locations by region shows that the destinations of choice remain overwhelmingly Eurocentric, with 56.3% of American university students studying abroad in Europe in 2007-2008. This compares to 4.5% for Africa (up from 2.8% in 1998-1999), 11.1% for Asia (up from 6.0% in 1998-1999), 15.3% for Latin America, 1.3% for the Middle East and 5.3% for Oceania.1 This limited diversity is unacceptable if we are to prepare students for the global challenges of the twenty-first century. As stated by NAFSA: Association of International Educators and the Alliance for International Educational and Cultural Exchange, two expert organizations deeply committed to international exchange and study abroad:
We no longer have the option of getting along without the expertise that we need to understand and conduct our relations with the world. We do not have the option of not knowing our enemies–or not understanding the world where terrorism originates and speaking its languages. We do not have the option of not knowing our friends–or not understanding how to forge and sustain international relationships . . .2
The need for increased diversity in the destinations of study abroad students was also cited as a major challenge by the Commission on the Abraham Lincoln Study Abroad Fellowship Program in its 2005 publication “Global Competence & National Needs.”3
Champlain College has decided that it is not merely important for students to have an international experience; it is essential that every student have one. To that end, we have initiated an ambitious program of embedding Global Modules across the curriculum. Participation in the Global Modules project not only raises cultural awareness for all students early in their college careers, it also allows our students to communicate with students from all over the world. Global Modules are an online global-learning solution that allows for the free exchange of ideas and opinions between domestic and international students and that can be incorporated into any class. Using Global Modules requires very little training, preparation or class time. Finally, it is important to keep in mind that the Global Modules are not designed to replace study abroad. Instead, one of our hopes is that requiring students to communicate with peers from around the world early in their university careers will actually increase the number who study abroad, as well as enhance their study abroad experience.
History
As almost anyone associated with higher education knows, one of the biggest reasons why international initiatives collapse is their cost and complexity. With that in mind, the Global Modules are designed to be simple, flexible and inexpensive. We give students, both at Champlain and abroad, access to a Global Modules Web site designed and run by the college’s faculty members. Once a semester the classes “meet” online for assignments, usually in four-week blocks. Global Modules are designed to link the students and faculty at international educational institutions for shared readings, discussion and teamwork. The readings, chosen through consultation among the faculty at the different universities, are designed to challenge unspoken cultural assumptions as well as promote critical thinking and collaborative learning. The key is to choose readings and assignments that force the students to cooperate to solve problems, and in the process come to grips with their national or regional biases. While the Global Modules can be adapted to any number of specific situations, they have traditionally taken this form:
- Week 1: Students post introductions and initial perceptions
- Week 2: Shared reading assignments; general philosophical discussion
- Week 3: More focused, country-specific analysis; problem solving
- Week 4: Critique and summary; reflective piece
The goal is to create a system that allows for a detailed and engaging dialogue but is also flexible enough to fit into a variety of different courses.
At Champlain we ran our first Global Module in the spring semester of 2003. We linked two Seminar in Contemporary World Issues classes that were being taught in Burlington and at our campus in Dubai, United Arab Emirates. The students in the two locations shared a common reading on the Grameen Bank, the Bangladeshi organization that gives micro-loans to the poorest of the poor. To get a loan from the Grameen Bank, borrowers have to agree to sixteen resolutions, which are really a means of societal transformation. The first part of the Global Module assignment was an online discussion of the article and what the students thought of the Grameen Bank. We then broke the students into virtual groups that were half Burlington and half Dubai. Each group was asked to come up with its own list of ten resolutions, post them, critique the work of the other groups, and then reflect on what they had learned. By focusing on the Grameen Bank, the students were forced to address issues of poverty, aid and gender inequality, and to work together in international groups to solve problems. Not surprisingly, the two groups approached these issues in very different ways and thus learned from each other. The extraordinary outpouring of student interest and enthusiasm from the very first experimental Global Module let us know that we had stumbled across a potentially valuable mechanism for bringing students together in a virtual classroom. Since that initial semester, we have run hundreds of Global Modules on a diverse array of topics, such as human rights, gender issues, ethics, globalization, community, terrorism, medical ethics, concepts of the self, and perceptions of Arabs in film. We have dramatically expanded our team of international partner institutions to include such schools as Nelson Mandela Metropolitan University in South Africa, the Higher School of Economics and St. Petersburg Polytechnic University in Russia, Haigazian University and Lebanese American University in Lebanon, Klagenfurt University in Austria, the University of Ghana in Ghana, Zayed University in the United Arab Emirates, Al Akhawayn University in Morocco, Kenyatta University in Kenya, the University of Alcala in Spain, Bethlehem University in Palestine, Corvinus University and Pazmany Peter Catholic University in Hungary, and the University of Jordan in Jordan.
Global Modules
It might be useful to take a look at the inner workings of a four-week Global Module. Weeks one and four are largely boilerplate. In the first week the students get to know each other, with each student posting an introduction and initial perceptions of the other country. Week four mainly consists of summary and critique, along with reflective pieces from the Champlain students that serve both grading and institutional assessment. Weeks two and three focus on discussion of the assigned readings. Below, for example, are the second and third weeks of a Global Module on ecological sustainability that was written by our colleague, Cyndi Brandenburg:
Week 2
This week we begin our discussion of ecological and carbon footprints. We will be using four short articles: the first is “Measuring Footprints: A Tale of Two Families”; the second is “Big Foot”; the third is “UAE Beats Americans’ Environmental Harm”; and the fourth is “Why Bother?”. Please follow these links and read the four articles. Once you have read the texts, you will answer a series of questions. You will be required to post answers at least twice, although you can contribute more often if you wish. You can either post an original answer to a question or comment on the posting of another student. Either way, your postings should be detailed and analytical. At least one of your posts should be a response to another student’s posting. In addition, at least one of your posts should be completed in the first half of the week. If you are late posting for the week, do not simply answer a question that has already been answered by another student–contribute in a new way. Build upon your fellow students’ answers. Think of it as the class as a whole answering the question.
- What do the terms ecological footprint and carbon footprint mean? What types of activities contribute to them, and why?
- Compare your life to the two individuals in the “Measuring Footprints” article. To which one are you most similar? Would you be willing to live like Jyoti if you knew that it would significantly improve life for the next generation? What comforts of your life are you willing and unwilling to give up?
- What are possible solutions for reducing carbon emission? Are they viable?
- If carbon usage and emissions had a specific price tag attached, who do you think would be most seriously affected? Do you think a “carbon tax” is socially just? Why or why not?
- Does it make sense for an individual to adopt a “greener” lifestyle if his or her greater community doesn’t embrace change as well? Why or why not?
Week 3
Let’s continue our discussion this week, focusing on specific examples from our two countries. Work on the following questions. Be sure to post at least twice this week. Remember, at least one of your posts should be a response to another student’s posting, and at least one should be completed in the first half of the week.
- Go to Global Footprint Network and The Independent Footprint Calculator and calculate your ecological footprints using both sites. Don’t worry that the sites are limited to certain geographical regions. How big is your ecological footprint? How does the data gathered from these two sites compare?
- Can you suggest specific actions for reducing footprints on an individual level?
- Can you suggest specific actions for reducing footprints on a community level?
- Can you suggest specific actions for reducing footprints on a national level?
- Can you suggest specific actions for reducing footprints on a global scale?
As you can see from this example, the Week 2 activities involve a more general, often philosophical discussion of a topic, while the Week 3 assignments give students a chance to bring in examples from their own countries and do some problem solving.
Global Modules in the Curriculum
Participation in Global Modules has enriched the educational experience of the American and international students involved. The Global Modules have internationalized the curriculum, fostered critical thinking, and inspired much-needed dialogue between students and faculty members from different parts of the globe. Champlain College is so dedicated to the approach that it became a key element of the institution’s new core curriculum, implemented in spring 2008. The first core curriculum class with an embedded Global Module is the Concepts of Community course, which is normally taught in the second semester of the first year. We prepared a number of community-based topics that gave participating professors a variety of options. As part of this initial launch of the Global Modules in the new core curriculum, five hundred first-year Champlain students linked up with five hundred international students from universities in twelve different countries. Embedding the Global Modules in the Concepts of Community course was only the first step in a much more ambitious plan. Since that time we have embedded the Global Modules in Capitalism & Democracy, a second-year course, and Human Rights & Responsibilities, a third-year course. In each instance the GM is a required assignment for every student and constitutes ten percent of the grade.
Embedding the Global Modules across the curriculum presented several challenges, one of which was to ensure a diversity of discussion topics–and our professors, both here and abroad, have worked assiduously to prepare a wide variety of them. A brief look at the choices for this semester alone gives a sense of the diversity of options. For example, the first-year Concepts of Community classes discuss the following topics: changing interpretations of liberalism and conservatism; the ways that festivals reflect societal norms; the interplay between economics and politics; ethical decision making; the worlds of Islam and Christianity; the culture of violence; ecological sustainability; social differences as expressed in the business community; marriage and family; Muslims in America; and divorce and society. In the second-year Capitalism & Democracy classes, students are discussing societal transformation in the UAE and the US, the worldwide financial crisis, political and societal space as reflected in suburbia, democracy and the Internet, critiques of capitalism, and medical ethics. In the third-year Introduction to Human Rights classes, students discuss corporate culture by focusing on multinational corporations, consumerism and democracy in a digital age, and women in crime and punishment.
A second challenge is how to make the GMs both integrated and progressive. Now that we are in our third year of including them as required assignments in the core classes, we have come to think of their progression in this fashion: in the first year it is enough that the students recognize the similarities and differences between cultures; in the second-year GMs we expect students to try to understand why those similarities and differences exist, and whether they are societally or individually based; and by the third year we expect students to be able to discuss how this knowledge shapes their own behavior, both personally and professionally. At the end of every Global Module the students write a reflective piece that addresses these concepts.
By spring 2010 the Global Modules had expanded to require participation by every first-, second- and third-year student. The international partner universities for this semester include: Nelson Mandela Metropolitan University (South Africa), State University Higher School of Economics (Russia), Al Akhawayn University (Morocco), University of Jordan (Jordan), Pearl Academy (India), Haigazian University (Lebanon), Corvinus University (Hungary), St. Petersburg Polytechnic University (Russia), Zayed University (United Arab Emirates), American University in Cairo (Egypt), Gulf University of Science & Technology (Kuwait), University of Alcala (Spain), Pazmany Peter Catholic University (Hungary), Lebanese American University (Lebanon) and Klagenfurt University (Austria).
What We’ve Learned
Administrative Support: Not surprisingly, a project of this size requires a consistent vision and extraordinary support from upper administration. Fortunately, the Global Modules project has received constant and generous support from President Dave Finney, Provost Robin Abramson, Associate Provost Michelle Miller, and Core Division Dean Elizabeth Beaulieu, who have provided financial support in the form of course releases and a travel budget and, perhaps more importantly, moral support by taking every opportunity to keep the GMs on the college agenda and to rally the troops.
Technology: Running the Global Modules requires a delicate technological balancing act. We need enough tools to carry on the discussions, but we cannot run a system that is such a bandwidth hog that it precludes the participation of international partners with more limited technological infrastructures. Our current Web site, our third, makes use of vBulletin, a simple bulletin-board application that allows for dependable, asynchronous discussion. We will occasionally make use of video conferences (for example, in a recent two-week period our colleague Chuck Bashaw carried out Skype videoconferences with universities in South Africa, India and Russia) but do not require them. Where technology is concerned, the simpler the better.
Planning: It is Champlain’s belief that the best approach to a sustainable program is the creation of a smaller inner circle of linked universities. This does not diminish Champlain’s vision of acting as the facilitator of a much larger network of domestic U.S. and international universities, but in the short term a more cohesive, smaller network makes curricular planning more manageable. The dream would be an integrated consortium of eight to ten universities with a partially integrated curriculum. The advantages of this approach are obvious. Most importantly, if the universities in this inner core officially embedded the Global Modules in certain key courses, we could plan out semesters, if not years, in advance.
Faculty Support: We have been fortunate in that Champlain has the richly deserved reputation of being a very nimble school that adapts quickly to changing professional and pedagogical worlds. Consequently, the institution, and especially its faculty members, are less tradition-bound and much more open to change. The positive response of the faculty to the Global Modules project, which required them to work with new international partners as well as teach partially online, is a testament to their extraordinary dedication to their students. The faculty members have been a part of the planning process from the beginning, and we constantly seek their feedback. In choosing a Global Module topic, we give faculty members the choice of either selecting one of the dozens of GMs that we have run in the past or writing an entirely new one; this increases faculty buy-in by providing secure, time-tested options while protecting academic freedom.
International Partners: Quite simply, getting a project like this started, especially for a small school like Champlain, requires spending time overseas finding partners. When we started this project no one had ever heard of Global Modules or Champlain College, and thus we had to prove ourselves to an entirely new audience. We have developed a list of criteria that we use when sizing up potential partners–ranging from their technological infrastructure to their history of international programs to their English proficiency to the political freedom for discourse in their country–and as a result we have been quite successful. That said, nothing replaces devoting the time and resources to visit new universities, present at international conferences, and revisit standing partners time and again. With most of our international partners, especially in the Middle East, the personal contact is irreplaceable. Keep in mind that we would not have a Global Module network without our international partners, and that every GM is team-taught with a professor from South Africa or Russia or Morocco; spending time at partner universities, running workshops and drinking innumerable cups of tea, is a necessity.
Notes
1. Institute of International Education, Open Doors Report (New York, NY: IIE, 2008). [return to text]
2. American Association of Colleges and Universities, College Learning for the New Global Century (Washington, DC: AAC&U, 2006). [return to text]
3. United States Commission on the Abraham Lincoln Study Abroad Fellowship Program, Global Competence & National Needs (2005). [return to text]
The Mixxer Language Exchange Community
Description
The Mixxer is a social networking site designed for language learners. Dickinson College places a heavy emphasis on international education, its study abroad programs, and foreign languages. The Mixxer allows us to create real-world language use in our classrooms with native speakers using Skype. The site has many of the same functionalities as Facebook, with blogs, friend requests, and a messaging system; what makes it different is that users search for potential language partners based on their native language and the language they are studying. When they find a potential partner, they send a message proposing times to meet and eventually communicate via Skype. Though not required, the usual arrangement is for partners to meet for an hour, spending thirty minutes speaking in one partner’s native language and thirty minutes in the other’s.
The Mixxer also includes functions for foreign language teachers. Teachers can search for other teachers interested in class-to-class exchanges. They can organize and oversee their students’ blog posts. In addition, they can organize “events” in which native speakers are invited to contact the students in their class via Skype at a specific time. With more than 40,000 Mixxer users, it is now possible for any language teacher to organize a language exchange for their students at almost any time. This is especially helpful for less commonly taught languages in Asia and the Middle East, where time differences make most traditional class-to-class exchanges very difficult.
Origin
The idea for the project grew out of a 2005 collaboration between myself, the language technologist at Dickinson College, and Akiko Meguro, a Japanese instructor. Professor Meguro had heard about text chat exchanges done here at the college via NetMeeting between an intermediate French class and an English class in France. She wanted to do the same for her classes, but there were several obstacles to replicating the project in Japanese. The first was the Japanese writing system. Written Japanese consists of three character sets: hiragana, katakana and kanji. Switching between character sets, in addition to learning kanji (Chinese characters), makes typing significantly more complicated than in the Roman alphabet. Because of these character sets, typing is not usually taught until the second semester. Language exchanges for first-year courses would have to be audio exchanges done during class, to avoid the necessity of typing and to provide help to students who might have trouble understanding or communicating. Unfortunately, the popular audio messengers at the time, such as “Yahoo Talk,” “MSN Messenger,” and “iChat,” often had difficulty connecting or maintaining an audio connection due to firewalls and network configurations. The second major hurdle was the thirteen-hour time difference between the east coast of the U.S. and Japan, which made finding potential partners with matching class hours very difficult.
The arrival of Web 2.0 offered some solutions. One of these technologies, Skype, enabled us to have reliable voice communications to Japan. Skype is a voice over IP application, often called an audio messenger, that allows for free calls between computers. We chose Skype over the other audio messengers for several reasons:
- Skype uses what is called p2p, or peer-to-peer, technology, meaning there is no central server. This enables it to reliably connect computers on different networks with little regard to the configuration or firewall settings on either network.
- Skype had a very large and international user base, which meant we had a large pool of native Japanese speakers from which to draw who were already familiar with the technology.
- Skype could be set to connect over a specific internet port. On a campus network, this meant we were able to reserve bandwidth for the language exchanges by setting the Skype clients in the language labs to use the port that was assigned the highest priority.
With the arrival of Skype, we had a reliable tool for audio communication, but we still needed a way to find partners for our students. I decided to create the Mixxer, a social networking site solely for those interested in language exchanges via Skype. The initial version was extremely simple: little more than a searchable database, with a front end created using the .NET framework and an Access database on the back end. Users could search profiles that were separated into two categories, individual learners and teachers. Individual learners could search the database by native language and language sought; teachers could search for other classes based on language criteria and student ages.
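The essence of that first version can be sketched in a few lines. The example below uses Python and SQLite rather than the original .NET and Access stack, and the table and column names are invented for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE profiles (
    username TEXT, role TEXT,                 -- 'learner' or 'teacher'
    native_language TEXT, target_language TEXT)""")
conn.execute("INSERT INTO profiles VALUES ('kenji', 'learner', 'Japanese', 'English')")

def find_partners(native, target):
    # A useful partner is a reciprocal match: their native language is
    # the one you are studying, and they are studying yours.
    rows = conn.execute(
        """SELECT username FROM profiles
           WHERE role = 'learner'
             AND native_language = ? AND target_language = ?""",
        (target, native))
    return [r[0] for r in rows]

# An English speaker learning Japanese searches for native Japanese
# speakers who are learning English:
print(find_partners(native="English", target="Japanese"))  # ['kenji']

The key design point is the reciprocal match: a useful partner is someone whose native language is the language you are studying and who is, in turn, studying yours.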
The initial challenge was garnering publicity for the site in order to populate the database with enough language learners and teachers to be useful. Looking back, I could have developed a far more effective marketing strategy by taking advantage of the blogosphere, our own Web site, listservs, and other social media. Instead I haphazardly searched forums for posts from people looking for language partners and offered my site as a suggestion. I was, in effect, recruiting users one at a time. Fortunately, I did eventually reach a critical mass and the site was able to grow on its own. Even more importantly, Skype, which was barely out of beta at this point, began receiving a tremendous amount of publicity. Educational blogs began writing about the possibility of using the service as a language learning tool. Because the Mixxer had been created very early in the development of Skype, it ranked very highly for searches such as “language exchange Skype,” “learn language Skype,” and the like. This created a self-reinforcing cycle of links to the Mixxer, high rankings in Google searches, and more users.
Once we had a sizable database of language learners and teachers, we were able to find partner classes for many of our language classes and offer students the possibility of conducting language exchanges outside of class. For one year, the Japanese program conducted class-to-class exchanges with an English class in Japan. However, maintaining these exchanges was difficult. The time difference meant that our students, myself, and the professor had to meet at 9 p.m. to speak with the class in Japan. In addition, the number of students who would show up on the other side, and at what time, was very unpredictable. At times, our students would show up in the evening and be unable to speak with anyone for the entire hour. Over time, this proved to be a fairly common experience. Class-to-class exchanges were often difficult to maintain across semesters due to schedules and time zones, but also because of varying expectations: while the exchanges were a required and integral part of our courses, other schools sometimes viewed them as optional for their students and left them unsupervised by their instructors.
Because of these difficulties, Professor Meguro began using a Japanese social networking site, Mixi, to recruit individual native speakers who were interested in practicing their English. Mixi makes this possible through community and event functions that allow users to create groups and organize themselves around a common topic. Professor Meguro started a group focused on English language practice, which she then used to propose an “online meeting” for the community. Our class time would be posted as the time for the online meeting, those interested would send me a message via Skype, and their names would be distributed to our students.
This method worked well, but once the community became very large, we wanted a registration system that would let us match the number of native speakers to our class size. To do this, we set up a registration and event function within the Mixxer. By adding this functionality, we were also able to offer the same function to any language class on campus on relatively short notice.
Drupal as a Development Platform
For two years, the site grew at a very good pace. With over 20,000 users in the database, I was able to offer language exchanges to any foreign language professor at the college interested in connecting their students with native speakers. Last year I began looking at expanding the site’s functionality. I wanted users, including our students, to be able to maintain blogs on the site that could be reviewed by their professor or by native speakers. I also wanted to allow users to create groups, whether peer study groups or classes created by professors for their students. Finally, I wanted teachers and professors outside of Dickinson to be able to arrange language exchange events for their classes. Up until then, I had organized all of the exchanges by running a query on the back-end database that sent an email to potential language participants. If I were to open up this process to other institutions, I would need to develop a front end that automated it.
Because I was the only person working on the site and my time for the project was restricted to summers, creating the additional features in VB.NET was not feasible. Starting over in a different platform seemed daunting as well, but I knew the change would only become more difficult as time passed. I began looking at platforms that would allow for the easy creation of a social networking site and would be fundamentally customizable, since the entire site was oriented around each user’s target and native language–not the type of criteria that comes “out of the box” with pre-made sites. I also wanted to use something that was open source and had an active user base. This would ensure that I could obtain the software for free, be able to make any necessary changes, and hopefully be able to rely on future upgrades and avoid having to switch platforms in the immediate future.
I looked at Elgg and Joomla, but I finally settled on Drupal. Both Drupal and Joomla have an active user base and are module-based, which allows the creator of a site to customize it by adding functions created by the community. When I made a list of the additional functions I needed to recreate my current site along with the groups, blogs, and event creation, I felt Drupal provided the best collection of modules. And since we already had a previous version of Drupal running on campus, there was the possibility of help from colleagues if I ran into trouble.
The transition of the .NET site to Drupal, including content and the additional functions, took me about two months, which was better than I had expected. Until this point, not only was I unfamiliar with Drupal, but I had also never written any code in PHP, used MySQL, or worked with Linux. Most of my time was spent sifting through possible modules and testing their functionality along with configuration settings. In the end, I added fewer than ten lines of custom code to the site. The rest of the changes were made by uploading modules and selecting configuration settings on a form. It would have taken at least twice as long for me to have created the additional functionality from scratch in .NET on the old site, and now, with my understanding of Drupal and its parts, additional changes will come much faster. Once I had created the new site, I was also able to find modules that allowed me to import the content from the old site. When the new site went live, I had some performance issues, since I was unfamiliar with PHP caching and with diagnosing slow queries in MySQL, but these proved to be relatively minor issues. Both have since been resolved as our server group has learned more about Linux and I have gained additional experience working with a MySQL database.
The new Mixxer site in Drupal has been a great success. Traffic is up 66% from this time a year ago, and we have doubled our user base. Because professors can now organize events on their own, the number of courses that integrate exchanges has grown from a handful each semester to fifteen or twenty. At the same time, I have been able to reduce the time I spend organizing and starting each exchange. I would recommend Drupal to anyone looking for a system that allows users to organize themselves and collaborate on a given subject.
Foreign Language Instructors Interested in Using the Mixxer
The Mixxer is open to any language learner or teacher. Teachers are asked to create an account at www.language-exchanges.org and then send me an email requesting a teacher account. Once registered, they can search for other classes interested in having an exchange, or set up an event for their own class by inviting individual students to contact their students during the class hour. The process for doing so is simple. The instructor creates a page describing the time and topic of the exchanges, and then invites Mixxer users who match the language profile. It is recommended that these invitations be sent at least one week in advance. The message includes a link where native speakers can register using their username, Skype name and e-mail address. When enough native speakers have registered, the teacher closes the registration. One day before the event, an email is automatically sent to those who have signed up, confirming the exchange and instructing participants to send a Skype text message to the instructor’s Skype address five minutes before the event. On the day of the exchange, these Skype names are collected and distributed to the students as they enter the computer lab. If the number of students and native speakers does not match perfectly, students can participate in a Skype conference call, for example with two students and one native speaker. More detailed instructions on setting up an exchange are posted on the site.
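For anyone curious how such a workflow might be automated, here is a simplified sketch in Python; the data structures, function names and message text are illustrative assumptions, not the Mixxer’s actual code:

from dataclasses import dataclass, field

@dataclass
class Event:
    target_language: str   # the language the class is studying
    seats: int             # number of students in the class
    registered: list = field(default_factory=list)

def invitees(event, users, students_native_language):
    # Invite native speakers of the class's target language who are
    # themselves studying the students' native language.
    return [u["username"] for u in users
            if u["native"] == event.target_language
            and u["target"] == students_native_language]

def register(event, skype_name):
    # The teacher closes registration once the class size is reached.
    if len(event.registered) >= event.seats:
        return False
    event.registered.append(skype_name)
    return True

def send_reminders(event, mailer):
    # Run one day before the event: confirm, and ask each participant
    # to Skype-message the instructor five minutes before the start.
    for name in event.registered:
        mailer(name, "Your exchange is tomorrow; please send the "
                     "instructor a Skype message five minutes early.")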
I also recommend that students produce something from these exchanges, such as a summary of their conversation. One option is to have students send their reports directly to the instructor; however, it is also possible to have them report on their exchanges via the blogs within the Mixxer. Students and their partners are then able to read the reports and provide comments that frequently encourage further interaction outside of class. To encourage this interaction, students may write a “thank you” message to their partner as well, so that the partner can find the student’s profile within the site. Once they have made this initial contact on the site and become Mixxer “friends,” each will be notified when the other posts additional content. After several exchanges, students become members of this virtual community, and their relationships extend beyond the classroom. We have had students maintain contact with their language partners over several semesters, or even a couple of years until a semester abroad, when they were finally able to meet in person.
Some professors also have their more advanced students conduct content-based interviews with native speakers. In this case, students sign up as individual learners on the Mixxer. They then contact native speakers about doing an exchange and set up a time to meet. It is important that students contact more than one native speaker, do so well in advance of their project’s due date, and fulfill their promise to give their partners equal time practicing the partners’ target language.
Conclusion
Our principal goal in running the language exchanges was to increase the amount of verbal practice in the target language. This was clearly a success, with students fully immersed in the target language for roughly twenty-five minutes each class. But we were also interested in additional benefits of the exchanges, particularly their effect on student motivation in the classroom and on interest in study abroad opportunities. In fall 2008, we surveyed eight classes and 103 students who had used the Mixxer for language exchanges. Results were quite positive. Roughly 90% of students stated that they enjoyed the exchanges and found them useful. Equally encouraging was the 89% positive response rate to the question of whether their confidence in speaking had increased. Finally, and somewhat of a surprise, 30% of students said the exchanges made them more likely to decide to study abroad; 70% said the exchanges had no effect, and none indicated a negative response. Professors also reported anecdotal evidence that students using the Mixxer were more motivated, knowing they would be applying what they learned in the classroom to a “real-life” situation. Several students in each class maintained contact with their partner outside of class. In some cases, these additional exchanges amounted to several extra hours of practice in the target language each week. Also of note, two students this year reconnected with their former Skype language partners while studying abroad.
The effect these exchanges have on the format of class instruction depends mostly on their frequency. For the Japanese department, the exchanges form the communicative goal for each chapter: students have a language exchange every two weeks, with questions and conversational topics drawn from the material they have learned in that chapter. Other languages, such as Spanish and German, have only two or three language exchanges per semester, and the exchanges often serve as supplemental cultural components of the course.
This coming year I hope to extend the language exchanges from the roughly fifteen to twenty intermediate courses to include more beginning-level courses. We organized exchanges for second-semester Spanish students this year, and the professors were surprised not only by how well the students did, but also by their reactions to the exchanges. The faculty had feared some of the students would feel overwhelmed and frustrated by the experience; instead, the students asked immediately afterward about future exchanges.
I am also hoping to increase the number of professors from other institutions involved in exchanges. Outside of Dickinson, several colleges and universities have used the system to find partner classes, but only Oberlin College, Franklin & Marshall and Illinois Wesleyan use the site regularly. This is partly due to lack of awareness, but an improved interface and a better description of how to set up language exchange events–improvements planned for this summer–should also help. Anyone interested in connecting their language students with native speakers should feel free to contact me at bryantt@dickinson.edu.
Plato’s Allegory of the Cave in Second Life
by Jack Green Musselman and Jason Rosenblum
Pause for a moment and imagine that your life consists of shadows on the wall of a cave, though to you “cave” just means the world you see at the bottom of a long tunnel. You know nothing of the world outside since you are chained next to others who are sitting beside you on a rock that faces the cave wall. There’s a fire burning behind you, but you don’t know that it’s there. There are figures outside who stand in front of the fire at the mouth of the cave–they’re the ones whose shadows are in front of you. But, you don’t know what the figures are–or that they even exist. Imagine you could free yourself and walk outside. What would you see? What would you think of your life inside the cave? What would you say to those you left behind? Would they believe you if you told them they still lived in a cave? What would you think of the world, once you were free to look around? Now imagine that you are taking a philosophy class. What if you could really come one step closer to experiencing Plato’s Cave? What if you (or your virtual representation) could play the role of someone in the cave, see the shadows, walk outside and reflect on the experience?
Figure 1. Outside of the Second Life cave, Plato’s Allegory of the Cave, St. Edward’s University
The Cave allegory is the famous story from The Republic.1 It is often used in philosophy classes to represent the state of ignorance we experience until we are educated in college, leaving our own personal caves and learning about the world around us. While the text alone is a useful teaching tool, Second Life (SL) lets students experience a virtual cave, escape, and then try to convince others that the world outside is brighter than they think. Students can be asked to describe what they missed by not leaving the cave, why they will not return to live in that dark and limited world, and what else in their lives is like living in ignorance in the cave. The point is to help students realize that we all live in caves of ignorance or half-truths unless and until we get up and go out to see and examine how things in other places and walks of life really are.
Why Second Life?
SL is often described as a multi-user virtual environment (MUVE)2 or virtual world. According to Dieterle and Clarke, MUVEs are virtual environments that allow for synchronous communication among multiple people, interaction with “digital artifacts” in a virtual context, and experience “modeling and mentoring” real-world problems.3 From a teaching and learning perspective, SL is also a learning environment that offers what Bransford et al. describe as a “system of interconnected components” that provide a learner with a knowledge- and assessment-based focus.4 Our application (known as a sim) of Plato’s allegory requires learners to challenge their existing attitudes and beliefs as they participate, while simultaneously receiving expert guidance and, outside of the cave, having opportunities for formative assessment. This application of the Cave allegory therefore enables the instructor to construct a SL environment that is both learner- and knowledge-centered.
Instructor interaction is critical to the student experience. From the time students emerge in the cave, they follow a preset instructional sequence and are guided through their experience in and out of the cave. This type of guided instruction is not only active5 but also experiential. According to Kolb, “learning is the process whereby knowledge is created through the transformation of experience. Knowledge results from the combination of grasping experience and transforming it.”6 It is our intention that students understand Plato’s allegory as participants in it, while being guided through a process of examining, and even reshaping, their perspectives on life.
Role-play is critical to student success in this application of SL. Research into enactive roles to foster argumentative knowledge construction in SL reveals that students who engage in virtual discussions “identify closely with the character they are enacting within the SL virtual environment and are better able to develop multiple perspectives…”7 To support students through the role-play process, we repeatedly prompt them to reflect on their experience. Moreover, Scanlan and Chernomas suggest that the process of reflection is cyclical, starting with an awareness of the present that, through critical analysis, connects the present with the past and future.8 As they play the role of cave residents, students have the opportunity to reflect on their life chained inside the cave while looking at shadows on the wall, and once freed they will look back on their experience in the cave and speculate as to what those still living inside think of life on the outside.
Figure 2. Philosophy students reflecting on their Cave experience
Students who learn the Cave allegory can, of course, imagine how experiences in the world are like living in ignorance, usually by projecting themselves into the lives of the prisoners that Plato paints in The Republic. For example, students who read the Cave allegory might think that until they examined their religious faith in college courses, they were comparatively uninformed or not yet really enlightened about how rich and robust that faith could be, much like Plato’s characters until they climb out of the cave and see the light. There is also empirical evidence to suggest that adding Second Life experiences as instructional supplements to academic texts improves learning, while supporting “multiple modes of information” delivery.9 Thus, we propose teaching the allegory in an academic course (to a group of students) while adding a guided reflection in a Second Life Plato’s Cave (as an option for some students) to determine if there is any difference in learning as measured by formative pre-test and post-test written assignments.
Learning and Assessment in Liberal Arts
Such a formative assessment of learning is not limited to philosophy classrooms. St. Edward’s University is a liberal arts college with a mission to teach students “critical and creative thinking as well as moral reasoning, to analyze problems, propose solutions and make responsible decisions.”10 Since the creation of universities in Europe, critical thinking and moral reasoning have been taught in philosophy courses, given their emphasis on logic, ethics and the history of ideas. In some courses, Plato’s Cave has been used as an allegory for how reason can enlighten the mind and reveal the truth behind one’s everyday experience. Many philosophy instructors no doubt teach Plato’s Cave by comparing it to the way we experience film in a dark theater, pointing out that the real objects in the film are not actually present but are rather pictures on the screen. However, Jack (who teaches philosophy courses) wanted a more robust account of Plato’s Cave that would bring it to life beyond such straightforward textual and logical description, making the Cave’s powerful philosophical point in a more vivid, and thus more effective, way.
As a result, in our discussions about what Second Life could offer teachers, we focused on Plato’s Cave. When Jason (an IT staff member specializing in research and development of emerging educational technologies) suggested creating Plato’s Cave in SL and using it to provide a more sophisticated, first-hand visual experience of what the cave feels like for those trapped inside, it seemed like a perfect fit. Recent reports published by the Pew Internet & American Life Project support this approach, for young adults are likely familiar with both social media11 and gaming applications.12 It is plausible that familiarity with these tools implies familiarity with the experience of “virtual identity,” both online and in game-play, making it easier for students to identify first-hand with the prisoners in Plato’s Cave. We hope this association will provide a rich comparison to off-line states of ignorance and truth in a way that lectures and discussion, by themselves, do not.
Figure 3. Philosophy students in class engaged in the Cave simulation
Starting in fall 2010, all twenty-eight students in Jack’s ethics course will read and discuss Plato’s Cave early in the semester. All students will write a one-page, double-spaced paper addressing both what they take to be Plato’s main point in the allegory and how the class will have (by that date) enlightened them, getting them out of their cave (or not) on some ethical issue. Toward the end of the semester they will re-write that paper. Before writing the second paper, however, half of the students, selected at random, will obtain Second Life accounts and training with their new SL avatars and, after signing the appropriate institutional review board consent forms, will have a guided session with the instructor in the SL cave. The instructor will then use the same grading rubric to assess every student’s two Cave papers. The four-part rubric scores each paper, on a one-to-five scale per category, for a clear and narrow thesis, accurate use of moral theories, logical argumentation, and clear and grammatical English.
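Tabulating the planned comparison is straightforward. A brief sketch with invented scores (the study has not yet been run, so the numbers below are placeholders):

CATEGORIES = ["thesis", "moral theories", "argumentation", "English"]

def total(scores):
    # Sum the four category scores (each 1-5, so 20 points maximum).
    return sum(scores[c] for c in CATEGORIES)

def mean_gain(papers):
    # Average improvement from the first paper to the second.
    gains = [total(p["second"]) - total(p["first"]) for p in papers]
    return sum(gains) / len(gains)

papers = [  # two hypothetical students
    {"first": dict.fromkeys(CATEGORIES, 3), "second": dict.fromkeys(CATEGORIES, 4)},
    {"first": dict.fromkeys(CATEGORIES, 2), "second": dict.fromkeys(CATEGORIES, 4)},
]
print(mean_gain(papers))  # gains of 4 and 8 points -> 6.0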
With these students’ permission, their essays will be part of a proposal for a presentation at the 2012 biennial conference of the American Association of Philosophy Teachers (AAPT). Students taking part in the cave exercise will also be asked to provide qualitative, formative evaluations of how well the SL cave did (or did not) serve course objectives.
Working and Sustaining Project Development
This project was built on the St. Edward’s University SL space, located on Teaching 3, in the New Media Consortium (NMC) SL space. (We also recently learned of a different project on the Cave, developed by faculty from the University of Massachusetts on the Caerlon sim in Second Life, that was available in spring 2009.)13 Our Cave simulation was inspired by other SL simulations such as Dante’s Inferno, Genome Island, the Edgar Allan Poe House and the Exploratorium, but in our case development was directed at improving instruction in classes that teach the Cave, especially in philosophy but in other disciplines, too.
All of these SL simulations demonstrate how immersive experiences can be built to support learning. Interacting with the “conversational rocks” in Dante’s Inferno or conducting an “experiment” with genetic crosses in Genome Island are examples of this type of interactivity. An overarching vision guiding the development of the SL cave was building a virtual version that presented a believable environment in which students could interact and participate in the Cave simulation. We hope that kind of environment will spark the imagination and promote the sense of fantasy that supports an immersive learning experience.14
Three main development challenges came to mind: where would we build the cave on the St. Edward’s University sim? How could we build a cave that was prim-efficient (a prim is the basic building block of SL objects and is limited in size)? And finally, what would be needed to make the sim interactive? Jason chose to double the size of our SL plot to 8192 square meters and devote half of the space to the cave, building it above ground using sculptys (prim objects shaped using an image map). The rocks that make up the cave wall and the structure inside are made from layers of sculptys. These basic sculpty building blocks were purchased and then manipulated with standard SL building tools. This approach required far fewer prims than building the cave from standard prims alone.
Figure 4. Cave components designed with sculptys
Using this approach, Jason designed a cave that was theoretically large enough to hold a class, with seating for seven students. Animation “poseballs” were purchased and attached to a virtual “chain” that ran the length of the rock bench, positioned in front of the cave wall.
Figure 5. Rock bench inside the cave
Participants interact by “sitting” on a poseball, which places their avatar in a seated position with hands pinned behind the back. The avatar’s view is then directed forward, facing the “shadows” that appear on the wall.
Figure 6. Using a poseball to “sit” in the cave
The shadows were constructed from public-domain images of President Obama and others in his administration. The images were first edited in Adobe Illustrator, and a particle animation technique was used to randomly display any one of several images. Several “emitters” were placed in SL to spray the shadows along the wall of the cave. A large campfire object was then purchased and placed at the entrance of the cave.
Figure 7. A particle emitter was used to display cave shadows
The process of researching and developing the cave in SL took Jason (a part-time IT staff member) roughly 500 hours from beginning to end, spread out over just under a year. Software costs were nominal. The research and development process consisted of brainstorms, proposals, research into existing SL sims, and climbing a technical learning curve. Basic knowledge of SL land management, building techniques, and scripting using Linden Scripting Language (LSL) was required to understand how objects and animations in SL are built and managed. However, we anticipate that ongoing technical costs to maintain the sim (aside from land lease fees) will be nominal unless new features are added.
Challenges of Integrating the Cave Into the Classroom
One of the main challenges will involve recruiting and training traditional undergraduates for the exercise. Our ethics course is a required class in our general education curriculum and may be the only ethics class students ever take. As such, the reading and writing requirements in moral theory and applied ethics can be demanding. Other philosophy classes could face even more challenges if they cover classical, modern and contemporary texts in ethics, history, epistemology and metaphysics. In his course, Jack does not want to overtax ethics students and will therefore select, at random, fourteen students in one class to train in SL and learn in the SL Plato’s Cave, offering as incentives free avatars in SL through the NMC, a more robust learning experience, and one point of extra credit on their paper. The University’s expert IT training staff will also be available to help orient students to Second Life. It may well be difficult for even half of the fourteen to train at the same time, so some may do so one-on-one with IT staff members. The students will also be asked not to enter the SL Plato’s Cave until they take part in one of the two guided instruction sessions during a class period–one for each group of seven sitting in the cave–prior to the deadline for the second paper.
Outcomes and Evaluation
Outcomes from one trial exercise in March 2010 with graduate students in a curriculum course suggest that the SL teacher should provide explicit, “play-by-play” instructions from start to finish. Teachers should begin by asking each avatar to sit facing only the shadows in the cave. Students should then be guided through each step of the role play to get the most out of it.
Another trial run with four students in a spring 2010 ethics course suggested something similar. After two participants had forgotten their SL passwords and three could not get their headphones to work, Jack ran a 25-minute exercise using text chat for Socratic dialogue. Instead of only facing the shadows to experience life in the cave, some students also opened other computer programs or read hard-copy textbooks during the trial run. Thus, instructors might want to create a short exercise for participants to complete as they enter the cave to keep them on task.
Figure 8. Philosophy students participating during the cave exercise
Next fall the students’ first and second Plato’s Cave papers will be compared, using the same rubric, to help determine whether the SL experience improves learning. These results will not, strictly speaking, be the product of a large-scale, rigorous, valid, and reliable statistical study, in part because of the small number of students participating. That is, fourteen undergraduate students out of twenty-eight in one class are not properly representative of the 3,537 traditional students, average age 20.4, enrolled at our college.15 Likewise, as researchers have noted in similar studies, SL students may do better simply because they spend more time thinking about the written exercise in the first place.16 The SL students may also learn better because of the shared, collective experience of the cave, an experience that offline students do not have.17 Thus, it may not be the SL cave per se, but other causes, like extra time or social learning, that better explain future results. Other studies might include larger, more randomly generated samples, characterized by demographic data from collegiate major to GPA, to provide more scientific explanations of outcomes.
Apart from such studies, we hope instructors who find Plato’s Cave an instructive allegory for explaining how people travel from ignorance to truth will also find the SL cave an engaging and vivid landscape for painting Plato’s allegorical picture in a way that is worth a thousand words. For teachers who want students to understand the point of Plato’s text and also to feel that they are making a personal trip from darkness to light, we hope the SL cave provides a powerful pedagogical tool for how education can transform our views of the world. To that end, access to the St. Edward’s University space in Second Life is not restricted and the Cave simulation is open to all teachers simply by contacting us.18
Notes
1. Plato, in The Republic Book VII, 514a-520a. [return to text]
2. John Waters, “A ‘Second Life’ For Educators,” T H E Journal 36, no. 1 (2009): 29. [return to text]
3. Edward Dieterle and Jody Clarke, “Multi-user virtual environments for teaching and learning,” in Encyclopedia of Multimedia Technology and Networking, 2nd ed., ed. M. Pagani (Hershey, PA: Idea Group, 2005), 1. [return to text]
4. J.D. Bransford, A. L. Brown, and R. Cocking, How People Learn: Brain, Mind, Experience, and School, ed. J.D. Bransford, A. L. Brown, and R. Cocking (Washington, D.C.: National Academy Press, 1999), 122. [return to text]
5. Charles C. Bonwell and James A. Eison, “Active Learning: Creating Excitement in the Classroom,” ERIC Digest (Washington, D.C.: ERIC Clearinghouse on Higher Education, 1991), 3. [return to text]
6. David Kolb, Experiential Learning: Experience as the Source of Learning and Development (Prentice-Hall, 1984), 41. [return to text]
7. Azilawati Jamaludin, Yam San Chee, and Caroline Mei Lin Ho, “Fostering argumentative knowledge construction through enactive role play in Second Life,” Computers & Education 53, no. 12 (2009): 327. [return to text]
8. Judith Scanlan and Wanda Chernomas, “Developing the reflective teacher,” Journal of Advanced Nursing 25, no. 6 (1997): 1140. [return to text]
9. Phillip C. Wankat and Frank S. Oreovicz, Teaching Engineering (Knovel, 1993); Joel S. Greenstein, Harskin Hayes, Jr., Benjamin R. Stephens, and Chris L. Peters, “The Effect of Supplementing Textual Materials with Virtual World Experiences on Learning and Engagement,” Proceedings of the Human Factors and Ergonomics Society 52nd Annual Meeting (2008): 5. [return to text]
10. “Mission Statement: St. Edward’s,” http://www.stedwards.edu/aboutus/mission.htm (accessed Mar. 15, 2010). [return to text]
11. Amanda Lenhart, “Social Media and Young Adults” (2010), http://www.pewinternet.org/Reports/2010/Social-Media-and-Young-Adults.aspx. [return to text]
12. Amanda Lenhart, Sydney Jones, and Alexandra Macgill, “Adults and video games” (2008), http://www.pewinternet.org/Reports/2008/Adults-and-Video-Games.aspx. [return to text]
13. Georg Janick and Gary Zabel, “Where in Plato’s Cave is Second Life?” (2009), http://openhabitat.net/wp-content/uploads/2009/09/SPH090503.pdf (accessed 4/12/2010). [return to text]
14. Paul Toprac, The Effects of a Problem-Based Learning Digital Game on Continuing Motivation to Learn Science, Curriculum and Instruction (Austin: University of Texas, 2008). [return to text]
15. “St. Edward’s University Facts and Figures,” http://www.stedwards.edu/aboutus/facts.htm, (accessed Mar. 18, 2010). [return to text]
16. Joel S. Greenstein, Harskin Hayes, Jr., Benjamin R. Stephens, and Chris L. Peters, “The Effect of Supplementing Textual Materials with Virtual World Experiences on Learning and Engagement,” Proceedings of the Human Factors and Ergonomics Society 52nd Annual Meeting (2008), 622. [return to text]
17. Ibid., 623. [return to text]
18. To visit SEU’s Cave in Second Life: Navigate to: http://maps.secondlife.com/secondlife/Teaching%203/83/233/24 and click “Visit This Location” to launch Second Life and teleport to the sim. [return to text]
The ERIAL Project: Ethnographic Research in Illinois Academic Libraries
by Andrew Asher, Lynda Duke and David Green
Librarians and teaching faculty often think they know how students conduct their research, and many have specific ideas about how students ought to conduct it. However, with the increased ability to access information online and the corresponding changes in libraries, the question of what actually happens between the time a student receives a class assignment and the time he or she turns in the final product to a professor is especially compelling–and not as straightforward as it first appears.
Two years ago, five Illinois institutions (Northeastern Illinois University (NEIU), DePaul University, Illinois Wesleyan University (IWU), University of Illinois at Chicago (UIC), and University of Illinois at Springfield (UIS)), began working together to investigate this issue. The Ethnographic Research in Illinois Academic Libraries (ERIAL) Project was organized around the following research question:
What do students actually do when they are assigned a research project for a class assignment and what are the expectations of students, faculty and librarians of each other with regard to these assignments?
The primary goal of this study is to trigger reforms in library services to better meet students’ needs. Traditionally, academic libraries have designed library services and facilities based on information gleaned from user surveys, usage data, focus groups, and librarians’ informal observations. While such tools are valuable, this project employed more user-centered methods to form holistic portraits of student behavior and needs, directly resulting in changes to library services and resources.
Genesis, Planning and Development of the Project
In 2007, while attending the Library and Information Technology Association National Forum, Dave Green, Associate University Librarian for Collections and Information Services in the Ronald Williams Library at Northeastern Illinois University, had the opportunity to hear Dr. Nancy Foster and her colleague, David Lindahl, make a presentation on the ethnographic studies conducted at the University of Rochester Libraries.
In February of 2008, the Illinois State Library, a Department of the Office of Secretary of State, announced the availability of Library Services and Technology Act Grants, using funds provided by the U.S. Institute of Museum and Library Services. Based on the intriguing work done by Dr. Foster and her colleagues, Green was eager to pursue an ethnographic study of NEIU students. After a flurry of email exchanges and phone conversations, Dr. Foster agreed to advise on the grant development, as well as act as a consultant for its execution.
With approval from the NEIU library dean, Green began working with the Metropolitan Library System in Chicago and Dr. Foster on a grant proposal. It became obvious that having several institutions partner in the research would make the proposal more competitive and greatly enrich the study. Green contacted colleagues at four universities (DePaul, IWU, UIC and UIS) and they agreed to participate in the project. Each university would have its own research team, consisting of a lead research librarian and two to five other individuals, the majority of whom would be librarians. The submitted proposal included a funding request of just under $180,000.
Initially, the most challenging aspect of the project was crafting a project schedule based on only nine months of funding. The tight timeline created two potential choke points for the project. The first was trying to hire two full-time anthropologists by mid-November, only six weeks after the beginning of the grant. The second challenge was getting the institutional review board (IRB) approvals in a timely manner. From previous multi-institution projects, Green knew that the timing of IRB approvals is sometimes unpredictable. As we awaited a decision regarding the funding of the proposal, we turned our attention to these two concerns.
In order to hire the anthropologists by the target dates, Dr. Foster helped us devise several pre-grant tactics. During the summer, we sent announcements to relevant graduate departments at universities in the Midwest, announcing the possibility of two full-time positions in late fall, contingent upon confirmation of funding. In addition, because no activities could be funded by the grant if they occurred prior to October 1st, Green requested funding from the NEIU dean to place advertisements for the positions in September, in case we received advance notice that the grant would be funded. Even with these tactics in place, six weeks to interview potential candidates and bring them into the project was a tight schedule.
In late summer, the Illinois State Library contacted Green asking if parts of the grant proposal could be modified, based on reviewers’ comments. This signaled to the team that the proposal had a high chance of being funded, and in late August we were awarded the grant, with funding beginning on October 1st. A week after the grant formally began we started reviewing applicants for the two resident anthropologist positions. Dr. Foster reviewed the applications, identified the most promising candidates, and conducted telephone interviews with a handful of applicants. The top candidates were then invited to an in-person interview at the campuses of the hosting institutions (IWU and NEIU).
As a result, two excellent anthropologists, Dr. Andrew Asher and Susan Miller, were hired and we were able to meet our first project deadline. The anthropologists’ first major goal was to help each research team develop its IRB application. There was a lot of groundwork that needed to be done to prepare for the research, but no research could begin until IRB approval was granted. As anticipated, the process went more smoothly for some teams than others.
Project Implementation
The grant proposal included a detailed project timeline and organizational structure. Each library had a research team consisting of several librarians, one designated as the lead research librarian for the group. In addition there was a coordinating team which consisted of the project manager and the two resident anthropologists, with Dr. Asher taking responsibility for the integrity of the project’s research design and data collection methodologies as the lead research anthropologist. Miller became the resident anthropologist for the three Chicago-area libraries, while Dr. Asher became the resident anthropologist for the two central Illinois libraries.
Figure 1. Project organizational structure
One of the major structural goals in the project was to streamline administration. The easiest way to do this was to centralize budgetary and reporting functions. All hiring, billing, equipment purchase, contracts, etc. were done by NEIU. Nothing was subcontracted to the partnering institutions. This significantly reduced the amount of potential bureaucratic gridlock for everyone.
On the other hand, managing the research process was ultimately in the hands of the two anthropologists working with the lead research librarians of each research team. The anthropologists were responsible for coordinating the efforts at the five institutions, maintaining a consistent methodological core to allow for cross-institutional analyses while simultaneously helping each institution explore areas unique to it. In a sense, then, ERIAL consists of six projects: the shared methodological core plus five institution-specific studies.
Figure 2. Project research structure: five studies with a common core
The structure of the research was designed so that no one institution depended on the research of another. Thus, if an institution found that it could not continue to participate, the larger project was not threatened. In fact, one institution was unable to receive IRB approval in a timely manner, and if the project had not been awarded a second year of funding, that institution would not have been able to conduct any research.
By the end of January 2009, four months into the grant, it became clear that designing, implementing, and analyzing the results of the methodologies for a project with the size and scope envisioned by the research team would require work to continue beyond the June 30th deadline. In February, Green began conversations with the State Library about the possibility of a second year of funding. After submitting a second proposal, in March of 2009 we received notification of a second year of funding, this time for $160,000.
Project Management and Coordination
Even though the ERIAL participants are geographically scattered, the primary means of communication is face-to-face, supported with telephone conferencing. During the course of a month, there are on average about thirty regularly scheduled meetings:
a) Each institution’s research team meets on a weekly basis with their respective anthropologist.
b) The coordinating team meets once a week (the project manager and the resident anthropologist for the northern libraries meet in person and the resident anthropologist for the central libraries participates by phone).
c) The two resident anthropologists have a conference call once a week.
d) The coordinating team meets once a month with all the lead library researchers (the Chicago participants meet in person and the central teams participate by conference call).
These regularly scheduled meetings provide the backbone of communication for coordinating the grant efforts. Of course, in addition to the above activities, there is considerable ad hoc electronic and phone communication. To facilitate the work of the research teams, we used a secure Web-based project management and collaboration tool called Basecamp. We also found the Web-based service Dropbox useful for document sharing between sites (although we were sometimes frustrated by its weak version control). ConferenceCaller proved to be an inexpensive and reliable telephone conferencing service.
Although we had originally planned to rely on video conferencing for communication between remote sites, we found connecting different platforms with varying degrees of reliability to be unsatisfactory. During the first year of the grant, all team members met in Chicago for extended multi-day training sessions (in January and May of 2009). Because we had spent considerable time together in person working on various training activities, we could easily connect faces to voices and found phone conference calls to be entirely satisfactory and more efficient.
Research Methods
In order to obtain a holistic portrait of students’ research practices and academic assignments, the ERIAL Project developed a mixed-methods approach that integrated seven qualitative research techniques and was designed to generate verbal, textual, and visual data.1 While all five participating institutions committed to a core set of research questions and shared research protocols, the research teams at each university chose which methods would be best suited to their needs. The methods utilized by the five ERIAL institutions are summarized in Table 1 below.
| Method | DePaul | IWU | NEIU | UIC | UIS | Total |
| --- | --- | --- | --- | --- | --- | --- |
| Ethnographic Interviews | 57 | 54 | 56 | 61 | 55 | 283 |
| Photo Journals | 11 | 13 | 10 | 10 | 10 | 54 |
| Student Mapping Diaries | N/A | 24 | 10 | N/A | N/A | 34 |
| Web Design Workshop Participants | N/A | 49 | 44 | N/A | N/A | 93 |
| Research Process Interviews | N/A | N/A | N/A | N/A | 19 | 19 |
| Cognitive Maps | 37 | 44 | 37 | N/A | 23 | 141 |
| Retrospective Research Paper Interviews | N/A | 9 | N/A | N/A | N/A | 9 |
| Total | 115 | 223 | 167 | 81 | 107 | 693 |
Table 1. Research methods used at the five ERIAL institutions

The ERIAL Project’s principal methodology was a 45-60 minute ethnographic interview, conducted with students, librarians, and teaching faculty at all five universities. These interviews followed a common structure and utilized open-ended questions intended to elicit specific examples describing students’ experiences undertaking research assignments, as well as how librarians and faculty members interact with students during the research process. In total, 161 students, 75 teaching faculty, and 48 librarians participated in these interviews.
Two additional interviewing methods focused on students’ research practices: the research process interview and the retrospective research paper interview. The research process interview asked students to allow an ERIAL anthropologist to accompany them while they conducted research for an assignment they were currently working on. Participants were asked to proceed with their research as normal and to reflect aloud about the processes they used to locate resources and materials. This activity was one of the most successful techniques of the ERIAL Project and was especially useful in gathering firsthand data about the approaches students employ to find information. In the retrospective research paper interview, students were asked to give a step-by-step account of how they completed a previous research assignment while drawing each step on a large sheet of paper, thus producing both a narrative and a visual account of the assignment from beginning to end.
To gain a better understanding of everyday student life, the ERIAL Project utilized photo journals and mapping diaries. In the photo journal activity, students were given a digital camera and a list of photographs to take, including views of work spaces, communication and computing devices, books, and favorite work/study locations. These photographs were then used as prompts in an interview that addressed the processes and tools students used to complete their assignments. In the mapping diaries activity, students were given a campus map and asked to record their movements over the course of a day, noting the times and places they visited and their purpose for going there. Students were then asked to participate in a follow-up discussion of their map in which they were asked a series of explanatory questions about locations they visited.
In order to investigate the characteristics that define students’ “mental image” of their university’s library, the research teams utilized a cognitive mapping activity in which participants were asked to draw a map of the library from memory. Students were given six minutes to complete the task, and asked to change the color of their marker every two minutes, an approach that provided both spatial and temporal data about how students conceptualize library spaces. Students completed this activity away from the library itself, so that the results would not be affected by visual cues.
Finally, faculty, students, librarians, and library staff participated in Web site design focus groups, in which participants were asked a series of brainstorming questions to generate the features that would be included on a “perfect” library Web site. Participants were then asked to design a mock-up of a library homepage and to describe why they chose particular design elements.
The data collection for all institutions was completed in February 2010, with just under 700 data collection events. All the research activities were recorded and transcribed, and the transcripts were then content-coded using Atlas.ti, a qualitative analysis software package. The results were then analyzed for themes and patterns by the five institutional research teams. For institutions interested in the details of this process or in conducting similar investigations, the ERIAL Project is developing a methodological toolkit which describes the development of an ethnographic study from start to finish. The toolkit will be available in June 2010. For more information, see the project’s Web site, www.erialproject.org.
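For readers unfamiliar with content coding, the toy sketch below illustrates the basic bookkeeping the approach involves: tallying how often analyst-assigned codes occur across a set of interview transcripts. It is not the Atlas.ti workflow; the directory layout and the inline @code{...} tagging convention are invented purely for illustration.

```python
import re
from collections import Counter
from pathlib import Path

# Hypothetical convention: one plain-text transcript per data collection
# event, with analyst-applied codes embedded inline as @code{...} tags.
CODE_TAG = re.compile(r"@code\{([^}]+)\}")

def tally_codes(transcript_dir: str) -> Counter:
    """Count how often each qualitative code appears across all transcripts."""
    counts = Counter()
    for path in Path(transcript_dir).glob("*.txt"):
        counts.update(CODE_TAG.findall(path.read_text(encoding="utf-8")))
    return counts

if __name__ == "__main__":
    # Print the ten most frequent codes, a first step toward spotting themes.
    for code, n in tally_codes("transcripts").most_common(10):
        print(f"{code}: {n}")
```

A package like Atlas.ti adds much more on top of this simple tallying, including code hierarchies, analytic memos, and support for multiple coders.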
Summary Findings
At the beginning of the ERIAL Project, we expected to find students struggling with the technology of library searches: the various and fragmented databases and interfaces contained in any university library. However, we found that once students had some training with the library’s interfaces, they were not generally struggling with tools and technology, which, with some exceptions, worked well and were reasonably user-friendly. Instead, we observed widespread and endemic gaps in students’ understanding of the basic concepts of academic research, including: (1) an inability to correctly read and understand citations, (2) little or no understanding of cataloging and information organization systems, (3) no organized search strategies beyond “Google-style” any word, anywhere searches, and (4) poor abilities in locating and evaluating resources (of all types).
Almost without exception, students exhibited a lack of understanding of search logic, how to build a search to narrow or expand results, how to use subject headings, and how various search engines (including Google) organize and display results. As one student remarked while searching library databases, “Apparently you don’t have much on Rock and Roll,” not realizing that if she changed her search term (e.g., to “rock music”), she would have encountered excellent sources for her assignment. Similarly, another student lamented the dearth of information while searching library databases for information about women in 1940s-era baseball–all while her mouse was hovering over the subject heading “All-American Girls Professional Baseball League.”
Although technological solutions that provide more intuitive research tools might allow instructional focus to shift from mechanical problems to conceptual issues, such solutions are still unlikely to effectively address students’ needs. In fact, the easier information access and more robust search capabilities provided by tools such as federated search, Google Scholar, or Web-scale discovery services may actually compound students’ research difficulties by enabling them to become overwhelmed even more quickly by a deluge of materials they are unprepared to evaluate.
Addressing the shortcomings in students’ information literacy and critical thinking abilities will therefore require broader educational and curricular solutions in which the library is a key player within a multifaceted approach that involves many university stakeholders, including students, faculty, and administrators, as is illustrated in the following example from the ERIAL study.
Why Don’t Students Utilize Librarians?
While the majority of students we interviewed struggled with one or more aspects of academic research, very few sought help from a librarian. In fact, one of the most striking aspects of the ERIAL study was the near-invisibility of librarians within the academic worldview of students, a finding symptomatic of students’ general belief that librarians do not possess the disciplinary expertise necessary to provide sufficient assistance with research assignments. When asked if she had ever asked a librarian for help with a paper, a sophomore in international studies replied, “Not really actually. I’ve never done that. I always assume librarians are busy doing library stuff, and it’s just not the first thing that pops into my head when I think of a librarian, like helping with papers or paper writing.”
Confusion about what librarians do and who and where they are hinders students from asking questions and obtaining the help they need. A senior psychology major noted, “I don’t know where the librarians here are. There’s someone that sits at the information desk, and I don’t know if he’s a librarian. I see him help people with research a lot so I think he is. But I would never go to [a librarian’s] office and knock on their door and say, ‘help me out’ which [would] just [make] me feel bad.”
Despite this confusion about the academic role of librarians and caution in approaching them for assistance, the minority of students who had developed a relationship with a librarian reported high levels of satisfaction with the help provided, returned repeatedly for help with other assignments, and recommended librarians to their peers. Furthermore, students who had participated in instruction sessions with a librarian exhibited markedly better research skills than those who had not (although even these students often did not remember basic or specific concepts, or apply them correctly). One student commented, “I understand that [librarians] are not magicians or something, but sometimes they seem like it.”
These observations, of course, prompt the question of how to raise the profile of librarians in students’ academic practices. Finding a way to leverage students’ positive experiences so that they recommend library services to their peers is certainly an important outreach area for the ERIAL libraries. However, our research suggests that a more effective approach requires the involvement of teaching faculty.
The ERIAL Project observed that professors often play a central role in brokering the relationship between students and librarians. Students routinely learn about librarians and library services directly from a professor’s recommendation, or through librarians’ in-class information sessions. These introductions are especially important during freshman year, when it is critical for students to build effective study habits and academic relationships. A psychology student observed, “It would probably be nice if the professors worked the librarians into the classes when people are freshmen. When they first get to school to kind of go over all that kind of stuff. That way [librarians] have the opportunity to tell you things. Because I guarantee you that I didn’t know that there was a psychology librarian staff member until first semester, junior year. And by then most of my study habits were formed, or [my] study approaches for research were formed.” Students view professors as experts, and when a professor specifically recommends a librarian, students highly value this advice. Professors therefore regularly act as gatekeepers who mediate when and how students contact librarians while working on research assignments. In this way, the attitude of professors towards librarians is a key factor in determining whether student/librarian relationships develop.
Based on our observations, addressing students’ instructional needs in academic research, information literacy, and critical thinking requires principally social solutions. Given librarians’ structural placement as marginal to students’ academic world, librarians cannot effectively address these needs without active participation from teaching faculty. As librarians build relationships with teaching faculty, they will also build relationships with students. Administrators can also contribute to these relationships by supporting curricular initiatives that reinforce collaboration between librarians and teaching faculty, and that promote the participation of librarians throughout students’ course of study.
Conclusions
The ERIAL Project has provided much needed insight into how our students engage with the process of research. By utilizing ethnographic research methods, rather than more traditional methods, we have developed a more nuanced, robust view of our students and their relationship with the library.
Although the specific mission of any given liberal arts institution will differ, there are a few core goals that one expects to see included in most mission statements. For example, Illinois Wesleyan’s mission statement includes the desire to foster critical thinking, effective communication and a spirit of inquiry, to deepen a student’s knowledge in a chosen discipline and to prepare students for democratic citizenship and life in a global society. Like most libraries at liberal arts institutions, the Ames Library faculty and staff are committed to furthering these institutional goals by serving the scholarly needs of the Illinois Wesleyan University community. In particular, library faculty strive to teach students core information literacy skills, elements of the research process, and how to use the tools of scholarship. A student’s ability to master these skills is critical for achieving many of the stated goals of the institution.
Based on our findings, the Ames Library is actively engaged in re-thinking how we offer some of our services, what new resources we need to make available, and how to build stronger relationships with teaching faculty across the curriculum. We are confident that the changes we are implementing as a result of this study will significantly enhance our ability to connect with students and support the mission of our institution.
For more information about The ERIAL Project, see www.erialproject.org.
Notes
Funding for this grant was awarded by the Illinois State Library, a Department of the Office of Secretary of State, using funds provided by the U.S. Institute of Museum and Library Services, under the federal Library Services and Technology Act (LSTA).
1. For the photo journals, mapping diaries, Web design workshops, space design workshops and retrospective research paper interviews, the ERIAL project adapted protocols developed by Nancy Foster and the “Studying Students” research team at the River Campus Libraries of the University of Rochester. For more information on the University of Rochester study, see Nancy Foster and Susan Gibbons, Studying Students: The Undergraduate Research Project at the University of Rochester (Chicago: Association College and Research Libraries, 2007), http://docushare.lib.rochester.edu/docushare/dsweb/View/Collection-4436. [return to text]
The Collaborative Liberal Arts Moodle Project: A Case Study
by Joanne Cannon, Joseph Murphy, Jason Meinzer, Kenneth Newquist, Mark Pearson, Bob Puffer and Fritz Vandover
What is CLAMP?
The Collaborative Liberal Arts Moodle Project (CLAMP) is an effort by several schools to support continued and sustainable collaborations on Moodle development at liberal arts institutions. Moodle is an open-source learning management system designed with social constructivist pedagogy as part of its core values. With highly-customizable course pages, faculty can organize course material by week or by topic and add modules, resources and activities that help students meet learning objectives by encouraging collaboration and interaction. While the lack of licensing fees initially attracts many campuses, the flexibility of working with an open source tool also becomes a real advantage, allowing for additional customization to meet the specific needs of the institution.
Moodle is well-supported through its core developers and the large community at moodle.org, but CLAMP has a different focus: the issues and challenges unique to four-year liberal arts colleges using Moodle. By creating a smaller network of Moodle users with a tighter focus on the liberal arts, we are able to undertake development projects which none of us could accomplish alone. CLAMP develops community best practices for supporting Moodle, establishes effective group processes for documentation and fixing bugs, and better connects our institutions to the thriving Moodle community worldwide. Put briefly, by partnering programmers and instructional technologists across multiple institutions, CLAMP lowers the practical barriers to supporting and adapting this open source software.
To better understand CLAMP, it is helpful to look at the lexical components of the acronym:
- Collaborative: True participatory collaboration between member institutions, governed by a consensus process, is the motor of the project. The artifacts produced collaboratively at online and in-person gatherings are significant, benefiting all liberal arts institutions that use Moodle.
- Liberal Arts: While the liberal arts educational model is almost exclusively represented by institutions in the United States, we believe that the core values of this model–“critical thinking, broad academic interests, and creative, interdisciplinary knowledge”–are embraced by many educational institutions worldwide.1 They are also critical for the Moodle community. Indeed, a cursory dig into the support forums of the moodle.org mother site exposes rich seams of liberal arts values among its developers, users, and managers. The core characteristics of a liberal arts education are reflected both in the artifacts of CLAMP activities (such as the Moodle Liberal Arts Edition, bug fixes, and documentation) and in the process by which they are produced.
- Moodle: As the premier open source learning management system, Moodle is a model of the open source sharing, cooperative and empowering collaborative ethic. And for CLAMP, the relationship with the larger Moodle community is symbiotic and synergistic–all bug fixes are reported back to the Moodle tracker for inclusion into the core, the Moodle Liberal Arts Edition is made freely available, and CLAMP members take an active role in voting on issues raised in the development community.
- Project: While the National Institute for Technology in Liberal Education (NITLE) has nurtured CLAMP for the past year through its Instructional Innovation Fund, the universal approach of CLAMP broadens its appeal to campuses beyond NITLE and even beyond the confines of North America. It is important to note, however, that our focus is exclusively on liberal arts educative goals. While we certainly recognize K-12 concerns, research university needs and distance education imperatives, these are not addressed through this project.
The technical culmination of these efforts over the last year is the Moodle Liberal Arts Edition distribution. It includes all third party modules and add-ons commonly used by our institutions; bug-fixes of critical importance to our schools; functions that simplify the user’s experience; and backend tools to give Moodle administrators better information about how their systems are being used. Although all CLAMP bug-fixes are contributed back to the Moodle core project, this distribution gathers the collective work and wisdom of the institutional network, simplifying the job of finding and installing each vetted patch or module.
Origins of the Project
CLAMP traces its origins back to 2006, when a small group of representatives from liberal arts institutions gathered at Reed College to discuss the potential of a collaboration focused on improving Moodle. The following year, as Moodle was being adopted by a growing number of institutions across the country and the world, this circle of schools was expanded and NITLE showed interest in providing support. An active support network developed as NITLE arranged two Moodle user community meetings and provided infrastructure for the online NITLE Moodle Exchange. Leading members of the NITLE Moodle Exchange organized two collaborative programming and documentation events called “Hack/Doc Fests” to bring programmers and educational technologists from twelve different institutions together in one place. Seven of the schools–Earlham College, Lafayette College, Luther College, Kenyon College, Macalester College, Reed College and Smith College–received a $43,000 award from NITLE’s Instructional Innovation Fund to continue and formalize their efforts.
Collaborative Projects
Since its inception, CLAMP has initiated three major projects: a Web-based workspace, a usability testing initiative, and the collaborative Hack/Doc Fests described in greater detail below. The Web-based workspace was essential to all three projects; we needed a private place to discuss, document and track our projects, as well as a front-end tool for sharing our completed efforts with the world. The end result was CLAMP-IT.org, which uses Redmine (an open source, Web-based collaboration suite) to provide project committees with online discussion forums, project trackers, and wikis, and WordPress (an open source, light-weight content management system) for public content.
To guide our development efforts, we designed a usability testing initiative to identify problem areas in Moodle. This involved asking faculty and students to complete tasks they were likely to encounter in Moodle, such as uploading files and grading assignments, and for students, responding to forum posts and editing personal profiles. After identifying the tasks, we purchased screen capture software for Windows (Camtasia Studio) and Mac (Screenflow), and conducted ten tests at five colleges. The results of these tests helped guide our development efforts.
The Hack/Doc Fest is a unique effort that brings coders and instructional technologists together to fix bugs, develop new functionality, and document features in Moodle. The name comes from “hacking” (a slang term that refers to digging into code to find and fix problems) and “documentation” (referring to the instructional technologist side of the event). But this is not a conference or a workshop. Instead, it is a chance for Moodle enthusiasts to focus on getting work done. Prior to the events we poll our community for features they would like added or documented. When we arrive on site, we spend an hour or so triaging these requests, factor in our own ongoing projects, and then launch into three days of coding and writing. The event has become the cornerstone of our collaborative efforts, as it gives us dedicated time to focus on improving the software and adding functionality essential to our campuses.
Photo: Courtney Bentley (Lafayette) works on a documentation project at Hack/Doc Fest at Smith College while Charles Fulton (Kalamazoo), Jason Alley (Lafayette), Dan Wheeler (Colgate) and Cort Haldman (Furman, back row) discuss a coding project.
Not surprisingly, the Hack/Doc Fest played a pivotal role in both the CLAMP-IT.org and usability efforts. Our initial experiments with Redmine began at Hack/Doc Fest at Reed College in January 2009 and the first drafts of our usability test scripts were written there as well. As the Spring 2009 semester progressed, we increasingly relied on Redmine to coordinate our efforts. By our second Hack/Doc Fest at Smith College in June 2009, we were able to use the software to track all of our bugs, features, and documentation requests. By the end of the event we had created or upgraded a host of new tools including the following:
- Census report: a tool for auditing active courses in Moodle
- Simple file upload: a script allowing faculty to quickly upload files to Moodle, bypassing the normally cumbersome upload process
- Current course block: a content block that allows Moodle to display courses from the current and upcoming terms
- TinyMCE integration: a new WYSIWYG editor that replaces the outdated, buggy editor that ships with Moodle
- Random course generator: a program that randomly creates hundreds of courses and assignments in Moodle, allowing developers to quickly test a medium-scale LMS installation (see the sketch after this list)
- Code repository: a version control repository, based on the open source Subversion software, that allows our developers to easily make and track code changes
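To give a flavor of what the random course generator does, here is a rough sketch of the same idea in Python. The actual tool was written against Moodle’s internals; this version instead drives the web-service layer of a modern Moodle, whose core_course_create_courses function must be enabled on the target site. The URL, token, and category ID are placeholders.

```python
import random
import string

import requests

MOODLE_URL = "https://moodle.example.edu/webservice/rest/server.php"  # placeholder
TOKEN = "changeme"  # placeholder web-service token

def random_name(n: int = 8) -> str:
    """Generate a random lowercase string to keep course names unique."""
    return "".join(random.choices(string.ascii_lowercase, k=n))

def create_test_courses(count: int, category_id: int = 1) -> None:
    """Bulk-create throwaway courses so developers can test a medium-scale install."""
    params = {
        "wstoken": TOKEN,
        "wsfunction": "core_course_create_courses",
        "moodlewsrestformat": "json",
    }
    for i in range(count):
        name = random_name()
        params[f"courses[{i}][fullname]"] = f"Test course {name}"
        params[f"courses[{i}][shortname]"] = f"test-{name}"
        params[f"courses[{i}][categoryid]"] = category_id
    response = requests.post(MOODLE_URL, data=params)
    response.raise_for_status()
    print(response.json())  # Moodle returns the ids of the created courses

if __name__ == "__main__":
    create_test_courses(200)
```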
On the instructional technologist side, new documentation was created explaining how to use Moodle 1.9.5’s newly-revised gradebook, which introduced a number of new features and usability enhancements. Based on feedback from the larger CLAMP community, the instructional technologists also crafted documentation for Moodle’s “groups” and “roles” functions. (Groups and roles are powerful concepts in Moodle and allow administrators and faculty to customize the LMS to their needs, but they can also be a challenge to implement. The documentation addresses that.)
As our online and off-line efforts progressed, it became clear that we needed some mechanism for collecting and distributing our work. This realization led to the creation of our liberal arts edition of Moodle, which pulls together our finished code into one easy-to-install package available for download from CLAMP-IT.org. Our Subversion repository and documentation, crafted to be generic enough to be used at any college, is available through the Web site as well.
Participation and Funding
Two years ago programmers and instructional technologists from a half-dozen colleges saw a need–and an opportunity–to fix bugs and brainstorm solutions unique to using Moodle in a liberal arts environment. It was an ad hoc meeting, with each college paying its own way and the agenda created on a day-to-day basis. Flash forward to the present day: seventeen NITLE member institutions have participated in CLAMP–attending a Hack/Doc event, contributing to discussions about prioritizing needs, performing usability testing, fixing bugs in Moodle, or creating and sharing documentation of the gradebook, groups and other features. What started as a single event has quickly become something more, and in the process we have needed to find a way to govern and fund ourselves effectively.
To that end, we established a steering committee consisting of representatives from seven member colleges. This group organizes Hack/Doc events, prioritizes major projects, and handles financial matters. We have also established a number of other committees, including ones dedicated to code development and documentation. On the funding side, we established a membership fee of $300. Based on an initial membership of fifteen colleges, this yields roughly $4,500, sufficient to cover our basic operating expenses, such as hosting clamp-it.org and the code repository, and licensing fees.
Once registered with CLAMP, colleges receive access to the development server that hosts our Web site and groupware suite. In addition to typical development tools, the development server provides custom commands that automate all of the busywork involved in setting up a new instance of Moodle. If, as happens quite frequently, a developer needs to test something in a “clean” installation of Moodle, she can create one in seconds with one command and then delete it just as easily when she’s done. The combination of single sign-on and Moodle-oriented workflow tools alleviates some of the major pain points for system administrators and developers alike.
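As an illustration, a wrapper like the hypothetical sketch below captures the kind of busywork these commands automate: copying a pristine Moodle code tree, creating a database, and writing a config file. Every path and command name here is invented; CLAMP’s actual tooling is specific to its development server.

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical locations on the development server.
MOODLE_SKELETON = Path("/opt/moodle-pristine")  # untouched Moodle code tree
WEB_ROOT = Path("/var/www")

def create_instance(name: str) -> None:
    """Stand up a throwaway Moodle instance under the web root."""
    target = WEB_ROOT / name
    shutil.copytree(MOODLE_SKELETON, target)  # fresh copy of the code
    subprocess.run(["mysqladmin", "create", f"moodle_{name}"], check=True)
    (target / "config.php").write_text(
        "<?php\n"
        "$CFG = new stdClass();\n"
        "$CFG->dbtype = 'mysql';\n"
        f"$CFG->dbname = 'moodle_{name}';\n"
        f"$CFG->wwwroot = 'https://dev.example.edu/{name}';\n"
        # ...remaining settings omitted in this sketch...
    )

def delete_instance(name: str) -> None:
    """Tear the throwaway instance back down."""
    shutil.rmtree(WEB_ROOT / name)
    subprocess.run(["mysqladmin", "-f", "drop", f"moodle_{name}"], check=True)
```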
Logistical Lessons Learned
Any collaboration across institutions will have significant logistical hurdles to overcome. Since CLAMP has always desired to build a broad and active membership, we gave these areas particular attention from the start. Here we present some of our lessons learned, in the belief that CLAMP’s experiences, and even our hardships, can be useful information for other collaborative open source projects.
Misadventures in Web Conferencing
Clearly we need to be able to talk to one another from our different campuses. For that, the CLAMP steering committee relies on bi-weekly Web conferences to manage planning, oversight, and logistics. NITLE’s current Web conferencing software, Marratech, has provided CLAMP with a fairly stable platform from which to conduct these sessions. But Google’s recent acquisition of Marratech, and its decision to discontinue support in the future, compelled us to explore other options, including DimDim (http://www.dimdim.com), Twiddla (http://www.twiddla.com/) and Skype (http://www.skype.com). Unfortunately, we have yet to find a viable alternative to Marratech. DimDim uses a “few-to-many” presentation style that only allows for four speakers–not enough for our five- or six-person committee meetings. Twiddla has an excellent whiteboard, but poor audio quality. Skype is useful for small conference calls, but doesn’t scale well and lacks whiteboard support. While Marratech would occasionally crash and had audio issues, it worked best for our purposes. Fortunately, NITLE has recently begun rolling out Elluminate, and we are hoping that this online conferencing tool will be the magic bullet we’ve been looking for.
Our misadventures illustrate another advantage of our collaboration: bringing together technically savvy individuals ready and willing to try out new software. Not only were we willing to try new things, we weren’t afraid to fail while doing so. We braved microphone malfunctions, software crashes, terrible audio and video quality and other setbacks that casual users might not have had the patience for. The lesson is twofold: if you have the people and the time, use them to try new things. And if you begin collaborating extensively on one project, you can’t help but start to do the same on others.
Finding Convenient Meeting Times
Colleges and universities are lucky to have set academic calendars that govern the ebb and flow of their work, but that doesn’t mean it is easy to find blocks of time to work with one another. Navigating our weekly schedules for video conferencing is challenging enough, but finding three or four days when our member colleges can get together for our Hack/Doc Fest work sessions is far more difficult. Not only do we need to work around academic terms, but we have to anticipate Moodle’s release cycle and consider our own internal software release cycles. For example, meeting in August would not only put us in the middle of the start-of-term crush for most of our colleges, but any code and documentation we started developing then would likely not be ready in time for September. Coupled with the tendency of Moodle to release new editions in late spring and early summer, we decided an early June session was best.
Software bugs wait for no developer, and we knew that we would need to follow up our summer session with another meeting six months later. Finding a time during the academic year was difficult; we finally scheduled the follow-up for January 2009, and had good participation from our member institutions, while the June session saw greatly increased participation from both new and returning institutions. The increased June participation was due in part to heavier marketing of the Hack/Doc Fest, our choice to schedule Hack/Doc immediately after NIS Moodle Camp, and a general influx of NITLE schools beginning to use Moodle. We have discovered that it is important not to underestimate the potential for scheduling conflicts. What may be four weeks of pristine January quiet for one college may be a frenzy of winter terms, off-campus programs and special events at another school.
Development Collaboration Platform
It is impossible to collaborate without communication. And when those collaborators are spread out across the country and working on a dozen-odd projects simultaneously, it is essential that they have some way of keeping track of who is doing what. Finding the software tools to support this collaboration was an essential step for CLAMP.
We began by using the NITLE Moodle Exchange (NME)–a Moodle instance hosted by NITLE–to plan our Hack/Doc Fests and report bugs while turning to a Google Code Project to provide version control. We quickly outgrew both. While Moodle is useful for online conversations, its sub-par wiki and lack of a bug tracking tool would have played havoc with our developers’ workflows. At the same time, our long-term plans involved deploying a public Web site of our own, outside of the NME. We considered using Moodle for this, as it supports a barebones home page news forum, but we felt it would be an awkward fit. Moodle is about enabling classroom conversations, not serving Web pages to an anonymous public.
Because of our two very different needs, we decided to use two different tools. WordPress, an open source, lightweight content management system, powers our public Web site. For our ongoing development needs we turned to Redmine, for its robust wiki and issue tracking tools, as well as its integration with version control. Tight integration between Redmine’s components means that it easily and automatically builds hyperlinks between the bugs, wiki pages, files and other resources that it tracks. Finally, we were able to bind Redmine and WordPress together using a single sign-on solution called CoSign. This was critical, as it prevented a proliferation of one-off usernames and passwords and ensured that people could spend their time working, not trying to remember their login information.
The system wasn’t perfect. Redmine has a learning curve, and it took a concerted effort by our developers and instructional technologists to learn the system. Even once we had mastered Redmine’s feature set, we had to spend considerable time organizing it and figuring out what workflows would be best for code, documentation, and event planning. In the end, it worked well. We used it extensively at our fourth Hack/Doc Fest at Smith College to track our progress. That in turn meant it was easy to pick up where we left off when we returned to our home campuses and concentrated on finishing the work we had started at Smith. The key wasn’t the software though–it was identifying what we needed to make our online collaboration work, and then finding the tools that fit those needs.
Usability testing
It is easy to complain about the shortcomings of an application’s user interface, but harder to quantify those shortcomings. One of CLAMP’s premier objectives in 2008-2009 was to conduct usability tests with faculty and students at our member campuses. The goal of these tests was to identify problem areas in Moodle to be fixed at one of our Hack/Doc Fest sessions. Before we could conduct the tests though, we had to establish a protocol: What questions would be used? How would the tests be administered? What software would we use to capture the results?
Developing the scripts was relatively straightforward; we drew on our own previous experiences and on insights from Steve Krug’s Don’t Make Me Think and Louis Rosenfeld and Peter Morville’s Information Architecture for the World Wide Web.2 Establishing the testing protocol was more difficult. After a few test runs, we determined that having two facilitators–one taking notes, the other conducting the test–worked best, but as with video conferencing, finding the right software was a time-consuming ordeal.
While open source worked well for our Web ventures, after quite a bit of research and testing we concluded that the open source options for screencasting software were unreliable and limited in functionality. Ultimately we chose Screenflow for the Mac platform and Camtasia Studio for the PC, for their ease of use and their ability to cleanly capture the screen action with a video inset of the user from the computer’s Webcam. After several trial runs we finalized the software setup, allowing smooth use of either package.
A best practice in usability testing is to not assist users when they have trouble but to objectively document their problem-solving process. As the usability tests unfolded, we found this approach was difficult to implement on our campuses. The instinct to help people solve a problem is strong, particularly among instructional technologists, and all of us had to balance the need to gather usability information against faculty or student frustration with the test. This tension led to very different approaches to conducting the tests that only became apparent afterward. While we were still able to gather useful data from the tests, this is an area we will need to discuss before the next round of testing.
As for the tests themselves, they revealed what might have been obvious: that the instructor’s tasks in Moodle are more complicated, and therefore more difficult to master, than the student’s. Particularly troublesome areas included the gradebook, the file upload interface, and the collaborative editing of documents within Moodle.
We are still analyzing the student results, which points to another major issue with usability testing that we hadn’t anticipated: the time it takes to analyze the data. We conducted a total of ten tests at five colleges, with each test generating twenty to sixty minutes of video footage. While our facilitators did take notes on each session, none of those notes were time-indexed. This made it difficult for anyone other than the facilitator to go back and look for a specific problem mentioned in the notes, and sitting down to watch all of the video would take days. We also ran into issues with the quality of the video results; while most turned out fine, some tests experienced technical problems that caused the audio to be lost. Going forward we plan to review our usability tasks, create a more detailed testing protocol, and come up with a system for time-indexing the videos on the fly.
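That time-indexing system could be as simple as the sketch below: a note-taker that stamps each observation with the time elapsed since the recording started, so notes can later be matched against the video. This is a minimal illustration of the idea, not software we have built or evaluated.

```python
import time

class TimedNotes:
    """Log facilitator notes with the elapsed time since recording began."""

    def __init__(self) -> None:
        self.start = time.monotonic()
        self.notes: list[tuple[float, str]] = []

    def note(self, text: str) -> None:
        """Record one observation, stamped with seconds since the session started."""
        self.notes.append((time.monotonic() - self.start, text))

    def dump(self) -> str:
        """Render the notes as mm:ss timestamps for matching against the video."""
        return "\n".join(
            f"[{int(t // 60):02d}:{int(t % 60):02d}] {text}"
            for t, text in self.notes
        )

# Usage: start the logger when the screen recording starts, call note() for
# each observation, and print dump() when the session ends.
log = TimedNotes()
log.note("Participant cannot find the gradebook link")
print(log.dump())
```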
Conclusion
The goal of the Instructional Innovation Fund grant from NITLE was to turn a loosely-knit group of higher education institutions with a shared interest in Moodle into a coherent, sustainable association. We have accomplished that, creating not only an organizational framework for carrying our work forward, but a software framework as well. With the release of the Moodle Liberal Arts Edition, we have provided a mechanism for distributing our work back to the larger Moodle community as well as to our peer institutions. Along the way, we have discovered the limitations of videoconferencing, the value of open source, Web-based collaboration tools, and the inadequacies of open source video-capture software. We’ve also learned much about what works and what doesn’t when conducting usability tests.
Looking to the future, our next challenges are clear. Solidifying our software development protocols to streamline releases of future versions of the Moodle Liberal Arts Edition is of critical importance. We have the tools; now we need to master them. Similarly, with a large number of new schools joining our ranks, we’ll need to ensure that our participation model holds up, and that the new recruits feel every bit as involved as the old ones. As part of that, we will also need to review our approach to Hack/Doc Fests to ensure they remain well attended and productive even as many of our institutions face declines in funding. Pursuing more regional Hack/Doc Fests is one option. Looking for additional grant opportunities is another.
We are optimistic that we will be able to achieve these goals. CLAMP has established a two-year track record of working together, both online and off. While the financial landscape has changed, our commitment to working together to improve Moodle for our respective campuses has not.
Notes
1. Michelle Glaros, “The Dangers of Just-In-Time Education,” Academic Commons (6/10/2005), http://www.academiccommons.org/commons/essay/jit-education. [return to text]
2. Steve Krug, Don’t Make Me Think! A Common Sense Approach to Web Usability, 2nd ed. (Berkeley: New Riders Pub, 2006); Louis Rosenfeld and Peter Morville, Information Architecture for the World Wide Web, 3rd ed. (Sebastopol, CA: O’Reilly, 2007). [return to text]
The History Engine: Doing History with Digital Tools
by Robert K. Nelson, Scott Nesbit, and Andrew Torget
The History Engine as a Teaching Exercise
In a recent article about the contours of history department curricula across the country, Stephen D. Andrews notes that
Many students do not “do history” until deep into their college careers, sometimes in the last semester of their senior year. It is only then, in some kind of seminar class, that students experience the process so familiar to historians: identifying their own questions, selecting their own sources, pursuing those sources and constructing arguments, documenting the research process, producing multiple drafts and rewrites, and finally presenting the work in a formal document. For some students, the first comprehensive use of the skills of a historian may be the final act of their education.
This delay in introducing students to the practices of historical inquiry is at odds with what many, perhaps most, historians would prefer, a lamentable if understandable product of the distinct goals of lower- and upper-division courses. The former tend to emphasize, as Andrews suggests, “accumulation of information” about historical context, the latter the acquisition of the “thinking skills” of historical research, reasoning, and argumentation. It is often logistically challenging, sometimes impossible, to ask students to “do history” in lower-division classes simply because there is a lot of information for them to accumulate. Covering, say, roughly two centuries of American history in a survey course affords little time to ask students to engage in original research and formulate their own questions. The length of the “formal document” that Andrews mentions–perhaps a fifteen-page term paper or an even longer seminar paper modeled on the articles that historians themselves produce–doesn’t help. It is, more often than not, simply impractical to ask students in lower-division history courses to engage in that kind of time-intensive, ambitious research and writing exercise (to say nothing of the daunting prospect of grading many longer student research papers in larger sections).1
One of the primary goals of the History Engine project has been to design a research and writing exercise modest enough in its analytical scope and its length that it allows students to “do history” long before a senior seminar or capstone course. (Another important goal, discussed below, is to capture this research to amass a large history archive.) The History Engine is an online archive consisting of thousands of “episodes” written and contributed by undergraduates. What we call “episodes” are concise historical narratives, short micro-histories about small moments in American history. An episode is much shorter than a fifteen- or twenty-page seminar paper; it runs roughly five hundred words. It does not draw upon a large number of sources requiring extensive research; instead, a typical episode is based upon a single primary source and one or two secondary sources. An episode doesn’t make an ambitious argument about some major question in American history; its scope is much more modest. Rather than a thesis-driven essay, an episode is an exercise in historical storytelling, a short analytical narrative unpacking a story from a primary source. An episode, for example, would not make an argument about the causes of the Civil War but might recount the departure of a group of Southern settlers for the Kansas Territory in 1856 and place that event within the context of the conflict between antislavery and proslavery forces to control that territory.
A couple of examples provide a sense of the scope and nature of episodes. An episode entitled “Southern Outrage,” written by a student at the University of Richmond, focuses on an 1835 letter to the editor in a Richmond newspaper that condemned Northern abolitionists; it explores how a Southerner turned the abolitionists’ critiques of anti-abolitionists and economic boycott tactics on their head. Another, written by a Furman University student, “Local Chinese React to Imperial Decree,” is transnational in its focus, exploring the reaction of Chinese immigrants in New Orleans in 1911 to an imperial decree from the Qing Dynasty instructing them to cut off their queues.
But despite the brevity of an episode, its composition remains an intense and rigorous exercise in historical research, writing, and analysis. In fact, we have learned that writing succinctly often takes a great deal more thought than writing longer essays, and work in archives rarely proves to be an easy task. To produce their episodes, all students are asked to do original research using primary sources; many are directed or encouraged by their instructor to dive into local historical archives or special collections to find their primary source or sources. Primary source research is, of course, often simultaneously exhilarating and disorienting. It’s a more direct way of encountering the past and often prompts more questions than it answers. Once a student finds an interesting and evocative source that she would like to place at the center of her episode, she turns to the secondary literature to understand, perhaps, something intriguing but confusing in her source or to situate her particular episode within a broader historical context. Typically, after she composes her episode, she uploads it into the History Engine database as a draft (available to her instructor but not the public) along with associated metadata (dates, locations, tags, and citations). Her instructor might review the episode and ask for revisions, or might have students peer review each other’s episodes. After being vetted for accuracy and quality by the instructor, the episode is published, making it publicly available on the History Engine site.
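To make that workflow concrete, here is a minimal sketch, in PHP (the language the History Engine is built in; see note 3), of how an episode record and its draft-to-published life cycle might be modeled. Every name and field below is hypothetical, invented for illustration rather than drawn from the project's actual code.

<?php
// Hypothetical model of an episode: a draft, visible only to the
// instructor, is vetted for accuracy and quality and then published
// to the public site. All names here are invented for illustration.
class Episode
{
    public string $title;
    public string $narrative;        // the roughly 500-word micro-history
    public array $metadata;          // dates, locations, tags, citations
    public string $status = 'draft'; // drafts are hidden from the public

    public function __construct(string $title, string $narrative, array $metadata)
    {
        $this->title = $title;
        $this->narrative = $narrative;
        $this->metadata = $metadata;
    }

    // Called once the instructor has vetted the episode.
    public function publish(): void
    {
        $this->status = 'published';
    }
}

$episode = new Episode(
    'Southern Outrage',
    'In 1835, a letter to the editor of a Richmond newspaper...',
    [
        'date'      => '1835',
        'location'  => 'Richmond, Virginia',
        'tags'      => ['abolitionism', 'newspapers'],
        'citations' => ['one primary source', 'one or two secondary sources'],
    ]
);
$episode->publish();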
The Engine’s History
The first iteration of the History Engine was produced in 2005 at the University of Virginia and initially tested and refined in Ed Ayers’s lecture course “The Rise and Fall of the Slave South.” Like most digital history projects, the History Engine is a product of extensive collaboration. The development of the initial iteration of the project and its use in Professor Ayers’s course was only possible because of the contributions of a number of partners at UVA. The Digital Research and Instructional Services group at UVA’s Alderman Library, with support from the Virginia Center for Digital History, developed the first Web application and provided server space. Special collections librarians helped students in their research, offering orientations, suggesting sources, and providing extra staffing on the days immediately before the assignment deadline when large numbers of students descended on their holdings.
From the beginning, the History Engine was envisioned as a multi-school project that would enable undergraduates to share their work with students at other colleges and universities. That became possible in 2006-2007 through funding awarded by the National Institute for Technology in Liberal Education (NITLE), which supported the refinement and expansion of the application software. As important as the monetary support was, the relationship with NITLE also connected the project with faculty at a number of NITLE-affiliated colleges and universities. During the fall semester of 2007, four faculty members at Furman University, Rollins College, Wheaton College, and Juniata College began using the project, and a handful of faculty from other colleges and universities have joined since then.
The feedback from faculty who used early versions of the History Engine in their classrooms has been extraordinarily useful as we continue to revise and expand the project. Most reported that composing such short narratives challenged their students to engage in more careful writing, and that introducing undergraduates to primary source research required more instructor guidance than a traditional essay assignment. Based on their experiences, we developed a teacher’s guide outlining best practices for bringing the project into the classroom. Such feedback, as an informal means of measuring learning outcomes, suggested that the project’s emphasis on active learning and development of critical thinking skills resonated in the classroom. In a recent article reflecting on their use of the project, a collection of NITLE-affiliated teachers concluded that the History Engine “presented us new ways to teach the concept of historical significance” that “energized our teaching and intensified our students’ encounters with the past.”2
Since the summer of 2008 the History Engine has been hosted, redeveloped, and expanded at the Digital Scholarship Lab at the University of Richmond. During that period the Web application software for the History Engine has been completely redeveloped, making it more stable, modular, and extensible.3 More exciting than these largely invisible changes have been the additions of historical visualizations–maps, timeplots, tag clouds–that situate dozens or even hundreds of episodes in relationship to one another spatially, temporally, and topically.
The History Engine as a History Archive
These visualizations begin to realize the History Engine’s other main goal: to build a large history archive that would be both interesting and useful to students, the general public, and historians. Currently the History Engine contains several thousand episodes, and we hope it will eventually hold tens of thousands. Taken together, these collected episodes represent a fine-grained account of U.S. history. Even with tools as simple as a basic text search, the History Engine database has the potential to become a large interpretive finding aid for historical sources located in archives and libraries across the country.
One of the exciting aspects of this project is the possibility of leveraging the metadata associated with each episode to produce historical visualizations. When a student uploads an episode into the History Engine, she includes several pieces of metadata: the time and place the episode happened as well as tags or keywords that identify the issues addressed within the narrative. During the last year we have been working on developing visualization tools that use this metadata to allow users to navigate through and see patterns amid the complexity of the History Engine’s thousands of episodes.
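As a rough sketch of how this metadata could drive such tools, the hypothetical PHP below pulls every published episode matching a tag within a span of years, producing the date-and-place points a map or timeplot could consume. The database, table, and column names are our assumptions for illustration, not the History Engine's actual schema.

<?php
// Hypothetical sketch: fetch published episodes matching a tag within a
// range of years, as points a map or timeplot could consume. The schema
// (table and column names) is invented for illustration.
$pdo = new PDO('mysql:host=localhost;dbname=history_engine', 'user', 'password');

function episodesForVisualization(PDO $pdo, string $tag, int $startYear, int $endYear): array
{
    $sql = "SELECT e.title, e.event_date, e.latitude, e.longitude
            FROM episodes e
            JOIN episode_tags t ON t.episode_id = e.id
            WHERE t.tag = :tag
              AND YEAR(e.event_date) BETWEEN :startYear AND :endYear
              AND e.status = 'published'
            ORDER BY e.event_date";
    $stmt = $pdo->prepare($sql);
    $stmt->execute([':tag' => $tag, ':startYear' => $startYear, ':endYear' => $endYear]);
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}

// For example: every published episode tagged "debt" from the 1870s and
// 1880s, ready for a timeline like the one shown in Figure 2 below.
$points = episodesForVisualization($pdo, 'debt', 1870, 1889);
echo json_encode($points);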
At times, mapping episodes reveals context that would otherwise be difficult to glean from the text of the episodes alone. For example, one episode located in Brooke County, Virginia, tells of the 1855 expulsion of five northern students for suspected abolitionism. That antislavery activism had infiltrated a religious college in the U.S.’s largest slave state only five years before the Civil War is, at first glance, surprising. When plotted on a map, however, the episode becomes more explicable and carries additional meaning. Brooke County, one discovers, was in the far northern tip of what is now West Virginia. It is just forty miles from Pittsburgh, farther from what would soon be the Confederate capital, Richmond, than from one of the hotbeds of abolitionist activity, Rochester, New York. Mapping the episode also points to conclusions not made explicit in the text itself, namely how far north slavery reached.
Figure 1: Mapped location of Brooke County, Virginia
Once mapped, this episode suggests not simply how bold the young antislavery students were but how little room for compromise on the issue existed in even the most distant, peripheral corner of the South.
Visualizing how episodes align over time is likewise revealing. The History Engine’s timeline reveals that students have written about debt most often when investigating the 1870s and 1880s, times of dizzying economic dislocation and concurrent political fights over the possibilities of debt adjustment.
Figure 2: Search results for “debt” displayed on timeplot
What the student narratives reveal is how the effects of public and private borrowing rippled across the Gilded Age. Episodes detail, through the diary of a company clerk, the collapse of the Northwest Pacific Railroad, which declared bankruptcy following the Panic of 1873, a panic that caused the failure of some of the nation’s largest companies. But as episodes mapped onto the timeline show, the panic also led to the collapse of local credit markets and, ultimately, helped bring about the end of Reconstruction as white northern Republicans became more concerned with economic recovery in the North than with black civil rights in the South. These forces converge in some episodes: one narrates the 1877 seizure of the property of Martin Joson, a freedman of Natchitoches, Louisiana, on account of his debts to a white neighbor, showing how the tightening of local credit hit southern black landholders especially hard as they lost power and influence at the state and federal levels of government.
We have high hopes for the History Engine as we continue to develop the project. As the archive grows to include tens of thousands of episodes, we hope it becomes a valuable finding aid and a rich vein for historical visualization. Perhaps more important than the History Engine as a product, a digital archive, is what it offers undergraduate students as a learning exercise: an opportunity to “do history,” to actively grapple with the remnants of the past and the work of historians in order to make sense of and better understand some aspect of American history. Any instructors reading this who would be interested in having their students participate in and contribute to the project, or who just want to offer a comment, suggestion, or critique, can contact us through the History Engine website.
Notes
1. Stephen D. Andrews, “Structuring the Past: Thinking about the History Curriculum,” The Journal of American History (March 2009), http://www.historycooperative.org/journals/jah/95.4/andrews.html. [return to text]
2. Lloyd Benson, Julian Chambliss, Jamie Martinez, Kathryn Tomasek, and Jim Tuten, “Teaching with the History Engine: Experience from the Field,” Perspectives (May 2009), http://www.historians.org/perspectives/issues/2009/0905/0905for15.cfm. [return to text]
3. For those interested in the technical aspects of the Web application, it’s built using a number of open source resources: the code is PHP using the CakePHP MVC framework, the database is MySQL, and APIs used include Google Maps and the Simile Project’s Timeplot. [return to text]