Putting Study Abroad on the Map

by Jeff Howarth, Middlebury College

“Each year about 60% of the junior class at Middlebury studies abroad in more than 40 countries at more than 90 different programs and universities.”

When I read this sentence on the Middlebury College Web site, I thought to myself: that’s a dataset that my students ought to map. I knew that there had to be a dataset behind that sentence, something that the author could summarize by counting the number of different countries, programs and students. But I imagined this dataset could show us much more if we represented it spatially and visually rather than just verbally. I didn’t know exactly what it might show, but I knew that my cartography students could figure that out as long as I taught them the technical skills for handling the data and the general concepts for visualizing multivariate data. What they decided to make with this knowledge was up to them.

Increasingly, teaching cartography involves training students on specific software platforms while communicating more general principles of the craft. This presents the need to design instructional materials that connect technical skills with thematic concepts while allowing students to creatively achieve the broader educational objectives of a liberal education. As an instructor of cartography at Middlebury College, I have largely followed a project-based learning approach focused on the process of cartographic design. My learning objectives seek to link techniques and concepts in an extended creative procedure that involves data management, problem setting, problem solving and reflection. At different steps along the way, the students must make their own design decisions, applying the available means to their chosen ends. Here, I describe the case of mapping the study abroad program in order to illustrate the general approach of integrating technical and conceptual teaching through design problems.

The Project

I gave the students a very simple prompt: Design a layout for the Web that explores how the Study Abroad Program connects Middlebury students to the rest of the world. The students also received a spreadsheet, supplied by Stacey Thebodo of the Study Abroad Program, listing all students who had studied abroad between 2006 and 2010. In addition, the students received some geographic data, including country boundaries, in a common GIS format. Like all the projects in the course, this assignment provided students with an opportunity to apply topical and theoretical concepts that had been introduced in lecture and readings. For that week, the topic concerned spatial representations of multivariate data based largely on Jacques Bertin’s theory of graphics.1 The three learning objectives of this assignment each connected theory to technique at different steps of the creative procedure:

  1. demonstrate data management skills for cartography, specifically how to transform a list of nominal data into a statistical map;
  2. identify the components of information to visualize in order to satisfy a purpose for an intended audience;
  3. solve the problem given real-world constraints (available data, software, time and knowledge).

Data Management

The dataset came packaged as a spreadsheet with columns for semester, year, student name, major, minor, gender, program name, city, and country. The first problem was to reformat this dataset into something that could be mapped, which required two technical operations: linking the country or city names to geographic coordinates that could be plotted on a map, and transforming nominal data into quantitative data.

The students were familiar with both the purpose and procedure of the first operation as it had been introduced in a previous assignment. They knew that descriptions of locations in an attribute table, like country names, could be joined to a separate file with corresponding geographic coordinates of each location in order to plot them on a map. But that alone would not get them much closer to visualizing the dataset, as they would wind up with a lot of overlapping geographic features, one for every row in the database. It would be far more preferable to format the dataset so that each row represented a different geographic feature (e.g. country) and each feature had attributes like the total number of students or the total number of programs. Then the students could make a map that showed spatial variation in these quantities.
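The join operation described above can be sketched in a few lines of Python. Everything here is invented for illustration (the real workflow joined against a GIS boundary file, not a dictionary), but the logic is the same: match a name field in the attribute table to a separate source of coordinates.

```python
# A minimal sketch of the attribute join described above: country names
# in the spreadsheet rows are matched to coordinates from a separate file.
# All names and coordinates are invented for illustration.
records = [
    {"student": "Student A", "country": "France"},
    {"student": "Student B", "country": "Chile"},
    {"student": "Student C", "country": "France"},
]

# Stand-in for the geographic file: country name -> (lat, lon).
coords = {"France": (46.2, 2.2), "Chile": (-35.7, -71.5)}

# The join: attach plottable coordinates to each row by country name.
located = [
    {**row, "lat": coords[row["country"]][0], "lon": coords[row["country"]][1]}
    for row in records
    if row["country"] in coords
]
print(len(located))  # 3
```

Note that both France rows land on exactly the same point — the overlapping-features problem that motivates aggregating rows by country before mapping.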

To do this, the students needed to transform nominal data into quantitative data, which was a new problem. It introduced a new technical procedure with a theoretical concept that had been introduced in lecture that week. Technically, it involved using spreadsheet functions to ‘pivot’ data, summarizing instances of one category by instances of another category (e.g. counting the number of students per country). Conceptually, however, it involved defining the core theme that the students wanted to map, or what Bertin called the ‘invariant’ of information: the base concept that does not vary across spatial, temporal or thematic dimensions and by its invariability allows us to recognize the components of information that do vary. And this conceptual side of the problem made the task a bit more difficult than simply repeating the technical steps that I had demonstrated for ‘pivoting’ data.

The intuitive unit of the study abroad dataset was ‘student who studies abroad,’ but the dataset did not necessarily come structured in a way that let us map this. It was essentially organized by semester: for every semester between 2006 and 2010, it recorded each student studying abroad. This meant that if a student studied abroad for an entire year (two semesters), then they would be listed in the dataset twice, and simply counting the number of students during the pivot operation would overcount them. There were a number of possible fixes for this, but they all required students to think about balancing what they could do given the dataset and what they should do to achieve their purpose and help the reader interpret their map.
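One such fix can be sketched with a few lines of Python (the rows below are invented, and the real spreadsheet had more columns): instead of counting raw rows, count distinct student–country pairs, which makes ‘student who studies abroad’ the invariant rather than ‘student-semester.’

```python
from collections import Counter

# Hypothetical rows: one per student per semester, so a student who
# spends a full year abroad appears twice.
rows = [
    ("Fall",   2008, "A. Smith", "France"),
    ("Spring", 2009, "A. Smith", "France"),   # same student, second semester
    ("Fall",   2008, "B. Jones", "Chile"),
]

# Naive pivot: count rows per country -- overcounts year-long students.
naive = Counter(country for _, _, _, country in rows)

# Corrected pivot: count distinct (student, country) pairs instead.
unique_stays = {(student, country) for _, _, student, country in rows}
students_per_country = Counter(country for _, country in unique_stays)

print(naive["France"])                 # 2
print(students_per_country["France"])  # 1
```

The choice between the two counts is exactly the conceptual decision the students faced: is the thing being counted a student, or a semester spent abroad?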

Setting the Problem

Once the students had seen the dataset and were shown how they could manipulate it so that it could be mapped, their next problem was to decide what they wanted to show on their map. In general, the students had to consider how to apply the means available to them (the dataset and their technical skills) to one or more ends, but these ends–the goals they sought to achieve by making their map–were decisions that they had to make on their own.

Throughout the course, I asked the students to consider their audience, media and theme when deciding what kind of map they would make. What kinds of questions would the people using this map with this kind of media want to answer about this theme? In this case, what would visitors to the study abroad Web site want to know about the program that a map could help them understand?

This tested students’ understanding of how relationships between variables help a reader make inferences from a graphic. This, of course, is the underlying principle of simple graphics, like scatterplots and charts, but in this case the students had more than just two variables that they could let the reader visually compare. The dataset included attributes for region, country, program, year, semester, gender and major. In addition, the students could generate new attributes from these, such as changes in number of students over time. What combination of variables would allow the reader to answer questions they might have? Or better yet, what might provide answers to questions that the reader might not even have thought of?

Problem Solving

As students begin the process of making a map layout, the workflow becomes less linear, as they must coordinate many interacting elements. During this phase, students will not be able to work through the problem one step at a time, but rather must shift into a mode of reflective action, evaluating how each decision interacts with other decisions that they have made or will make, and constantly adapting these pieces of the design to improve the quality of the whole.2 Their work during this phase thus reflects conceptual understanding at two levels: the individual components of the map and their interaction as a whole.

In this case, students demonstrated their comprehension of lower-level concepts in two ways. First, the students needed to choose one or more visual variables (e.g. shape, size, hue, value, texture) to represent each component of the dataset, evidencing both their conceptual understanding of Bertin’s theory of visual variables and their technical ability to implement these concepts with graphic design software. Second, the students needed to provide geographic context for the symbolized data by making a base layer. This evidenced their conceptual understanding of cartographic principles, such as projection and generalization, and their technical ability to implement these concepts.

As students implement these lower-level concepts and begin to produce a design, they confront conceptual problems of design that result from the interaction of their lower-level decisions. These include concepts like figure-ground, contrast, and balance, as well as knowing when to use an inset or when to use small multiples because a single graphic is simply trying to say too much. These concepts are difficult to master by following simple rules; rather, they mature through thoughtful reflection during the process of design.


In addition to the map layout itself, I also required students to submit a written discussion of their design process. My objective was to provide another means to distinguish between a student’s comprehension of a concept and their ability to implement the concept with technical operations. I asked students to describe the decisions that they made during the design process and to relate these decisions to concepts that were introduced in lecture and readings. The short reflective write-up provided students with an opportunity to communicate their understanding of theoretical content even if they could not apply this understanding in their layout due to technical shortcomings.


Throughout the course, my joy of receiving thirty uniquely creative expressions of student work at the end of each week was countered by the dilemma of pegging each to a standard evaluation scale. My main objective when grading was to recognize both the student’s conceptual and technical understanding during each phase of the project–data management, problem setting and problem solving–using both their map layout and written discussion. For this assignment, I focused on the following:

  1. Is the thematic unit clearly defined and intuitive for the intended audience?
  2. How many components are visualized and what kinds of inferences can the reader make by relating these components?
  3. Do the visual variables help the audience interpret the components?
  4. Does the base map demonstrate comprehension of cartographic principles?
  5. Does the map composition demonstrate comprehension of higher-level graphic design principles?

Each of these questions relates an aspect of the work to one or more concepts from the course.


This map designed by Marty Schnure ’11 shows a simple message well.
Figure 1. Map by Marty Schnure ’11

Marty has simplified the information content of her layout by removing the time component and aggregating by region. She uses intuitive visual variables (width of lines to represent magnitudes, hues of polygons to represent regions). She also uses a projection that is appropriate for this spatial extent. Her map is especially pleasing because she also demonstrates higher-level concepts of graphic design: her color scheme draws from the palette of the Middlebury Web site, her layout expresses symmetrical balance and she’s using contrast to effectively distinguish figure from ground.

Like Marty, Jue Yang ’12 also used flow-lines to represent numbers of students traveling abroad, but she added another component to this information and shows this data at two levels of spatial aggregation. Her flow lines originate from Middlebury aggregated by region and then branch midway to quantify the proportion of students studying in each country.
Figure 2. Map by Jue Yang ’12.

By designing her origin as a pie chart, which she repeats at a larger scale in the upper corner, she quietly urges her readers to compare the regional pattern while also providing a very subtle legend to her color scheme. She’s also made several decisions that evidence good cartographic principles. For one thing, she’s removed Antarctica, which makes sense for a lot of reasons: no students study there, the projection distorts the poles and would have made the continent funny-looking, and it frees up space for her flow lines to Oceanic countries. She’s also hidden an artifact that can be seen on most of the other student maps. The country boundary data has more detail than necessary for mapping at this scale. This makes some coastlines, like the west coast of Canada, accumulate line weights and appear as distracting blobs rather than crisp boundaries. Jue’s creative solution to this problem was to use white boundaries for countries and white fill for her oceans. This visually simplified coastlines without any laborious data transformations.

Several students increased the information content of their graphics by representing temporal components. Thom Corrado ’11 visualized the number of students studying in each country for each year of the dataset.
Figure 3. Map by Thom Corrado ’11.

Thom developed an original scheme that used size to represent the number of students and color to represent the year. This allows the reader to infer changes over time within any single country and also to compare the numbers of students studying in different countries for any single year. His insertion of an inset map evidences his awareness of a higher-level design problem resulting from the popularity of Europe, where circles representing the number of students each year would overlap and obscure the underlying country boundaries.

The layout developed by Katie Panhorst ’10 was one of the most ambitious efforts due to the number of components that she included. She shows two temporal components (year and semester), two spatial components (country and region), and one thematic component (program sponsor). Her design uses small multiples arranged in a grid to reveal temporal components. Her thematic component allows the reader to interpret the quantitative data in a new way by correlating the number of students to the presence of Middlebury-sponsored programs.
Figure 4. Map by Katie Panhorst ’10.

Some students chose to represent change rather than time. This involved calculating the difference between the number of students studying in different countries or regions over two consecutive years and then representing the change symbolically. Jordan Valen ’10 offered one creative solution that used proportionally-sized arrows to represent change. This allows the reader to recognize patterns of change: Latin America, Europe and Asia seem to be largely consistent over time, Africa and Oceania fluctuate from one year to the next, while the popularity of the Middle East appears to be on the rise.
Figure 5. Map by Jordan Valen ’10. 
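The change computation behind a design like Jordan’s can be sketched as a simple difference of consecutive-year counts. The regions, years, and numbers below are invented for illustration:

```python
# Hypothetical counts of students per region per year.
counts = {
    ("Europe", 2008): 120, ("Europe", 2009): 118,
    ("Middle East", 2008): 8, ("Middle East", 2009): 14,
}

years = sorted({y for _, y in counts})
regions = sorted({r for r, _ in counts})

# Change between consecutive years -- the quantity a proportionally
# sized arrow symbol would encode.
change = {
    (r, y2): counts[(r, y2)] - counts[(r, y1)]
    for r in regions
    for y1, y2 in zip(years, years[1:])
}
print(change[("Middle East", 2009)])  # 6
```

A derived attribute like this is what the earlier section meant by students generating new components, such as change over time, from the ones supplied in the dataset.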

Lessons Learned

There are four key lessons that I’ve gained from this project-based approach to teaching cartographic design:

  1. Show students how to solve problems, but allow students to set the problems to be solved. While some liberal arts students may appreciate that a course allows them to list commercial software names under the skills section of their resume, simply training students how to use software falls outside the traditional scope of a liberal education. Providing students with the technical skills to solve problems while allowing them to set the problem to solve will foster student creativity and a learning environment characterized by exploration and discovery.
  2. Integrate your teaching of technique and theory, but separate your evaluation of technical skills and conceptual knowledge. The disparity of technical skills in a classroom can challenge both the evaluation of student work and the motivation of students to work. Some students will feel disadvantaged if their peers have had prior experience with a particular software package, while those who enter the classroom with experience won’t be challenged if they are simply being shown how to push buttons that they have already learned to push. Additionally, a student will feel frustrated if their mastery of a complicated tool constrains their opportunity to demonstrate their comprehension of concepts. The reflective write-up describing the design process is one strategy to tease apart these two kinds of knowledge, but I found that some students, even those with much to gain from a verbal description of their thinking, seemed to treat this part of the assignment with less effort than the map product itself. This may have been due to a failure on my part to clearly communicate the importance of this part of the assignment, or it may reflect a more intrinsic bias on the part of some students to focus on the product of design rather than the process.
  3. Design is reflective action, and reflection takes time. This assignment required the students to commit a significant amount of time. In part, this stems directly from my expectation that students set their own problems. Problem-setting requires students to take the time to explore the dataset in order to discover its possibilities. Also, because the students decide what to make, they also have to decide when they’re done. Any student who has learned how to efficiently meet a professor’s expectations will find it difficult, if not frustrating, to decide when to stop working on their own. But independently of having students set problems, the complex nature of cartographic design, where elements interact with each other and one decision influences both past and future decisions, translates into time needed to reflect and adapt. In particular, high-level design concepts, such as contrast and balance, do not depend on a correspondingly high level of technical knowledge that is difficult to master. Rather, they rely on students taking the time to consider and resolve them. As such, these gestalt concepts underlie the most common design flaws in student projects.
  4. Provide project topics that engage students.
    This last point is by no means novel in a liberal education but it should not be ignored when developing topics for student projects. The study abroad project provides an example of a dataset that students were drawn to explore. Many had studied abroad, so many started by looking themselves up in the dataset. This provided an opportunity to discuss key cartographic concepts, like data integrity and abstraction, as the row of fields attached to the dot on the monitor didn’t quite map to the richness of their memory. They became curious about how popular their program was and what places were less traveled. And they became interested in sharing this with other students and promoting the college program. It’s a useful case in the larger pedagogy of teaching techniques at a liberal arts college: give students problems that connect to their experience and involve both problem setting and solving. Many will recognize that visualizing quantitative data is a creative act.

1.  Jacques Bertin, Semiology of Graphics (Madison, WI: The University of Wisconsin Press, 1983). [return to text]
2.  Donald A. Schön, The Reflective Practitioner: How Professionals Think in Action (New York: Basic Books, 1983). [return to text]

Simple Animations Bring Geographic Processes to Life

by Christopher L. Fastie, Middlebury College


It seems we spend a lot of time teaching about things that we can’t easily observe, maybe because students are already familiar with processes they see operating around them, or because previous teachers have already harvested those low-hanging fruit. Processes that are obscure because they are small, large, slow, fast, or distant in time or space require more careful explanation. Some of these processes can now be revealed using digital technologies. I used Google Earth to model a very large process that took place 13,500 years ago. I used a global positioning system (GPS) receiver to map a series of glacial features in west central Vermont and transferred the results to Google Earth. I then added graphical models of the retreating Laurentide glacier and associated pro-glacial lakes and rivers which shaped the mapped features. Animated flyovers of the augmented Google Earth surface at different stages of the reconstructed glacial retreat were saved as video files and incorporated into an explanatory video. I have presented this video both before and after student field trips to the study area with good results. Subsequent upgrades to Google Earth allow animated flyovers to be recorded and played back in the free version of the program. This offers a streamlined creation process and the potential for a more interactive and collaborative experience.

Click on the video link below to view: Old, Flat, and Unconsolidated: Salisbury’s Gravelly Past from Chris Fastie on Vimeo.

Science instruction benefits greatly from graphical demonstrations of physical structures and processes. Current textbooks are elaborately illustrated, and associated Web sites sometimes include animations of important general processes, but ready-made animations of more specific processes or locally relevant examples are rarely available. Software for producing custom animations is becoming more user-friendly, but the cost and training commitment still prevent wide adoption. Google Earth is a free program that is based on animation of the earth’s surface and that includes tools sufficient for creating simple animations of many social, geographic, geologic, and ecological processes. The professional version (Google Earth Pro), which is not free, adds the capability to save these animations as video files that can be viewed separately from the program.

Geomorphology and Google Earth

Most geomorphic processes, by definition, include movement of material at the earth’s surface, and are therefore well suited for animated representations in Google Earth. Extant geomorphic features can be difficult to observe in the field because they are large, subtle, or obscured by vegetation. Google Earth is an effective way to highlight such features before they are visited in the field, or afterwards when observations can be summarized and interpreted. By animating the time course of development of such features, geomorphic processes and concepts can be effectively revealed.

Glaciers shape the landscape as they flow, but evidence of glacier advance is often obscured by more recent features produced during glacier retreat. The last part of the Laurentide ice sheet to retreat from Vermont was a lobe of ice in the Champlain Valley. As the length and thickness of this lobe diminished, great sediment-laden rivers pouring from the glacier and from the surrounding barren landscape flowed through and alongside the ice. The Champlain Valley drains to the north, and the glacier impounded a temporary body of water called Lake Vermont which rose to a level several hundred feet higher than the current Lake Champlain. Some of the water flowing south into this lake flowed alongside the glacier and built gravelly floodplains between the newly exposed valley walls and the ice. As the glacier continued its retreat, these flat surfaces were abandoned when the river found a lower course next to the ice. Remnants of these surfaces, called kame terraces, are conspicuous features of the Champlain Valley. When the glacial rivers reached the level of Lake Vermont, they built sandy deltas into the lake. These fine-grained deposits were left high and dry when Lake Vermont eventually drained as the impounding ice retreated far enough north.

Modeling Landscape Features

In 1998, I moved into a house at the eastern edge of the Champlain Valley and began to explore the neighborhood. The landscape was dominated by the steep lower slopes of the Green Mountains, but these bedrock slopes were interrupted by dozens of flat, level terraces that appeared to be built of unconsolidated material (sand, gravel, boulders, etc.), instead of solid bedrock. I am a plant ecologist by training, not a geologist, but I began to sketch the extent and location of these flat places to see if the larger pattern held clues to their origin. The sketch maps on paper were a key element of the discovery process because the pattern of the flat areas, which are spread along miles of valley edge, was difficult to see without them. Dense forest covers most of the area and the resolution of the existing topographic maps was insufficient to reveal the subtle terraces. It is possible to identify some of the larger terraces from the air or from stereo aerial photographs, but most terrace margins and their relative heights cannot be discerned well. I assumed that no one had ever mapped these terraces before, so my map would be the first opportunity to study their landscape-level pattern in detail.

The evolving paper map allowed me to begin to reconstruct the progressive positions of the glacier margin and the associated routes of the ice-marginal river that must have created the kame terraces. It required considerable imagination to visualize the massive glacier redirecting a swollen, turbulent river along a hillside that today is three hundred feet above the valley floor. The map was good data, but to explain the complex course of events that played out over many decades and affected many square miles of hillside, it was just a start.

In 2007, I acquired a consumer GPS receiver which had two crucial features: it could produce tracklogs of walking tours by recording location coordinates at ten-second intervals, and the Garmin Mapsource software it came with had a menu item called “View in Google Earth.” So I could walk the margins of a kame terrace with the GPS recording, upload the tracklog to a PC using Mapsource, and then see the tracklog in Google Earth. Google Earth allowed the terrace margins to be displayed on a recent color aerial photo stretched over the three-dimensional topographic surface of the study area. This digital landscape could be viewed from any angle and any height above the surface, and one could “fly” over the scene at will. This encouraged me to make digital tracklogs of all the terraces I had found. Without the tracklogs displayed, the terraces could not be discerned in the crude Google Earth topography, which is just a digital version of the mid-twentieth century USGS topographic maps. As the terraces accumulated in Google Earth, I realized that the animated movie of ice, rivers, deposition, and erosion that had been playing in my mind for several years might be successfully shared with others.

Google Earth incorporates simple drawing tools that allow lines and shapes to be placed on or above the digital landscape surface. Three-dimensional objects can be represented by extending lines from objects down to the ground surface. Far more elaborate 3-D objects can be created using the free program Google SketchUp, but all of the objects created for this project were done with the tools included in Google Earth. I used these tools to trace all the terrace margins imported from Mapsource, creating horizontal polygons in the shape of each terrace. I used the option to extend lines down to the ground surface to give each terrace a solid appearance. The resulting shapes are crude representations of the actual terraces (which do not have vertical sides, and are not all perfectly level) but provide a bold display of the overall pattern formed by the terraces.
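Under the hood, shapes like these are stored as KML, the XML format Google Earth reads. As a sketch of how a flat, extruded terrace polygon is encoded (this is not the author’s actual workflow, and the name, corner coordinates, and elevation below are invented), a placemark can be generated with plain string formatting:

```python
# Invented corner coordinates (lon, lat) and a level "tabletop" elevation.
corners = [(-73.10, 43.95), (-73.09, 43.95), (-73.09, 43.96), (-73.10, 43.96)]
elevation = 150  # metres above sea level

# KML coordinates are lon,lat,alt triples; the ring must close on itself.
ring = " ".join(f"{lon},{lat},{elevation}" for lon, lat in corners)
ring += f" {corners[0][0]},{corners[0][1]},{elevation}"

kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Kame terrace (example)</name>
    <Polygon>
      <extrude>1</extrude>                  <!-- drop walls to the ground -->
      <altitudeMode>absolute</altitudeMode> <!-- hold the top level -->
      <outerBoundaryIs><LinearRing>
        <coordinates>{ring}</coordinates>
      </LinearRing></outerBoundaryIs>
    </Polygon>
  </Placemark>
</kml>"""
print("<extrude>1</extrude>" in kml)  # True
```

The `<extrude>` flag is what produces the solid, walled appearance of each terrace, and `altitudeMode` set to absolute keeps the polygon level rather than draping it over the terrain.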

I also used Google Earth’s drawing tools to make simple models of the glacier, Lake Vermont, other pro-glacial lakes, and meltwater rivers as I envisioned them at three different times during the formation of the terraces. This allowed the geomorphic features along a four mile stretch of hillside to be put into the context of the retreating ice margin and the associated lateral displacement of an ice-marginal river. I could now display three stages of the landscape process that had shaped my backyard 13,500 years ago.

To bring the process to life, I used the Movie Maker tool in Google Earth Pro to record flyovers of the augmented landscape at different stages in the reconstructed landform-building process. Due to the large scale of the study area there is great explanatory power when the view zooms from the regional to the local and then to a detail, for example, of a river’s route along the glacier. Google Earth allows any view of the digital landscape to be saved by “snapshotting” the view of a saved “placemark.”  The program will automatically and smoothly “fly” from one placemark view to another and these single flights formed the content of most of the video clips I produced. A few dozen of these clips were edited together using Adobe Premiere Pro. By inserting cross-fades between identical landscape views depicting different stages in the process, simple animations of the landscape development could be produced.

Presenting the Results

I first presented a draft of the video after students in my class at Middlebury College spent a January day exploring the snow-covered landforms. We made multiple stops to see several key parts of the study area and were still thawing out when we piled into my office to watch the video consisting only of the silent flyovers from Google Earth. I think the students were able to more meaningfully synthesize their field observations after seeing the animated landscape. The reward was probably greatest for those students who had been working hard during the trip to make sense of the individually mundane features. I assume that the video allowed everyone to attach some additional geomorphological significance to the flat surfaces we had visited. During this field trip, we collected some new video of ourselves which was later incorporated into the final version of the video along with other footage and a narration.

For a subsequent class field trip to this area, I asked a new group of students to watch the video beforehand. By this time, a completed twelve-minute version of the video was available online. Viewing the video gave them a context for understanding what they later saw in the field and established a shared baseline of knowledge. I asked students a year later whether viewing the video before or after the field trip would have been more productive and the consensus was that before was better. The primary reason given was that the subject was sufficiently novel and obscure that every explanatory aid was welcome. Viewing the video first also allows a class to quickly address more complex issues such as the relationship between geomorphic origin and vegetation. However, some students recognized that the process of struggling to make sense of confusing field observations has pedagogical value. The video presents a compelling explanatory model, so it eliminates the need for students to assemble and test their own. Waiting until after the field trip to view the video has great potential for classes with the background and motivation to benefit from a puzzle-solving exercise.

In May 2009, Google Earth 5 was released with a new feature that allows flyover tours to be saved and played back within the program. The tour is not saved as video, but as a set of instructions that the program interprets in real time. While creating the tours, drawn objects (e.g., rivers or kame terraces) can be toggled on or off, creating simple animations. Photographs or videos can be displayed sequentially at designated places in the landscape. Narrations or music can be created and saved with a tour. This new feature offers an alternative method of sharing explanatory flyovers and animations.

Learning to save and distribute tours is easier than learning to save video clips and produce online videos, and it can be done with the free version of Google Earth. Without programming, tours can be embedded on Web pages, where they play automatically in a window. The window is a working instance of Google Earth, so if the tour is stopped the user can interact with the digital landscape without having Google Earth installed (a free Google Earth browser plug-in is required). Tour files can also be distributed directly to users, who can interact with them using Google Earth. The Keyhole Markup Language (KML) files that encode the tours are usually small and easy to distribute to others. In addition to watching the recorded tour, users with Google Earth installed can experiment by toggling features on and off or creating new features of their own. This creates the opportunity for interactive and collaborative projects. An advantage of KML tours over tours saved as video files is that they provide a view of the full-resolution Google Earth landscape, not a compressed video version, and display the most current aerial photos. Soon after I completed the video about glacier features, Google Earth updated the photo coverage of Vermont with higher-quality, more recent images, instantly rendering the video outdated. A primary disadvantage of distributing KML files to others is that there is less control over the viewing experience, which depends on the user's operational knowledge of Google Earth and on settings within the program (and, of course, Google Earth must be installed). For examples of the tours I created, see www.fastie.net. You can also download the .kmz file there for viewing in Google Earth.

Learning to view the landscape in Google Earth is fun and easy. Learning to produce and save video clips or KML tours is more of a challenge. Google's online help and tutorials are a start, but you should plan for some trial and error if you want to produce something other than the simplest result. If there is someone on your campus who can help you get started, you might be able to climb the steepest part of the learning curve in an hour. Otherwise, plan for some additional learning time. Although the required commitment is not trivial, the models and tours you create can be used year after year to give students valuable insight into geographic patterns and processes that no one has witnessed firsthand.

SmartChoices: A Geospatial Tool for Community Outreach and Educational Research

by Jack Dougherty, Trinity College

SmartChoices, a Web-based map and data sorting application, empowers parents to navigate and compare their growing number of public school options in metropolitan Hartford, Connecticut. A team of students, faculty, and academic computing staff at Trinity College developed this digital tool in collaboration with two non-profit urban school reform organizations: the Connecticut Coalition for Achievement Now (ConnCAN) and Achieve Hartford (the city’s public education foundation). While English- and Spanish-speaking parents learned how to use SmartChoices through a series of hands-on workshops, my students and I simultaneously collected data to better understand the “digital divide” and factors influencing parental decision-making on school choice. Overall, our project supports two liberal arts learning goals: to deepen student interactions with members of our urban community, and to nurture student participation in creating original research for real audiences.

The idea for SmartChoices began during a conference call with community partners a few weeks before the fall 2008 semester. Marc Porter Magee from ConnCAN and I were brainstorming about a possible collaboration between his education reform group and my Cities, Suburbs, and Schools undergraduate seminar at Trinity. Building on Trinity’s long-standing Community Learning Initiative, I designed this interdisciplinary seminar as a team research workshop, where we read historical and social science studies on schooling and housing and then design local research projects to test the application of research findings to metropolitan Hartford. Our region is a land of extremes: Hartford is one of the nation’s poorest cities, located inside a belt that includes some of the wealthiest suburbs. A year earlier, while learning basic GIS skills, my students created thematic maps to explore city and suburban differences in educational resources and outcomes, using data provided by ConnCAN. We all sensed the power of maps, and sought to build on our relationship by going a step further.

Marc and I agreed that the expansion of public school choice would soon become the most pressing issue for Hartford parents, because each family’s number of options was increasing dramatically, for two reasons. First, the Sheff v. O’Neill school desegregation case created more interdistrict choices. Based on a 1996 ruling, the court mandated Connecticut to create more magnet schools (designed to attract both city and suburban students), encourage suburban districts to accept more city student transfers, and begin counting public charter and technical school students when calculating racial integration goals. Second, the Hartford Public Schools launched its district-wide school choice program. The district replaced neighborhood school assignment with a citywide lottery, required for all students who completed their current school’s last grade level and optional for any students who desired to change schools. Suddenly, Hartford parents who were accustomed to sending their children to the neighborhood school were surrounded by more choices, and when their child finished elementary or middle school, they were now required to submit a choice application to advance to the next grade level. Altogether, a typical Hartford parent of a child entering the 6th grade now faced over thirty different school options. Moreover, competition between interdistrict and district providers meant that there were two different major application processes–and a host of minor ones–each with its own application form and procedures. While public school choice was intended to improve educational opportunity, it quickly became overwhelming.

“Do you think you and your students could design a brochure to show Hartford parents their school choices?” Marc asked.

I explained that there was no way to create one printed document that showed parents their exact set of eligible choices. We needed a dynamic system to deliver the right school data–and only that data–for each family, based on their residence and child’s age. First, parents wanted to see only those schools that offered their child’s grade level, and these varied widely across the two hundred public schools in the metropolitan region (K-2, K-5, K-6, 3-5, 5-8, 6-8, 7-12, 9-12, and so forth). Second, the Hartford Public Schools divided the city into four zones, guaranteeing bus transportation only for students attending schools within their residential zone, provided they did not live so close that they could walk. Third, across the region, many schools were limited to enrolling students from designated attendance zones or school districts. Yet public school choice happened so fast that most Hartford parents, particularly new arrivals with limited literacy skills, had little sense of where their interdistrict and district school options were located.
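The filtering rule behind such a system is simple to state even if gathering the data is not: keep only schools that offer the child's grade and that enroll students from the family's zone. A minimal sketch in Python, assuming the school list has already been parsed into grade ranges and zone lists (all school records, zone names, and helper names here are hypothetical, not the actual SmartChoices code):

```python
# Hypothetical sketch of a SmartChoices-style eligibility filter:
# keep schools that offer the child's grade level and that accept
# students from the family's residential zone.
GRADES = ["PreK", "K"] + [str(g) for g in range(1, 13)]

def offers_grade(school, grade):
    """True if `grade` falls inside the school's range, e.g. 'K-5'."""
    lo, hi = school["grades"].split("-")
    return GRADES.index(lo) <= GRADES.index(grade) <= GRADES.index(hi)

def eligible_schools(schools, grade, home_zone):
    """Schools offering `grade` that enroll from `home_zone`.
    An empty zone list means the school is open region-wide (e.g. magnets)."""
    return [s for s in schools
            if offers_grade(s, grade)
            and (not s["zones"] or home_zone in s["zones"])]

# Made-up records for illustration only
schools = [
    {"name": "North Elementary", "grades": "K-5", "zones": ["North"]},
    {"name": "Regional Magnet", "grades": "PreK-8", "zones": []},
    {"name": "Central High", "grades": "9-12", "zones": ["North", "South"]},
]

print([s["name"] for s in eligible_schools(schools, "3", "North")])
# → ['North Elementary', 'Regional Magnet']
```

The real site also had to compute distance from home and attach test and demographic data, but the grade-and-zone filter is what makes each family's result list unique.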

“The only way we can do this is to create a Web site,” I replied, “and it needs to show parents their eligible schools on a map, in relation to where they live.” We agreed to cooperate on attempting to build a pilot version during the fall semester, with Trinity designing the technology and ConnCAN providing school data and community support. The fact that my graduate training had focused on history and sociology (not computer science), and that I had acquired only “advanced beginner” GIS and HTML skills during the past decade at Trinity, should have made me think twice before leaping. But I was fascinated by the idea of blending a much-needed community outreach project with a research tool to better understand how parents from different neighborhoods made school choices.

Prior to this conversation, I had read innovative research studies on parents, information, and school choice. In Washington, DC, Jack Buckley and Mark Schneider created a Web site where users could compare different public schools (traditional and charter), while researchers monitored mouse clicks and search patterns.1 The authors found that parents using the site displayed racial preferences: when comparing two schools with comparable achievement levels, parents were more likely to drop the school with a higher percentage of black students. Later, I became aware of a related study by Justine Hastings and Jeffrey Weinstein in Charlotte-Mecklenburg, North Carolina, where researchers experimented with providing school data to parents in different paper formats.2 They discovered that low-income parents who received a list of schools ranked by test scores were more likely to choose higher-performing ones than a control group which received an alphabetical list, without test data. Furthermore, Trinity economics professor Diane Zannoni, our undergraduate co-authors, and I published an article that analyzed how much money suburban homebuyers were willing to pay for a comparable home on the more “desirable” side of an elementary school attendance line, and connected this trend to the growing availability of school-level data on the Internet.3

Fortunately, my Trinity colleagues and students shared in the enthusiasm and hard work to create the SmartChoices Web site. When parents type in a child’s home address and grade level, the site displays all of their eligible district and interdistrict public schools on an interactive Google Map, as well as a table for sorting and comparing distance from home, racial balance, and student achievement levels. Additional links point users directly to individual school Web sites, application forms, and transportation information. David Tatem, academic computing instructional technologist, helped me to conceptualize the interactive map and school database, and provided GIS support. Undergraduate research assistants Jesse Wanzer and Nick Bacon digitized school attendance boundaries. My seminar students compiled address and demographic data for over two hundred schools in the city and nearby suburbs. Devlin Hughes concentrated on refining the user interface as a case study for her senior thesis on data visualization, with assistance from Trinity’s social science data coordinator, Rachael Barlow. Another student, Christina Seda, provided the Spanish translations. Jean-Pierre Haeberly, the college’s director of academic computing and an exceptionally talented programmer, developed the Web application. Based on Web 2.0 design principles, SmartChoices exists on a three-tier server architecture, which integrates the Web server (for the search page and interactive map) with the application and database servers. Asynchronous requests permit the user to initiate searches and view results without having to reload the page, as in a traditional form-based Web site. To encourage other regions to create similar Web sites, we are distributing SmartChoices code as free, open-source software upon request by email <SmartChoices@trincoll.edu>.
Figure 1. SmartChoices Web interface

Prior to our public launch, ConnCAN community organizer Lourdes Fonseca helped organize a series of focus groups to receive feedback from Hartford parents and administrators of different school choice programs. My seminar students designed interview guides and guided participants through the pilot site, while recording how users interacted with and interpreted school data on their screens. We made several revisions to make the site as user-friendly as possible for Hartford parents, including many who have little or no experience with computers. We also faced difficult choices when deciding which school-level data categories to feature, since we committed to developing a site that would fit on display screens no larger than 1024 pixels wide. School choice administrators sometimes requested revisions that would serve their particular program’s needs over others, or feature promotional material. Some education officials expressed concern about direct school-to-school comparisons of test scores or student racial composition. As a result, we took the position that SmartChoices would stand as an independent project, not affiliated with any school, district, or choice program. Furthermore, we committed to reporting data obtained from public sources of information, such as the state department of education or school Web sites. By providing the most comprehensive source of public school choice information, SmartChoices has filled the role of a “consumer reports” service for public education in the Hartford metropolitan region.

After our public launch in early 2009, the Achieve Hartford local education foundation joined the project to fund research and community outreach. Our primary research questions were: Who uses SmartChoices, and how does digital information influence parental decision-making? My Trinity students and I organized a series of parent training workshops to collect both qualitative and quantitative data, and ConnCAN contracted with community organizers from another Hartford organization, the Voices of Women of Color, to assist parents at public libraries and to bring laptop computers into people’s homes through school choice “house parties.” Print, radio, and television media also broadcast features about the Web site.

Who used SmartChoices, and where did they search? In our full report, we analyzed Web site statistics and found that during the five-month choice application period in 2009-10, 3,385 distinct searches were conducted on SmartChoices. Over three-quarters of these searches were conducted for addresses in the city of Hartford, while the remainder included addresses in suburban towns and outside our coverage area. The dot distribution map illustrates the geographical spread of SmartChoices usage across urban and suburban areas. The grade levels most commonly searched were kindergarten (16 percent) and 9th grade (14 percent), which match the most common grade-level entry points in the system.

Figure 2. Distribution of SmartChoices searches

How did people use SmartChoices, according to Web statistics? We created a sorting feature that allowed users to organize their search results in five different categories: school name, distance from home, racial balance, test scores, and test gain over the previous year. The Web site randomized how each user’s initial results were sorted, to determine which categories were most frequently selected. Among users who sorted results, the most popular categories were Distance (25 percent) and Test Goal (24 percent), with Test Gain and Racial Balance trailing behind. However, we observed that most users never sorted their results (70 percent of the 3,385 distinct searches), perhaps because they did not see the sort button or did not understand how it worked.
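The randomized-default-sort idea is worth pausing on: by scrambling the initial order and tallying only the sorts users actively click, the site can measure which categories people care about without the default order biasing the count. A small sketch of that logic (the category names follow the site's five options, but the function and logging structure are hypothetical):

```python
import random
from collections import Counter

# Hypothetical sketch of randomized default sorting with click tallies.
# The five category names mirror the SmartChoices sort options.
SORT_KEYS = ["name", "distance", "racial_balance", "test_goal", "test_gain"]
sort_clicks = Counter()

def initial_sort(results):
    """Order a fresh result set by a randomly chosen category (untallied)."""
    key = random.choice(SORT_KEYS)
    return sorted(results, key=lambda r: r[key]), key

def user_sort(results, key):
    """Re-sort after a deliberate click, and tally the chosen category."""
    sort_clicks[key] += 1
    return sorted(results, key=lambda r: r[key])

# Made-up school records for illustration
results = [
    {"name": "B School", "distance": 2.1, "racial_balance": 0.4,
     "test_goal": 55, "test_gain": 3},
    {"name": "A School", "distance": 0.8, "racial_balance": 0.6,
     "test_goal": 71, "test_gain": -1},
]
shown, default_key = initial_sort(results)
by_distance = user_sort(shown, "distance")
print(by_distance[0]["name"], dict(sort_clicks))
# → A School {'distance': 1}
```

Because the initial order is random across users, any category that dominates the click tally reflects genuine preference rather than inertia.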

Rather than simply waiting for users to find and visit our site, Trinity students and I organized ten hands-on workshops (in both English and Spanish) in Hartford to train parents how to use the site, while interviewing them in depth about their decision-making process. Our sample of 93 workshop interview participants was limited to parents of children entering elementary school (grades PreK-8) in the next academic year. Each workshop participant interacted one-on-one with a trained Web site guide, in front of a computer, for about fifteen to forty minutes, and gave informed consent to be interviewed. Each guide followed a script that asked parents to list their top-choice schools (before and after using SmartChoices), and walked users through the Web site while explaining what data labels meant. About half of these interviews took place in workshops at local neighborhood schools, while the other half occurred during larger regional school choice fairs. At the neighborhood events, our most successful workshops were organized with the assistance of Hartford Public School Family Resource Aides (FRAs), who helped us arrange access to school computer labs and attract interested parents with bilingual flyers. Note that these workshops were not located at representative locations across the city (due to research design and logistical issues). Furthermore, all workshop participants were self-selected, meaning they voluntarily responded to a neighborhood workshop flyer or walked up to our tables at a regional school choice fair. By definition, self-selected participants are not necessarily representative of the Hartford-area population at large, limiting the interpretation of our results.



SmartChoices parent workshops. Photos by Nick Lacy.

How did the SmartChoices workshop influence participants’ decision-making? Before introducing the Web site, our interviewers asked a pre-workshop question: for one child in your family, what are your top choices for schools next fall? After hands-on Web searching and sorting, we asked it again as a post-workshop question. When we compared participants’ pre- and post-workshop responses for their top-choice schools, we found that the total sample divided into roughly equal thirds. About one-third changed their top choice, meaning the workshop experience led them to switch from one school to another. About one-third clarified their top choice, meaning they began with no response or one that was too vague for an application form (“the school near Walmart”) but eventually selected a specific school. Finally, about one-third did not change their top choice.

For the thirty-two workshop participants who changed their top choices, we compared their initial selection to their final selection to measure the relative influence of the four key data categories in the SmartChoices search results. To compare pre/post-workshop responses across different categories, we expressed all of them in common units, based on one-third of a standard deviation of the mean difference. On this scale, Test Goal (69 percent) and Test Gain (64 percent) were the most influential categories in this sample, followed by Racial Balance (47 percent). Interestingly, Distance was the least influential category in this phase of the analysis, because roughly equal portions of participants selected new schools that were farther from, closer to, or a similar distance from their homes.
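One plausible reading of this standardization, sketched below purely as an illustration and not as the authors' exact method: for each category, divide every participant's pre-to-post change by the standard deviation of those changes, and count a participant as "influenced" when the change is at least one-third of a standard deviation. The numbers are made up.

```python
import statistics

# Illustrative (not the authors' actual) standardization: share of
# participants whose pre/post change in a category is at least
# one-third of a standard deviation of the changes.
def share_influenced(pre, post, threshold=1/3):
    diffs = [b - a for a, b in zip(pre, post)]
    sd = statistics.stdev(diffs)
    return sum(abs(d) >= threshold * sd for d in diffs) / len(diffs)

# Hypothetical test-goal scores of five participants' top-choice
# schools, before and after the workshop
pre_goal = [55, 60, 48, 71, 62]
post_goal = [68, 60, 66, 71, 75]
print(round(share_influenced(pre_goal, post_goal), 2))
# → 0.6
```

Dividing by each category's own standard deviation is what lets percentages for test scores, racial balance, and distance be compared on one scale despite their different units.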

Does this mean that school distance from home does not matter to parents? Absolutely not. When we compared how workshop participants sorted results, we found that Test Goal and Distance were virtually tied (at 23 and 22 percent, respectively), followed by the other categories. Given that parents often make trade-offs between distance and school quality factors they value, we infer that SmartChoices helped workshop participants to identify desirable schools that were located closer to, or a similar distance from, their initial top-ranked school. In other words, we suspect that the SmartChoices map and distance calculator helped workshop participants find “good schools” (however they defined them) that they were not previously aware of.

Does increased public school choice improve education for all? SmartChoices cannot answer this policy question, because the project considers only families who sought to make a choice and self-selected to try our Web site. For our next research project, Diane Zannoni and I wish to conduct a spatial analysis of who does (and does not) participate in school choice, either by submitting an application or by exiting the district. We are also deeply interested in spatial research that uncovers racial and social stratification resulting from choice.

Nevertheless, the movement for public school choice has attracted multiple supporters in our politically divided nation, particularly in metropolitan Hartford. Advocates of the Sheff ruling support voluntary interdistrict magnet schools and city-suburban transfers as the most viable means to racially integrate schools. At the same time, market-oriented advocates embrace public school competition as a means to empower urban parents to exit low-performing schools and enter those more likely to reduce the achievement gap. “Choice” has become such a politically popular label in metropolitan Hartford that it appears in the name of at least three distinct entities: the Open Choice city-suburban transfer program, the Regional School Choice Office, and the Hartford Public School’s “All-Choice” initiative.

We cannot ignore the influence that the Internet has had on consumerist activity in “shopping” for public schools. Google, the ubiquitous search engine, recently reported that the category of “school comparisons” was the leading type of public data search conducted on its Web site in November 2009.4 In its report, Google defined “school comparisons” as any search on education from PreK to higher education, such as “Douglas County schools” or “top law schools.” Indeed, other categories might have ranked higher if Google had not broken out certain subgroups of searches, such as separating “cancer” from “health” searches in general. But the report confirms that citizen-consumers are eagerly looking to the Internet to help them make judgments about the relative qualities of different educational options.

Whether or not one supports public school choice, it exists and continues to grow in our nation’s urbanized areas. To participate in these application processes, families need access to reliable information to make informed decisions about public schools. To be sure, some information flows through parents’ social networks: the opinions of trusted relatives and neighbors, conversations with principals and teachers, and personal visits to schools. But other sources of information–such as student achievement, racial balance, distance from home, and program offerings–are more readily available on the Internet.

Yet access to information, and knowledge of how to search and interpret Web sites, is not uniformly distributed. The “digital divide” was more commonly discussed a decade ago, but it has not disappeared, and it remains one of the most challenging barriers in the twenty-first-century knowledge-driven economy. While working on the SmartChoices project, we were struck by the difficulty of obtaining reliable, current data on the scope and size of the digital divide in the Hartford region. In 2007, the US Census Current Population Survey posed this question to a national sample: “Do you (or anyone in this household) connect to the Internet from home?” The proportion responding “Yes” who resided in the city of Hartford ranged from 34 to 55 percent, while the proportion in the three-county Hartford metropolitan statistical area ranged from 75 to 92 percent. The range in estimates is due to the large number of people whose responses were omitted because they answered “No” or did not respond to the initial question, “Do you access the Internet from any location?” If we include these omitted responses, the results point to the low end of the estimated range. In addition, we still lack comprehensive data on the true scope of adult literacy–particularly computer literacy–among residents of the city of Hartford compared to the metropolitan region or state. In the SmartChoices parent workshops, we witnessed first-hand a wide range of computer ability among adults who self-identified as new versus regular users.

As the “SmartChoices” name clearly implies, familiarity with the World Wide Web has become a necessary ingredient to be an informed consumer of public education in Greater Hartford. The rapidly expanding (and constantly changing) set of public school options, as well as differences between competing choice providers and their eligibility guidelines, made it nearly impossible for us to communicate with parents through a paper booklet or catalog. We created SmartChoices as a dynamic Web site–with an interactive map of school locations, distance-to-home calculator, and transportation links–because we could not conceive of a way to adequately present the key information that each parent needed on paper. Furthermore, beginning in January 2010, the Hartford Public School Choice Office shifted from paper-only to Web-only applications. For families in our urban setting, learning how to navigate the Internet is not an option, but a requirement.

To be sure, digital tools like SmartChoices are valuable only to people who have access to them and know how to use them. In our parent workshops, my Trinity students observed significant differences between participants with greater computer familiarity and higher levels of education and those with less of both. If school choice is expected to improve public education for all, then community outreach needs to focus on novice computer users, offering information literacy to help them understand and interpret key data categories (in English and other languages) as well as hands-on guidance on Web skills such as sorting data and following through with online applications. Liberal arts college students, staff, and faculty already enjoy most of these skills, and we can learn a great deal about our broader communities if we find meaningful ways to engage with them on these important issues.

1. Jack Buckley and Mark Schneider, Charter Schools: Hope or Hype? (Princeton, NJ: Princeton University Press, 2007). [return to text]
2. Justine S. Hastings and Jeffrey M. Weinstein, “Information, School Choice, and Academic Achievement: Evidence from Two Experiments,” Quarterly Journal of Economics 123, no. 4 (November 2008):1373-1414, posted online 15 October 2008. [return to text]
3. Jack Dougherty, Jeffrey Harrelson, Laura Maloney, Drew Murphy, Russell Smith, Michael Snow, and Diane Zannoni, “School Choice in Suburbia: Test Scores, Race, and Housing Markets,” American Journal of Education 115 (August 2009): 523-548, published online 4 June 2009. [return to text]
4. “Statistics for a Changing World: Google Public Data Explorer in Labs,” Official Google Blog (8 March 2010), http://googleblog.blogspot.com/2010/03/statistics-for-changing-world-google.html. [return to text]