The Difference that Inquiry Makes: A Collaborative Case Study on Technology and Learning, from the Visible Knowledge Project

This collection of essays from the Visible Knowledge Project is edited by Randy Bass and Bret Eynon, who served together as the Project’s Co-Directors and Principal Investigators. The Visible Knowledge Project was a collaborative scholarship of teaching and learning project exploring the impact of technology on learning, primarily in the humanities.  In all, about seventy faculty from twenty-two institutions participated in the Visible Knowledge Project over five years. Participating campuses included five research universities (Vanderbilt University, the University of Alabama, Georgetown University, the University of Southern California, Washington State University, and the Massachusetts Institute of Technology), four comprehensive public universities (Pennsylvania’s Millersville University, California State University (CSU)–Monterey Bay, CSU Sacramento, Ohio’s Youngstown State University, and participants from several four-year colleges in the City University of New York system, including City College, Lehman, and Baruch), and three community colleges (two from CUNY–Borough of Manhattan Community College and LaGuardia Community College, and California’s Cerritos College). In addition to campus-based teams, a number of independent scholars participated from a half dozen other institutions, such as Arizona State and Lehigh University.

The project began in June 2000 and concluded in October 2005. We engaged in several methods for online collaboration to supplement our annual institutes, including an adaptation of the digital poster tool created by Knowledge Media Lab (Carnegie Foundation), asynchronous discussion, and web-conferencing. For more detailed information, see the VKP galleries and archives at http://crossroads.georgetown.edu/vkp/.
Note: You can find pdf files formatted for printing attached at the end of each article.

Capturing the Visible Evidence of Invisible Learning

This is a portrait of the new shape of learning with digital media, drawn around three core concepts: adaptive expertise, embodied learning, and socially situated pedagogies. These findings emerge from the classroom case studies of the Visible Knowledge Project, a six-year project engaging almost 70 faculty from 21 different institutions across higher education. Examining the scholarly work of VKP faculty across practices and technologies, the essay highlights key conceptual findings and their implications for pedagogical design. Where any single classroom case study yields a snapshot of practice and insight, collectively these studies present a framework that bridges from Web 1.0 to Web 2.0 technologies, building on many dimensions of learning that have previously been undervalued if not invisible in higher education.

Reading the Reader

Many teachers wonder what happens (or doesn’t happen) when students read a text. What knowledge do students need, gain, or seek when reading? Prompted by VKP’s early emphasis on technology experimentation, Sharona Levy adapted a proven annotation method from paper to computer. With the comment feature in Word, students’ reading processes became more transparent, explicit, and traceable, allowing her to diagnose gaps in understanding and to encourage effective reading strategies.

Close Reading, Associative Thinking, and Zones of Proximal Development in Hypertext

How can we teach students to slow down their reading process and move beyond surface-level comprehension? Patricia O’Connor’s Appalachian Literature students co-constructed hypertexts that capture the connections readers make among assigned texts, reference documents, and multimedia sources. These hypertexts became more than artifacts of student work; they became collaborative, exploratory spaces where implicit literary associations become explicit.

Inquiry, Image, and Emotion in the History Classroom

With increased online access to historical sources, will students “read history” differently among such artifacts as text, image, or video? Questioning his own assumptions about students’ abilities to analyze historical sources, Peter Felten conducted pedagogical investigations into how students interpret a variety of sources. His designs for using visual artifacts in the classroom helped students learn not only how to interrogate and interpret primary sources but also how to construct original arguments about history. Students’ understanding of history deepened as they became emotionally engaged with the material.

From Looking to Seeing: Student Learning in the Visual Turn

Rather than simply using primary source images as illustrations for his course on Power, Race, and Culture in the U.S. City, David Jaffee wanted to teach his students how to interpret visual texts as a historian would. By paying close attention to his students’ readings of images, Jaffee was able to develop ways to scaffold their analysis, teaching them how to move beyond “looking” at isolated images to “seeing” historical context, connection and complexity.

Engaging Students as Researchers through Internet Use

Effective habits of research begin early and should be practiced often. Unearthing discoveries, making connections, and evaluating judiciously are research traits valued by Taimi Olsen in her first-year composition course. These habits should not be confined to the library: Olsen argues that applying them in online archives hones students’ ability to become expert researchers.

Trace Evidence: How New Media Can Change What We Know About Student Learning

Clicker technology, often used in large-enrollment science courses, works well when every question has a single right answer. Lynne Adrian wanted to find out whether clickers could be used in disciplines which raise more questions than answers, and how illuminating the gray areas between “right” and “wrong” could help her students think critically about American studies. She found that the technology allowed her to preserve traces of the otherwise ephemeral class discussions, enabling her to analyze the types of questions she was asking in class and to track their effects on students’ written work throughout the semester.

Shaping a Culture of Conversation: The Discussion Board and Beyond

What happens when the discussion board goes from being just an assignment to being a springboard for intellectual community? Foreseeing many benefits to cultivating discussion among his English students, Ed Gallagher worked to develop frameworks articulating why discussion is central to the learning process, both in the classroom and beyond its walls. Higher levels of critical analysis, reflection, and synthesis of multiple perspectives turned class discussions into artful conversations.

The Importance of Conversation in Learning and the Value of Web-based Discussion Tools

In this essay Heidi Elmendorf and John Ottenhoff discuss the central role that intellectual communities should play in a liberal education and the value of conversation for students, and they explore the ways in which web-based conversational forums can best be designed to fully support these ambitious learning goals. Coming from very different fields (biology and English literature) and different course contexts (a microbiology course for non-majors and a Shakespeare seminar), they nonetheless discover shared core values and design issues by looking closely at the discourse produced in online discussions. Centrally, they connect what they identify as expert-like behavior to the complexities of intellectual development in conversational contexts.

Why Sophie Dances: Electronic Discussions and Student Engagement with the Arts

Paula Berggren struggled to engage her students in critical thinking about unfamiliar art forms, until she posed a simple question on the class’s online discussion board: “Why do people dance?” She found that the students’ responses, rather than being just less-polished versions of what they might write in formal essays, warranted close analysis in their own right. In subsequent teaching, Berggren continues to incorporate some version of a middle space for student work, which not only increases students’ engagement but also allows her to observe and document their thought processes.

Connecting the Dots: Learning, Media, Community

Sometimes the research question you ask isn’t the one you end up answering. Elizabeth Stephen recounts how a debate about the use of films in a freshman seminar led to an experiment in forming a community of scholars which could be sustained over time and across distances. Creating online spaces for students in this group to share their reflections with one another strengthened the ties among them, while allowing Stephen to analyze the multiple elements, both academic and social, which made this a successful learning community.

Focusing on Process: Exploring Participatory Strategies to Enhance Student Learning

Confronting the challenge of improving student writing in a large sociology class, Juan José Gutiérrez developed a software-based peer review process. He required students to evaluate one another’s papers based on specific criteria and to provide constructive feedback. He found that not only did this process help with the logistics of paper-grading, but it also allowed him to adapt his teaching to address specific concerns indicated by qualitative and quantitative analysis of the peer reviews.

Theorizing Through Digital Stories: The Art of “Writing Back” and “Writing For”

Discovering how digital stories engage students in critical and theoretical frameworks lives at the center of Rina Benmayor’s work. In her course Latina Life Stories, Benmayor asked each student to tell his or her own life story digitally and then situate the story within a theoretical context. This process engaged students in theorizing creatively, and it also allowed her to document methods for recognizing the quality of student work, resulting in a flexible and intuitive rubric that can be used beyond this experience.

Video Killed the Term Paper Star? Two Views

Two instructors from separate disciplines discuss what happens when alternative multimedia assignments replace traditional papers. Peter Burkholder found that the level of engagement changed dramatically in his history courses, while Anne Cross discovered new avenues for talking about sensitive subjects in sociology. Together, the two professors explore the advantages and opportunities of video assignments that challenge students to synthesize information in critical and innovative ways.

Producing Audiovisual Knowledge: Documentary Video Production and Student Learning in the American Studies Classroom

Traditionally, academic institutions have segregated multimedia production from disciplinary study. Bernie Cook wondered what his American Studies students would learn from working collaboratively to produce documentary films based on primary sources, and what he in turn might find out about their learning in the process. Students created documentary films on local history, and wrote reflections on their creative and critical process. Not only did students report tremendous engagement with the topics and sources for their projects, they also indicated satisfaction at being able to screen their work for an audience. By allowing his students to become producers of content, Cook enables them to participate fully in the intellectual work of American Studies and Film Studies.

Multimedia as Composition: Research, Writing, and Creativity

Viet Thanh Nguyen reflects on a three-year experiment in assigning multimedia projects in courses designed around the question “How do we tell stories about America?” Determined to integrate multimedia conceptually into his courses, rather than tacking it onto existing syllabi, Nguyen views multimedia as primarily a pedagogical strategy and secondarily a set of tools. Exploring challenges and opportunities for both students and teachers in using multimedia, he outlines principles for teaching with multimedia, and concludes that, while not for everyone, multimedia can potentially create a transformative learning experience.

Looking at Learning, Looking Together: Collaboration across Disciplines on a Digital Gallery

What does it mean for two community college colleagues, teaching in very different disciplines, to work together on a Scholarship of Teaching and Learning (SoTL) project?  What happens when they join together to examine their students’ work, their individual teaching practice, and the possibilities for collaborative research?  And what do they learn when they undertake an electronic publication of that work in a digital gallery?

“It Helped Me See a New Me”: ePortfolio, Learning and Change at LaGuardia Community College

What happens if we shift the focus of our teaching and learning innovations from a single classroom to an entire institution? What new kinds of questions and possibilities emerge? Can an entire college break boundaries, moving from a focus on “what teachers teach” to a focus on “what students learn?” Can we think differently about student learning if we create structures that enable thousands of students to use new media tools to examine their learning across courses, disciplines, and semesters? Bret Eynon explores these questions as he analyzes the college-wide ePortfolio initiative at LaGuardia Community College. Studying individual portfolios and focus group interviews, he also examines quantitative outcomes data on engagement and retention to better consider ePortfolio’s impact on student learning.

From Narrative to Database: Multimedia Inquiry in a Cross-Classroom Scholarship of Teaching and Learning Study

Michael Coventry and Matthias Oppermann draw on their work with student-produced digital stories to explore how the protocols surrounding particular new media technologies shape the ways we think about, practice, and represent work in the scholarship of teaching and learning. The authors describe the Digital Storytelling Multimedia Archive, an innovative grid they designed to represent their findings, after considering how the technology of delivery could impact practice and interpretation. This project represents an intriguing synthesis of digital humanities and the scholarship of teaching and learning, raising important questions about the possibilities for analyzing and representing student learning in Web 2.0 environments.

Multimedia in the Classroom at USC: A Ten Year Perspective

Does multimedia scholarship add academic value to a liberal arts education? How do we know? Looking back at the history of the Honors Program in Multimedia Scholarship at USC, Mark Kann draws on his own teaching experience, discussions with other faculty members, and the university’s curriculum review process to explore these questions. He describes the process of developing the program’s academic objectives and assessment criteria, and the challenges of gathering evidence for his intuitions about the effects of multimedia scholarship. Finally, Kann reports on the program’s first student cohort and looks ahead to the future of multimedia at USC.

Capturing the Visible Evidence of Invisible Learning

by Randy Bass and Bret Eynon

Note: This is a synthesis essay for the Visible Knowledge Project (VKP), a collaborative project engaging seventy faculty at twenty-one institutions in an investigation of the impact of technology on learning, primarily in the humanities. As a matter of formatting for the Academic Commons space, this essay is divided into three parts: Part I (Overview of project, areas of inquiry, introduction to findings); Part II (Discussion of findings with a focus on Adaptive Expertise and Embodied Learning); Part III (Discussion of findings continued with a focus on Socially Situated learning, Conclusion). A full-text version of this essay is available as a pdf document here.
Here, in this forum as part of Academic Commons, the essay complements eighteen case studies on teaching, learning, and new media technologies. Together the essay and studies constitute the digital volume “The Difference that Inquiry Makes: A Collaborative Case Study of Learning and Technology, from the Visible Knowledge Project.” For more information about VKP, see https://digitalcommons.georgetown.edu/blogs/vkp/.

Déjà 2.0
Facebook. Twitter. Social media. YouTube. Viral marketing. Mashups. Second Life. PBWikis. Digital Marketeers. FriendFeed. Flickr. Web 2.0. Approaching the second decade of the twenty-first century, we’re riding an unstoppable wave of digital innovation and excitement. New products and paradigms surface daily. New forms of language, communication, and style are shaping emerging generations. The effect on culture, politics, economics and education will be transformative. As educators, we have to scramble to get on board, before it’s too late.

Wait a minute. Haven’t we been here before? Less than a decade ago, we rode the first wave of the digital revolution–email, PowerPoint, course web pages, digital archives, listservs, discussion boards, etc. As teachers and scholars, we dove into what is now called Web 1.0, trying out all sorts of new systems and tools. Some things we tried were fabulous. Others, not so much. Can we learn anything from that experience? What insights might we garner that could help us navigate Web 2.0? How can we separate the meaningful from the trivial? How do we decide what’s worth exploring? What do we understand about the relationship of innovations in technology and pedagogy? What can we learn about effective ways to examine, experiment, evaluate, and integrate new technologies in ways that really do advance learning and teaching?
The teaching and research effort of the Visible Knowledge Project (VKP) could be a valuable resource as we consider these questions. Active from 2000 to 2005, VKP was an unusual collective effort to initiate and sustain a discipline-based examination of the impact of new digital media on education. A network of around seventy faculty from twenty U.S. colleges, primarily from American history and culture studies departments, gathered not only to experiment with new technologies in their teaching, but also to document and study the results of their inquiries, using the tools of the scholarship of teaching and learning. In this collaborative and synoptic case study, under the title The Difference that Inquiry Makes, we try to capture and make sense of the visible evidence of this relatively invisible learning as it emerged over five (and more) years of collaborative classroom inquiry. We share participants’ reports on key elements of the VKP inquiry, and integrate their reports into a framework that can help us learn from this experience as we navigate a fast-changing educational landscape.

Invisible Learning
What do we mean by “invisible learning?” We use this phrase to mean at least two things. First, it points us to what Sam Wineburg, in his book Historical Thinking and Other Unnatural Acts, talked about as “intermediate processes,” the steps in the learning process that are often invisible but critical to development.1 All too often in education, we are focused only on final products: the final exam, the grade, the perfect research paper, mastery of a subject. But how do we get students from here to there? What are the intermediate stages that help students develop the skills and habits of master learners in our disciplines? What kinds of scaffolding enable students to move forward, step by step? How do we, as educators, recognize and support the slow process of progressively deepening students’ abilities to think like historians and scholars? In VKP, from the beginning, we tested our conviction that digital media could help us to shine new light on–to make visible–and to pay new attention to these crucial stages in student learning.

Second, by invisible learning we also mean the aspects of learning that go beyond the cognitive to include the affective, the personal, and issues of identity. Cognitive science has made great strides in recent years, scanning the brain and understanding everything from synapses and neurons to perception and memory. Educators are still struggling to grasp the implications of this research for teaching and learning. However, perhaps because it is less “scientific,” higher education has paid considerably less attention to (and is even less well prepared to deal with) the role of the affective in learning and its relationship to the cognitive. How does emotion shape engagement in the learning process? How do we understand risk-taking? Community? Creativity? The relationship between construction of knowledge and the reconstruction of identity? In VKP we explored the ways that digital tools and processes surfaced the interplay between the affective and the cognitive, the personal and the academic.

Visible Evidence
Education at all levels has largely taken on faith that if teachers teach, students will learn–what could be seen as a remarkable, real-life version of “If you build it, they will come.” In recent years, calls for greater accountability have produced a new emphasis on standardized testing as the only appropriate way to assess whether students are learning. Meanwhile, growing numbers of faculty in higher education have taken a different approach, engaging in the scholarship of teaching and learning–using the tools of scholarship to study their own classrooms–to deepen their understanding of the learning process and its relationship to teacher practice. Spurred by the ideas of Ernest Boyer and Lee Shulman of the Carnegie Foundation for the Advancement of Teaching, faculty from many disciplines have posed research questions about student learning, gathered evidence from their classrooms, and gone public with their findings in countless conference presentations, course portfolios, and scholarly journals. This movement, with its focus on classroom-based evidence, provided key tools and language for the Visible Knowledge Project. It allowed VKP faculty to study the impact of new technologies on learning and teaching, and it also helped us frame questions about problems and practice, inquiry and expertise that remain critical as we move into a new phase of technological innovation and change.2

The Visible Knowledge Project
The Visible Knowledge Project emerged in 2000 from the juxtaposition of these two powerful yet largely distinct trends in higher education–the scholarship of teaching and learning movement and the initial eruption of networked digital technologies into the higher education classroom. Responding to a dynamic combination of need and opportunity, faculty engaged in multi-year teaching and learning research projects, examining and documenting the ways the use of new media was reshaping their own teaching and patterns of student learning. Participating faculty came from a wide range of institutions, from community colleges and private liberal arts colleges to research universities; from Georgetown and USC to Youngstown State, the University of Alabama, and City University of New York (CUNY). Meeting on an annual basis, and interacting more frequently in virtual space, we formed our research questions representing a broad spectrum, shared ideas about research strategies, discussed emerging patterns of our evidence, and formulated our findings. The digital resources used ranged from Blackboard and PowerPoint to interactive online archives and Movie Maker Pro. The VKP galleries (https://digitalcommons.georgetown.edu/blogs/vkp/) provide a wealth of background information, including lists of participants, regular newsletters, and reports from more than thirty participants, as well as a number of related resources and meta-analyses.3

The VKP ethos was formed by a belief in the value of messiness, of unfolding complexity, of adventurous, participant-driven inquiry that would inform the nature of the collective conversation. A few scientists and social scientists entered the group and helped create exciting projects, but the vast majority of the participants were from the fields of history, literature, women’s studies and other humanist disciplines. While technology was key to our raison d’être, our inquiries often evolved to focus on issues of pedagogy that transcended individual technologies. We wanted to learn about teaching, to learn about learning. We wanted to go beyond “best practice” and “what worked” to get at the questions about why and how things worked–or didn’t work. In some cases, we went further, rethinking our understanding of what it meant for something to “work.” Our questions were evolving, shaped by the exigencies of time and funding as well as our on-going exchange and new technological developments. We struggled with ways to nuance and realize our inquiries, to come up with workable methods and evidence that matched our changing and, we hoped, increasingly sophisticated questions.

Over the course of the Project, we found that participants’ teaching experiments started to group in three areas:

  1. Reading–Engaging ideas through sources/texts: As VKP took shape at the end of the twentieth century, the great museums, universities, and research libraries of this country were mounting their collections on the Web. Web sites such as the American Memory Collection of the Library of Congress vastly expanded the availability of archival source materials on the Web. It was a time, as Cathy Davidson put it recently, of digitally-driven “popular humanism.”4 Responding to this opportunity, VKP’s historians and culture studies faculty explored the effectiveness of active reading strategies using primary sources, both textual and visual, for building complex thinking. Introducing students to the process of inquiry, faculty tested combinations of pedagogy and technology designed to help students “slow down” their learning, interpret challenging texts and concepts, and engage in higher order disciplinary and interdisciplinary practices. For example, Susan Butler, teaching an introductory history survey at Cerritos College, had her students examine primary sources on different facets of the Trail of Tears, made available online by the Great Smoky Mountains National Park, PBS, and the Cherokee Messenger; as students grappled with perspective and the evolving definition of democracy in America, Butler examined evidence of the ways that scaffolded learning modules that incorporated online primary sources could expand students’ capacity for critical analysis. Meanwhile, Sherry Linkon at Youngstown State used online archives to help students in her English course create research papers that contextualized early twentieth-century immigrant novels. And Peter Felten at Vanderbilt integrated online texts, photographs and videos into a history course on the 1960s, analyzing the ways students did–or didn’t–apply critical thinking skills to visual evidence. Across the board, the focus was less on “searching” and “finding” than on analyzing, understanding, and applying evidence to address authentic problems rooted in the discipline. Testing innovative strategies, faculty asked students to model the intellectual behaviors of disciplinary experts, focusing earlier and more effectively on the learning dimensions that characterize complex thinking. (For sample projects addressing these questions, see http://cndls.georgetown.edu/crossroads/vkp/themes/poster_showcase_reading.htm)
  2. Dialogue–Discussion and writing in social digital environments: As VKP faculty moved into the world of Blackboard and WebCT, they explored ways that discussion and social writing in online environments can foster learning. Projects explored strategies for using online communication to make the intermediate processes of learning more visible and to provide opportunities for students to develop personal and academic voice. For example, Mills Kelly, teaching a Western Civilization survey at Virginia’s George Mason University, focused on the possibilities of online tools, including the WebCT discussion board and a special GMU Web Scrapbook, for enhancing collaborative learning. Meanwhile, Ed Gallagher at Lehigh University tested the impact of his detailed and creative guidelines for students in prompting more interactive and substantial discussion in an online context. In general, carefully structured online discussion environments provided students and faculty a context in which to think socially; they also allowed discussion participants to document, retrieve, and reflect on earlier stages of the learning process. This ability to “go meta” offered a new way for students and faculty to engage more deeply with disciplinary content and method. Highlighting the scaffolding strategies that might maximize student learning, these projects gathered evidence of learning that reflected the social and affective dimensions of these digitally-based pedagogical practices. (For sample projects, see http://cndls.georgetown.edu/crossroads/vkp/themes/poster_showcase_discussion.htm)
  3. Authorship–Multimedia construction as experiential learning: As multimedia authoring became easier to master in these years, faculty became interested not only in creating multimedia presentations and Web sites but also in putting these tools into the hands of students. Many VKP scholar-teachers were guided by the constructivist notion that learning deepens when students make knowledge visible through public products. In the projects clustered here, student authorship takes place in various multimedia genres of the early twenty-first century, including digital stories and digital histories, Web sites and PowerPoint essays, historically-oriented music videos, electronic portfolios, and other historical and cultural narratives. The emergent pedagogies explored by these scholar-teachers involve multiple skills, points of view, and collaborative activities (including peer critique). For example, Patricia O’Connor had her Appalachian literature students at Georgetown University create Web pages about Dorothy Allison’s Bastard Out of Carolina, annotating particular phrases and creating links to historical sources and images, while she investigated the ways that “associative thinking” shaped students’ ability to make nuanced speculations about literary texts.
    Meanwhile, Tracey Weis at Pennsylvania’s Millersville University and several faculty at California State University at Monterey Bay gathered evidence on the cognitive and emotional impact of student construction of short interpretative “films,” or what we came to call “digital stories.” Examining the qualities of student learning evidenced through such assignments, these projects spotlight issues of assessment and the need to move beyond the narrowly cognitive quiz and the critical research essay to find ways to value creativity, design, affect, and new modes of expressive complexity. (For sample projects, see http://cndls.georgetown.edu/crossroads/vkp/themes/poster_showcase_writing.htm )

Naturally, these three areas of classroom practice–critically engaging primary sources, social dialogue, and multimedia authorship–converged in all kinds of ways. Some of the richest and most intriguing projects engaged students in a scaffolded process of collaborative research and writing, laying the groundwork for multimedia-enhanced performances of their learning. Our fluid categories were defined and redefined by the creativity of our faculty as they experimented within them.

The key to faculty innovations in VKP was not merely trying new teaching strategies but looking closely at the artifacts of student work that emerged from them, not only in traditional summative products such as student writing, but in new kinds of artifacts that captured the intermediate and developmental moments along the way. What did these artifacts look like? They included video evidence of students working in pairs on inquiry questions, as well as student-generated Web archives and research logs; they included careful analysis of discussion threads in online spaces and student reflections on collaborative work; they included not only new forms of multimedia storytelling but evidence of their authoring process through interviews and post-production reflections about their intentions and their learning. One of the consequences emerging from these new forms of evidence was that, as faculty looked more closely and systematically at evidence of learning processes, those processes started to look more complex than ever. The impact of transparency, at least at first, seemed to be complexity, which can be unsettling in many ways.

Pieces of Insight
This phenomenon had a significant impact on the kinds of findings and claims that emerged from this work. We set out looking for answers (“what is the impact of technology on learning?”) and what we mostly found were limited claims about impact, new ways of looking at student learning, and often dynamic new questions. In fact, the VKP projects followed a pattern typical in faculty inquiry.  Whatever the question that initiates the inquiry, it often changes and deepens into something else. For example, Lynne Adrian (University of Alabama) started off investigating the role of personal response systems (“clickers”) in a large enrollment Humanities course to see if the use of concept questions would increase student engagement, but was soon led to reflect much more interestingly on the purpose of questions in class and the very nature of the questions she had been asking for more than twenty years. Similarly, Joe Ugoretz (Borough of Manhattan Community College), in an early inquiry, hoped to study the benefits of a free-form discussion space in an online literature course, but got frustrated because the students would frequently digress and stray off topic; finally it occurred to him that the really interesting inquiry lay in learning more about the nature of digressions themselves, considering which were productive and which were not. The changing nature of questions, and the limited nature of claims, is not a flaw of faculty inquiry but its very nature. John Seely Brown describes the inevitable way that we build knowledge around teaching: “We collect small fragments of data and struggle to capture context from which this data was extracted, but it is a slow process. Context is sufficiently nuanced that complete characterizations of it are extremely difficult. As a result, education experiments are seldom definitive, and best practices are, at best, rendered in snapshots for others to interpret.”5

Here is where the power of collaborative inquiry came into play. That is, what emerged from each individual classroom project was a piece of insight, a unique local and limited vision of the relationship between teaching and learning that yet contributed to some larger aggregated picture. We had, in the microcosm of the Visible Knowledge Project, created our own “teaching commons” in which individual faculty insights pooled together into larger meaningful patterns.6 Each of these snapshots is interesting in itself; together they composite into something larger and significant. What follows below is our effort at putting together the snapshots to create a composite image in which we recognize new patterns of learning and implications for practice.

A Picture of New Learning: Cross-Cutting Findings

Collectively, what emerged from this work was an expansive picture of learning. Although we started out with questions about technology, early on it became clear that the questions were no longer merely about the “impact of tools” on learning; the emergent findings compelled us to confront the very nature of what we recognized as learning, which in turn fed back into what we were looking for in our teaching. Over the years, faculty experienced iterative cycles of innovation in their teaching practice, of reflection on an increasingly expansive range of student learning, and of experimentation shaped by the deepening complexity (and at times befuddlement) that emerged from trying to read the evidence of that learning. From this spiral of activity developed a research framework with broad implications for the now-emergent Web 2.0 technologies. We have come to articulate this range of cross-cutting findings under the headings of three types of learning: adaptive, embodied, and socially situated.

Briefly, by adaptive learning we mean the skills and dispositions that students acquire which enable them to be flexible and innovative with their knowledge, what David Perkins calls a “flexible performance capability.”7 An emphasis on adaptive capacities in student learning emerged naturally from our foundational focus on visible intermediate processes. What became visible were the intermediate intellectual moves that students make in trying to work with difficult cultural materials or ideas, illuminating how novice learners progress toward expertise or expert-like thinking in these contexts.

Our recognition of the embodied nature of learning emerged from this increased attention to intermediate processes–the varied forms of invention, judgment, reflection–when we realized that we were no longer accounting for simply cognitive activities. Many manifestations of the affective dimension of learning opened up in this intermediate space informed by new media, whether it was the way that students drew on their personal experience in social dialogue spaces, or the sensual and emotional dimensions of working with multimedia representations of history and culture. In these intermediate spaces, dimensions of affect such as motivation and confidence loomed large as well. We have come to think of this expansive range of learning as embodied, in that it pointed us to the ways that knowledge is experienced through the body as well as the mind, and how intellectual and cognitive thinking are embodied by whole learners and scholars.

Just as this new learning is embodied, so too is it socially situated. Influenced by the range of work on situated learning, communities of practice, and participatory learning, our work with new technologies continuously brought us to see the impact that new forms of engagement through media had on students’ stance toward learning. This effect was not merely a sense of heightened interest due to the novelty of new forms of social learning. Rather, we were seeing evidence of the ways that multimedia authoring, for example, constructed for students a salient sense of audience and public accountability for their work; this, in turn, had an impact on nearly every aspect of the authoring process–visible in the smallest and largest compositional decisions. The socially situated nature of learning became a summative value, capturing what Seely Brown calls “learning to be”: moving beyond mere knowledge acquisition to a way of thinking and acting and a sense of identity.

These three ways of looking at pedagogies–as adaptive, embodied, and socially situated–together help constitute a composite portrait of new learning. Each helps us focus on a different dimension of complex learning processes: adaptive pedagogies emphasizing the developmental stages linking learning to disciplines; embodied pedagogies focusing on how the whole person as learner engages in learning; and socially situated learning focusing on the role of context and audience. In this sense, the dimensions are overlapping and reinforcing in any particular set of practices. For example, consider Patricia O’Connor’s work making use of Web authoring tools to lead students to engage in close reading of print fiction. Calling the activity “hypertext amplification,” O’Connor asks students to make increasingly sophisticated “associational” connections, to move from novice reading encounters with texts to more expert ones. She wants them to experience “associational thinking” on multiple levels, from the personal and emotional to the definitional and critical. Ultimately, students’ ability to engage fully along a continuum of expert practice is shaped by their knowledge that their Web pages will be public, and their presentations to their peers a social act. All three key dimensions are in play in her teaching practices, as in so many of the case studies coming out of VKP.

Nevertheless, we believe it is a valuable exercise to slow down and look closely at each of the three areas, and to begin making sense of how each dimension might be better understood for its shaping influence on learning. We now explore each of these areas more fully below.

A Note on Findings
Because faculty inquiry lives at the boundary of theory and practice, we have chosen to present the findings in two forms: as conceptual findings (representing the way theory informed practice, and vice versa) and design findings (representing some of the key claims on practice made by these concepts and values about learning). As a further response to the challenge of representing collective findings in a messy research environment, we also present each area with a set of “tags,” keywords that help associate the findings with various trajectories. Finally, at the end of each finding description we link to several relevant case studies within this volume.

[jump to Part II]

Notes
1. Sam Wineburg, Historical Thinking and Other Unnatural Acts (Philadelphia: Temple University Press, 2001). [return to text]
2. Many good resources exist on the scholarship of teaching. Two essential resources can be found at the Carnegie Foundation for the Advancement of Teaching (http://www.carnegiefoundation.org/CASTL/) and the Scholarship of Teaching and Learning tutorial at Indiana University, Bloomington (http://www.issotl.org/tutorial/sotltutorial/home.html). [return to text]
3. In all, more than seventy faculty from twenty-two institutions participated in the Visible Knowledge Project over five years. Participating campuses included five research universities (Vanderbilt University, the University of Alabama, Georgetown University, the University of Southern California, Washington State University, and the Massachusetts Institute of Technology), four comprehensive public universities (Pennsylvania’s Millersville University, California State University (CSU)–Monterey Bay, CSU Sacramento, Ohio’s Youngstown State University, and participants from several four-year colleges in the City University of New York system, including City College, Lehman, and Baruch), and three community colleges (two from CUNY–Borough of Manhattan Community College and LaGuardia Community College, and California’s Cerritos College). In addition to campus-based teams, a number of independent scholars participated from a half dozen other institutions, such as Arizona State and Lehigh University.  The project began in June 2000 and concluded in October 2005.  We engaged in several methods for online collaboration to supplement our annual institutes, including an adaptation of the digital poster tool created by Knowledge Media Lab (Carnegie Foundation), asynchronous discussion, and Web-conferencing.  For more detailed information, see the VKP galleries and archives at http://crossroads.georgetown.edu/vkp/. [return to text]
4. Cathy N. Davidson, “Humanities 2.0: Promise, Perils, Predictions,”  PMLA 123, no. 3 (May 2008): 711. [return to text]
5. John Seely Brown, “Foreword,” in Opening Up Education: The Collective Advancement of Education through Open Technology, Open Content, and Open Knowledge, ed. Toru Iiyoshi and M. S. Vijay Kumar (Cambridge: MIT Press, 2008). [return to text]
6. For a broader discussion of the “teaching commons,” see Pat Hutchings and Mary Huber, The Advancement of Learning: Building the Teaching Commons (San Francisco: Jossey-Bass, 2005). [return to text]
7. David Perkins, “What is Understanding?” in Teaching for Understanding: Linking Research with Practice, ed. Martha Stone Wiske (San Francisco: Jossey-Bass, 1998), 39-58. [return to text]

New Media Technologies and the Scholarship of Teaching and Learning: A Brief Introduction to this Issue of Academic Commons

by Randy Bass, Georgetown University

A Bridge to Know-ware
Higher education traditionally has found few systematic ways to build and share knowledge about teaching and learning. It is not surprising, then, that there has been relatively little interaction between those most interested in new technologies and those invested in the scholarship of teaching and learning. Of course there are examples where the two communities intersect, sometimes for robust conversations. Yet much of this talk stays at the level of individual experimentation and focuses on teaching and classroom practice, with very little attention paid to learning. For whatever reason, the quantity and quality of those conversations are far less than we might hope, given the social impact of new technologies and the growing urgency of conversations around active learning, accountability, and assessment.
So, how do we make any headway in a landscape where applied knowledge about learning is inchoate, where forms of learning are expanding in ways higher education is poorly situated to accommodate, and where the technological contexts are shifting rapidly and radically? We need, in short, to merge a culture of inquiry into teaching and learning with a culture of experimentation around new media technologies. Our ability to make the best use of any technologies to improve education hinges ultimately on the reciprocal capacities to bring our powers of inquiry to bear on educational technologies, as well as to bring the power of new technologies to bear on our methods of inquiry and our representation of knowledge about teaching practice.
Slowing Down and Looking at Learning
In this issue of Academic Commons we take up these questions by looking at the possibilities for building knowledge around teaching and learning in a rapidly changing technological landscape. Through articles, case studies, interviews and roundtables, the January 2009 issue of Academic Commons explores the continuity of learning issues from Web 1.0 to 2.0 technologies, from online discussion tools, hypertext and multimedia authoring to emergent forms of electronic portfolios, blogs, social networking tools, and virtual reality environments. We take these up in the context of a dual challenge: to understand better the changing nature of learning in new media environments and the potential of new media environments to make learning–and faculty insights into teaching–visible and usable.
The issue opens with a bundled set of essays that form a synoptic case study of the Visible Knowledge Project (VKP), a five-year project looking at the impact of technology on learning, primarily in the humanities, through the lens of the scholarship of teaching and learning.  These case studies explore the ways that faculty inquiring into their students’ learning deepened and complicated their understanding of technology-enhanced teaching. Out of these classroom-based insights emerged a set of findings that constitute a research framework, clustering especially around three broad areas:

  1. Learning for adaptive expertise: the role of new media in making visible the thinking processes intrinsic to the development of expert-like abilities and dispositions in novice learners;
  2. Embodied learning: the impact of new media technologies on the expansion of learning strategies that engage affective as well as cognitive dimensions, renewed forms of creativity and the sensory experience of new media, and the importance of identity and experience as the foundation of intellectual engagement; and
  3. Socially Situated learning: the role of social dimensions of new media in creating conditions for authentic engagement and high impact learning.

These broad areas of learning provide a bridge from earlier technology innovation to current new media technologies. They also serve as a way of seeing the capacities of new social media in light of the learning issues intrinsic to disciplinary and interdisciplinary ways of knowing. In this sense, they provide a framework for understanding this period of transformation as one (as Michael Wesch puts it in this issue) where we are shifting from “teaching subjects to subjectivities.” This expansive conception of learning challenges us, then, to cope not merely with technological change but with shifts that are essentially social and intellectual. As Wesch writes in his commentary on the meaning of these changes, “Nothing good will come of these technologies if we do not first confront the crisis of significance and bring relevance back into education. In some ways these technologies act as magnifiers. If we fail to address the crisis of significance, the technologies will only magnify the problem by allowing students to tune out more easily and completely.”

The six additional vision pieces in this issue all provide different lenses onto this transformation. Two pieces–one by Kathleen Yancey and another that is the transcript of the closing session at the ePortfolio conference at LaGuardia Community College in April 2008–look specifically at the current practices and potential of ePortfolios to provide a site that serves both the needs of students to represent themselves and their learning through an integrative digital space and the needs of institutions to find better ways to understand the progress of student learning and intellectual development. A key element in this transformation is shifting the unit of analysis from the learner in a single course to the learner over time, inside and outside the classroom. What does this shift imply for the ways we understand learning and development? If we accept this new learning paradigm, what kinds of accommodations do we need to make in our approaches to the curriculum, the classroom, the role of faculty, and the empowerment of learners?

Other pieces in this issue consider similar shifts. For example, in excerpts from the book they edited, Opening Up Education: The Collective Advancement of Education through Open Technology, Open Content, and Open Knowledge, Toru Iiyoshi and M. S. Vijay Kumar look at the potential of “open content, opening technology and open knowledge” to transform higher education. “We must develop not only the technical capability but also the intellectual capacity for transforming tacit pedagogical knowledge into commonly usable and visible knowledge: by providing incentives for faculty to use (and contribute to) open education goods, and by looking beyond institutional boundaries to connect a variety of settings and open source entrepreneurs” (Iiyoshi and Kumar, coming in February).

Confronting our Biases
Yet, it seems all too clear that higher education is mostly unprepared to make the most of this new potentiality–of open education and an expansive conception of learning. Gathering and sharing knowledge about educational effectiveness is tricky in an environment in which we rush on to the “next new thing,” as new media pedagogies (as with other emergent pedagogies) often lead to forms of learning that do not neatly fit into traditional frameworks of disciplinary learning and cognitive and critical skills. These new forms of learning–including emotional and affective dimensions, capacities for risk-taking and uncertainty, creativity and invention, blurred boundaries between personal and public expression, or the importance of self-identity to the development of disciplinary understanding, etc.–traditionally have been invisible in higher education. As Bret Eynon and I point out in our synthesis essay for the Visible Knowledge Project, “when the invisible becomes visible it is often disruptive,” although usually in productive and generative ways.
That theme of generative disruption runs throughout the pieces in this issue, nowhere more than in Cathy Davidson’s interview about “participatory learning and the new humanities,” where her celebration of the potential of “Humanities 2.0” is counterbalanced by an entrenched reluctance to rethink basic practices in our fields, especially around the ways we recognize expertise, collaboration, and creativity. As Davidson puts it (in ways that could speak for most of the authors here),

I guess part of me just doesn’t understand why this isn’t the most exciting time for all of us in our profession and why we aren’t figuring out ways that we can use this to bolster our mission in the world, our methods in the world, our reach in the world, our understanding of what we do and what we have to offer our students in the world? It just feels like we’re in an age where we educators should be the thought leaders and so many of us are futzing around the edges, and I don’t get it.

In this issue of Academic Commons we take the disconnection between experimentation with new media technologies and conversations about learning as a presenting symptom of what Davidson calls “futzing around the edges.” That is, we can only futz because we do not have a vocabulary or a tradition for engaging with learning in meaningful communal ways. In this environment it is especially important to flank classroom-based inquiry with institutional learning, where we can put into practice wide-scale views of learning outcomes as textured as those of faculty who look at learning in their own classrooms. Many of the pieces in this issue provide a starting point for these connections, whether looking at the best institutional practices around electronic portfolios (see Roundtable), or the aspirations of a national project developing flexible rubrics for evaluating the intellectual work of students over time and through diverse intellectual products (“Can We Bridge an Expansive View of Student Learning with Institutional Learning? The VALUE Project Thinks we Can, and Here’s How,” an interview with Terry Rhodes, coming in February), or the visionary specifications for a flexible repository for the scholarship of teaching and learning, linking local expertise to collective wisdom (Tom Carey, John Rakestraw, and Jennifer Meta Robinson, “Expanding the Teaching Commons in Web 2.0: A New Vision for a Scholarship of Teaching and Learning Repository,” coming in February).

From the local to the virtual, from classroom innovation to “opening up education,” this issue of Academic Commons seeks to make a modest contribution to these questions and our collective endeavor toward addressing them. What binds these case studies and vision pieces together are the aggregated bonds of the three dimensions of learning emerging from the VKP framework: expertise, embodiment, and the social. If we could bridge our incipient sense of meaning for these dimensions in student learning with the full social embodiment of our collective expertise as educators, then we would indeed have a bridge to the future.

Acknowledgements: In putting together this issue I want to thank the supervising editors, Mike Roy and John Ottenhoff, for the invitation and opportunity. I also want to thank Lisa Gates, managing editor of AC, for her infinite patience and skill in working with such complicated and multi-faceted content. Many thanks to Pat Hutchings of the Carnegie Foundation for the Advancement of Teaching for her support through the years, and especially her reading of the synthesis essay for this volume. I also want to thank my longtime collaborator, Bret Eynon, for his intellectual and spiritual companionship throughout the process; many thanks also to current and former colleagues at the Center for New Designs in Learning and Scholarship (CNDLS) and the Visible Knowledge Project who worked on dimensions of this issue, especially Theresa Schlafly, Susannah McGowan, Eddie Maloney, and John Rakestraw. -RB


In addition to the articles listed in the Table of Contents, the following are forthcoming:

  • Opening Up Education: The Remix, by Toru Iiyoshi and Vijay Kumar. Excerpts from the book Opening Up Education: The Collective Advancement of Education through Open Technology, Open Content, and Open Knowledge, editors Toru Iiyoshi and M.S. Vijay Kumar (Coming in February)
  • Tom Carey, John Rakestraw, and Jennifer Meta Robinson, Expanding the Teaching Commons in Web 2.0: A New Vision for a Scholarship of Teaching and Learning Repository (Coming in February)
  • Can We Bridge an Expansive View of Student Learning with Institutional Learning? The VALUE Project Thinks we Can, and Here’s How, an Interview with Terry Rhodes  (Coming in February)

Profiles of Key Cyberinfrastructure Organizations

by David Green, Knowledge Culture

We present here a collection of short profiles, specially written for Academic Commons, on key service organizations and networks poised to assist and lead those working to bring a rich cyberinfrastructure into play. Some are older humanities organizations for which cyberinfrastructure is a totally new environment; others have been created specifically around the provision of digital resources and support.

We invite your comments and your suggestions for other organizations and networks that you see as key players in providing CI support.

Alliance of Digital Humanities Organizations (ADHO)

American Council of Learned Societies (ACLS)

ARTstor

Council on Library and Information Resources (CLIR)

Cyberinfrastructure Partnership (CIP) & Cyberinfrastructure Technology Watch

Humanities, Arts, Science, and Technology Advanced Collaboratory (HASTAC)

CenterNet

Institute of Museum and Library Services (IMLS)

Ithaka

The Andrew W. Mellon Foundation

National Endowment for the Humanities (NEH)

NITLE

Open Content Alliance

Software Environment for the Advancement of Scholarly Research (SEASR)

The Bates College Imaging Center: A Model for Interdisciplinarity and Collaboration

by Matthew J. Coté, Bates College

The Bates College Imaging and Computing Center (known on campus simply as the Imaging Center) is a new interdisciplinary facility designed to support Bates’s vision of a liberal arts education, as codified by its newly adopted General Education Program. This program reflects the increasingly porous and mutable nature of disciplinary boundaries and emphasizes the effectiveness of teaching writing as a means of improving students’ ability to think, reason and communicate. The Imaging Center strives to further expand the reach of this program by promoting visual thinking and communication, serving as a catalyst for interdisciplinary and transdisciplinary work. In many ways the Center embodies most of the ideas underpinning Bates’s new General Education Program and is a model on this campus for the kind of transformative work cyberinfrastructure will enable. (Floorplan image courtesy of the Bates College Imaging and Computing Center.)

The Imaging Center’s physical space, its imaging resources and its place within the college’s cyberinfrastructure, are all designed to foster interactions between scholars from disparate fields and to further the Center’s goal of promoting visual literacy. Traditional campus structures–whether organizational or architectural–are efficient from the administrative perspective, but often have the unintended consequence of reifying disciplinary boundaries. For example, the spatial grouping of faculty by academic discipline provides few opportunities for faculty from different fields to interact with each other, either purposefully or by happenstance, while doing their work. Such campus structures have significant pedagogical ramifications as well. They encourage students to pigeon-hole ideas and ways of thinking according to academic field rather than inspiring them to find connections between fields of inquiry.
These consequences, of course, are antithetical to the goals of academic programs intended to foster interdisciplinary thinking. To counter these effects, the Bates Imaging Center provides a visually-inviting space available to all members of the campus community. Its array of equipment and instrumentation, and its extensive computer networking, make it the campus hub for collaborative and interdisciplinary projects, especially those that are computationally intensive, apply visualization techniques, or include graphical or image-based components.

Imaging Center Public Gallery (photo courtesy of the Bates College Imaging and Computing Center)

The Imaging Center’s central public gallery provides comfortable seating, readily accessible kiosk computers and wireless networking to encourage faculty and students to use the space for both planned and spontaneous meetings of small groups. To make more public the scholarly activities taking place within the Center, a contiguous array of three large flat-panel LCD monitors displays looped sequences of images created by faculty and students who are using the Center’s resources to support their work. Image sequences include, for example, micrographs obtained using the Center’s microscopes, digital photographs taken by students working in the fine arts, maps generated using GIS mapping software, and animated multidimensional graphs of political data. The sequences are designed to exemplify effective visual communication and to juxtapose work by faculty and students drawn from widely varied disciplines throughout the campus. The display publicizes the scholarly activities taking place within the Center, and by encouraging viewers to think more deeply about the images, cultivates more sophisticated approaches to the images they encounter or create in their own work.

The Center’s gallery is abutted on one side by an imaging lab and on the other by a computer room. The imaging lab contains a digital photography studio and a suite of optical microscope rooms with a shared sample preparation room. Driven by the goals of improving the accessibility of work that is conventionally done in isolation, and of making the Center’s resources available to as broad an audience as possible, the microscope rooms are each electronically linked with the computer room. This allows images obtained with the microscopes to be displayed for large groups in real time, complete with two-way audio communication between the microscope operator and the audience.

Imaging Lab (photo courtesy of the Bates College Imaging and Computing Center)


Computer Room (photo courtesy of the Bates College Imaging and Computing Center)

The Imaging Center’s resources are leveraged by a one-gigabit-per-second network that connects the Imaging Center to the campus’s Language Resource Center and the Digital Media Center (the latter supports audio and video work). In this way each center can be physically located for the convenience of its most frequent users, yet large data files and other electronic resources can be readily shared between centers. Local storage of the large data sets and images is provided by a two-terabyte storage array.
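For a rough sense of what a one-gigabit-per-second link means in practice when moving large files between centers, the short Python sketch below estimates ideal transfer times; the file sizes and the 70% efficiency factor are illustrative assumptions, not figures from the Imaging Center.

# Rough transfer-time estimate for a shared campus network link.
# Illustrative sketch only: the file sizes and the ~70% efficiency
# factor are assumptions, not measurements from the Imaging Center.

LINK_GBPS = 1.0          # nominal link speed, gigabits per second
EFFICIENCY = 0.7         # assumed fraction of nominal speed actually achieved

def transfer_minutes(file_gigabytes):
    """Estimate minutes to move a file of the given size over the link."""
    bits = file_gigabytes * 8e9                      # gigabytes -> bits
    seconds = bits / (LINK_GBPS * 1e9 * EFFICIENCY)
    return seconds / 60.0

for size_gb in (1, 10, 100):                         # e.g. image sets, video projects
    print(f"{size_gb:>4} GB  ~ {transfer_minutes(size_gb):5.1f} minutes")

Even at these idealized rates, a hundred-gigabyte project takes on the order of twenty minutes to move, which is why local storage at each center still matters.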

As the Imaging Center moves forward, its participation in the Internet2 consortium will provide wide bandwidth access to large databases such as those relied upon by users of GIS mapping software and bioinformatics researchers. It will also make it possible for scientists working on the Bates campus to operate specialized instrumentation located at large research institutions and to do so in real time. These capabilities will bring to a small liberal arts college in Maine the unfettered access to databases, equipment and distributed expertise that were formerly available only to those working in large research facilities.

As is true with cyberinfrastructure generally, it’s the Imaging Center’s people that make it work. Two full-time staff members–one with expertise in database management, computer hardware and software development and GIS mapping, and the other a microscopist and photographer with technical training in optics and imaging technologies–bring a wealth of experience to the Imaging Center. They support the Center’s users by training them to use unfamiliar tools and techniques. Some workshops and group training sessions are used for this purpose, but the widely varying schedules and backgrounds of the Center’s users render scheduled, “one size fits all” training sessions insufficient. To complement these offerings, the staff is developing electronic training materials that use imbedded hyperlinks to provide the background that some readers might be missing. These documents have the advantages of being readily customized and updated, allowing readers to focus their attention on those aspects of a topic that are particularly pertinent or unfamiliar. Because the documents are available to anyone with internet access, they can be used whenever and wherever the need arises.

As workers in an ever-expanding range of fields seek to express or explore ideas through expert use of images, and to find and convey meaning in large multidimensional data sets through increased visualization capability, there will be a concomitant demand for improved visual literacy. As a result, acquiring the ability to communicate and think visually will be seen as an integral part of a complete education. This realization has motivated the development of a new type of center whose impact is dramatically enhanced by recent advances in computer power and connectivity. With the Imaging Center providing a practical working model of interdisciplinarity and numerous examples of the power of visualization, Bates is well placed to take advantage of the new directions afforded by a well-deployed cyberinfrastructure.

Managed Cyber Services as a Cyberinfrastructure Strategy for Smaller Institutions of Higher Education

by Todd Kelley, NITLE

Technology and Relationships
The Director of the National Science Foundation, Arden Bement, recently stated that “At the heart of the cyberinfrastructure vision is the development of virtual communities that support peer-to-peer collaboration and networks of research and education.”[1] Just as Bement emphasizes networked relationships as an essential component of cyberinfrastructure, I would like to address how small to mid-sized institutions might meet some of the critical challenges of this vision. I propose that in order to realize the cyberinfrastructure vision, colleges and universities reconsider how they approach technology and technology management, which have become just as important as constructing and maintaining the physical facilities on campus. Providing Internet access, for example, should be seen as a key infrastructure asset that needs to be managed. A robust connection to the Internet is necessary for a successful local cyberinfrastructure; however, it is by no means sufficient. The new cyberinfrastructure should include cyber services that enhance existing organizational relationships and make new ones possible–on a national and global basis. For some institutions, however, deploying and sustaining sophisticated organization-wide tools and infrastructure are complex and risky activities. Smaller institutions often simply cannot implement, sustain and support these initiatives on their own.

Cyber Services
While college and university libraries were pioneers in using the Internet to provide access to scholarly resources, rarely have they used it to access enterprise technology tools. Instead, most campuses have tried to meet these needs by combining their own hardware infrastructure with (mostly) proprietary software systems that are licensed for the campus, such as Blackboard, CONTENTdm and Banner. This approach to learning management, repository and administrative services may have made sense at a time when the Internet was still in its early stage. It may still make sense for large institutions that have a degree of scale and deep human resources, where the organizational benefits of locating all technology services on campus outweigh the costs.

Smaller, teaching-centered colleges and universities, however, need an attractive alternative to locating all hardware, software, and the attendant technical support on campus, one that spares them the onus of locating and selecting application service providers and negotiating licenses and support agreements. They also need to avoid becoming trapped by contractual relationships with new vendors or Faustian bargains with technology giants such as Google or Microsoft. One option for these institutions is to obtain managed services from organizations such as NITLE, which provide a broad array of professional development and managed information technology services for small and mid-sized institutions. Institutions using such managed services report that they lower their technology risk and increase the value proposition for technology innovations.

Lowering Technological Risk Encourages Innovation
Typically, there is a high risk of failure when smaller colleges and universities deploy a new technology system, because the technical resources and organizational processes required are simply not part of the primary focus of these organizations. The usual response is to mitigate the risk by devoting significant technological resources and organizational focus to altering the infrastructure, in the hope that the institutional culture and processes will adjust to it. But this does not appear to be a wise approach.

When smaller colleges and universities need organizational technology they often:
1) work to identify the most appropriate vendor and negotiate to obtain the technology they need;
2) focus on how the technology works and on how the technical support for it will be provided; and
3) create organizational processes and procedures that attempt to connect the technical work to the perceived need and the promised benefits.

The focus in this process is often the technical or procedural aspects of a project when the institution would be far better served if the emphasis were on the substantive innovations, relationships, and other benefits that technology can provide. Relationships that are about technical issues per se are off-focus, distracting, and ultimately unproductive, relative to organizational mission.

The continuing development of more sophisticated and complex technologies and the increased dependence on them by these institutions will only increase the potential risk of failure for those that do not make a significant commitment to hiring technology specialists. Increased risk thwarts any interest in using technology to innovate, so technology becomes much less interesting and viable as a route to organizational strength and sustainability. The challenge for smaller colleges, then, is to have dependable, secure and innovative cyber services while reducing the risks and resources traditionally associated with creating new technology systems on campus.

Managed Cyber Services
What do managed cyber services look like and how do they work? NITLE, for example, aggregates the cyber services needs of smaller colleges and universities and provides managed services via the Internet, so that each individual institution does not have to replicate on campus the hardware, software and technical support for each enterprise application it needs. NITLE does the legwork, finding reliable and cost-effective hosting solutions and negotiating agreements with application service providers for services and support. Open source applications are used wherever viable. Individual campuses do not have to become involved with these processes, as the goal is to provide an easy on-ramp without legal or contractual agreements with participating campuses. There are also opportunities to test and experiment with services before participants commit to beginning a new service. In addition, NITLE provides professional development opportunities for campus constituents to learn about the functionality and features of the software in the context of campus needs. Moreover, it encourages campus representatives to participate in communities of practice that it supports.

NITLE currently offers four managed cyber services. The criteria used for selecting cyber services include: participants’ needs; the technology benefits; the development path for the technology (including reliability, scalability, and security); and the expectation and understanding that when adopted by peer institutions, the technology will support the learning communities on campus and peer-to-peer collaboration among campuses.

Advantages of Open Source
Colleges are advised to consider open source software (OSS) whenever possible, because OSS offers distinct advantages. The first is the cost savings, as there are no annual licensing fees, and many OSS applications require less hardware overhead, thus helping contain hardware expenditure costs. Second is the support that OSS can provide: a common infrastructure, readily accessible to all, can enable institutions to collaborate more effectively and to focus together on the substantive activities that technology supports.

As a case in point, NITLE provides a repository service using the open source DSpace repository software. The twenty-five colleges and universities that participate in the repository program share their experience and expertise about how the software helps them meet their individual and common goals. Their stated goals include:
1) creating a centralized information repository for information scattered in various difficult-to-find locations;
2) moving archival material into digital formats and making it accessible from one easy-to-access location;
3) bringing more outside attention to the work of students and scholars and thus to the campus;
4) providing the service as a catalyst to help faculty and students begin to learn about and use new forms of publishing and scholarly communication, including intellectual property, open access and publishing rights;
5) preserving digital information.

According to one participating organizational representative to NITLE,

“the open-source approach is definitely helpful in terms of cost. Having [a dependable vendor] administer the hardware and software has been wonderful, since we can concentrate on the applications and not worry about the technical end….Having colleagues from similar schools work on this project has been beneficial, since we can play off of their strengths and weaknesses. They have also given us some good ideas for projects.”*

Another participating organizational representative has added,

“The open source nature of the software is important to us because we know that we are not locked into a closed proprietary system whose existence depends upon the livelihood of a software company. Furthermore, we wouldn’t have gotten started with d-Space on our own because of the infrastructure we’d have to provide to get it going. We don’t have the staff with the skills needed to handle the care and feeding of the server or to customize the software to our needs through application development. Having that part out of the way has given us the opportunity to focus on creating the institutional repository rather than being mired in technical detail of running the software.”*

Open technologies are more than a path to cost savings. They are a critical condition for innovation, access, and interoperability. Many colleges already use OSS for critical operations, including email (Sendmail), web serving (Apache), and operating systems (Linux), which suggests a growing acceptance and adoption of OSS. The use of OSS can leverage economies of scale, support network effects, and dramatically increase the speed of innovation.

There is, however, still resistance to making consideration of OSS the de facto approach to meeting organizational software needs. There are several reasons for this opposition, including the view of OSS as hacking, the historical lack of accessible technical support, and the paucity of documentation, which has complicated the learning curve. Many have long recognized the potential of OSS but were reluctant to pursue it because of the increased need for specialized technical support on campus: for every OSS system, an institution would need to find and hire a technical specialist to support it. This approach certainly is not scalable, and smaller institutions were right to avoid it.

Multipoint Interactive Videoconferencing (MIV)

Another example of cyber service that institutions should consider is Multipoint Interactive Videoconferencing (MIV). MIV systems enable participants to communicate visually and aurally in real time through the use of portable high-resolution (and inexpensive) cameras and microphones attached to their computers. Participants can see and hear each other, not only on a one-to-one basis, but one-to-many as well. MIV is not a completely new technology; however, its enhanced level of functional maturity, the reduction in costs to provide it, and the need for such systems, have made MIV a technology that is on the verge of widespread adoption and use in a variety of settings.

In the winter and spring of 2007, a dozen participating colleges agreed to evaluate the use of MIV on their campuses and provide NITLE with feedback on the application and their perceptions of its utility. During this evaluation period, participant institutions discovered many types of needs for this technology, both for on-campus and off-campus communications. Uses included guest lectures, meetings with faculty working remotely, and connections with students studying abroad. Since this assessment, NITLE has used MIV for:

1) facilitated conversations led by one or two practitioners among a group of practitioners in an area of common interest, such as incubating academic technology projects or the application of learning theory to the work of academic technologists;
2) presentations by individuals who are using technologies of interest in their classrooms or other campus work to groups of others interested in whether and how they might do something similar, such as historians using GIS;
3) presentations by experts on topics of interest to others in their professional field, such as the academic value of maps;
4) technology training for the participants and users of the cyber services that NITLE offers.

The experience of MIV service participants suggests that the adoption of MIV may be most successful when placed in the context of next steps, developing relationships, individual experience and expertise, and common goals and objectives. This premise suggests learning and collaborative environments that include the use of MIV as part of a range of learning and communications options. The pilot study documented many positive benefits for participants. However, these benefits are a fraction of what can be realized when many more institutions participate, both because of network effects and because participants may use the MIV service to collaborate with other organizations outside of the opportunities organized by NITLE.

The “Open” Movement
The promise of information technology cannot be met when only large, powerful, for-profit IT organizations are in control. Open access, open courseware and open source initiatives point toward a world where there is a level playing field for individual learning and organizational innovation by not-for-profit institutions. Where just a few years ago it was difficult to name more than a few organizations that provided technical support for open source applications, the number of service providers is growing. Identifying these providers, selecting the best ones and negotiating agreements–these are the important challenges for managers of cyber services. Providers report that it is often financially unfeasible for them to market to and negotiate with individual institutions to provide cyber services. Creating a reliable and scalable approach to cyber services that works for colleges and providers alike would be an important advance for smaller institutions, both individually and as an important segment of higher education.

The open movement is not about software tools alone, as Arden Bement noted in his comments about the importance of virtual communities. Success depends upon achieving a balance among essential human, organizational and technological components. The potential benefits of the open movement will accrue to colleges and universities that collaborate through a common set of tools, actively participate in peer information networks and make a priority of mission-focused knowledge and skills. Many institutions recognize that the value of peer-to-peer communities will increase in proportion to their investment in all three of these components. The question may ultimately center on how to support these activities in a systematic and sustainable fashion. This is where small and mid-sized institutions may want to innovate in their approach to technology management.

Collaborative Relationships Foster Organizational Strength and Learning
Technology that supports widespread virtual collaboration among smaller colleges and universities, such as the repository and MIV services described above, demonstrates the potential power of cyber services to enhance organizational innovation, learning and productivity. These peer communities of practice allow campuses to: 1) exchange information about usage, technical issues and support; 2) learn from one another; and 3) synchronize their efforts to use technology to promote shared goals and processes. Having campuses work together and share knowledge as they engage with enterprise systems is a crucial part of the equation. The community of smaller colleges and universities needs a robust organization for that collaboration to happen. Organizations such as NITLE can help fill this need, while also providing opportunities for community participation and encouraging institutions to play lead roles in needs identification, service development, and training and education. As one participant has stated, participation in a managed cyber service is “an opportunity for a group of us to make a leap forward and learn from each other along the way. In addition, [our participating college] saw it as an opportunity to overcome our geographic isolation…I think we have the potential to achieve something tremendous that we will all be proud of.”*
Summary
Technology seems to be much more compelling to smaller colleges and universities–and more cost-effective as well–when it provides substantive benefits while the procedural and instrumental aspects of technology innovation are kept under control. This is not to say that technical expertise at smaller institutions is not necessary or that all cyberinfrastructures should be moved off campus. These extreme changes would be neither productive nor prudent. By working collectively, smaller colleges can use managed services to more effectively apply advanced technologies. Bringing institutions with common needs together in a shared organizational network and aggregating many of their common technology needs through cyber services seems to be a powerful idea. Participating campuses can then provide the scope and scale of programs and services that larger institutions provide while retaining their intimacy and sense of community, and also controlling costs. At the same time, a strong foundation is created both technologically and organizationally for the type of cross-institutional endeavors and learning communities that can help smaller institutions promote scholarship that is vital and attractive to students and faculty alike. When common goals are met in cost effective ways, mission is strengthened for all.

[1] “Shaping the Cyberinfrastructure Revolution: Designing Cyberinfrastructure for Collaboration and Innovation,” First Monday, volume 12, number 6 (June 2007), http://firstmonday.org/issues/issue12_6/bement/index.html. Accessed September 26, 2007.

* Responses to a survey administered by the author to a subset of NITLE participating organizations during July of 2007.

Cyberinfrastructure and the Sciences at Liberal Arts Colleges

by Francis Starr, Wesleyan University

Introduction
The technical nature of scientific research led to the establishment of early computing infrastructure, and today the sciences are still pushing the envelope with new developments in cyberinfrastructure. Education in the sciences poses different challenges, as faculty must develop new curricula that incorporate and educate students about the use of cyberinfrastructure resources. To be integral to both science research and education, cyberinfrastructure at liberal arts institutions needs to provide a combination of computing and human resources. Computing resources are a necessary first element, but without the organizational infrastructure to support and educate faculty and students alike, computing facilities will have only a limited impact. A complete local cyberinfrastructure picture, even at a small college, is quite large and includes resources like email, library databases and on-line information sources, to name just a few. Rather than trying to cover such a broad range, this article will focus on the specific hardware and human resources that are key to a successful cyberinfrastructure in the sciences at liberal arts institutions. I will also touch on how groups of institutions might pool resources, since the demands posed by the complete set of hardware and technical staff may be larger than a single institution alone can manage. I should point out that many of these features are applicable to both large and small universities, but I will emphasize those elements that are of particular relevance to liberal arts institutions. Most of this discussion is based on experiences at Wesleyan University over the past several years, as well as plans for the future of our current facilities.

A brief history of computing infrastructure
Computing needs in the sciences have changed dramatically over the years. When computers first became an integral element of scientific research, the hardware needed was physically very large and very expensive. This was the “mainframe” computer and, because of the cost and size, these machines were generally maintained as a central resource. Additionally, since this was a relatively new and technically demanding resource, it was used primarily for research rather than education activities.

The desktop PC revolution started with the IBM AT in 1984 and led to the presence of a computer on nearly every desk by the mid-1990s. The ubiquity of desktop computing brought tremendous change to both the infrastructure and the uses of computational resources. The affordability and relative power of new desktops made mainframe-style computing largely obsolete. A computer on every desktop turned users into amateur computer administrators. The wide availability of PCs also meant that students grew up with computers and felt comfortable using them as part of their education. As a result, college courses on programming and scientific computing, as well as general use of computers in the classroom, became far more common.

Eventually, commodity computer hardware became so cheap that scientists could afford to buy many computers to expand their research. Better yet, they found ways to link computers together to form inexpensive supercomputers, called clusters or “Beowulf” clusters, built from cheap, off-the-shelf components. Quickly, the size of these do-it-yourself clusters grew very large, and companies naturally saw an opportunity to manufacture and sell them ready-made. People no longer needed detailed technical knowledge of how to assemble these large facilities; they could simply buy them.

This widespread availability of cluster resources has brought cyberinfrastructure needs full circle. The increasing size, cooling needs, and complexity of maintaining a large computing cluster have meant that faculty now look to information technology (IT) services to house and maintain cluster facilities. Maintaining a single large cluster for university-wide usage is more cost effective than maintaining several smaller clusters and reduces administrative overhead. Ironically, we seem to have returned to something resembling the mainframe model. At the same time, the more recently developed desktop support remains critical. As technology continues to progress, we will doubtless shift paradigms again, but the central cluster would appear to be the dominant approach for at least the next five years.

Hardware resources
The cluster is the central piece of hardware–but what makes up the cluster? How large a cluster is needed? Before we can address the question of size, we should outline the key elements. This becomes somewhat technical, so some readers may wish to skip the next five paragraphs.

First, there is the raw computing power of the processors to consider. This part of the story has become more confusing with the recent advent of multiple core processors. In short, a single processor may have 2, 4 or, soon, 8 processing cores, each of which is effectively an independent processor. This does not necessarily mean it can do a task faster, but it can perform multiple tasks simultaneously. Today, I think of the core as the fundamental unit to count, since a single processor may have several cores, and a single “node” (physically, one computer) may have several processors. For example, at Wesleyan, we recently installed a 36-node cluster, each node having 2 processors and each processor having 4 cores. So while a 36-node cluster may not sound like much, it has packed into it 288 computing cores.
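For readers who want to tally capacity the same way, here is a minimal Python sketch that simply multiplies nodes, processors per node, and cores per processor; the numbers plugged in are the Wesleyan figures quoted above.

def total_cores(nodes, processors_per_node, cores_per_processor):
    """Count the fundamental computing units (cores) in a cluster."""
    return nodes * processors_per_node * cores_per_processor

# The Wesleyan cluster described above: 36 nodes x 2 processors x 4 cores.
print(total_cores(36, 2, 4))   # -> 288 computing cores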

This high density of computing cores has several advantages: it decreases the footprint of the cluster; decreases cooling needs; and decreases the number of required connections. For the moment, let’s focus on connectivity. The speed of connections between computers is glacial in comparison to the speed of the processors. For example, a 2-GHz processor does one operation every 0.5 nanoseconds. To get an idea of how small an amount of time this is, consider that light travels just about 6 inches in this time. The typical latency–the time lost to initiate a transmission–of a wired ethernet connection is in the range of 0.1-1 milliseconds, or on the order of hundreds of thousands of clock cycles of the processor. Hence, if a processor is forced to wait for information coming over a network, it may spend a tremendous number of cycles twiddling its thumbs, just due to latency. Add the time for the message to transmit, and the problem becomes even worse. Multiple cores may help limit the number of nodes, and therefore reduce the number of connections, but the connectivity problem is still unavoidable. So what to do?
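As a quick check on that comparison, the short Python sketch below converts network latency into idle clock cycles for a 2-GHz processor; the latency values are simply the endpoints of the range quoted above, not measurements of any particular network.

CLOCK_HZ = 2e9                      # 2-GHz processor, one operation per cycle

def cycles_lost(latency_seconds):
    """Clock cycles spent waiting out a single network latency."""
    return latency_seconds * CLOCK_HZ

# Latencies from the range quoted above: 0.1 ms and 1 ms.
for latency in (1e-4, 1e-3):
    print(f"{latency*1e3:.1f} ms latency  ->  {cycles_lost(latency):,.0f} idle cycles")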

The answer depends on the intended usage of the cluster. In many cases, users want to run many independent, single process, or serial, tasks. In this case, communication between the various pieces is relatively unimportant, since the vast majority of the activity is independent. Ordinary gigabit ethernet should suffice in this situation and is quite cheap. If the usage is expected to include parallel applications, where many cores work together to solve a single problem faster, it may be necessary to consider more expensive solutions. However, given that it is easy to purchase nodes containing 8 cores in a single box, these expensive and often proprietary solutions are only needed for rather large parallel applications, of which there are relatively few.

All this processing power is useless, however, without a place to store the information. This is most commonly achieved by hard disks that are bundled together in some form, though for the sake of simplicity, they appear to the end user as a single large disk. These bundles of disks can easily achieve storage sizes of tens to hundreds of terabytes, a terabyte being 1000 gigabytes. The ability to store such large amounts of information is particularly important with the emergence in the last decade of informatics technologies, which rely on data-mining of very large data sets.
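To make the idea of bundled disks concrete, here is a hedged Python sketch of how raw disk capacity translates into usable storage; the disk counts and sizes are illustrative, and RAID-6 and RAID-10 are named as two common bundling schemes rather than anything this article prescribes.

def usable_tb(num_disks, disk_tb, scheme="raid6"):
    """Rough usable capacity of a disk bundle, ignoring filesystem overhead."""
    if scheme == "raid6":      # two disks' worth of space go to parity
        return (num_disks - 2) * disk_tb
    if scheme == "raid10":     # everything is mirrored, so half the raw space
        return num_disks * disk_tb / 2
    raise ValueError("unknown scheme")

# Illustrative example: twelve 2-TB disks presented to users as one large volume.
print(usable_tb(12, 2, "raid6"))    # -> 20 TB usable
print(usable_tb(12, 2, "raid10"))   # -> 12.0 TB usable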

The last, and sometimes the greatest, challenge is housing and cooling the cluster. Even with the high density of computing cores, these machines can be large and require substantial cooling. A dedicated machine room with supplemental air conditioning is needed, typically maintained by an IT services organization. Fortunately, most IT organizations already have such a facility, and with the decreasing size of administrative university servers, it is likely that space can be found without major building modifications. However, do not be surprised if additional power or cooling capacity is needed. The involvement of the IT organization is critical to the success of the infrastructure. Accordingly, it is important that IT services and technically-inclined faculty cultivate a good working relationship in order to communicate effectively about research and education needs.

OK, but how big?
Given these general physical specifications for the key piece of hardware, the question remains, how big a cluster? Obviously the answer depends on the institution, but I estimate 3 or 4 processing cores for each science faculty member. An alternate and perhaps more accurate way to estimate is to consider how many faculty members are already heavy computational users and already support their own facilities. I would budget about 50 cores for each such faculty member, though it is wise to more carefully estimate local usage. Part of the beauty of a shared facility is that unused computing time that might be lost on an individual faculty member’s facility can be shared by the community, reducing the total size of the cluster necessary to fulfill peak needs.
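The sizing rules of thumb above can be combined into a quick estimate, as in the Python sketch below; the per-faculty figures are the ones suggested in this article, the faculty counts plugged in are hypothetical, and taking the larger of the two estimates is just one simple way to reconcile them.

def estimated_cores(science_faculty, heavy_users,
                    cores_per_faculty=4, cores_per_heavy_user=50):
    """Rough cluster size from the two rules of thumb described above."""
    general_need = science_faculty * cores_per_faculty    # 3-4 cores per science faculty member
    heavy_need = heavy_users * cores_per_heavy_user       # ~50 cores per heavy computational user
    return max(general_need, heavy_need)                  # take the larger estimate

# Hypothetical campus: 40 science faculty, 5 of them heavy computational users.
print(estimated_cores(40, 5))    # -> 250 cores, driven by the heavy-user estimate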

Software needs tend to be specialized according to the intended uses, but it is important to budget funds for various software needs, such as compilers and special purpose applications. The Linux operating system is commonly used on these clusters and helps to keep down software costs since it is an open source system. For many scientific computing users, Linux is also the preferred environment regardless of cost.

The cluster itself is of limited use without the human resources–that is, the technical staff–to back it up. At a minimum, a dedicated systems administrator is needed to ensure the smooth operation of the facility. Ideally, the administrator can also serve as a technical contact for researchers to assist in the optimal use of the cluster facility. However, to make the facility widely accessible and reap the full benefit for the larger university community, a more substantial technical support staff is needed.

The human element: resource accessibility
The presence of a substantial cluster is an excellent first step, but without additional outreach, the facility is unlikely to benefit anyone other than the expert users who were previously using their own local resources. Outreach is key and can take a number of forms.

First, faculty who are expert in the use of these computer facilities need to spearhead courses that introduce students to the use and benefits of a large cluster. This will help build a pool of competent users who can spread their knowledge beyond the scope of the course. This effort requires little extra initiative and is common at both liberal arts colleges and larger universities.

Second, it is particularly important in a liberal arts environment to develop and sustain a broad effort to help non-expert faculty take advantage of this resource for both research and educational purposes. Otherwise, the use of these computers will likely remain limited to the existing expert faculty and the students whom they train.

Outreach across the sciences can also take the form of a cross-disciplinary organization. At Wesleyan, we established a Scientific Computing and Informatics Center, with the goal of both facilitating the use of high-performance computing and supporting course initiatives that use computational resources. The center is directed by a dedicated coordinator, who is not burdened with the technical duties of the systems administrator, and is assisted by trained student tutors.

The first goal of the center, facilitating cluster use, is primarily research-oriented. That is, the center serves as a resource where faculty and students can seek assistance or advice on a range of issues–from simple tasks like accessing the resources to complex problems like optimization or debugging complex codes. In addition, the center offers regular tutorials on the more common issues, making broader contact across the institution.

The second goal–educational outreach–is particularly important for liberal arts institutions. Educational outreach deals with all aspects of computational activities in the curriculum, not just cluster-based activities. For example, if a faculty member wishes to make use of computational software, the center staff will offer training to the students in the course, thereby leaving class time to focus on content. The center staff will also be available for follow-up assistance as the need arises. This eliminates the problem of trying to add or include training for computational resources in existing courses.

But efforts should not stop at this level. While we are still in the early stages of our experiment at Wesleyan, I believe that such a support organization will not have a significant impact if it simply exists as a passive resource. The center must actively seek out resistant faculty and demonstrate through both group discussions and one-on-one interactions how computational resources can enhance their teaching activities.

To maintain the long-term vitality of this kind of center, it is important to sustain a group of trained and motivated student tutors. To do this, we have chosen to offer students summer fellowships to work on computationally demanding research projects with faculty. Some of these students then serve as tutors during the academic year. Combined with this summer program are regular lecture and tutorial activities. These tutorials may also be expanded to reach beyond the bounds of the university to other institutions as workshop activities.

Cross-institutional collaboration
Sometimes, all of these goals can be met by a single institution. But even if this is possible, there are still benefits to looking outside the institution. And for smaller institutions, pooling resources may be the only way to develop an effective cyberinfrastructure.

While high-speed networks now make it technically possible to establish inter-institutional efforts across the country, it is important to be able to gather a critical mass of core users who can easily interact with each other. In my own experience, this happens more easily when the users are relatively nearby, say no more than 100 miles apart. Proximity means that institutions can share not only the hardware resources over the network, but also the technical support staff. Of course, day-to-day activity is limited to interaction within an institution or virtual communications between institutions, but frequent and regular person-to-person interaction can be established at modest distances.

Balancing individual institutional priorities in such a collaboration is obviously a delicate process, but I envision that the institution with the most developed IT services can house and maintain the primary shared hardware resource, thereby reducing the administrative needs across several institutions. Adequate access to facilities can be guaranteed by taking advantage of the fact that most states maintain high-speed networks dedicated for educational usage. In addition, there are many connections between these state networks, such as the New England Regional Network. Personal interactions can be facilitated by regular user group meetings where users can share their questions and concerns with an audience that extends beyond their institution. In addition, new electronic sharing tools, such as wikis and blogs, can help foster more direct virtual communications.

Summary
To have a successful cyberinfrastructure in the sciences, it is essential to develop both hardware and human resources. Personal support and outreach to faculty and students is crucial if the benefits of the infrastructure are to serve a wider clientele. For liberal arts institutions, the presence of state-of-the-art infrastructure helps them to compete with larger institutions, both in terms of research and in attracting students interested in technology. At the same time, emphasizing outreach is of special importance to achieve the educational goals that make liberal arts institutions attractive to students.

Acknowledgments
I wish to thank Ganesan Ravishanker (Associate Vice President for Information Technology at Wesleyan University) and David Green for their assistance preparing this article.

College Museums in a Networked Era–Two Propositions

by John Weber, Skidmore College

To begin, let’s take it as a given that the “cyberinfrastructure” we are writing about in this edition of Academic Commons is both paradigmatically in place, and yet in some respects technologically immature. The internet and the intertwined web of related technologies that support wired and wireless communication and data storage have already altered our ways of dealing with all manner of textual and audiovisual experience, data, modes of communication, and information searching and retrieval. Higher education is responding, but at a glacial pace, particularly in examining new notions of publishing beyond those which have existed since the printed page. Technologies such as streaming and wireless video remain crude, but digital projectors that handle still image data and video are advancing rapidly, and the gap between still and video cameras continues to close. Soon I suspect there will simply be cameras that shoot in whatever mode one chooses (rather than “camcorders” and “digital cameras”), available in a variety of consumer and professional versions and price points. Already, high definition projectors and HD video are a reality, but they have yet to permeate the market. They will soon, with a jump in image quality that will astonish viewers used to current recording and projection quality.

For museums, network and CPU speed, as well as screen and projection resolution, are key aspects of these technologies. Only recently have digital images caught up with analog film in resolution and potential color accuracy (which, lest we forget, was never a given with film, either). The digitization of museum collections and their placement on higher education or public networks is undoubtedly a meaningful teaching asset, but the impact of this shift is, I suspect, largely a matter of ease and logistics, wherein the information provided replicates existing resources without fundamentally changing the knowledge gained from them. In other words, slide collections and good research libraries already provided much of the museum collection information now present on the internet. Yes, we should all be documenting our collections and making it easier for faculty and students to use those images, but no, that activity alone will not change our world in and of itself. Combined with an aggressive program to foster collection use by faculty and students, it can accomplish a great deal for a college museum, but we can and should aim even higher.

With that preface in place, let’s consider the museum, the internet, and the college curriculum as structuring conditions governing the nature of human experience that can occur within their boundaries. Museums are traditionally and fundamentally concerned with unique objects and notions of first-hand experience tied intrinsically to one specific place. There is only one Mona Lisa, and one Louvre where you can see it. There is only one Guggenheim Bilbao, and to see it you must go to Spain. In contrast, the internet is fundamentally about the replication and distribution of whatever it touches or contains, made available all the time, everywhere. And the internet continues to extend its reach, now arriving in phones, cafés, hotel rooms, airports, and no doubt soon on plane flights: ubiquitous computing, 24/7. The two conditions could not seem to be more distinct, disparate, and opposed.

Now let’s consider the nature of the college curriculum, briefly, as a structuring condition for experience and learning. Like the internet, it relies fundamentally on the reproducibility and distributability of the knowledge it seeks to offer each new generation of students. Courses are offered more than once. Books are read again and again. Disciplines must be taught in a way that adequately reproduces accepted standards and thereby transfers credits, reputations, and ultimately knowledge from grade to grade, classroom to classroom, and institution to institution, across time. The notion of a unique, one-time course is at best a luxury, at worst a foolish expenditure of time and effort–for faculty, if not for students. Shortly after becoming director of the Frances Young Tang Teaching Museum and Art Gallery at Skidmore College, for example, I overheard a tenured, senior faculty member remark over lunch that no one could compel him to create a course he would be able to teach just once. To him, the notion was absurd and counterproductive. His point was obvious: since it always takes more than one attempt to get a course developed and refined for a given student community and college culture, creating courses you can offer only once is simply not an intelligent way to teach, even if the demands of establishing a consistent curriculum would allow it, which of course they don’t.

As a new college museum director and recent emigrant from the world of the large, urban museum, it was an instructive moment, and as someone who had periodically taught at the college level, it made perfect sense. Yet at the same time, I was mildly taken aback: museums routinely create “courses” (i.e. exhibitions) that they “teach” (i.e. present) only once. The one-time special exhibition is, in fact, arguably our bread and butter. Even museums with world-class collections (e.g. New York’s Metropolitan Museum of Art or the Museum of Modern Art) rely on special, one-time exhibitions to drive attendance, increase membership, build revenue, and underwrite their economic survival. How, then, can college museums effectively link a program of changing exhibitions to the rhythm of a college curriculum?

How, in essence, can we “teach” the exhibition after it has left the gallery? How can we marry the one-time encounter with a set of unique objects to the cyclical, repeating demands of curriculum? These are central questions for college museums as they are asked increasingly to play a more significant role in the teaching efforts of the institutions that house and foster them. In fact, they may well be the central questions, since without answering them college museums are unlikely to achieve a new degree of relevance and support within their institutional context.

Paradoxically (in view of how I started this article), I’m going to argue that the best available answers are to be found in the creative use of new technologies and the internet. Networked multimedia technologies and the maturing cyberinfrastructure can’t fully reproduce the one-time experience offered by the museum space and the museum exhibition, but they can go much farther toward capturing its unique spatial, temporal, multimodal, three-dimensional impact than any previously available publication method or recording device.

Now let’s examine the nature of exhibitions and museum installations themselves; I’m using art museums as my test case, but much or most of what I’m saying should apply to other kinds of institutions and subject matters. Museum exhibitions exist in space, and by that I mean three-dimensional space. They house and assemble discrete groups of objects, arranged by curators to create or emphasize meaning through juxtaposition, sequence, and context. Wall texts, lectures, publications, docent and audio tours have been the primary means of sharing with museum visitors the curatorial intentions driving exhibitions and the insights gleaned in the course of assembling them. Over the past decade and more, museums have experimented increasingly with interactive kiosks, websites, and more recently podcasts as ways to share insights, ideas, and background information relevant to the work on view. College museums have participated in this exploration, but only rarely led it.[1] I suspect this has to do in part with the relatively small size of education departments in college museums, combined with an orientation toward “scholarship” that finds its preferred outcome in printed matter, i.e. the scholarly catalogues valued by faculty curators, rather than in “visitor outreach” so conceived as to motivate and underwrite digital programming.

An additional factor slowing digital innovation in college museums may be the fact that the IT and academic computing staff on college campuses–which could in theory assist museums in the creation of digital learning programs–are generally beset by huge demands from across campus. Only rarely can they devote extended blocks of time and significant resources to their resident museums. Yet the presence of such theoretically available staff makes it difficult for college museum directors to argue for dedicated, in-museum staff devoted to digital matters. As a result, we attempt to piece together project teams from existing staff, work-study students and interns, a mix that seldom attains the degree of hands-on experience, longevity, or programming expertise needed to create truly new, exceptional programs. This Catch 22 is, I suspect, not a trivial issue.

In contrast, large urban museums such as the National Gallery, London, the Minneapolis Institute of Art, the San Francisco Museum of Modern Art, New York’s Museum of Modern Art, and recently Los Angeles Museum of Contemporary Art, among many others, have created groundbreaking interactive educational programming by hiring dedicated staff and devoting significant fiscal resources to their efforts. Generally, those institutions have relied on a balance of in-house staff and high-powered (but modestly compensated) outside programming and design firms.

Yet despite the relative lack of resources that college museums devote to their digital education efforts, the potential rewards of doing so are significant. In particular, I believe that college museums also have a vested interest in exploring an area of digital programming that remains largely untouched by their civic counterparts, namely, the creation of rich multimedia documentation and multilayered, interactive responses to exhibitions themselves, after they have opened. Such programming would focus not only on the basic content of the exhibition (i.e. the individual images and objects in it) but on the physical exhibition itself as a carefully considered form of content and utterance. Such programming would take full advantage of the completed exhibition as the arena for both documenting and interrogating the set of propositions, insights, and ideas expressed in its physical layout and checklist. It would survey curatorial, scholarly, and lay responses to the completed show, allowing insights gained in the final installation and post-facto contemplation of the exhibition to emerge over time. Finally, such an approach would offer real-time walk-throughs of the exhibition, as well as high-quality, 360-degree still images, providing future virtual visitors a strong, visceral sense of what it felt like to be in the galleries with the work, looking from one object to another, moving through space, and getting a sense of the way the curator used the building and its architecture.[2]

Although simple and straightforward, this practice has rarely been explored due to the mandates and pressures governing digital education programs in large museums. With a few laudable recent exceptions [3], major museums create interactive programs designed to provide visitors to upcoming exhibitions with background information on the basic content to be on view. They create their programs to be ready on opening day. Once the exhibition is open, the harried staff moves on to the next project for the next show. In short, such institutions ignore the exhibition as a finished product and focus on its raw content, a practice that makes sense given their audiences and economics. A quick survey of museum websites demonstrates that few museums are even in the practice of posting extensive images of their shows or galleries online, regardless of the extensive databases of collection images they may maintain.

For college museums that seek to create new ways to encourage faculty to teach their content and bring classes to their galleries, the potential benefits of creating experientially gripping and idea-rich responses to exhibitions should be obvious: digital technologies can allow us to teach an exhibition after it closes, and that would be a fundamentally new step for the museum world.

The second thing I’d like to discuss is the potential relevance of museum-based teaching and learning to generations of students (and soon faculty) for whom the structured but non-linear, highly visual as well as verbal, multimodal information world offered by the cyberinfrastructure is second nature. Highly textual, the World Wide Web in particular is also routinely and compulsively visual. It is a domain that is designed. Pictures are used as building blocks in enterprises created to argue, inform, archive, entice, sell, and distract. Rarely do we now encounter a text-only website; instead, text-image juxtapositions prevail, and websites now typically offer a mix of static graphics, sound, and animated graphics or video clips. Effective graphic design, or “information design” if you will, is essential. Students today grow up in this world and live there. Significantly, their cyberworld is a social world of self-projection and at times fantasy (e.g., blogs, Facebook pages, and social gaming) as well as a realm of entertainment and research.

As higher education considers the “digital native,” “net generation” students now entering the academy, the question of how to teach what is variously referred to as visual literacy, information literacy, twenty-first-century literacy, or expanded literacy comes increasingly to the forefront. I share the conviction that unless colleges and universities find a way to expand their text-based notions of literacy, analysis, and critique to include the domains of the visual and the moving image, we are not equipping our students adequately to enter either the future academic world or the workplace. Quite simply, the tools that empower and govern human expression have changed, and the academy needs to decide how it will respond.

As I have argued elsewhere[4], museums can potentially play an intriguing role in fostering forms of visual literacy and expanded literacy suited to the digital, networked era. Like the internet, the museum space is structured, yet non-linear. You move through museum galleries laterally from object to object in a largely self-determined path, much like motion from webpage to webpage. Both experiences are highly but not exclusively visual. Along with looking, museum visits generally encompass reading, listening, talking to friends and family members or museum personnel, and making decisions about how long to linger in any given place. Museum visits, like many web visits, are infused by random user choices made within spatial structures that are highly designed and planned by their builders.

Teaching within the museum space forces faculty and students alike to make different choices about how to structure time, how to do research, and, one hopes, about how to present their ideas, analysis, and conclusions. In pushing the visual dimension of experience and analysis to the forefront, museum exhibitions of all kinds force participants to use their eyes and link what is seen to what is said and written.
Notions of proof and argument evolve in new ways when first-hand, three-dimensional visual artifacts rather than texts are the subject of analysis. For example, a professor I know begins a class by bringing her students to the museum and showing them everyday ceramics and pottery from the American southwest. Without the benefit of library research, she asks them to deduce everything they can about the people who produced the artifacts from the visual evidence in front of them, unaided by others’ insights. Allowing students to work with visual evidence similar to the material confronted by working archaeologists, and forcing them to use only their eyes and brains, demands that they both look and think for themselves, expressing their own conclusions in their own words.

As another example of the intersection of visual and analytical learning in the museum environment, Molecules That Matter, a special exhibition on view this year at the Tang Museum, was originated by a longtime Skidmore organic chemistry professor, Ray Giguere. Investigating ten organic molecules that influenced twentieth-century history (aspirin, isooctane, penicillin, polyethylene, nylon, DNA, progestin, DDT, Prozac, and buckyball), the exhibition brings together a wide variety of artworks and objects of material culture with a set of huge, specially commissioned, scientifically accurate molecular models. Reaching into fields as diverse as women’s studies (progestin is the molecule responsible for oral contraception), economics, psychology, engineering, medicine and nutrition, technology, environmental studies, and of course art and art history, it offers a wealth of ways, visual and otherwise, for faculty and students to engage its subject matter. Crucially, the show seeks to function as a starting point for wide-ranging investigations, research projects, and responses. Rather than attempting to sum up the many topics it points to, Molecules That Matter offers specific, highly stimulating and revealing artifacts as visual bait to lure non-scientists and future scientists alike to reconsider how organic chemistry runs through their everyday lives in unnoticed ways.

Working on an extended website for the show with a group of students, Susi Kerr, the Tang’s senior educator, Ray Giguere, the rest of the exhibition team, and I had to ask the students and ourselves again and again how we could not simply say but show the ideas we sought to convey. Both in the museum and on the internet, words alone simply don’t entice or suffice. Furthermore, in both domains, not all visual experiences are created equal–some pictures, objects, and images are more powerful and academically appropriate than others, and learning to distinguish between them is a key skill that students (and first-time faculty curators) need to learn. I have also found that museum writing (for intro texts, extended object labels, and even catalogue essays for non-specialist audiences) has more in common with writing for the web than the traditional academic paper does. Museum writing is inherently public, for one thing, and meant to be read by people who can walk away the minute they lose interest. That said, all three forms of writing (museum, web, and academic) need to be succinct, grammatically correct, pleasingly well-crafted, and intellectually sound.

To sum up, the two propositions outlined here argue for (1) the importance of networked digital technologies to the particular mission of the college museum, and (2) the potential importance of the college museum in teaching forms of visual literacy suited to the internet era in innovative and appropriate ways. I take it as a given that museums and the materials they hold and display are valuable to their particular subject domains and academic disciplines. That should be obvious and beyond dispute, and for that reason alone college museums deserve a place on their campuses. However, if we are to play an even more essential and intriguing role in higher education, museums of all varieties must explore how we can function as a core aspect of the overall teaching effort of our institutions, and how we can regularly address multiple disciplines in our exhibitions. At that moment, our intersection with the cyberinfrastructure and the largely unexploited teaching potential of digital technologies takes on a new significance.
NOTES

[1] One exception I can think of is American Visions, The Roy L. Neuberger Collection, an excellent, early interactive CD-ROM published by SUNY Purchase in 1994. Tellingly, the art historian who worked on it was Peter Samis, who soon became head of interactive educational technologies at SFMOMA and pioneered our efforts to develop SFMOMA’s award-winning interactive programs.

[2] See, for example, the brilliant use of QuickTime VR in Columbia University’s Real?Virtual, Representing Architectural Time and Space, which stunningly documents Le Corbusier’s Ronchamp Church of Notre-Dame-Du-Haut.

[3] New York MoMA’s recent Richard Serra retrospective was accompanied by an admirable video walk-through of the completed exhibition, narrated insightfully by the artist himself. In Los Angeles, the Museum of Contemporary Art created an extensive site that visually documents the WACK! exhibition on the history of feminist art, and brings to bear the voices of many artists and scholars who spoke at the museum while the show was on view. Audio of the artists and other speakers was complemented by images of them with their audiences, and by a listserv allowing others to comment. Together, these programs brought the exhibition itself to life, adding texture, voice, and personality rarely seen in the “big museum” world.

[4] See “Thinking Spatially: New Literacy, Museums, and the Academy,” EDUCAUSE Review, January–February 2007, pp. 68–69.

Museums, Cataloging & Content Infrastructure: An Interview with Kenneth Hamma

by David Green, Knowledge Culture

Ken Hamma is a digital pioneer in the global museum community. A classics scholar, Hamma joined the Getty Trust in 1987 as Associate Curator of Antiquities for the Getty Museum. He has since had a number of roles there, including Assistant Director for Collections Information at the Getty Museum, Senior Advisor to the President for Information Policy and his current position, Executive Director for Digital Policy and Initiatives at the Getty Trust.

David Green: Ken, you are in a good position to describe the evolution of digital initiatives at the Getty Trust as you’ve moved through its structure. How have digital initiatives been defined at the Getty and how are they faring at the institutional level as a whole, as the stakes and benefits of full involvement appear to be getting higher?
Ken Hamma: Being or becoming digital–shorthand for the thousands of changes institutions like this go through as they adopt new information and communication technologies–has long been discussed at the Getty from the point of view of the technology. And it did once seem that applying technology was merely doing the same things with different tools when, in fact, we were starting to embark upon completely new opportunities. It also once seemed that the technology would be the most expensive part. Now we’ve learned it’s not. It’s content, development and maintenance, staff training, and change management that are the expensive bits.

About 1990 it seemed to me (without my realizing the impact it would cause) that it was the Getty’s mission that would and should largely drive investments in becoming digital, and that it would require someone from the program side of the house to take more than a passing interest in it. I know that sounds impossibly obvious, but it wasn’t nearly so twenty years ago when computers were seen by many as merely expensive typewriters and the potential of the network wasn’t seen yet at all. Needless to say, the interim has been one long learning curve with risks taken, mistakes made, and both successes and failures along the way. Now, we’ve just got to the point at the Getty where–with a modicum of good will–we can think across all programs with some shared sense of value for the future. We now have a working document outlining the scope and some of the issues for digital policy development at the institution, covering things like the stewardship and dissemination of scholarship, digital preservation, and the funding of similar activities elsewhere. Within this scope, we’ll be considering our priorities, the costs and risks involved, and some specific issues such as intellectual property and scholarship, partnerships, and what kind of leadership role there might be for the Getty.

Do you see the Getty, or some other entity, managing to lead a project that might pull museums together on some of these issues?
There’s only a certain amount that can be done from inside one institution, and some fundamental changes that probably need to be made can’t be made that way. One of the big problems about technology is its cost. For so many institutions it’s still just too expensive and too difficult. There’s a very high entry barrier–software license and maintenance fees as well as technology staff, infrastructure development, and professional services–in short, the full cost of owning technology. The result isn’t just a management problem for museums but an opportunity cost. We’re falling behind as a community by not fully participating in the online information environment.

A 2004 technology survey of museums and libraries pointed out that although small museums and public libraries had made dramatic progress since 2001, they still lagged behind their larger counterparts.[1] While almost two-thirds of museums reported having some technology funding in the previous year, 60% said current funding did not meet technology needs and 66% had insufficiently skilled staff to support all their technology activities. This problem is complicated by a gap between museums’ community responsibilities and the interests of the commercial museum software providers–notably the vendors’ complete lack of interest in creating solutions for contributing to aggregate image collections. There was a similar gap between library missions and OPAC (Online Public Access Catalog) software until OCLC grew to fill that gap in the 1980s.

Can you imagine any kind of a blue-sky solution to this?
Well, imagine a foundation, for example, that took it upon itself to develop and license collection-management and collection-cataloging software as open source applications for institutional and individual collectors. It might manage the software as an integrated suite of web applications along with centralized data storage and other required infrastructure at a single point for the whole museum community. This would allow centralized infrastructure and services costs to be distributed across a large number of participating institutions rather than being repeated, as is the case today, at every institution. Museums could have the benefits of good cataloging and collection management at a level greater than most currently enjoy and at a cost less than probably any individual currently supports.
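The cost-sharing argument here can be made concrete with a small, purely illustrative calculation. This is only a sketch: every figure below is an assumption, not an actual museum, vendor, or foundation cost.

```python
# A back-of-the-envelope sketch of the shared-infrastructure argument above.
# All figures are illustrative assumptions, not actual costs.
institutions = 200            # assumed number of participating museums
local_cost = 50_000           # assumed annual cost per museum of owning its own system
central_cost = 1_500_000      # assumed annual cost of one shared, centrally hosted service
per_member_cost = 2_000       # assumed annual per-museum cost to join the shared service

duplicated_total = institutions * local_cost
shared_total = central_cost + institutions * per_member_cost

print(f"Every museum on its own:  ${duplicated_total:,}")
print(f"One shared service:       ${shared_total:,}")
print(f"Per museum, shared model: ${shared_total / institutions:,.0f}")
```

Under these assumed figures, the shared model costs each participant a small fraction of what a duplicated local installation would, which is the point being made about distributing infrastructure costs across the community.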

Managing this as a nonprofit service model could create cataloging and collection management opportunities that are not just faster, better, and cheaper, but also imbued with a broader vision for what collecting institutions can do, both individually and as a community in a digital environment. If we could do this by providing open source applications as well as web services, it would build value for the community rather than secure market advantage for a software vendor. A service model like this could also assume much of the burden of dealing with the highly variable to non-existent data contributions that have plagued previous attempts to aggregate art museum data. And I think it could do so largely by supplying consistent metadata through more easily accessible and better cataloging tools.[2] This problem of aggregating museum data has a relatively long history, and its persistence suggests that though current schemes are certainly more successful, what the community needs is a more systemic approach. One of the problems is that there just isn’t a lot of good museum data out there to be aggregated. So as for aggregated repositories other than those that are hugely expensive and highly managed (like ARTstor), they’re unlikely to happen anytime soon. There’s not enough there there to aggregate with good results.

Cataloging seems to be the key to this future, as far as museums’ resources are concerned. Would this scenario be a first step in producing some good common cataloging?
Well, yes. It’s not enough to say to institutions, “You have to be standards-compliant, you have to use thesauri, you have to use standards, you have to do this and do that.” There are a lot of institutions that aren’t doing anything and aren’t going to do things that are more expensive and time consuming. So it’s not going to help to say that collection managers should be doing this. They’re just not going to do it unless it’s easier and cheaper, or unless there’s an obvious payoff–and there isn’t one of those in the short term.

So such a project, if it were ever undertaken, would be about providing infrastructure, about providing tools?
Yes, as well as thinking about how we maintain those tools and how we provide services. Because most cultural heritage institutions don’t have IT departments and probably never will, how can we think about sharing what’s usually thought of as internal infrastructure? I mean, choose a small museum with a staff of three; you can’t say ‘you can’t have a finance guy because you need IT,’ or ‘you can’t have a director because you need to do cataloging.’ That’s just not going to happen.

There’s a related model that you have been working on that provides a technical solution both to cataloging and to distribution. If I’m right, it’s not about creating a single aggregated resource but rather about enabling others to create a range of different sources of aggregated content, all using metadata harvesting.
Yes, it’s still in its formative stages, but the essential idea is to put together a system that is lightweight, easily implemented by small institutions, doesn’t require huge cataloging overhead and that supports resource discovery. A problem today is that if you wanted to ask for, say, an online list of all Italian paintings west of the Mississippi, that presupposes that all collections with an Italian painting are participating. But we’re so far from that. It’s the rich and well-funded that continue to be visible and the others are largely invisible. So can we come up with a protocol and a data set that would allow for easy resource discovery that would have a low bar for cataloging and metadata production for unique works?

In this project, we’ve gone through a few rounds now, using the recently developed CDWA Lite as the data standard, mapping that to the Dublin Core in the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). Dublin Core, as we’ve all learned, is a bit too generic, so we’ve applied some domain knowledge to it and have additionally included URL references for images. We’ve collaborated with ARTstor and have done a harvesting round with them. Getty’s paintings collection is in ARTstor not because we wrote it all to DVD and mailed it to New York, but because ARTstor harvested it from our servers. Just imagine we get to the point where all collections can be at least CDWA-Litely cataloged–say just nine fields for resource discovery. Then these can be made available through an exchange protocol like OAI-PMH, and then interested parties such as an ARTstor (who might even host an OAI server so not every collecting institution has to do that) could harvest them. If we could get that far, and if we imagine that other aggregators like OCLC might aggregate the metadata even if they didn’t want the images, it could be completely open. The network would support collection access sharing and harvesting that would be limited only by the extent of the network. Any institution (or private collector) could make works available to the network so any aggregator could collect it. A slide librarian at a college, with desktop harvesting tools, could search, discover, and gather high-quality images and metadata for educational use by the teachers in that school. Or perhaps intermediate aggregators would do this with value-added services like organizing image sets for Art 101 at a cost that might suggest a different end-user model.
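As a rough illustration of the harvesting workflow described here, the sketch below walks a repository’s OAI-PMH ListRecords responses and pulls out a few Dublin Core fields. The verbs, the metadataPrefix argument, and resumption tokens are part of the OAI-PMH protocol itself; the endpoint URL and the choice of fields are hypothetical.

```python
# Minimal OAI-PMH harvesting sketch (illustrative only; the endpoint is hypothetical).
import requests
import xml.etree.ElementTree as ET

BASE_URL = "https://museum.example.edu/oai"  # hypothetical repository endpoint

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def harvest(base_url, metadata_prefix="oai_dc"):
    """Yield simple descriptive records, following resumption tokens to the end."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        root = ET.fromstring(requests.get(base_url, params=params).content)
        for record in root.iter("{http://www.openarchives.org/OAI/2.0/}record"):
            # A handful of Dublin Core fields is enough for basic resource discovery.
            yield {
                "title": record.findtext(".//dc:title", default="", namespaces=NS),
                "creator": record.findtext(".//dc:creator", default="", namespaces=NS),
                "identifier": record.findtext(".//dc:identifier", default="", namespaces=NS),
            }
        token = root.findtext(".//oai:resumptionToken", namespaces=NS)
        if not token:
            break
        params = {"verb": "ListRecords", "resumptionToken": token}

if __name__ == "__main__":
    for rec in harvest(BASE_URL):
        print(rec["title"], "-", rec["identifier"])
```

An aggregator, or a slide librarian’s “desktop harvesting tool,” would run essentially this loop at different scales; a repository offering richer CDWA Lite records would presumably advertise an additional metadataPrefix alongside the oai_dc format that the protocol requires.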

How far away is this from happening?
The protocol exists and will likely very shortly be improved with the availability of OAI-ORE. The data set exists but is still under discussion. That will hopefully be concluded in the next months. And the data standards exist, along with cross-collection guides on using them, like CCO (Cataloging Cultural Objects). The tools should not be too hard to create. The problem again is the institutional one, the usual one when we’re talking about content. Most museums are not willing to enter into such an open environment because they will want to know who is harvesting their collection. It’s the reaction that’s usually summed up by “we’re not sure we can let our images out.” These are those expected nineteenth-century attitudes about protecting content, along with the late twentieth-century attitudes that have been foisted on the museum community about “the great digital potential”–generating revenue based on that content as long as they control it and don’t make it accessible. How sad.

The recent NSF/JISC Cyberscholarship Report[3] discusses the importance of content as infrastructure, and how any cyberscholarship in a particular discipline is grounded until that part of cyberinfrastructure is in place. Museums are clearly far behind in creating any such content infrastructure out of their resources. What will it take to get museums to contribute more actively to such an image content infrastructure? Is there a museum organization that could coordinate this or will it take a larger coordinating structure? Will museums be able to do this together or will they need some outside stimulus?
If it isn’t simply a matter of waiting for the next generation, I don’t really know. It would really be helpful if there were, for example, a museum association in this country that had been thoughtfully bringing these issues to the attention of the museum community, but that hasn’t been the case for the last twenty years. And museums are different from the library community with respect to content-as-cyberinfrastructure in that they’re always dealing with unique works. This changes two things: first, one museum can’t substitute a work in the content infrastructure for another one (the way in which a library can supply a book that another library cannot); and, secondly, for these unique works there’s a much greater sense of them as property (“it’s mine”). This, in a traditional mindset, raises the antenna for wanting to be a gatekeeper, not just to the work but even to information about it. You can see this in museums talking about revenue based on images of the works in their collections, or the need for museums to be watching over “the proper use” (whatever that is) of their images. Not that we don’t need to be mindful of many things, like appropriate use of works under copyright. But there is still the sense that there’s got to be something (financial) gained from these objects that are “mine,” whereas most of these collections are supported by public dollars and there must be some public responsibility to make them freely available.

You’ve talked elsewhere about the “gatekeeper” mentality among many museum professionals, perhaps especially curators. How do you imagine the forward trajectory of this? How will this gatekeeper mentality play out?
Yes, it’s been very frustrating, but I think it is changing. Even over the past few years I think there’s been significant change in how people think about their gatekeeper role. Today–different from ten years ago–I would say curators are less and less gatekeepers, and directors are being caught off-guard by curators proposing greater openness of the sort that will take advantage of network potential. The Victoria & Albert Museum, the Metropolitan Museum and others are now making images available royalty-free for academic publishing.[4] And along with these changes there is a change in the tenor of the discussion. We want to keep the conversation going as much as possible in hopes that we can move toward a world where objects, especially those in the public domain, can become more fluid in this environment. Many of the attitudes toward intellectual property can be summed up as focusing more on maintaining appropriate attribution for work than on asserting “ownership”–on saying, “it’s mine, you have to pay me for it.” If we’re honest we have to admit that there’s really not a lot of money in the whole system around these kinds of resources. In fact, the real value of these items lies in their availability–their availability for various audiences, but especially for continued scholarship and creativity.

That’s a good point. Not too long ago the Yale art historian Robert Nelson said in an interview here that whatever is available online is what will be used, what will create the new canon. He made the analogy to JSTOR. In teaching he notices that the articles he cites that are in JSTOR are the ones that get read; the others don’t.
Yes, that’s absolutely true. And it will take one museum or one major collecting institution to have the imagination to see that and to see that in addition to people coming into the gallery for a curated exhibition, that this other experience of network availability and use has extraordinary value. And if there were two or three really big collections available, literally available as high-quality public domain images, not licensed in any way, one could imagine there would be significant change in attitudes pretty quickly.

You’ve described the open quality of the digital environment as threatening to many in institutions. Could you elaborate a little on that?
The extent to which the opportunities bundled here for realizing mission in non-profits are perceived as threats derives largely from confusing traditional practice with the purpose of the institution. The perception of threats, I think, clearly has been decreasing over the last few years as we become more comfortable with changes (perhaps this is due to generational shift, I don’t know). It is decreasing also as we continue with wide-ranging discussions about those traditional practices, which were well suited to business two decades ago but act as inappropriately blunt instruments in the digital environment. These include, for example, the use of copyright to hold the public domain hostage in collecting institutions; notions of “appropriate” cataloging, especially for large-volume collections that are more suited to slower-paced physical access than they are to the fluidity of a digital environment; and assumptions that place-based mission continues alone or would be in some way diminished by generous and less mediated online access.

In your ACLS testimony back in 2004 on the challenges for creating and adopting cyberinfrastructure, you argue that the most important work for us all ahead is not the technology or data structures but the social element: the human and institutional infrastructure. Is this the weakest link in the chain?
I’m not sure that I would still describe institutions and people as the weakest link, but rather as the least developed relative to technology and the opportunities it brings. This too seems to have changed since the start of the work of the ACLS Commission. We can do plenty with the technology we now have on hand, but we’ve frequently lacked the vision or will to do so. One of the most startling examples of this became visible several years ago when the Getty Foundation (the Grant Program) was awarding grants under the Electronic Cataloging Initiative. Many Los Angeles institutions received planning and implementation grants, with varied results. One of the most successful would have been predicted by no one other, I suppose, than the hard-working and ingenious staff employed there: namely, the Pacific Asia Museum. Greater than average success from an institution with, to all appearances, less capacity and fewer resources than other participants was not based on access to better software or on an IT manager who would only accept a platinum support package. It was based on the will and the imagination of the staff and the institution.

So would you cite that museum as one that is successfully redefining itself for a digital world?
Yes. You know, there are lots of museums that are doing really good work, but it’s going to take time and the results will show up eventually. If all the effort over the next ten years or so is informed by more open attitudes about making content more available–seeing content as cyberinfrastructure–then it will be all the better. It really is a question of attitude in institutions and a willingness to see opportunities. Almost never believe “we haven’t got the money to do it.” In scholarly communication there are millions of dollars going into print publications that, for example, have a print run of several hundred, for heaven’s sake. You just need to take money out of that system and put it into a much more efficient online publication or collection access system.

It’s about attitude and a willingness to invest effort. The Pacific Asia Museum is a good example. It doesn’t have the budget of the other large institutions in LA and yet it was among the most successful in taking advantage of this opportunity from the Getty’s electronic cataloging initiative. They were very clear about the fact that they wanted to create a digital surrogate of everything in their collection, do some decent cataloging and documentation and make it available. That just sounds so perfectly obvious. But that there are so many institutions that don’t seem to get something so basic, that don’t understand some aspect of that, is just completely astounding to me.
NOTES 

[1] Status of Technology and Digitization in the Nation’s Museums and Libraries (Washington, DC: Institute of Museum and Library Services, 2006), http://www.imls.gov/publications/TechDig05/index.htm.

[2] Recent aggregating efforts include ARTstor and, earlier, AMICO, both of which look back to the Getty’s Museum Educational Site Licensing project and the earliest attempt to coordinate art museum data at the point of cataloging, the Museum Prototype software from the Getty Art History Information Program.

[3] William Y. Arms and Ronald L. Larsen, The Future of Scholarly Communication: Building the Infrastructure for Cyberscholarship, report of a workshop held in Phoenix, Arizona, April 17–19, 2007, http://www.sis.pitt.edu/~repwkshop/NSF-JISC-report.pdf.

[4] See Martin Bailey, “V&A to scrap academic reproduction fees,” The Art Newspaper 175 (November 30, 2006), http://www.theartnewspaper.com/article01.asp?id=525; The Metropolitan Museum, “Metropolitan Museum and ARTstor Announce Pioneering Initiative to Provide Digital Images to Scholars at No Charge,” press release, March 12, 2007; and Sarah Blick, “A New Movement to Scrap Copyright Fees for Scholarly Reproduction of Images? Hooray for the V & A!,” Peregrinations 2, no. 2 (2007), http://peregrinations.kenyon.edu/vol2-2/Discoveries/Blick.pdf.

Cyberinfrastructure: Leveraging Change at Our Institutions. An Interview with James J. O’Donnell

by David Green, Knowledge Culture

James O’Donnell, Provost of Georgetown University, is a distinguished classics scholar (most recently author of Augustine: A New Biography), who has contributed immensely to critical thinking about the application of new technologies to the academic realm. In 1990, while teaching at Bryn Mawr College, he co-founded the Bryn Mawr Classical Review, one of the earliest online scholarly journals, and while serving as Professor of Classical Studies at the University of Pennsylvania, he was appointed Penn’s Vice Provost for Information Systems and Computing. In 2000 he chaired a National Academies committee reviewing information technology strategy at the Library of Congress, resulting in the influential report, LC21: A Digital Strategy for the Library of Congress. One of his most influential books, Avatars of the Word (Harvard, 1998), compares the impact of the digital revolution to other comparable paradigmatic communications shifts throughout history.

David Green: We’re looking here at the kinds of organizational design and local institutional evolution that will need to happen for liberal arts (and other higher-education) institutions to take advantage of a fully-deployed international cyberinfrastructure. How might access to massive distributed databases and to huge computational and human resources shift the culture, practice and structure of these (often ancient) institutions? How will humanities departments be affected–willingly or unwillingly? Will they lead the way or will they need to be coaxed forward?
James O’Donnell:
I think the issue you’re asking about here boils down to the question, “What problem are we really trying to solve?” And I think I see the paradox. The NSF Cyberinfrastructure Report, addressed to the scientific community, could assume a relatively stable community of people whose needs are developing in relatively coherent ways. If wise heads get together and track the development of those needs and their solutions, you can imagine it would then just be an ordinary public policy question: what things do you need, how do you make selections, how do you prioritize, what do you do next? NSF has been in this business for several decades. But when you come to the humanities (and full credit to Dan Atkins, chair of the committee that issued the report, for saying “and let’s not leave the other guys behind”) and you ask “what do these people need?” you come around to the question (which I take to be the question you are asking of us) “Are we sure these people know they need what they do need?”

In the humanities, it’s more that for a long time a bunch of people have been able to see, with varying degrees of clarity, a future, but that hasn’t translated into a science-like community of people who share that need, recognize it, and are looking around for someone who will meet those needs–if not in one way then another. With the sciences, it’s almost like a natural market. So it’s easy enough for the CI group to say “This is what forward-looking, directionally-sensible humanists need.” But then we look around the institution and say: “Hello, does anyone around here know they need this stuff? And if so, why aren’t people doing more about this?” And we’re all a little puzzled by the gap and trying to put an interpretation on it. Is this a group of people who are burying their heads in the sand and will be obsolete in three to five years? Or is it a group of people who are not seduced by fashion and gimmickry and are just sticking with their business, undistracted and doing darn good work? Or is it somewhere in between?

I’m curious about the differences between what we’re told is coming, the next wave of radically magnified networking and computing power, and the first wave, when the Internet hit in the mid-90s. Before that you had a fairly small but robust set of networks that had been built for a relatively tiny number of scientists. Then with the Web and the government white papers, the Internet hit in a pretty big way. Some members of the humanities community realized there were tools and capabilities here that could change the way they do business, and a tiny minority proceeded to work in this way. Now, how will things go this time around? Will it just be a repeat: a few innovators declaring rather insufficiently that this will radically alter the way we do business in the humanities and the vast majority skeptically watching and waiting–for who knows what? And within the institutions–will the big changes happening in the sciences “trickle down”? How much interaction is there between the two cultures?
Let me start by trying to characterize the two waves. First, a story: When I was at Penn, I took over the networking in 1995, and one of the stories I got was about Joe Bordogna, Dean of the Engineering School, who in 1984 really believed in this networking thing and wanted to get the campus backbone built and connected to the Internet. Nobody much agreed with him, there was no money for it, and no one believed it would happen. He finally got them to agree to build it on the mortgage plan (a fifteen-year loan). Three years after I got there, the mortgage was paid off and we had something like a million dollars a year we could spend on something else (even though, while the cables and wiring were all there, the electronic equipment attached to them was long gone by then). That was visionary and it was challenging. But it was clear, in retrospect, that that was what you had to do: you had to build network infrastructure and had to figure out how to make it work.

I came into the IT business partly due to the crunch of the mid-90s. Without anyone planning it, this electrifying paradigm shift occurred. The physical form of it was Bill Gates releasing Windows 95 on August 28, 1995, three days before students returned to campus, all demanding it be loaded onto their machines while all the guys in IT support hadn’t had time to figure out how it worked. So there was a real crunch time as we had to figure out how to get all these machines installed, all designed for the new network paradigm (Windows 95 had the new Internet Explorer browser bundled with it). So we were all suddenly moving into this new space. Nobody had planned for it and no one understood it. But what everyone did know was that you had to connect your machine to the network, and that’s the paradigm that’s remained fairly stable ever since. You have a basic machine–it’s shrunk to a laptop now (essentially none of our students have “desktop” computers any more)–and you connect to the network, but nothing else has substantially changed. The under-the-hood browser environment is more complex than it used to be, but nobody’s had to take lessons; the change has been seamless. So my concern is that today there’s no high-concept transition. We’ve had to (a) build networks and (b) connect machines to networks. There’s nothing so clear to face now as what we had fifteen to twenty-five years ago.

There’s wireless and WiFi that’s exploding, then there’s the continuing miniaturization, and the iPhone. Is that all incremental change?
Yes, and it all feels incremental. The place where there is real change is invisible. It’s in the handset and all the things it can do now and, though the browser on my Blackberry is pretty crippled, I can get core critical information that way and when I’m really bored in a meeting I can read Caesar’s Gallic Wars in Latin on my handheld. It also gets me through an overnight trip now. I don’t lug a laptop around with me quite so much.

And then there’s the additional bandwidth. It’s also incremental but it’s pretty fast now.
You know, I must have dozed off for a while, because I never noticed the point at which basic web design started assuming broadband.

And assuming video.
Right. But for a long time, basic web design was geared to deliver the best experience over a 28K dialup connection. Now we’re past that. If we go back to the average humanities academic, he’s talking on his cell phone, doing email, and web-surfing every morning. When I read Arts & Letters Daily with my orange juice and I see a piece I like, I hit “Print” and it’s waiting for me at my office when I get there 30 minutes later.

It’s making things faster, but it’s not changing too much?
Yes, this is automation carried to a point where there is a change in kind as well as in degree. I’m reading more kinds of stuff; I’m a better informed person across a wider range of topics than I was. I am a different person because I do this. But it’s an incrementally different kind of person. Nothing substantial has changed.

Now, although I’d like to pursue the social networking route, I also want to ask if you think there are any pull factors at work on humanities faculty. What would entice faculty to really deeply engage with networking? It’s certainly not collaboration, in itself at least.
There’s the real question of whether most academic behavior is really driven by the content of our enquiry versus how much of it is the need to perform. “Published by Harvard University Press” is still a superior form of performance to any form of electronic publication that you can now imagine.

So the intensity of a social intellectual life that might be increased through collaborative engagement online is of no comparison to that kind of imprimatur?
For many that is correct. I mean, I may be writing better articles because I’m in touch with more people. (I just checked the number of emails in my Gmail account over the last 6 months and it’s a mind-boggling number, something like 1500, so compared to the total number of people I ever met, spoke to on the phone or wrote paper letters to back in the day, it’s half an order of magnitude.) But it’s not getting us to a tipping point where instead of doing x I’ll decide to do y. Instead, I’m just running faster in place. And that’s interesting.

So now I’m provosting. I believe in this future. I’ve written about it. I think we can get somewhere. I think it’s exciting. But has my own personal practice changed that much? Not that much.

Could one tipping point be when the majority of the resources you use are in digital form? I know that would vary dramatically across disciplines.
Well it makes it easier for a humanities scholar-provost to write books while provosting. It means I can carry an amazing library on the train and read through stuff I wouldn’t otherwise be able to get to.

Put another way: does the format of one’s resources affect the format of how one will eventually produce or publish one’s work?
Not to my knowledge. I’m still writing “chapters”–and that’s interesting to me. Even at my level, the obvious rewards are for writing in traditional formats rather than for doing something digital–even down to dollar rewards. I mean, if you’re a scholar wanting to break through to a new audience, you do that through a trade publisher in New York.

At Georgetown you work with science departments that are engaged in cyberinfrastructure projects, so you’re quite conversant with what they’re doing and how. And our big question, where we started tonight, is whether there’s any connection between this activity in the sciences at Georgetown and the humanities. Will the humanities and social sciences always be the poor neighbors who might get to see what the sciences are up to and, if they’re lucky, might occasionally benefit from trickle-down effects?

That’s one extreme position–and it’s an external and judgmental one. An internal extreme position is “Well, we’re doing just fine, thanks!” And those two may be somewhat congruent. In between is a more hopeful and responsible position that says “Look, we are moving forward, developing things gradually.” You saw the piece in today’s Chronicle of Higher Education about Stephen Greenblatt’s[1] new course he’s teaching at Harvard? Almost the most important thing about that, by the way, was that they mentioned Stephen Greenblatt by name–because he’s truly famous and writes famous books and if he does this kind of thing then it must be OK.

And so this is the kind of thing that we need, only much more of it? But how old is Ed Ayers’ complaint that despite all of the really substantial and revolutionary work many have done in creating and using digital resources, as a community we were not moving forward, that real cyberscholarship is stillborn?[2] He has pushed as hard as anyone and is as prominent as can be. For his pains, they’ve made him first a dean and now a president. But there’s the tendency for people to sit back and say “Look at that Ed go, isn’t he marvelous,” and that’s the puzzle. I’ll come back to say that the core issue for me is still the one of defining the problem that we have to solve persuasively enough that we get enough people interested in solving it.

What’s the role of librarians in this? They seem to be leading in pushing for the provision of digital resources.
Librarians are very good inside and outside agitators in this regard. A logical way to make progress happen is to substantially support them in what they do. I have to say at Georgetown every time we do something digital in the library, foot traffic in the building and circulation of physical material goes up. For example, we offer more transparent web access to the library catalog with more links on it, letting you order stuff from other libraries–and foot traffic goes way up. We can’t stop them coming in (and sure aren’t trying to!).

So the building will be around for a while?
Let me be provostial here and say not only will the building be around, but in five years we’ll have to seriously renovate and consider building an extension. And this for many of our stakeholders (board members and donors) is at first glance counterintuitive: “I thought all that stuff was digital now.” But students are going in more and more, and going in collaboratively–to see and talk to each other. I’m left figuring out how to budget for it.

You’re clearly deeply engaged by the present. Do you spend much time going the John Seely Brown route[3] and thinking through what the university of twenty years hence will look like?

Well, that’s kind of my day job. We’re about to kick off a formal curriculum review process at Georgetown that will take years to enact. My task is to have my colleagues challenge themselves to think about the abstract questions of what the goals might be for bringing people together in one place for four years and how we might get there. Can we even get to challenging ourselves about the four-year-ishness, the academic-term-ishness? That’s going to be very hard. As long as that is so powerfully the model, so powerfully the business plan, and so universal the expectation, even breaking up student time so they can spend a month on a project is really, really hard. Now this has nothing to do in itself with digital, but there are things you can imagine with new empowering technologies that would be really, really cool if they could do that.

Will there be opportunities for serious consideration of totally discontinuous change?
Definitely. But we’re just beginning and we have to acknowledge that anytime you go anywhere near a faculty meeting, you get what I call the Giuliani Diagnosis of NYC traffic: gridlock is upon us and the natural behavior of everyone around us is go get a bigger SUV and lean on the horn some more. Now, that’s not a good thing and wisdom in such a situation is not about reinventing spaces for living together but consists of the first emergency response level of striping certain intersections and changing the timing on the stoplights because everything is so entangled and interwoven. That’s why I say getting students to get a four-week period to sit together to collaborate and do something truly new and different together is extremely hard. Again, for reasons that have nothing to do with electronic technology but everything to do with institutional structures we have chosen with certain kinds of assumptions in place. (Giuliani left New York before they did more than the emergency response, of course.)

The university is a highly evolved form, so it’s hard to suddenly change direction, or grow a new limb.
Yes, so any academic looking at this has to have pessimistic days in which you say “survival will go to the institution that can start afresh.” I’m reading a report by a colleague on “Lifelong Learning in China” and my question for him will be, “Do you think this vision for lifelong learning in China, where they don’t have such a vast installed base as we have, will/could/should be as exclusively associated with the kind of four-year institutions of learning we have in this country, or will the model get created not in rich first-world institutions but in places where productivity and output matter, where people will invent forms that are genuinely creative and more productive and efficient than we have now?”

Will that kind of conversation enter the curriculum review?
I’m chairing it, so we’ll see. But I have no illusions about my ability, resource-challenged as the institution is, simply to grasp the helm and do hard-a-lee and steer off in a different direction. You have to get a whole load of folks shoveling coal in the engine room to get buy-in before you can do that.

I’d like to make an observation: Theodore Ziolkowski, who wrote the book German Romanticism and Its Institutions[4]–how the zeal of the Age of Wordsworth and Goethe turned into bourgeois Victorianism–makes an important point about the university. Von Humboldt had a choice about the research institution that he had in mind. He didn’t have to take over a university and animate it; it could have been any other kind of educational institution–an Institut, a Gymnasium, an Akademie–but he did and there were costs in doing that. (You know the joke about why God created the Universe in only six days? No? Because He didn’t have to make it compatible with the installed base.) Von Humboldt chose to make his university compatible with the installed base and it was a good idea and it worked. But it carried with it the cost of associating the high-end research enterprise with all of that teaching of an increasingly mass audience. It also carried with it all the benefits of associating research with that kind of teaching.

Now this is an ‘I wonder’: I wonder if we’re not at the tipping point where that cost-benefit ratio isn’t working anymore. And where, therefore, new institutional forms will need to emerge, if the money were there to make new institutional forms emerge, or if an institutional form emerged with a business plan–and the University of Phoenix doesn’t look like it.

Can you imagine any foundations venturing seriously in this direction? They generally seem quite constrained in their thinking.
Well, have you ever read Thorstein Veblen? They should make us memorize his The Higher Learning in America in Provost school. These institutions are a lot about transmitting social and cultural capital and less about academic performance than we might think. There’s a young scholar I know, Joseph Soares, who’s passionate about demonstrating that the best predictor of performance in college by prospective students is not the SAT but class rank: people who have climbed to the top of whatever heap they’re sitting in will go climb to the top of the next heap.[5] People with good test scores can get good test scores, but there’s no telling what will happen when they get out into the world. But this is unfashionable, and it connects well with the fact that these institutions are bound up in the creation, preservation and transmission of cultural capital from one generation to the next. That’s a piece of the function of this tiny but trend-setting group of institutions that transmit their trends out to a wider public in remarkable ways. And that function makes institutions full of creative, innovative, iconoclastic people into bastions of conservatism. Good thing for me I love navigating the tensions that result.

NOTES

[1] Jennifer Howard, “Harvard Humanities Students Discover the 17th Century Online,” Chronicle of Higher Education 54, no. 9 (October 26, 2007) A1.

[2] Edward L. Ayers, “Doing Scholarship on the Web: 10 Years of Triumphs and a Disappointment,” Chronicle of Higher Education 50, no. 21 (January 30, 2004) B24-25.

[3] In for example, the chapter “Re-Education,” in John Seely Brown and Paul Duguid, The Social Life of Information (Harvard Business School Press, 2000).

[4] Theodore Ziolkowski, German Romanticism and Its Institutions (Princeton University Press, 1992).

[5] Joseph Soares, The Power of Privilege: Yale and America’s Elite Colleges (Stanford University Press, 2007).
