From Knowledgable to Knowledge-able: Learning in New Media Environments

by Michael Wesch, Kansas State University

Knowledge-able
Most university classrooms have gone through a massive transformation in the past ten years. I’m not talking about the numerous initiatives for multiple plasma screens, moveable chairs, round tables, or digital whiteboards. The change is visually more subtle, yet potentially much more transformative. As I recently wrote in a Britannica Online Forum:

There is something in the air, and it is nothing less than the digital artifacts of over one billion people and computers networked together collectively producing over 2,000 gigabytes of new information per second. While most of our classrooms were built under the assumption that information is scarce and hard to find, nearly the entire body of human knowledge now flows through and around these rooms in one form or another, ready to be accessed by laptops, cellphones, and iPods. Classrooms built to reinforce the top-down authoritative knowledge of the teacher are now enveloped by a cloud of ubiquitous digital information where knowledge is made, not found, and authority is continuously negotiated through discussion and participation.1

This new media environment can be enormously disruptive to our current teaching methods and philosophies. As we increasingly move toward an environment of instant and infinite information, it becomes less important for students to know, memorize, or recall information, and more important for them to be able to find, sort, analyze, share, discuss, critique, and create information. They need to move from being simply knowledgeable to being knowledge-able.

The sheer quantity of information now permeating our environment is astounding, but more importantly, networked digital information is also qualitatively different from information in other forms. It has the potential to be created, managed, read, critiqued, and organized very differently than information on paper and to take forms that we have not yet even imagined. To understand the true potentials of this “information revolution” on higher education, we need to look beyond the framework of “information.” For at the base of this “information revolution” are new ways of relating to one another, new forms of discourse, new ways of interacting, new kinds of groups, and new ways of sharing, trading, and collaborating. Wikis, blogs, tagging, social networking and other developments that fall under the “Web 2.0” buzz are especially promising in this regard because they are inspired by a spirit of interactivity, participation, and collaboration. It is this “spirit” of Web 2.0 that is important to education. The technology is secondary. This is a social revolution, not a technological one, and its most revolutionary aspect may be the ways in which it empowers us to rethink education and the teacher-student relationship in an almost limitless variety of ways.

Physical, Social, and Cognitive Structures Working Against Us
But there are many structures working against us. Our physical structures were built prior to an age of infinite information, our social structures formed to serve different purposes than those needed now, and the cognitive structures we have developed along the way now struggle to grapple with the emerging possibilities.

The physical structures are easiest to see, and are on prominent display in any large “state of the art” classroom. Rows of fixed chairs often face a stage or podium housing a computer from which the professor controls at least 786,432 points of light on a massive screen. Stadium seating, sound-absorbing panels and other acoustic technologies are designed to draw maximum attention to the professor at the front of the room. The “message” of this environment is that to learn is to acquire information, that information is scarce and hard to find (that’s why you have to come to this room to get it), that you should trust authority for good information, and that good information is beyond discussion (that’s why the chairs don’t move or turn toward one another). In short, it tells students to trust authority and follow along.

This is a message that very few faculty could agree with, and in fact some may use the room to launch spirited attacks against it. But the content of such talks is overshadowed by the ongoing hour-to-hour and day-to-day practice of sitting and listening to authority for information and then regurgitating that information on exams.

Many faculty may hope to subvert the system, but a variety of social structures work against them. Radical experiments in teaching carry no guarantees and even fewer rewards in most tenure and promotion systems, even if they are successful. In many cases faculty are required to assess their students in a standardized way to fulfill requirements for the curriculum. Nothing is easier to assess than information recall on multiple-choice exams, and the concise and “objective” numbers satisfy committee members busy with their own teaching and research.

Even in situations in which a spirit of exploration and freedom exists, where faculty are free to experiment and work beyond physical and social constraints, our cognitive habits often get in the way. Marshall McLuhan called it “the rear-view mirror effect,” noting that “We see the world through a rear-view mirror. We march backwards into the future.”2

Most of our assumptions about information are based on characteristics of information on paper. On paper we thought of information as a “thing” with a material form, and we created elaborate hierarchies to classify each piece of information in its own logical place. But as David Weinberger and Clay Shirky have demonstrated, networked digital information is fundamentally different from information on paper.3 And each digital innovation seems to shake us free from yet another assumption we once took for granted.

Even something as simple as the hyperlink taught us that information can be in more than one place at one time, challenging our traditional space-time based notions of information as a “thing” that has to be “in a place.” Google began harnessing the links and revolutionized our research with powerful machine-assisted searching.

Blogging came along and taught us that anybody can be a creator of information. Suddenly anybody can create a blog in a matter of seconds. And people have responded. Technorati now reports that there are over 133 million blogs, almost 133 million more than there were just five years ago. YouTube and other video sharing sites have sparked similar widespread participation in the production of video. Over 10,000 hours of video are uploaded to the web every day. In the past six months more material has been uploaded to YouTube than all of the content ever aired on major network television. While such media beg for participation, our lecture halls are still sending the message, “follow along.”

Wikipedia has taught us yet another lesson, that a networked information environment allows people to work together in new ways to create information that can rival (and even surpass) the content of experts by almost any measure. The message of Wikipedia is not “trust authority” but “explore authority.” Authorized information is not beyond discussion on Wikipedia; information is authorized through discussion, and this discussion is available for the world to see and even participate in. This culture of discussion and participation is now available on any website through the emerging “second layer” of the web, with applications like Diigo that allow you to add notes and tags to any website anywhere.

And as we note and tag these sites, we are also collectively organizing them, so the notion that this new media environment is too big and disorganized for anybody to find anything worthwhile and relevant is simply not the case. Our old assumption that information is hard to find is trumped by the realization that if we set up our hyper-personalized digital networks effectively, information can find us. For example, I have set up my own Netvibes portal so that the moment anybody anywhere tags something with certain keywords I am interested in, I immediately receive a link to the item. It is like continuously working with thousands of research associates around the world.

Taken together, this new media environment demonstrates to us that the idea of learning as acquiring information is no longer a message we can afford to send to our students, and that we need to start redesigning our learning environments to address, leverage, and harness the new media environment now permeating our classrooms.

A Crisis of Significance
Unfortunately, many teachers only see the disruptive possibilities of these technologies when they find students Facebooking, texting, IMing, or shopping during class. Though many blame the technology, these activities are just new ways for students to tune out, part of the much bigger problem I have called “the crisis of significance,” the fact that many students are now struggling to find meaning and significance in their education.4

Nothing good will come of these technologies if we do not first confront the crisis of significance and bring relevance back into education. In some ways these technologies act as magnifiers. If we fail to address the crisis of significance, the technologies will only magnify the problem by allowing students to tune out more easily and completely. With total and constant access to their entire network of friends, we might as well be walking into the food court in the student union and trying to hold their attention. On the other hand, if we work with students to find and address problems that are real and significant to them, they can then leverage the networked information environment in ways that will help them achieve the “knowledge-ability” we hope for them.

We have had our why’s, how’s, and what’s upside-down, focusing too much on what should be learned, then how, and often forgetting the why altogether. In a world of nearly infinite information, we must first address why, facilitate how, and let the what generate naturally from there. As infinite information shifts us away from a narrow focus on information, we begin to recognize the importance of the form of learning over the content of learning. It isn’t that content is not important; it is simply that it must not take precedence over form. But even as we shift our focus to the “how” of learning, there is still the question of “what” is to be learned. After all, our courses have to be about something. Usually our courses are arranged around “subjects.” Postman and Weingartner note that the notion of “subjects” has the unwelcome effect of teaching our students that “English is not History and History is not Science and Science is not Art . . . and a subject is something you ‘take’ and, when you have taken it, you have ‘had’ it.” Always aware of the hidden metaphors underlying our most basic assumptions, they suggest calling this “the Vaccination Theory of Education” as students are led to believe that once they have “had” a subject they are immune to it and need not take it again.5

Not Subjects but Subjectivities
As an alternative, I like to think that we are not teaching subjects but subjectivities: ways of approaching, understanding, and interacting with the world. Subjectivities cannot be taught. They involve an introspective intellectual throw-down in the minds of students. Learning a new subjectivity is often painful because it almost always involves what psychiatrist Thomas Szasz referred to as “an injury to one’s self-esteem.”6 You have to unlearn perspectives that may have become central to your sense of self.

To illustrate what I mean by subjectivities over subjects, I have created a list of subjectivities that I am trying to help students attain while learning the “subject” of anthropology:

  • Our worldview is not natural and unquestionable, but culturally and historically specific.
  • We are globally interconnected in ways we often do not realize.
  • Different aspects of our lives and culture are connected and affect one another deeply.
  • Our knowledge is always incomplete and open to revision.
  • We are the creators of our world.
  • Participation in the world is not a choice; only how we participate is our choice.

Even a quick scan of these subjectivities will reveal that they can only be learned, explored, and adopted through practice. We can’t “teach” them. We can only create environments in which the practices and perspectives are nourished, encouraged, or inspired (and therefore continually practiced).

My own experiments in this regard led to the creation of the World Simulation, now the centerpiece of my Introduction to Cultural Anthropology course at Kansas State University. As the name implies, the World Simulation is an activity in which we try to simulate the world. Of course, in order to simulate the world, we need to know everything we can about it. So while the course is set up much like a typical cultural anthropology course, moving through the same readings and topics, all of this learning is ultimately focused around one big question: “How does the world work?”

Students are co-creators of every aspect of the simulation, and are asked to harness and leverage the new media environment to find information, theories, and tools we can use to answer our big question. Each student has a specific role and expertise to develop. A world map is superimposed on the class and each student is asked to become an expert on a specific aspect of the region in which they find themselves. Using this knowledge, they work in 15-20 small groups to create realistic cultures, step-by-step, as we go through each aspect of culture in class. This allows them to apply the knowledge they learn in the course and to recognize the ways different aspects of culture (economic, social, political, and religious practices and institutions) are integrated in a cultural system.

In the final weeks of the course we explore how different cultures around the world are interconnected and how they relate to one another. Students continue to harness and leverage the new media environment to learn more about these interconnections, and use the wiki to work together to create the “rules” for our simulation. They face the daunting task of creating a way to simulate colonization, revolution, the emergence of a global economy, war and diplomacy, and environmental challenges. Along the way, they are exploring some of the most important challenges now facing humanity.

The World Simulation itself only takes 75-100 minutes and moves through 650 metaphorical years, 1450-2100. It is recorded by students on twenty digital video cameras and edited into one final “world history” video using clips from real world history to illustrate the correspondences. We watch the video together in the final weeks of the class, using it as a discussion starter for contemplating our world and our role in its future. By then it seems as if we have the whole world right before our eyes in one single classroom: profound cultural differences, profound economic differences, profound challenges for the future, and one humanity. We find ourselves not just as co-creators of a simulation, but as co-creators of the world itself, and the future is up to us.

Managing a learning environment such as this poses its own unique challenges, but there is one simple technique that makes everything else fall into place: love and respect your students and they will love and respect you back. With the underlying feeling of trust and respect this provides, students quickly realize the importance of their role as co-creators of the learning environment and begin to take responsibility for their own education.

New Models of Assessment for New Media Environments: The Next Frontier
All of this vexes traditional criteria for assessment and grades. This is the next frontier as we try to transform our learning environments. When I speak frankly with professors all over the world, I find that, like me, they often find themselves jury-rigging old assessment tools to serve the new needs brought into focus by a world of infinite information. Content is no longer king, but many of our tools have been habitually used to measure content recall. For example, I have often found myself writing content-based multiple-choice questions in a way that I hope will indicate that the student has mastered a new subjectivity or perspective. Of course, the results are not satisfactory. More importantly, these questions ask students to waste great amounts of mental energy memorizing content instead of exercising a new perspective in the pursuit of real and relevant questions.

Of course, multiple-choice questions are an easy target for criticism, but even more sophisticated measures of cognitive development may miss the point. When you watch somebody who is truly “in it,” somebody who has totally given themselves over to the learning process, or if you simply imagine those moments in which you were “in it” yourself, you immediately recognize that learning expands far beyond the mere cognitive dimension. Many of these dimensions were mentioned in the issue précis, “such as emotional and affective dimensions, capacities for risk-taking and uncertainty, creativity and invention,” and the list goes on. How will we assess these? I do not have the answers, but a renewed and spirited dedication to the creation of authentic learning environments that leverage the new media environment demands that we address them.

The new media environment provides new opportunities for us to create a community of learners with our students seeking important and meaningful questions. Questions of the very best kind abound, and we become students again, pursuing questions we might have never imagined, joyfully learning right along with the others. In the best case scenario the students will leave the course, not with answers, but with more questions, and even more importantly, the capacity to ask still more questions generated from their continual pursuit and practice of the subjectivities we hope to inspire. This is what I have elsewhere called “anti-teaching,” in which the focus is not on providing answers to be memorized, but on creating a learning environment more conducive to producing the types of questions that ask students to challenge their taken-for-granted assumptions and see their own underlying biases.

The beauty of the current moment is that new media has thrown all of us as educators into just this kind of question-asking, bias-busting, assumption-exposing environment. There are no easy answers, but we can at least be thankful for the questions that drive us on.
Notes

1. Michael Wesch, “A Vision of Students Today (and What Teachers Must Do),” Encyclopedia Britannica blog, Oct. 21, 2008, http://www.britannica.com/blogs/2008/10/a-vision-of-students-today-what-teachers-must-do/ [return to text]
2. Marshall McLuhan, The Medium is the Massage (New York: Random House, 1967). [return to text]
3. See Clay Shirky, “Ontology is Overrated: Categories, Links, and Tags,” http://www.shirky.com/writings/ontology_overrated.html and David Weinberger, Everything is Miscellaneous: The Power of the New Digital Disorder (New York: Times Books, 2007). [return to text]
4. Michael Wesch, “Anti-Teaching: Confronting the Crisis of Significance,” Education Canada (Spring 2008),
http://www.cea-ace.ca/media/en/AntiTeaching_Spring08.pdf [return to text]
5. Neil Postman and Charles Weingartner, Teaching as a Subversive Activity (Delacorte Press, 1969), 21. [return to text]
6. Thomas Szasz, The Second Sin (Routledge, 1974), 18. [return to text]

The Difference that Inquiry Makes: A Collaborative Case Study on Technology and Learning, from the Visible Knowledge Project

This collection of essays from the Visible Knowledge Project is edited by Randy Bass and Bret Eynon, who served together as the Project’s Co-Directors and Principal Investigators. The Visible Knowledge Project was a collaborative scholarship of teaching and learning project exploring the impact of technology on learning, primarily in the humanities. In all, about seventy faculty from twenty-two institutions participated in the Visible Knowledge Project over five years. Participating campuses included six research universities (Vanderbilt University, the University of Alabama, Georgetown University, the University of Southern California, Washington State University, and the Massachusetts Institute of Technology), four comprehensive public universities (Pennsylvania’s Millersville University, California State University (CSU)–Monterey Bay, CSU Sacramento, and Ohio’s Youngstown State University, along with participants from several four-year colleges in the City University of New York system, including City College, Lehman, and Baruch), and three community colleges (two from CUNY–Borough of Manhattan Community College and LaGuardia Community College–and California’s Cerritos College). In addition to campus-based teams, a number of independent scholars participated from a half dozen other institutions, such as Arizona State and Lehigh University.

The project began in June 2000 and concluded in October 2005. We engaged in several methods for online collaboration to supplement our annual institutes, including an adaptation of the digital poster-tool created by the Knowledge Media Lab (Carnegie Foundation), asynchronous discussion, and web-conferencing. For more detailed information, see the VKP galleries and archives at http://crossroads.georgetown.edu/vkp/. Note: You can find PDF files formatted for printing attached at the end of each article.

Capturing the Visible Evidence of Invisible Learning

This is a portrait of the new shape of learning with digital media, drawn around three core concepts: adaptive expertise, embodied learning, and socially situated pedagogies. These findings emerge from the classroom case studies of the Visible Knowledge Project, a six-year project engaging almost 70 faculty from 21 different institutions across higher education. Examining the scholarly work of VKP faculty across practices and technologies, it highlights key conceptual findings and their implications for pedagogical design.  Where any single classroom case study yields a snapshot of practice and insight, collectively these studies present a framework that bridges from Web 1.0 to Web 2.0 technologies, building on many dimensions of learning that have previously been undervalued if not invisible in higher education.

Reading the Reader

Many teachers wonder what happens (or doesn’t happen) when students read text. What knowledge do students need, gain, or seek when reading? Through VKP’s early emphasis on technology experimentation, Sharona Levy adapted a proven reading method of annotation from paper to computer. Through using the comment feature in Word, students’ reading processes became more transparent, explicit, and traceable, allowing her to diagnose gaps in understanding and to encourage effective reading strategies.

Close Reading, Associative Thinking, and Zones of Proximal Development in Hypertext

How can we teach students to slow down their reading process and move beyond surface-level comprehension? Patricia O’Connor’s Appalachian Literature students co-constructed hypertexts which capture the connections readers make among assigned texts, reference documents, and multimedia sources. These hypertexts became more than artifacts of student work; rather, they became collaborative, exploratory spaces where implicit literary associations become explicit.

Inquiry, Image, and Emotion in the History Classroom

With increased online access to historical sources, will students “read history” differently among such artifacts as text, image, or video? Questioning his own assumptions of students’ abilities to analyze historical sources, Peter Felten conducted pedagogical investigations to understand student interpretation of a variety of sources. Designing the use of visual artifacts in the classroom helped students learn not only how to interrogate and interpret primary sources, but also how to construct original arguments about history. Students’ understanding of history deepened while they became emotionally engaged with the material.

From Looking to Seeing: Student Learning in the Visual Turn

Rather than simply using primary source images as illustrations for his course on Power, Race, and Culture in the U.S. City, David Jaffee wanted to teach his students how to interpret visual texts as a historian would. By paying close attention to his students’ readings of images, Jaffee was able to develop ways to scaffold their analysis, teaching them how to move beyond “looking” at isolated images to “seeing” historical context, connection and complexity.

Engaging Students as Researchers through Internet Use

Effective habits of research begin early and should be practiced often. Unearthing discoveries, making connections, and evaluating judiciously are research traits valued by Taimi Olsen in her first-year composition course. These habits should not be confined to the library; Olsen argues that applying them in online archives hones students’ abilities to become expert researchers.

Trace Evidence: How New Media Can Change What We Know About Student Learning

Clicker technology, often used in large-enrollment science courses, works well when every question has a single right answer. Lynne Adrian wanted to find out whether clickers could be used in disciplines which raise more questions than answers, and how illuminating the gray areas between “right” and “wrong” could help her students think critically about American studies. She found that the technology allowed her to preserve traces of the otherwise ephemeral class discussions, enabling her to analyze the types of questions she was asking in class and to track their effects on students’ written work throughout the semester.

Shaping a Culture of Conversation: The Discussion Board and Beyond

What happens when the discussion board goes from being just an assignment to a springboard for intellectual community? Foreseeing many benefits to cultivating discussion among his English students, Ed Gallagher worked to develop frameworks to articulate why discussion is not only central to the learning process in the classroom but also beyond its walls. A higher level of critical analysis, reflection, and a synthesis of multiple perspectives turned class discussions into artful conversations.

The Importance of Conversation in Learning and the Value of Web-based Discussion Tools

In this essay Heidi Elmendorf and John Ottenhoff discuss the central role that intellectual communities should play in a liberal education and the value of conversation for their students, and they explore the ways in which web-based conversational forums can best be designed to fully support these ambitious learning goals. Coming from very different fields (Biology and English Literature) and different course contexts (a Microbiology course for non-majors and a Shakespeare seminar), they nonetheless discover shared core values and design issues by looking closely at the discourse produced in online discussions. Centrally, they connect what they identify as expert-like behavior to the complexities of intellectual development in conversational contexts.

Why Sophie Dances: Electronic Discussions and Student Engagement with the Arts

Paula Berggren struggled to engage her students in critical thinking about unfamiliar art forms, until she posed a simple question on the class’s online discussion board: “Why do people dance?” She found that the students’ responses, rather than being just less-polished versions of what they might write in formal essays, warranted close analysis in their own right. In subsequent teaching, Berggren continues to incorporate some version of a middle space for student work, which not only increases students’ engagement but also allows her to observe and document their thought processes.

Connecting the Dots: Learning, Media, Community

Sometimes the research question you ask isn’t the one you end up answering. Elizabeth Stephen recounts how a debate about the use of films in a freshman seminar led to an experiment in forming a community of scholars which could be sustained over time and across distances. Creating online spaces for students in this group to share their reflections with one another strengthened the ties among them, while allowing Stephen to analyze the multiple elements, both academic and social, which made this a successful learning community.

Focusing on Process: Exploring Participatory Strategies to Enhance Student Learning

Confronting the challenge of improving student writing in a large sociology class, Juan José Gutiérrez developed a software-based peer review process. He required students to evaluate one another’s papers based on specific criteria and to provide constructive feedback. He found that not only did this process help with the logistics of paper-grading, but it also allowed him to adapt his teaching to address specific concerns indicated by qualitative and quantitative analysis of the peer reviews.

Theorizing Through Digital Stories: The Art of “Writing Back” and “Writing For”

Discovering how digital stories engage students in critical, theoretical frameworks lives at the center of Rina Benmayor’s work. In her course Latina Life Stories, Benmayor asked each student to tell his or her own life story digitally and then situate the story within a theoretical context. While this process engaged students in theorizing creatively, it also allowed her to document methods for recognizing the quality of student work, resulting in a flexible and intuitive rubric she can use beyond this experience.

Video Killed the Term Paper Star? Two Views

Two instructors from separate disciplines discuss what happens when alternative multimedia assignments replace traditional papers. Peter Burkholder found the level of engagement to change dramatically in his history courses while Anne Cross experienced new avenues for talking about sensitive subjects in sociology. Together, both professors explore the advantages and opportunities for video assignments that challenge students to synthesize information in critical and innovative ways.

Producing Audiovisual Knowledge: Documentary Video Production and Student Learning in the American Studies Classroom

Traditionally, academic institutions have segregated multimedia production from disciplinary study. Bernie Cook wondered what his American Studies students would learn from working collaboratively to produce documentary films based on primary sources, and what he in turn might find out about their learning in the process. Students created documentary films on local history, and wrote reflections on their creative and critical process. Not only did students report tremendous engagement with the topics and sources for their projects, they also indicated satisfaction at being able to screen their work for an audience. By allowing his students to become producers of content, Cook enables them to participate fully in the intellectual work of American Studies and Film Studies.

Multimedia as Composition: Research, Writing, and Creativity

Viet Thanh Nguyen reflects on a three-year experiment in assigning multimedia projects in courses designed around the question “How do we tell stories about America?” Determined to integrate multimedia conceptually into his courses, rather than tacking it onto existing syllabi, Nguyen views multimedia as primarily a pedagogical strategy and secondarily a set of tools. Exploring challenges and opportunities for both students and teachers in using multimedia, he outlines principles for teaching with multimedia, and concludes that, while not for everyone, multimedia can potentially create a transformative learning experience.

Looking at Learning, Looking Together: Collaboration across Disciplines on a Digital Gallery

What does it mean for two community college colleagues, teaching in very different disciplines, to work together on a Scholarship of Teaching and Learning (SoTL) project?  What happens when they join together to examine their students’ work, their individual teaching practice, and the possibilities for collaborative research?  And what do they learn when they undertake an electronic publication of that work in a digital gallery?

“It Helped Me See a New Me”: ePortfolio, Learning and Change at LaGuardia Community College

What happens if we shift the focus of our teaching and learning innovations from a single classroom to an entire institution? What new kinds of questions and possibilities emerge? Can an entire college break boundaries, moving from a focus on “what teachers teach” to a focus on “what students learn?” Can we think differently about student learning if we create structures that enable thousands of students to use new media tools to examine their learning across courses, disciplines, and semesters? Bret Eynon explores these questions as he analyzes the college-wide ePortfolio initiative at LaGuardia Community College. Studying individual portfolios and focus group interviews, he also examines quantitative outcomes data on engagement and retention to better consider ePortfolio’s impact on student learning.

From Narrative to Database: Multimedia Inquiry in a Cross-Classroom Scholarship of Teaching and Learning Study

Michael Coventry and Matthias Oppermann draw on their work with student-produced digital stories to explore how the protocols surrounding particular new media technologies shape the ways we think about, practice, and represent work in the scholarship of teaching and learning. The authors describe the Digital Storytelling Multimedia Archive, an innovative grid they designed to represent their findings, after considering how the technology of delivery could impact practice and interpretation. This project represents an intriguing synthesis of digital humanities and the scholarship of teaching and learning, raising important questions about the possibilities for analyzing and representing student learning in Web 2.0 environments.

Multimedia in the Classroom at USC: A Ten Year Perspective

Does multimedia scholarship add academic value to a liberal arts education? How do we know? Looking back at the history of the Honors Program in Multimedia Scholarship at USC, Mark Kann draws on his own teaching experience, discussions with other faculty members, and the university’s curriculum review process to explore these questions. He describes the process of developing the program’s academic objectives and assessment criteria, and the challenges of gathering evidence for his intuitions about the effects of multimedia scholarship. Finally, Kann reports on the program’s first student cohort and looks ahead to the future of multimedia at USC.

Capturing the Visible Evidence of Invisible Learning

by Randy Bass and Bret Eynon

Note: This is a synthesis essay for the Visible Knowledge Project (VKP), a collaborative project engaging seventy faculty at twenty-one institutions in an investigation of the impact of technology on learning, primarily in the humanities. As a matter of formatting to the Academic Commons space, this essay is divided into three parts: Part I (Overview of project, areas of inquiry, introduction to findings); Part II (Discussion of findings with a focus on Adaptive Expertise and Embodied Learning); Part III (Discussion of findings continued with a focus on Socially Situated Learning, Conclusion). A full-text version of this essay is available as a pdf document here.
Here, in this forum as part of Academic Commons, the essay complements eighteen case studies on teaching, learning, and new media technologies. Together the essay and studies constitute the digital volume “The Difference that Inquiry Makes: A Collaborative Case Study of Learning and Technology, from the Visible Knowledge Project.” For more information about VKP, see https://digitalcommons.georgetown.edu/blogs/vkp/.

Déjà 2.0
Facebook. Twitter. Social media. YouTube. Viral marketing. Mashups. Second Life. PBWikis. Digital Marketeers. FriendFeed. Flickr. Web 2.0. Approaching the second decade of the twenty-first century, we’re riding an unstoppable wave of digital innovation and excitement. New products and paradigms surface daily. New forms of language, communication, and style are shaping emerging generations. The effect on culture, politics, economics and education will be transformative. As educators, we have to scramble to get on board, before it’s too late.

Wait a minute. Haven’t we been here before? Less than a decade ago, we rode the first wave of the digital revolution–email, PowerPoint, course web pages, digital archives, listservs, discussion boards, etc. As teachers and scholars, we dove into what is now called Web 1.0, trying out all sorts of new systems and tools. Some things we tried were fabulous. Others, not so much. Can we learn anything from that experience? What insights might we garner that could help us navigate Web 2.0? How can we separate the meaningful from the trivial? How do we decide what’s worth exploring? What do we understand about the relationship of innovations in technology and pedagogy? What can we learn about effective ways to examine, experiment, evaluate, and integrate new technologies in ways that really do advance learning and teaching?
The teaching and research effort of the Visible Knowledge Project (VKP) could be a valuable resource as we consider these questions. Active from 2000 to 2005, VKP was an unusual collective effort to initiate and sustain a discipline-based examination of the impact of new digital media on education. A network of around seventy faculty from twenty U.S. colleges, primarily from American history and culture studies departments, gathered not only to experiment with new technologies in their teaching, but also to document and study the results of their inquiries, using the tools of the scholarship of teaching and learning. In this collaborative and synoptic case study, under the title The Difference that Inquiry Makes, we try to capture and make sense of the visible evidence of this relatively invisible learning as it emerged over five (and more) years of collaborative classroom inquiry. We share participants’ reports on key elements of the VKP inquiry, and integrate their reports into a framework that can help us learn from this experience as we navigate a fast-changing educational landscape.

Invisible Learning
What do we mean by “invisible learning?” We use this phrase to mean at least two things. First, it points us to what Sam Wineburg, in his book Historical Thinking and Other Unnatural Acts, talked about as “intermediate processes,” the steps in the learning process that are often invisible but critical to development.1 All too often in education, we are focused only on final products: the final exam, the grade, the perfect research paper, mastery of a subject. But how do we get students from here to there? What are the intermediate stages that help students develop the skills and habits of master learners in our disciplines? What kinds of scaffolding enable students to move forward, step by step? How do we, as educators, recognize and support the slow process of progressively deepening students’ abilities to think like historians and scholars? In VKP, from the beginning, we tested our conviction that digital media could help us to shine new light on–to make visible–and to pay new attention to these crucial stages in student learning.

Second, by invisible learning we also mean the aspects of learning that go beyond the cognitive to include the affective, the personal, and issues of identity. Cognitive science has made great strides in recent years, scanning the brain and understanding everything from synapses and neurons to perception and memory. Educators are still struggling to grasp the implications of this research for teaching and learning. However, perhaps because it is less “scientific,” higher education has paid considerably less attention to (and is even less well prepared to deal with) the role of the affective in learning and its relationship to the cognitive. How does emotion shape engagement in the learning process? How do we understand risk-taking? Community? Creativity? The relationship between construction of knowledge and the reconstruction of identity? In VKP we explored the ways that digital tools and processes surfaced the interplay between the affective and the cognitive, the personal and the academic.

Visible Evidence
Education at all levels has largely taken on faith that if teachers teach, students will learn–what could be seen as a remarkable, real-life version of “If you build it, they will come.” In recent years, calls for greater accountability have produced a new emphasis on standardized testing as the only appropriate way to assess whether students are learning. Meanwhile, growing numbers of faculty in higher education have taken a different approach, engaging in the scholarship of teaching and learning–using the tools of scholarship to study their own classrooms–to deepen their understanding of the learning process and its relationship to teacher practice. Spurred by the ideas of Ernest Boyer and Lee Shulman of the Carnegie Foundation for the Advancement of Teaching, faculty from many disciplines have posed research questions about student learning, gathered evidence from their classrooms, and gone public with their findings in countless conference presentations, course portfolios, and scholarly journals. This movement, with its focus on classroom-based evidence, provided key tools and language for the Visible Knowledge Project. It allowed VKP faculty to study the impact of new technologies on learning and teaching, and it also helped us frame questions about problems and practice, inquiry and expertise that remain critical as we move into a new phase of technological innovation and change.2

The Visible Knowledge Project
The Visible Knowledge Project emerged in 2000 from the juxtaposition of these two powerful yet largely distinct trends in higher education–the scholarship of teaching and learning movement and the initial eruption of networked digital technologies into the higher education classroom. Responding to a dynamic combination of need and opportunity, faculty engaged in multi-year teaching and learning research projects, examining and documenting the ways the use of new media was reshaping their own teaching and patterns of student learning. Participating faculty came from a wide range of institutions, from community colleges and private liberal arts colleges to research universities; from Georgetown and USC to Youngstown State, the University of Alabama, and City University of New York (CUNY). Meeting on an annual basis, and interacting more frequently in virtual space, we formed research questions spanning a broad spectrum, shared ideas about research strategies, discussed emerging patterns in our evidence, and formulated our findings. The digital resources used ranged from Blackboard and PowerPoint to interactive online archives and Movie Maker Pro. The VKP galleries (https://digitalcommons.georgetown.edu/blogs/vkp/) provide a wealth of background information, including lists of participants, regular newsletters, and reports from more than thirty participants, as well as a number of related resources and meta-analyses.3

The VKP ethos was formed by a belief in the value of messiness, of unfolding complexity, of adventurous, participant-driven inquiry that would inform the nature of the collective conversation. A few scientists and social scientists entered the group and helped create exciting projects, but the vast majority of the participants were from the fields of history, literature, women’s studies and other humanist disciplines. While technology was key to our raison d’être, our inquiries often evolved to focus on issues of pedagogy that transcended individual technologies. We wanted to learn about teaching, to learn about learning. We wanted to go beyond “best practice” and “what worked” to get at the questions about why and how things worked–or didn’t work. In some cases, we went further, rethinking our understanding of what it meant for something to “work.” Our questions were evolving, shaped by the exigencies of time and funding as well as our on-going exchange and new technological developments. We struggled with ways to nuance and realize our inquiries, to come up with workable methods and evidence that matched our changing and, we hoped, increasingly sophisticated questions.

Over the course of the Project, we found that participants’ teaching experiments started to group in three areas:

  1. Reading–Engaging ideas through sources/texts: As VKP took shape at the end of the twentieth century, the great museums, universities, and research libraries of this country were mounting their collections on the Web. Web sites such as the American Memory Collection of the Library of Congress vastly expanded the availability of archival source materials on the Web. It was a time, as Cathy Davidson put it recently, of digitally-driven “popular humanism.”4 Responding to this opportunity, VKP’s historians and culture studies faculty explored the effectiveness of active reading strategies using primary sources, both textual and visual, for building complex thinking. Introducing students to the process of inquiry, faculty tested combinations of pedagogy and technology designed to help students “slow down” their learning, interpret challenging texts and concepts, and engage in higher order disciplinary and interdisciplinary practices. For example, Susan Butler, teaching an introductory history survey at Cerritos College, had her students examine primary sources on different facets of the Trail of Tears, made available online by the Great Smoky Mountains National Park, PBS, and the Cherokee Messenger; as students grappled with perspective and the evolving definition of democracy in America, Butler examined evidence of the ways that scaffolded learning modules that incorporated online primary sources could expand students’ capacity for critical analysis. Meanwhile, Sherry Linkon at Youngstown State used online archives to help students in her English course create research papers that contextualized early twentieth-century immigrant novels.
And Peter Felten at Vanderbilt integrated online texts, photographs and videos into a history course on the 1960s, analyzing the ways students did–or didn’t–apply critical thinking skills to visual evidence. Across the board, the focus was less on “searching” and “finding” than on analyzing, understanding, and applying evidence to address authentic problems rooted in the discipline. Testing innovative strategies, faculty asked students to model the intellectual behaviors of disciplinary experts, focusing earlier and more effectively on the learning dimensions that characterize complex thinking. (For sample projects addressing these questions, see http://cndls.georgetown.edu/crossroads/vkp/themes/poster_showcase_reading.htm)
  2. Dialogue–Discussion and writing in social digital environments: As VKP faculty moved into the world of Blackboard and WebCT, they explored ways that discussion and social writing in online environments can foster learning. Projects explored strategies for using online communication to make the intermediate processes of learning more visible and to provide opportunities for students to develop personal and academic voice. For example, Mills Kelly, teaching a Western Civilization survey at Virginia’s George Mason University, focused on the possibilities of using online tools, including the WebCT discussion board and a special GMU Web Scrapbook, as tools for enhancing collaborative learning. Meanwhile, Ed Gallagher at Lehigh University tested the impact of his detailed and creative guidelines for students in prompting more interactive and substantial discussion in an online context. In general, carefully structured online discussion environments provided students and faculty a context in which to think socially; they also allowed discussion participants to document, retrieve and reflect on earlier stages of the learning process. This ability to “go meta” offered a new way for students and faculty to engage more deeply with disciplinary content and method. Highlighting the scaffolding strategies that might maximize student learning, these projects gathered evidence of learning that reflected the social and affective dimensions of these digitally-based pedagogical practices. (For sample projects, see http://cndls.georgetown.edu/crossroads/vkp/themes/poster_showcase_discussion.htm)
  3. Authorship–Multimedia construction as experiential learning: As multimedia authoring became easier to master in these years, faculty became interested not only in creating multimedia presentations and Web sites; they also sought to develop ways to put these tools into the hands of students. Many VKP scholar-teachers were guided by the constructivist notion that learning deepens when students make knowledge visible through public products. In the projects clustered here, student authorship takes place in various multimedia genres of the early twenty-first century, including digital stories and digital histories, Web sites and PowerPoint essays, historically-oriented music videos, electronic portfolios and other historical and cultural narratives. The emergent pedagogies explored by these scholar-teachers involve multiple skills, points of view, and collaborative activities (including peer critique). For example, Patricia O’Connor had her Appalachian literature students at Georgetown University create Web pages about Dorothy Allison’s Bastard Out of Carolina, annotating particular phrases and creating links to historical sources and images, while she investigated the ways that “associative thinking” shaped students’ ability to make nuanced speculations about literary texts.
    Meanwhile, Tracey Weis at Pennsylvania’s Millersville University and several faculty at California State University at Monterey Bay gathered evidence on the cognitive and emotional impact of student construction of short interpretative “films,” or what we came to call “digital stories.” Examining the qualities of student learning evidenced through such assignments, these projects spotlight issues of assessment and the need to move beyond the narrowly cognitive quiz and the critical research essay to find ways to value creativity, design, affect, and new modes of expressive complexity. (For sample projects, see http://cndls.georgetown.edu/crossroads/vkp/themes/poster_showcase_writing.htm )

Naturally, these three areas of classroom practice–critically engaging primary sources, social dialogue, and multimedia authorship–converged in all kinds of ways. Some of the richest and most intriguing projects engaged students in a scaffolded process of collaborative research and writing, laying the groundwork for multimedia-enhanced performances of their learning. Our fluid categories were defined and redefined by the creativity of our faculty as they experimented within them.

The key to faculty innovations in VKP was not merely trying new teaching strategies but looking closely at the artifacts of student work that emerged from them, not only in traditional summative products such as student writing, but in new kinds of artifacts that captured the intermediate and developmental moments along the way. What did these artifacts look like? They included video evidence of students working in pairs on inquiry questions, as well as student-generated Web archives and research logs; they included careful analysis of discussion threads in online spaces and student reflections on collaborative work; they included not only new forms of multimedia storytelling but evidence of their authoring process through interviews and post-production reflections about their intentions and their learning. One of the consequences emerging from these new forms of evidence was that, as faculty looked more closely and systematically at evidence of learning processes, those processes started to look more complex than ever. The impact of transparency, at least at first, seemed to be complexity, which can be unsettling in many ways.

Pieces of Insight
This phenomenon had a significant impact on the kinds of findings and claims that emerged from this work. We set out looking for answers (“what is the impact of technology on learning?”) and what we mostly found were limited claims about impact, new ways of looking at student learning, and often dynamic new questions. In fact, the VKP projects followed a pattern typical in faculty inquiry.  Whatever the question that initiates the inquiry, it often changes and deepens into something else. For example, Lynne Adrian (University of Alabama) started off investigating the role of personal response systems (“clickers”) in a large enrollment Humanities course to see if the use of concept questions would increase student engagement, but was soon led to reflect much more interestingly on the purpose of questions in class and the very nature of the questions she had been asking for more than twenty years. Similarly, Joe Ugoretz (Borough of Manhattan Community College), in an early inquiry, hoped to study the benefits of a free-form discussion space in an online literature course, but got frustrated because the students would frequently digress and stray off topic; finally it occurred to him that the really interesting inquiry lay in learning more about the nature of digressions themselves, considering which were productive and which were not. The changing nature of questions, and the limited nature of claims, is not a flaw of faculty inquiry but its very nature. John Seely Brown describes the inevitable way that we build knowledge around teaching: “We collect small fragments of data and struggle to capture context from which this data was extracted, but it is a slow process. Context is sufficiently nuanced that complete characterizations of it are extremely difficult. As a result, education experiments are seldom definitive, and best practices are, at best, rendered in snapshots for others to interpret.”5

Here is where the power of collaborative inquiry came into play. That is, what emerged from each individual classroom project was a piece of insight, a unique local and limited vision of the relationship between teaching and learning that yet contributed to some larger aggregated picture. We had, in the microcosm of the Visible Knowledge Project, created our own “teaching commons” in which individual faculty insights pooled together into larger meaningful patterns.6 Each of these snapshots is interesting in itself; together they composite into something larger and significant. What follows below is our effort at putting together the snapshots to create a composite image in which we recognize new patterns of learning and implications for practice.

A Picture of New Learning: Cross-Cutting Findings

Collectively, what emerged from this work was an expansive picture of learning. Although we started out with questions about technology, early on it became clear that the questions were no longer merely about the “impact of tools” on learning; the emergent findings compelled us to confront the very nature of what we recognized as learning, which in turn fed back into what we were looking for in our teaching. Over the years, faculty experienced iterative cycles of innovation in their teaching practice, of reflection on an increasingly expansive range of student learning, and of experimentation shaped by the deepening complexity (and at times befuddlement) that emerged from trying to read the evidence of that learning. From this spiral of activity developed a research framework with broad implications for the now-emergent Web 2.0 technologies. We have come to articulate this range of cross-cutting findings under the headings of three types of learning: adaptive, embodied, and socially situated.

Briefly, by adaptive learning we mean the skills and dispositions that students acquire which enable them to be flexible and innovative with their knowledge, what David Perkins calls a “flexible performance capability.”7 An emphasis on adaptive capacities in student learning emerged naturally from our foundational focus on visible intermediate processes. What became visible were the intermediate intellectual moves that students make in trying to work with difficult cultural materials or ideas, illuminating how novice learners progress toward expertise or expert-like thinking in these contexts.

Our recognition of the embodied nature of learning emerged from this increased attention to intermediate processes–the varied forms of invention, judgment, reflection–when we realized that we were no longer accounting for simply cognitive activities. Many manifestations of the affective dimension of learning opened up in this intermediate space informed by new media, whether it was the way that students drew on their personal experience in social dialogue spaces, or the sensual and emotional dimensions of working with multimedia representations of history and culture. In these intermediate spaces, dimensions of affect such as motivation and confidence loomed large as well. We have come to think of this expansive range of learning as embodied, in that it pointed us to the ways that knowledge is experienced through the body as well as the mind, and how intellectual and cognitive thinking are embodied by whole learners and scholars.

Just as this new learning is embodied, it is also socially situated. Influenced by the range of work on situated learning, communities of practice, and participatory learning, our work with new technologies continually showed us the impact that new forms of engagement through media had on students’ stance toward learning. This effect was not merely a sense of heightened interest due to the novelty of new forms of social learning. Rather, what we were seeing was evidence of the ways that multimedia authoring, for example, constructed for students a salient sense of audience and public accountability for their work; this, in turn, had an impact on nearly every aspect of the authoring process–visible in the smallest and largest compositional decisions. The socially situated nature of learning became a summative value, capturing what Seely Brown calls “learning to be,” beyond mere knowledge acquisition to a way of thinking, acting, and a sense of identity.

These three ways of looking at pedagogies–as adaptive, embodied, and socially situated–together help constitute a composite portrait of new learning. Each helps us focus on a different dimension of complex learning processes: adaptive pedagogies emphasizing the developmental stages linking learning to disciplines; embodied pedagogies focusing on how the whole person as learner engages in learning; and socially situated learning focusing on the role of context and audience. In this sense, the dimensions are overlapping and reinforcing in any particular set of practices. For example, consider Patricia O’Connor’s work making use of Web authoring tools to lead students to engage in close reading of print fiction. Calling the activity “hypertext amplification,” O’Connor asks students to make increasingly sophisticated “associational” connections, to move from novice reading encounters with texts to more expert ones. She wants them to experience “associational thinking” on multiple levels, from the personal and emotional to the definitional and critical. Ultimately, students’ ability to engage fully along a continuum of expert practice is shaped by their knowledge that their Web pages will be public, and their presentations to their peers a social act. All three key dimensions are in play in her teaching practices, as in so many of the case studies coming out of VKP.

Nevertheless, we believe it is a valuable exercise to slow down and look closely at each of the three areas, and to begin making sense of how each dimension might be better understood for its shaping influence on learning. We now explore each of these areas more fully below.

A Note on Findings
Because faculty inquiry lives at the boundary of theory and practice, we have chosen to present the findings in two forms: as conceptual findings (representing the way theory informed practice, and vice versa) and design findings (representing some of the key claims on practice made by these concepts and values about learning). As a further response to the challenge of representing collective findings in a messy research environment, we also present each area with a set of “tags,” keywords that help associate the findings with various trajectories. Finally, at the end of each finding description we link to several relevant case studies within this volume.

[jump to Part II]

Notes
1. Sam Wineburg, Historical Thinking and Other Unnatural Acts (Philadelphia: Temple University Press, 2001). [return to text]
2. Many good resources exist on the scholarship of teaching. Two essential resources can be found at the Carnegie Foundation for the Advancement of Teaching (http://www.carnegiefoundation.org/CASTL/) and the Scholarship of Teaching and Learning tutorial at Indiana University, Bloomington (http://www.issotl.org/tutorial/sotltutorial/home.html). [return to text]
3. In all, more than seventy faculty from twenty-two institutions participated in the Visible Knowledge Project over five years. Participating campuses included six research universities (Vanderbilt University, the University of Alabama, Georgetown University, the University of Southern California, Washington State University, and the Massachusetts Institute of Technology), four comprehensive public universities (Pennsylvania’s Millersville University, California State University (CSU)–Monterey Bay, CSU Sacramento, Ohio’s Youngstown State University, and participants from several four-year colleges in the City University of New York system, including City College, Lehman, and Baruch), and three community colleges (two from CUNY–Borough of Manhattan Community College and LaGuardia Community College, and California’s Cerritos College). In addition to campus-based teams, a number of independent scholars participated from a half dozen other institutions, such as Arizona State and Lehigh University. The project began in June 2000 and concluded in October 2005. We engaged in several methods for online collaboration to supplement our annual institutes, including an adaptation of the digital poster tool created by Knowledge Media Lab (Carnegie Foundation), asynchronous discussion, and Web-conferencing. For more detailed information, see the VKP galleries and archives at http://crossroads.georgetown.edu/vkp/. [return to text]
4. Cathy N. Davidson, “Humanities 2.0: Promise, Perils, Predictions,”  PMLA 123, no. 3 (May 2008): 711. [return to text]
5. John Seely Brown, “Foreword,” in Opening Up Education: The Collective Advancement of Education through Open Technology, Open Content, and Open Knowledge (Cambridge: MIT Press, 2008). [return to text]
6. For a broader discussion of the “teaching commons,” see Pat Hutchings and Mary Huber, The Advancement of Learning: Building the Teaching Commons (San Francisco: Jossey-Bass, 2005). [return to text]
7. David Perkins, “What is Understanding?” in Teaching for Understanding: Linking Research with Practice, ed. Martha Stone Wiske (San Francisco: Jossey-Bass, 1998), 39-58. [return to text]

New Media Technologies and the Scholarship of Teaching and Learning: A Brief Introduction to this Issue of Academic Commons

by Randy Bass, Georgetown University

A Bridge to Know-ware
Higher education traditionally has found few systematic ways to build and share knowledge about teaching and learning. It is not surprising, then, that there has been relatively little interaction between those most interested in new technologies and those invested in the scholarship on teaching and learning. Of course there are examples where the two communities intersect, sometimes for robust conversations. Yet much of this talk stays at the level of individual experimentation and focuses on teaching and classroom practice, with very little attention paid to learning. For whatever reason, the quantity and quality of those conversations are far less than we might hope, given the social impact of new technologies and the growing urgency of conversations around active learning, accountability, and assessment.

So, how do we make any headway in a landscape where applied knowledge about learning is inchoate, where forms of learning are expanding in ways higher education is poorly situated to accommodate, and where technological contexts are shifting rapidly and radically? We need, in short, to merge a culture of inquiry into teaching and learning with a culture of experimentation around new media technologies. Our ability to make the best use of any technologies to improve education hinges ultimately on the reciprocal capacities to bring our powers of inquiry to bear on educational technologies, and to bring the power of new technologies to bear on our methods of inquiry and our representation of knowledge about teaching practice.

Slowing Down and Looking at Learning
In this issue of Academic Commons we take up these questions by looking at the possibilities for building knowledge around teaching and learning in a rapidly changing technological landscape. Through articles, case studies, interviews and roundtables, the January 2009 issue of Academic Commons explores the continuity of learning issues from Web 1.0 to 2.0 technologies, from online discussion tools, hypertext and multimedia authoring to emergent forms of electronic portfolios, blogs, social networking tools, and virtual reality environments. We take these up in the context of a dual challenge: to understand better the changing nature of learning in new media environments and the potential of new media environments to make learning–and faculty insights into teaching–visible and usable.
The issue opens with a bundled set of essays that form a synoptic case study of the Visible Knowledge Project (VKP), a five-year project looking at the impact of technology on learning, primarily in the humanities, through the lens of the scholarship of teaching and learning.  These case studies explore the ways that faculty inquiring into their students’ learning deepened and complicated their understanding of technology-enhanced teaching. Out of these classroom-based insights emerged a set of findings that constitute a research framework, clustering especially around three broad areas:

  1. Learning for adaptive expertise: the role of new media in making visible the thinking processes intrinsic to the development of expert-like abilities and dispositions in novice learners;
  2. Embodied learning: the impact of new media technologies on the expansion of learning strategies that engage affective as well as cognitive dimensions, renewed forms of creativity and the sensory experience of new media, and the importance of identity and experience as the foundation of intellectual engagement; and
  3. Socially Situated learning: the role of social dimensions of new media in creating conditions for authentic engagement and high impact learning.

These broad areas of learning provide a bridge from earlier technology innovation to current new media technologies. They also serve as a way of seeing the capacities of new social media in light of the learning issues intrinsic to disciplinary and interdisciplinary ways of knowing. In this sense, they provide a framework for understanding this period of transformation as one (as Michael Wesch puts it in this issue) where we are shifting from “teaching subjects to subjectivities.” This expansive conception of learning challenges us, then, to cope not merely with technological change but with shifts that are essentially social and intellectual. As Wesch puts it in his commentary on the meaning of these changes, “Nothing good will come of these technologies if we do not first confront the crisis of significance and bring relevance back into education. In some ways these technologies act as magnifiers. If we fail to address the crisis of significance, the technologies will only magnify the problem by allowing students to tune out more easily and completely.”

The six additional vision pieces in this issue all provide different lenses onto this transformation. Two pieces–one by Kathleen Yancey and another that is the transcript of the closing session at the ePortfolio conference at LaGuardia Community College in April 2008–look specifically at the current practices and potential of ePortfolios to provide a site that serves both the needs of students to represent themselves and their learning through an integrative digital space and the needs of institutions to find better ways to understand the progress of student learning and intellectual development. A key element in this transformation is shifting the unit of analysis from the learner in a single course to the learner over time, inside and outside the classroom. What does this shift imply for the ways we understand learning and development? If we accept this new learning paradigm, what kinds of accommodations do we need to make in our approaches to the curriculum, the classroom, the role of faculty, and the empowerment of learners?

Other pieces in this issue consider similar shifts. For example, in an excerpt from their book Opening Up Education: The Collective Advancement of Education through Open Technology, Open Content, and Open Knowledge, Toru Iiyoshi and M. S. Vijay Kumar look at the potential of “open content, open technology, and open knowledge” to transform higher education. “We must develop not only the technical capability but also the intellectual capacity for transforming tacit pedagogical knowledge into commonly usable and visible knowledge: by providing incentives for faculty to use (and contribute to) open education goods, and by looking beyond institutional boundaries to connect a variety of settings and open source entrepreneurs” (Iiyoshi and Kumar, coming in February).

Confronting our Biases
Yet it seems all too clear that higher education is mostly unprepared to make the most of this new potentiality–of open education and an expansive conception of learning. Gathering and sharing knowledge about educational effectiveness is tricky in an environment in which we rush on to the “next new thing,” as new media pedagogies (as with other emergent pedagogies) often lead to forms of learning that do not neatly fit into traditional frameworks of disciplinary learning and cognitive and critical skills. These new forms of learning–including emotional and affective dimensions, capacities for risk-taking and uncertainty, creativity and invention, blurred boundaries between personal and public expression, and the importance of self-identity to the development of disciplinary understanding–traditionally have been invisible in higher education. As Bret Eynon and I point out in our synthesis essay for the Visible Knowledge Project, “when the invisible becomes visible it is often disruptive,” although usually in productive and generative ways.
That theme of generative disruption runs throughout the pieces in this issue, none more so than in Cathy Davidson’s interview about “participatory learning and the new humanities,” where her celebration of the potential for “Humanities 2.0” is counterbalanced by entrenched reluctance to rethink basic practices in our fields, especially around the ways we recognize expertise, collaboration, and creativity. As Davidson puts it (in ways that could speak for most of the authors here),

I guess part of me just doesn’t understand why this isn’t the most exciting time for all of us in our profession and why we aren’t figuring out ways that we can use this to bolster our mission in the world, our methods in the world, our reach in the world, our understanding of what we do and what we have to offer our students in the world? It just feels like we’re in an age where we educators should be the thought leaders and so many of us are futzing around the edges, and I don’t get it.

In this issue of Academic Commons we take the disconnection between experimentation with new media technologies and conversations about learning as a presenting symptom of what Davidson calls “futzing around the edges.” That is, we can only futz because we do not have a vocabulary or a tradition for engaging with learning in meaningful communal ways. In this environment it is especially important to flank classroom-based inquiry with institutional learning, where we can put into practice wide-scale views of learning outcomes as textured as those of faculty who look at learning in their own classrooms. Many of the pieces in this issue provide a starting point for these connections, whether looking at the best institutional practices around electronic portfolios (see Roundtable), or the aspirations of a national project developing flexible rubrics for evaluating the intellectual work of students over time and through diverse intellectual products (“Can We Bridge an Expansive View of Student Learning with Institutional Learning? The VALUE Project Thinks we Can, and Here’s How,” an interview with Terry Rhodes, coming in February), or the visionary specifications for a flexible repository for the scholarship of teaching and learning, linking local expertise to collective wisdom (Tom Carey, John Rakestraw, and Jennifer Meta Robinson, “Expanding the Teaching Commons in Web 2.0: A New Vision for a Scholarship of Teaching and Learning Repository,” coming in February).

From the local to the virtual, from classroom innovation to “opening up education,” this issue of Academic Commons seeks to make a modest contribution to these questions and our collective endeavor toward addressing them. What binds these case studies and vision pieces together are the aggregated bonds of the three dimensions of learning emerging from the VKP framework: expertise, embodiment, and the social. If we could bridge our incipient sense of meaning for these dimensions in student learning with the full social embodiment of our collective expertise as educators, then we would indeed have a bridge to the future.

Acknowledgements: In putting together this issue I want to thank the supervising editors, Mike Roy and John Ottenhoff, for the invitation and opportunity. I also want to thank Lisa Gates, managing editor of AC, for her infinite patience and skill in working with such complicated and multi-faceted content. Many thanks to Pat Hutchings of the Carnegie Foundation for the Advancement of Teaching for her support through the years, and especially her reading of the synthesis essay for this volume. I also want to thank my longtime collaborator, Bret Eynon, for his intellectual and spiritual companionship throughout the process; many thanks also to current and former colleagues at the Center for New Designs in Learning and Scholarship (CNDLS) and the Visible Knowledge Project who worked on dimensions of this issue, especially Theresa Schlafly, Susannah McGowan, Eddie Maloney, and John Rakestraw. -RB

Return to Table of Contents for the January 2009 Issue of Academic Commons

In addition to the articles listed in the Table of Contents, the following are forthcoming:

  • Opening Up Education: The Remix, by Toru Iiyoshi and Vijay Kumar. Excerpts from the book Opening Up Education: The Collective Advancement of Education through Open Technology, Open Content, and Open Knowledge, editors Toru Iiyoshi and M.S. Vijay Kumar (Coming in February)
  • Tom Carey, John Rakestraw, and Jennifer Meta Robinson, Expanding the Teaching Commons in Web 2.0: A New Vision for a Scholarship of Teaching and Learning Repository (Coming in February)
  • Can We Bridge an Expansive View of Student Learning with Institutional Learning? The VALUE Project Thinks we Can, and Here’s How, an Interview with Terry Rhodes  (Coming in February)

Profiles of Key Cyberinfrastructure Organizations

by David Green, Knowledge Culture

We present here a collection of short profiles, specially written for Academic Commons, on key service organizations and networks that are poised to assist and lead others working to bring a rich cyberinfrastructure into play. Some are older humanities organizations for which cyberinfrastructure is a totally new environment; others have been created specifically around the provision of digital resources and support.

We invite your comments and your suggestions for other organizations and networks that you see as key players in providing CI support.

Alliance of Digital Humanities Organizations (ADHO)

American Council of Learned Societies (ACLS)

ARTstor

Council on Library and Information Resources (CLIR)

Cyberinfrastructure Partnership (CIP) & Cyberinfrastructure Technology Watch

Humanities, Arts, Science, and Technology Advanced Collaboratory (HASTAC)

CenterNet

Institute of Museum and Library Services (IMLS)

Ithaka

The Andrew W. Mellon Foundation

National Endowment for the Humanities (NEH)

NITLE

Open Content Alliance

Software Environment for the Advancement of Scholarly Research (SEASR)

The Bates College Imaging Center: A Model for Interdisciplinarity and Collaboration

by Matthew J. Coté, Bates College

The Bates College Imaging and Computing Center (known on campus simply as the Imaging Center) is a new interdisciplinary facility designed to support Bates’s vision of a liberal arts education, as codified by its newly adopted General Education Program. This program reflects the increasingly porous and mutable nature of disciplinary boundaries and emphasizes the effectiveness of teaching writing as a means of improving students’ ability to think, reason, and communicate. The Imaging Center strives to further expand the reach of this program by promoting visual thinking and communication–serving as a catalyst for interdisciplinary and transdisciplinary work. In many ways the Center embodies most of the ideas underpinning Bates’s new General Education Program and is a model on this campus for the kind of transformative work cyberinfrastructure will enable.

Floorplan image courtesy of the Bates College Imaging and Computing Center.

The Imaging Center’s physical space, its imaging resources and its place within the college’s cyberinfrastructure, are all designed to foster interactions between scholars from disparate fields and to further the Center’s goal of promoting visual literacy. Traditional campus structures–whether organizational or architectural–are efficient from the administrative perspective, but often have the unintended consequence of reifying disciplinary boundaries. For example, the spatial grouping of faculty by academic discipline provides few opportunities for faculty from different fields to interact with each other, either purposefully or by happenstance, while doing their work. Such campus structures have significant pedagogical ramifications as well. They encourage students to pigeon-hole ideas and ways of thinking according to academic field rather than inspiring them to find connections between fields of inquiry.
These consequences, of course, are antithetical to the goals of academic programs intended to foster interdisciplinary thinking. To counter these effects, the Bates Imaging Center provides a visually-inviting space available to all members of the campus community. Its array of equipment and instrumentation, and its extensive computer networking, make it the campus hub for collaborative and interdisciplinary projects, especially those that are computationally intensive, apply visualization techniques, or include graphical or image-based components.

Imaging Center Public Gallery (photo courtesy of the Bates College Imaging and Computing Center)

The Imaging Center’s central public gallery provides comfortable seating, readily accessible kiosk computers and wireless networking to encourage faculty and students to use the space for both planned and spontaneous meetings of small groups. To make more public the scholarly activities taking place within the Center, a contiguous array of three large flat-panel LCD monitors displays looped sequences of images created by faculty and students who are using the Center’s resources to support their work. Image sequences include, for example, micrographs obtained using the Center’s microscopes, digital photographs taken by students working in the fine arts, maps generated using GIS mapping software, and animated multidimensional graphs of political data. The sequences are designed to exemplify effective visual communication and to juxtapose work by faculty and students drawn from widely varied disciplines throughout the campus. The display publicizes the scholarly activities taking place within the Center, and by encouraging viewers to think more deeply about the images, cultivates more sophisticated approaches to the images they encounter or create in their own work.

The Center’s gallery is abutted on one side by an imaging lab and on the other by a computer room. The imaging lab contains a digital photography studio and a suite of optical microscope rooms with a shared sample preparation room. To improve the accessibility of work that is conventionally done in isolation, and to make the Center’s resources available to as broad an audience as possible, the microscope rooms are each electronically linked with the computer room. This allows images obtained with the microscopes to be displayed for large groups in real time, complete with two-way audio communication between the microscope operator and the audience.

Imaging Lab (photo courtesy of the Bates College Imaging and Computing Center)



Computer Room (photo courtesy of the Bates College Imaging and Computing Center)

The Imaging Center’s resources are leveraged by a one-gigabit-per-second network that connects the Imaging Center to the campus’s Language Resource Center and the Digital Media Center (the latter supports audio and video work). In this way each center can be physically located for the convenience of its most frequent users, yet large data files and other electronic resources can be readily shared between centers. Local storage of large data sets and images is provided by a two-terabyte storage array.

As the Imaging Center moves forward, its participation in the Internet2 consortium will provide wide bandwidth access to large databases such as those relied upon by users of GIS mapping software and bioinformatics researchers. It will also make it possible for scientists working on the Bates campus to operate specialized instrumentation located at large research institutions and to do so in real time. These capabilities will bring to a small liberal arts college in Maine the unfettered access to databases, equipment and distributed expertise that were formerly available only to those working in large research facilities.

As is true with cyberinfrastructure generally, it’s the Imaging Center’s people that make it work. Two full-time staff members–one with expertise in database management, computer hardware and software development and GIS mapping, and the other a microscopist and photographer with technical training in optics and imaging technologies–bring a wealth of experience to the Imaging Center. They support the Center’s users by training them to use unfamiliar tools and techniques. Some workshops and group training sessions are used for this purpose, but the widely varying schedules and backgrounds of the Center’s users render scheduled, “one size fits all” training sessions insufficient. To complement these offerings, the staff is developing electronic training materials that use embedded hyperlinks to provide the background that some readers might be missing. These documents have the advantages of being readily customized and updated, allowing readers to focus their attention on those aspects of a topic that are particularly pertinent or unfamiliar. Because the documents are available to anyone with Internet access, they can be used whenever and wherever the need arises.

As workers in an ever-expanding range of fields seek to express or explore ideas through expert use of images, and to find and convey meaning in large multidimensional data sets through increased visualization capability, there will be a concomitant demand for improved visual literacy. As a result, acquiring the ability to communicate and think visually will be seen as an integral part of a complete education. This realization has motivated the development of a new type of center whose impact is dramatically enhanced by recent advances in computer power and connectivity. With the Imaging Center providing a practical working model of interdisciplinarity and numerous examples of the power of visualization, Bates is well placed to take advantage of the new directions afforded by a well-deployed cyberinfrastructure.

Managed Cyber Services as a Cyberinfrastructure Strategy for Smaller Institutions of Higher Education

by Todd Kelley, NITLE

Technology and Relationships
The director of the National Science Foundation, Arden Bement, recently stated that “At the heart of the cyberinfrastructure vision is the development of virtual communities that support peer-to-peer collaboration and networks of research and education.”[1] Just as Bement emphasizes networked relationships as an essential component of cyberinfrastructure, I would like to address how small to mid-sized institutions might meet some of the critical challenges of this vision.

I propose that in order to realize the cyberinfrastructure vision, colleges and universities reconsider how they approach technology and technology management, which have become just as important as constructing and maintaining the physical facilities on campus. Providing Internet access, for example, should be seen as a key infrastructure asset that needs to be managed. A robust connection to the Internet is necessary for a successful local cyberinfrastructure; however, it is by no means sufficient. The new cyberinfrastructure should include cyber services that enhance existing organizational relationships and make new ones possible–on a national and global basis. However, for some institutions, deploying and sustaining sophisticated organization-wide tools and infrastructure are complex and risky activities. Smaller institutions often simply cannot implement, sustain, and support these initiatives on their own.

Cyber Services
While college and university libraries were pioneers in using the Internet to provide access to scholarly resources, campuses have rarely used it to access enterprise technology tools. Instead, most campuses have tried to meet these needs by combining their own hardware infrastructure with (mostly) proprietary software systems licensed for the campus, such as Blackboard, ContentDM, and Banner. This approach to learning management systems, repository services, and administrative services may have made sense at a time when the Internet was still in its early stage. It may still make sense for large institutions that have a degree of scale and deep human resources, where the organizational benefits of locating all technology services on campus outweigh the costs.

However, smaller, teaching-centered colleges and universities need an attractive alternative to locating all hardware, software, and the attendant technical support on campus, one that spares them the onus of locating and selecting application service providers and negotiating licenses and support agreements. They also need to avoid becoming trapped by contractual relationships with new vendors or Faustian bargains with technology giants like Google or Microsoft. One option for these institutions is to obtain managed services from organizations such as NITLE, which provide a broad array of professional development and managed information technology services for small and mid-sized institutions. By using such managed services, institutions report that they lower their technology risk and increase the value proposition for technology innovations.

Lowering Technological Risk Encourages Innovation
Typically, there is a high risk of failure when smaller colleges and universities deploy a new technology system, because the technical resources and organizational processes required are simply not part of the primary focus of these organizations. The risk might be mitigated by devoting significant technological resources and organizational focus to altering the infrastructure, in the hope that the institutional culture and processes will adjust to it. But this does not appear to be a wise approach.

When smaller colleges and universities need organizational technology they often:
1) work to identify the most appropriate vendor and negotiate to obtain the technology they need;
2) focus on how the technology works and on how the technical support for it will be provided; and
3) create organizational processes and procedures that attempt to connect the technical work to the perceived need and the promised benefits.

The focus in this process is often the technical or procedural aspects of a project when the institution would be far better served if the emphasis were on the substantive innovations, relationships, and other benefits that technology can provide. Relationships that are about technical issues per se are off-focus, distracting, and ultimately unproductive, relative to organizational mission.

The continuing development of more sophisticated and complex technologies and the increased dependence on them by these institutions will only increase the potential risk of failure for those that do not make a significant commitment to hiring technology specialists. Increased risk thwarts any interest in using technology to innovate, so technology becomes much less interesting and viable as a route to organizational strength and sustainability. The challenge for smaller colleges, then, is to have dependable, secure and innovative cyber services while reducing the risks and resources traditionally associated with creating new technology systems on campus.

Managed Cyber Services
What do managed cyber services look like and how do they work? In the case of NITLE, it aggregates the cyber services needs of smaller colleges and universities and provides managed services via the Internet so that each individual institution does not have to replicate the hardware, software and technical support on campus for each enterprise application that is needed. NITLE does the legwork, finding reliable and cost-effective hosting solutions and negotiating agreements with applications service providers for services and support. Open source applications are used wherever it is viable. Individual campuses do not have to become involved with these processes, as the goal is to provide an easy on-ramp without legal or contractual agreements with participating campuses. There are also opportunities to test services and experiment with them before participants commit to beginning a new service. In addition, NITLE provides professional development opportunities for campus constituents to learn about the functionality and features of the software in the context of campus needs. Moreover, it encourages campus representatives to participate in communities of practice that it supports.

NITLE currently offers four managed cyber services. The criteria used for selecting cyber services include: participants’ needs; the technology benefits; the development path for the technology (including reliability, scalability, and security); and the expectation and understanding that when adopted by peer institutions, the technology will support the learning communities on campus and peer-to-peer collaboration among campuses.

Advantages of Open Source
Colleges are advised to consider open source software (OSS) whenever possible, because OSS offers distinct advantages. The first is the cost savings, as there are no annual licensing fees, and many OSS applications require less hardware overhead, thus helping contain hardware expenditure costs. Second is the support that OSS can provide: a common infrastructure, readily accessible to all, can enable institutions to collaborate more effectively and to focus together on the substantive activities that technology supports.

As a case in point, NITLE provides a repository service using the open source DSpace repository software. The twenty-five colleges and universities that participate in the repository program share their experience and expertise about how the software helps them meet their individual and common goals. Their stated goals include:
1) creating a centralized information repository for information scattered in various difficult-to-find locations;
2) moving archival material into digital formats and making it accessible from one easy-to-access location;
3) bringing more outside attention to the work of students and scholars and thus to the campus;
4) providing the service as a catalyst to help faculty and students begin to learn about and use new forms of publishing and scholarly communication, including intellectual property, open access and publishing rights;
5) preserving digital information.
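Goals 2 through 4 above rest in part on DSpace’s support for OAI-PMH, the standard protocol through which repositories expose their records to harvesters, aggregators, and search engines. As a rough illustration (not NITLE’s actual configuration), the sketch below parses a simplified OAI-PMH ListRecords response and pulls out each record’s Dublin Core title; the endpoint URL in the comment and the sample records are invented for the example.

```python
# Hypothetical sketch: extracting record titles from a DSpace repository's
# OAI-PMH ListRecords response. A live harvest would fetch something like:
#   https://repository.example.edu/oai/request?verb=ListRecords&metadataPrefix=oai_dc
# Here we parse a small inlined sample response instead of making a network call.
import xml.etree.ElementTree as ET

SAMPLE_RESPONSE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Senior Thesis: Visual Literacy</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Campus Archives, 1890-1920</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

# Dublin Core namespace, used in ElementTree's fully qualified tag form.
DC = "{http://purl.org/dc/elements/1.1/}"

def extract_titles(xml_text):
    """Return the dc:title of every record in an OAI-PMH ListRecords response."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter(DC + "title")]

titles = extract_titles(SAMPLE_RESPONSE)
```

Because the interface is a standard rather than a vendor API, the same few lines of harvesting logic work against any participating campus repository, which is one concrete way a shared open infrastructure supports the cross-campus collaboration described above.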

According to one participating organizational representative to NITLE,

“the open-source approach is definitely helpful in terms of cost. Having [a dependable vendor] administer the hardware and software has been wonderful, since we can concentrate on the applications and not worry about the technical end….Having colleagues from similar schools work on this project has been beneficial, since we can play off of their strengths and weaknesses. They have also given us some good ideas for projects.”*

Another participating organizational representative has added,

“The open source nature of the software is important to us because we know that we are not locked into a closed proprietary system whose existence depends upon the livelihood of a software company. Furthermore, we wouldn’t have gotten started with d-Space on our own because of the infrastructure we’d have to provide to get it going. We don’t have the staff with the skills needed to handle the care and feeding of the server or to customize the software to our needs through application development. Having that part out of the way has given us the opportunity to focus on creating the institutional repository rather than being mired in technical detail of running the software.”*

Open technologies are more than a path to cost savings. They are a critical condition for innovation, access, and interoperability. Many colleges are using OSS for critical operations, including email (Sendmail), web serving (Apache), and operating systems (Linux). This use suggests a growing acceptance and adoption of OSS, which can leverage economies of scale, support network effects, and dramatically increase the speed of innovation.

There is, however, still resistance to making OSS the default approach to meeting organizational software needs. There are several reasons for this opposition, including the perception of OSS as hacking, the historical lack of accessible technical support, and a paucity of documentation that has steepened the learning curve. Many have long recognized the potential of OSS but were reluctant to pursue it because of the increased need for specialized technical support on campus: for every OSS system, an institution would need to find and hire a technical specialist to support it. That approach is certainly not scalable, and smaller institutions were right to avoid it.

Multipoint Interactive Videoconferencing (MIV)

Another example of cyber service that institutions should consider is Multipoint Interactive Videoconferencing (MIV). MIV systems enable participants to communicate visually and aurally in real time through the use of portable high-resolution (and inexpensive) cameras and microphones attached to their computers. Participants can see and hear each other, not only on a one-to-one basis, but one-to-many as well. MIV is not a completely new technology; however, its enhanced level of functional maturity, the reduction in costs to provide it, and the need for such systems, have made MIV a technology that is on the verge of widespread adoption and use in a variety of settings.

In the winter and spring of 2007, a dozen participant colleges agreed to evaluate the use of MIV on their campuses and provide NITLE with feedback on the application and their perceptions of its utility. During this evaluation period, participant institutions discovered many types of needs for this technology, both for on-campus and off-campus communications. Uses included guest lectures, meetings among remotely located faculty, and connections with students studying abroad. Since this assessment, NITLE has used MIV for:

1) facilitated conversations led by one or two practitioners among a group of practitioners in an area of common interest, such as incubating academic technology projects or the application of learning theory to the work of academic technologists;
2) presentations by individuals who are using technologies of interest in their classrooms or other campus work to groups of others interested in whether and how they might do something similar, such as historians using GIS;
3) presentations by experts on topics of interest to others in their professional field, such as the academic value of maps;
4) technology training for the participants and users of the cyber services that NITLE offers.

The experience of MIV service participants suggests that the adoption of MIV may be most successful when placed in the context of next steps, developing relationships, individual experience and expertise, and common goals and objectives. This premise suggests learning and collaborative environments that include the use of MIV as part of a range of learning and communications options. The pilot study documented the many positive benefits that participants have experienced. However, these benefits are a fraction of what can be realized when many more institutions participate, because of network effects and because participants may use the MIV service to collaborate with other organizations outside of the opportunities organized by NITLE.

The “Open” Movement
The promise of information technology cannot be met when only large, powerful, and for-profit IT organizations are in control. Open access, open courseware and open source initiatives point toward a world where there is a level playing field for individual learning and organizational innovation by not-for-profit institutions. Where just a few years ago it was difficult to name more than a few organizations that provide technical support for open source applications, the number of service providers is growing. Identifying these providers, selecting the best ones and negotiating agreements–these are the important challenges for managers of cyber services. Providers report that it is often financially unfeasible for them to market to and negotiate with individual institutions for providing cyber services. Creating a reliable and scalable approach to cyber services that works for colleges and providers alike would seem to be an important advance for smaller institutions, both individually and as an important segment of higher education.

The open movement is not about software tools alone, as Arden Bement noted in his comments about the importance of virtual communities. Success depends upon achieving a balance among essential human, organizational and technological components. The potential benefits of the open movement will accrue to colleges and universities that collaborate through using a common set of tools, actively participate in peer information networks and make a priority of mission-focused knowledge and skills. Many institutions recognize that the value of peer-to-peer communities will increase in proportion to their investment in all three of these components. The question may ultimately center on how to support these activities in a systematic and sustainable fashion. This is where small and mid-sized institutions may want to innovate in their approach to technology management.

Collaborative Relationships Foster Organizational Strength and Learning
Technology that supports widespread virtual collaboration among smaller colleges and universities such as the repository and MIV services described above demonstrates the potential power of cyber services to enhance organizational innovation, learning and productivity. These peer communities of practice allow campuses to: 1) exchange information about usage, technical issues and support; 2) learn from one another; and 3) synchronize their efforts to use technology to promote shared goals and processes. Having campuses work together and share knowledge as they engage with enterprise systems is a crucial part of the equation. The community of smaller colleges and universities needs a robust organization for that collaboration to happen. Organizations such as NITLE can help fill this need, while also providing opportunities for community participation and encouraging institutions to play lead roles in needs identification, service development, and training and education. As one participant has stated, participation in a managed cyber service is “an opportunity for a group of us to make a leap forward and learn from each other along the way. In addition, [our participating college] saw it as an opportunity to overcome our geographic isolation…I think we have the potential to achieve something tremendous that we will all be proud of.”*
Summary
Technology seems to be much more compelling to smaller colleges and universities–and more cost-effective as well–when it provides substantive benefits while the procedural and instrumental aspects of technology innovation are kept under control. This is not to say that technical expertise at smaller institutions is not necessary or that all cyberinfrastructures should be moved off campus. These extreme changes would be neither productive nor prudent. By working collectively, smaller colleges can use managed services to more effectively apply advanced technologies. Bringing institutions with common needs together in a shared organizational network and aggregating many of their common technology needs through cyber services seems to be a powerful idea. Participating campuses can then provide the scope and scale of programs and services that larger institutions provide while retaining their intimacy and sense of community, and also controlling costs. At the same time, a strong foundation is created both technologically and organizationally for the type of cross-institutional endeavors and learning communities that can help smaller institutions promote scholarship that is vital and attractive to students and faculty alike. When common goals are met in cost effective ways, mission is strengthened for all.

[1] “Shaping the Cyberinfrastructure Revolution: Designing Cyberinfrastructure for Collaboration and Innovation,” First Monday, volume 12, number 6 (June 2007), http://firstmonday.org/issues/issue12_6/bement/index.html. Accessed September 26, 2007.

* Responses to a survey administered by the author to a subset of NITLE participating organizations during July of 2007.

Cyberinfrastructure and the Sciences at Liberal Arts Colleges

by Francis Starr, Wesleyan University

Introduction
The technical nature of scientific research led to the establishment of early computing infrastructure, and today the sciences are still pushing the envelope with new developments in cyberinfrastructure. Education in the sciences poses different challenges, as faculty must develop new curricula that incorporate and educate students about the use of cyberinfrastructure resources. To be integral to both science research and education, cyberinfrastructure at liberal arts institutions needs to provide a combination of computing and human resources. Computing resources are a necessary first element, but without the organizational infrastructure to support and educate faculty and students alike, computing facilities will have only a limited impact. A complete local cyberinfrastructure picture, even at a small college, is quite large and includes resources like email, library databases and on-line information sources, to name just a few. Rather than trying to cover such a broad range, this article will focus on the specific hardware and human resources that are key to a successful cyberinfrastructure in the sciences at liberal arts institutions. I will also touch on how groups of institutions might pool resources, since the demands posed by the complete set of hardware and technical staff may be larger than a single institution alone can manage. I should point out that many of these features are applicable to both large and small universities, but I will emphasize those elements that are of particular relevance to liberal arts institutions. Most of this discussion is based on experiences at Wesleyan University over the past several years, as well as plans for the future of our current facilities.

A brief history of computing infrastructure
Computing needs in the sciences have changed dramatically over the years. When computers first became an integral element of scientific research, the hardware needed was physically very large and very expensive. This was the “mainframe” computer and, because of the cost and size, these machines were generally maintained as a central resource. Additionally, since this was a relatively new and technically demanding resource, it was used primarily for research rather than education activities.

The desktop PC revolution started with the IBM AT in 1984 and led to the presence of a computer on nearly every desk by the mid-1990s. The ubiquity of desktop computing brought tremendous change to both the infrastructure and uses of computational resources. The affordability and relative power of new desktops made mainframe-style computing largely obsolete. A computer on every desktop turned users into amateur computer administrators. The wide availability of PCs also meant that students grew up with computers and felt comfortable using them as part of their education. As a result, college courses on programming and scientific computing, as well as general use of computers in the classroom, became far more common.

Eventually, commodity computer hardware became so cheap that scientists could afford to buy many computers to expand their research. Better yet, they found ways to link computers together to form inexpensive supercomputers, called clusters or “Beowulf” clusters, built from cheap, off-the-shelf components. Quickly, the size of these do-it-yourself clusters grew very large, and companies naturally saw an opportunity to manufacture and sell them ready-made. People no longer needed detailed technical knowledge of how to assemble these large facilities; they could simply buy them.

This widespread availability of cluster resources has brought the cyberinfrastructure needs full circle. The increasing size, cooling needs, and complexity of maintaining a large computing cluster have meant that faculty now look to information technology (IT) services to house and maintain cluster facilities. Maintaining a single large cluster for university-wide usage is more cost effective than maintaining several smaller clusters and reduces administrative overhead. Ironically, we seem to have returned to something resembling the mainframe model. At the same time, the more recently developed desktop support remains critical. As technology continues to progress, we will doubtless shift paradigms again, but the central cluster would appear to be the dominant approach for at least the next five years.

Hardware resources
The cluster is the central piece of hardware–but what makes up the cluster? How large a cluster is needed? Before we can address the question of size, we should outline the key elements. This becomes somewhat technical, so some readers may wish to skip the next five paragraphs.

First, there is the raw computing power of the processors to consider. This part of the story has become more confusing with the recent advent of multiple core processors. In short, a single processor may have 2, 4 or, soon, 8 processing cores, each of which is effectively an independent processor. This does not necessarily mean it can do a task faster, but it can perform multiple tasks simultaneously. Today, I think of the core as the fundamental unit to count, since a single processor may have several cores, and a single “node” (physically, one computer) may have several processors. For example, at Wesleyan, we recently installed a 36-node cluster, each node having 2 processors and each processor having 4 cores. So while a 36-node cluster may not sound like much, it has packed into it 288 computing cores.
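The core count is simple multiplication: nodes, times processors per node, times cores per processor. A minimal sketch in Python, using the figures quoted above:

```python
# Total compute cores = nodes x processors per node x cores per processor.
# The figures below match the Wesleyan cluster described in the text:
# 36 nodes, 2 processors per node, 4 cores per processor.
def total_cores(nodes: int, processors_per_node: int, cores_per_processor: int) -> int:
    """Return the total number of processing cores in a cluster."""
    return nodes * processors_per_node * cores_per_processor

print(total_cores(36, 2, 4))  # -> 288
```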

This high density of computing cores has several advantages: it decreases the footprint of the cluster; decreases cooling needs; and decreases the number of required connections. For the moment, let’s focus on connectivity. The speed of connections between computers is glacial in comparison to the speed of the processors. For example, a 2-GHz processor does one operation every 0.5 nanoseconds. To get an idea of how small an amount of time this is, consider that light travels just about 6 inches in 0.5 nanoseconds. The typical latency–the time lost to initiate a transmission–of a wired ethernet connection is in the range of 0.1-1 milliseconds, or roughly 200,000 to 2,000,000 clock cycles of the processor. Hence, if a processor is forced to wait for information coming over a network, it may spend a tremendous number of cycles twiddling its thumbs, just due to latency. Add the time for the message to transmit, and the problem becomes even worse. Multiple cores may help limit the number of nodes, and therefore reduce the number of connections, but the connectivity problem is still unavoidable. So what to do?
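The cycle-counting argument is easy to verify with a few lines of arithmetic. A sketch, where the 0.5 ns cycle time corresponds to the 2-GHz processor in the example:

```python
# Clock cycles wasted while a processor waits out a network latency.
# A 2 GHz processor completes one cycle every 0.5 nanoseconds, so an
# ethernet latency of 0.1-1 ms costs hundreds of thousands to millions
# of cycles before any data even arrives.
def cycles_lost(latency_ns: float, cycle_ns: float = 0.5) -> float:
    """Clock cycles that elapse during a latency; both arguments in nanoseconds."""
    return latency_ns / cycle_ns

print(cycles_lost(100_000))    # 0.1 ms = 100,000 ns -> 200,000 cycles
print(cycles_lost(1_000_000))  # 1 ms -> 2,000,000 cycles
```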

The answer depends on the intended usage of the cluster. In many cases, users want to run many independent, single process, or serial, tasks. In this case, communication between the various pieces is relatively unimportant, since the vast majority of the activity is independent. Ordinary gigabit ethernet should suffice in this situation and is quite cheap. If the usage is expected to include parallel applications, where many cores work together to solve a single problem faster, it may be necessary to consider more expensive solutions. However, given that it is easy to purchase nodes containing 8 cores in a single box, these expensive and often proprietary solutions are only needed for rather large parallel applications, of which there are relatively few.

All this processing power is useless, however, without a place to store the information. This is most commonly achieved by hard disks that are bundled together in some form, though for the sake of simplicity, they appear to the end user as a single large disk. These bundles of disks can easily achieve storage sizes of tens to hundreds of terabytes, a terabyte being 1000 gigabytes. The ability to store such large amounts of information is particularly important with the emergence in the last decade of informatics technologies, which rely on data-mining of very large data sets.
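Aggregate storage scales the same way: the number of disks times the per-disk size. A sketch with hypothetical disk counts (real arrays give up some raw capacity to redundancy):

```python
# Raw capacity of a bundle of disks presented to the user as one
# logical volume. The disk count and size here are hypothetical,
# chosen only to illustrate the tens-of-terabytes scale in the text.
def raw_capacity_tb(n_disks: int, disk_tb: float) -> float:
    """Total raw storage in terabytes (1 TB = 1000 GB)."""
    return n_disks * disk_tb

print(raw_capacity_tb(48, 2.0))  # 48 two-terabyte disks -> 96.0 TB raw
```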

The last, and sometimes greatest, challenge is housing and cooling the cluster. Even with the high density of computing cores, these machines can be large and require substantial cooling. A dedicated machine room with supplemental air conditioning is needed, typically maintained by an IT services organization. Fortunately, most IT organizations already have such a facility, and with the decreasing size of administrative university servers, it is likely that space can be found without major building modifications. However, do not be surprised if additional power or further boosting of cooling is needed. The involvement of the IT organization is critical to the success of infrastructure. Accordingly, it is important that IT services and technically-inclined faculty cultivate a good working relationship in order to communicate effectively about research and education needs.

OK, but how big?
Given these general physical specifications for the key piece of hardware, the question remains, how big a cluster? Obviously the answer depends on the institution, but I estimate 3 or 4 processing cores for each science faculty member. An alternate and perhaps more accurate way to estimate is to consider how many faculty members are already heavy computational users and already support their own facilities. I would budget about 50 cores for each such faculty member, though it is wise to more carefully estimate local usage. Part of the beauty of a shared facility is that unused computing time that might be lost on an individual faculty member’s facility can be shared by the community, reducing the total size of the cluster necessary to fulfill peak needs.
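The two rules of thumb above reduce to a quick estimate. A sketch in Python, where the faculty counts passed in are hypothetical and the per-head core figures come from the text:

```python
# Two sizing heuristics for a shared campus cluster:
#   1) roughly 3-4 cores per science faculty member overall, or
#   2) about 50 cores per faculty member who is already a heavy
#      computational user.
# The faculty counts used below are hypothetical examples.
def size_by_faculty(science_faculty: int, cores_each: int = 4) -> int:
    """Estimate cluster size from the total science faculty headcount."""
    return science_faculty * cores_each

def size_by_heavy_users(heavy_users: int, cores_each: int = 50) -> int:
    """Estimate cluster size from the number of existing heavy users."""
    return heavy_users * cores_each

print(size_by_faculty(60))     # 60 faculty -> 240 cores
print(size_by_heavy_users(5))  # 5 heavy users -> 250 cores
```

The two estimates often land in the same range, which is one reason the heuristics are useful as a cross-check on each other.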

Software needs tend to be specialized according to the intended uses, but it is important to budget funds for various software needs, such as compilers and special purpose applications. The Linux operating system is commonly used on these clusters and helps to keep down software costs since it is an open source system. For many scientific computing users, Linux is also the preferred environment regardless of cost.

The cluster itself is of limited use without the human resources–that is, the technical staff–to back it up. At a minimum, a dedicated systems administrator is needed to ensure the smooth operation of the facility. Ideally, the administrator can also serve as a technical contact for researchers to assist in the optimal use of the cluster facility. However, to make the facility widely accessible and reap the full benefit for the larger university community, a more substantial technical support staff is needed.

The human element: resource accessibility
The presence of a substantial cluster is an excellent first step, but without additional outreach, the facility is unlikely to benefit anyone other than the expert users who were previously using their own local resources. Outreach is key and can take a number of forms.

First, faculty who are expert in the use of these computer facilities need to spearhead courses that introduce students to the use and benefits of a large cluster. This will help build a pool of competent users who can spread their knowledge beyond the scope of the course. This effort requires little extra initiative and is common at both liberal arts and larger universities.

Second, it is particularly important in a liberal arts environment to develop and sustain a broad effort to help non-expert faculty take advantage of this resource for both research and educational purposes. Otherwise, the use of these computers will likely remain limited to the existing expert faculty and the students whom they train.

Outreach across the sciences can also take the form of a cross-disciplinary organization. At Wesleyan, we established a Scientific Computing and Informatics Center, with the goal of both facilitating the use of high-performance computing and supporting course initiatives that use computational resources. The center is directed by a dedicated coordinator, who is not burdened with the technical duties of the systems administrator, and is assisted by trained student tutors.

The first goal of the center, facilitating cluster use, is primarily research-oriented. That is, the center serves as a resource where faculty and students can seek assistance or advice on a range of issues–from simple tasks like accessing the resources to complex problems like optimization or debugging complex codes. In addition, the center offers regular tutorials on the more common issues, making broader contact across the institution.

The second goal–educational outreach–is particularly important for liberal arts institutions. Educational outreach deals with all aspects of computational activities in the curriculum, not just cluster-based activities. For example, if a faculty member wishes to make use of computational software, the center staff will offer training to the students in the course, thereby leaving class time to focus on content. The center staff will also be available for follow-up assistance as the need arises. This eliminates the problem of trying to add or include training for computational resources in existing courses.

But efforts should not stop at this level. While we are still in the early stages of our experiment at Wesleyan, I believe that such a support organization will not have a significant impact if it simply exists as a passive resource. The center must actively seek out resistant faculty and demonstrate through both group discussions and one-on-one interactions how computational resources can enhance their teaching activities.

To maintain the long-term vitality of this kind of center, it is important to maintain a group of trained and motivated student tutors. To do this, we have chosen to offer students summer fellowships to work on computationally demanding research projects with faculty. Some of these students then serve as tutors during the academic year. Combined with this summer program are regular lecture and tutorial activities. These tutorials may also be expanded to reach beyond the bounds of the university to other institutions as workshop activities.

Cross-institutional collaboration
Sometimes, all of these goals can be met by a single institution. But even if this is possible, there are still benefits to looking outside the institution. And for smaller institutions, pooling resources may be the only way to develop an effective cyberinfrastructure.

While high-speed networks now make it technically possible to establish inter-institutional efforts across the country, it is important to be able to gather together a critical mass of core users who can easily interact with each other. In my own experience, this happens more easily when the users are relatively nearby, say no more than 100 miles apart. Proximity means that institutions can share not only the hardware resources over the network, but also the technical support staff. Of course, day-to-day activity is limited to interaction within an institution or virtual communications between institutions, but frequent and regular person-to-person interaction can be established at modest distances.

Balancing individual institutional priorities in such a collaboration is obviously a delicate process, but I envision that the institution with the most developed IT services can house and maintain the primary shared hardware resource, thereby reducing the administrative needs across several institutions. Adequate access to facilities can be guaranteed by taking advantage of the fact that most states maintain high-speed networks dedicated for educational usage. In addition, there are many connections between these state networks, such as the New England Regional Network. Personal interactions can be facilitated by regular user group meetings where users can share their questions and concerns with an audience that extends beyond their institution. In addition, new electronic sharing tools, such as wikis and blogs, can help foster more direct virtual communications.

Summary
To have a successful cyberinfrastructure in the sciences, it is essential to develop both hardware and human resources. Personal support and outreach to faculty and students is crucial if the benefits of the infrastructure are to serve a wider clientele. For liberal arts institutions, the presence of state-of-the-art infrastructure helps them to compete with larger institutions, both in terms of research and in attracting students interested in technology. At the same time, emphasizing outreach is of special importance to achieve the educational goals that make liberal arts institutions attractive to students.

Acknowledgments
I wish to thank Ganesan Ravishanker (Associate Vice President for Information Technology at Wesleyan University) and David Green for their assistance preparing this article.

College Museums in a Networked Era–Two Propositions

by John Weber, Skidmore College

To begin, let’s take it as a given that the “cyberinfrastructure” we are writing about in this edition of Academic Commons is both paradigmatically in place, and yet in some respects technologically immature. The internet and the intertwined web of related technologies that support wired and wireless communication and data storage have already altered our ways of dealing with all manner of textual and audiovisual experience, data, modes of communication, and information searching and retrieval. Higher education is responding, but at a glacial pace, particularly in examining new notions of publishing beyond those which have existed since the printed page. Technologies such as streaming and wireless video remain crude, but digital projectors that handle still image data and video are advancing rapidly, and the gap between still and video cameras continues to close. Soon I suspect there will simply be cameras that shoot in whatever mode one chooses (rather than “camcorders” and “digital cameras”), available in a variety of consumer and professional versions and price points. Already, high definition projectors and HD video are a reality, but they have yet to permeate the market. They will soon, with a jump in image quality that will astonish viewers used to current recording and projection quality.

For museums, network and CPU speed, as well as screen and projection resolution, are key aspects of these technologies. Only recently have digital images caught up with analog film in resolution and potential color accuracy (which, lest we forget, was never a given with film, either). The digitization of museum collections and their placement on higher education or public networks is undoubtedly a meaningful teaching asset, but the impact of this shift is, I suspect, largely a matter of ease and logistics, wherein the information provided replicates existing resources without fundamentally changing the knowledge gained from them. In other words, slide collections and good research libraries already provided much of the museum collection information now present on the internet. Yes, we should all be documenting our collections and making it easier for faculty and students to use those images, but no, that activity alone will not change our world in and of itself. Combined with an aggressive program to foster collection use by faculty and students, it can accomplish a great deal for a college museum, but we can and should aim even higher.

With that preface in place, let’s consider the museum, the internet, and the college curriculum as structuring conditions governing the nature of human experience that can occur within their boundaries. Museums are traditionally and fundamentally concerned with unique objects and notions of first-hand experience tied intrinsically to one specific place. There is only one Mona Lisa, and one Louvre where you can see it. There is only one Guggenheim Bilbao, and to see it you must go to Spain. In contrast, the internet is fundamentally about the replication and distribution of whatever it touches or contains, made available all the time, everywhere. And the internet continues to extend its reach, now arriving in phones, cafés, hotel rooms, airports, and no doubt soon on plane flights: ubiquitous computing, 24/7. The two conditions could not seem to be more distinct, disparate, and opposed.

Now let’s consider the nature of the college curriculum, briefly, as a structuring condition for experience and learning. Like the internet, it relies fundamentally on the reproducibility and distributability of the knowledge it seeks to offer each new generation of students. Courses are offered more than once. Books are read again and again. Disciplines must be taught in a way that adequately reproduces accepted standards and thereby transfers credits, reputations, and ultimately knowledge from grade to grade, classroom to classroom, and institution to institution, across time. The notion of a unique, one-time course is at best a luxury, at worst a foolish expenditure of time and effort–for faculty, if not for students. Shortly after becoming director of the Frances Young Tang Teaching Museum and Art Gallery at Skidmore College, for example, I overheard a tenured, senior faculty member remark over lunch that no one could compel him to create a course he would be able to teach just once. To him, the notion was absurd and counterproductive. His point was obvious: since it always takes more than one attempt to get a course developed and refined for a given student community and college culture, creating courses you can offer only once is simply not an intelligent way to teach, even if the demands of establishing a consistent curriculum would allow it, which of course they don’t.

As a new college museum director and recent emigrant from the world of the large, urban museum, I found it an instructive moment, and as someone who had periodically taught at the college level, I saw that it made perfect sense. Yet at the same time, I was mildly taken aback: museums routinely create “courses” (i.e. exhibitions) that they “teach” (i.e. present) only once. The one-time special exhibition is, in fact, arguably our bread and butter. Even museums with world-class collections (e.g. New York’s Metropolitan Museum of Art or the Museum of Modern Art) rely on special, one-time exhibitions to drive attendance, increase membership, build revenue, and underwrite their economic survival. How, then, can college museums effectively link a program of changing exhibitions to the rhythm of a college curriculum?

How, in essence, can we “teach” the exhibition after it has left the gallery? How can we marry the one-time encounter with a set of unique objects to the cyclical, repeating demands of curriculum? These are central questions for college museums as they are asked increasingly to play a more significant role in the teaching efforts of the institutions that house and foster them. In fact, they may well be the central questions, since without answering them college museums are unlikely to achieve a new degree of relevance and support within their institutional context.

Paradoxically (in view of how I started this article) I’m going to argue that the best available answers are to be found in the creative use of new technologies and the internet. Networked multimedia technologies and the maturing cyberinfrastructure can’t fully reproduce the one-time experience offered by the museum space and the museum exhibition, but they can go much farther toward capturing its unique spatial, temporal, multimodal, three-dimensional impact than any previously available publication method or recording device.

Now let’s examine the nature of exhibitions and museum installations themselves; I’m using art museums as my test case, but much or most of what I’m saying should apply to other kinds of institutions and subject matters. Museum exhibitions exist in space, and by that I mean three-dimensional space. They house and assemble discrete groups of objects, arranged by curators to create or emphasize meaning through juxtaposition, sequence, and context. Wall texts, lectures, publications, docent and audio tours have been the primary means of sharing with museum visitors the curatorial intentions driving exhibitions and the insights gleaned in the course of assembling them. Over the past decade and more, museums have experimented increasingly with interactive kiosks, websites, and more recently podcasts as ways to share insights, ideas, and background information relevant to the work on view. College museums have participated in this exploration, but only rarely led it.[1] I suspect this has to do in part with the relatively small size of education departments in college museums, combined with an orientation toward “scholarship” that finds its preferred outcome in printed matter, i.e. the scholarly catalogues valued by faculty curators, rather than in “visitor outreach” so conceived as to motivate and underwrite digital programming.

An additional factor slowing digital innovation in college museums may be the fact that the IT and academic computing staff on college campuses–which could in theory assist museums in the creation of digital learning programs–are generally beset by huge demands from across campus. Only rarely can they devote extended blocks of time and significant resources to their resident museums. Yet the presence of such theoretically available staff makes it difficult for college museum directors to argue for dedicated, in-museum staff devoted to digital matters. As a result, we attempt to piece together project teams from existing staff, work-study students, and interns–a mix that seldom attains the degree of hands-on experience, longevity, or programming expertise needed to create truly new, exceptional programs. This Catch-22 is, I suspect, not a trivial issue.

In contrast, large urban museums such as the National Gallery, London, the Minneapolis Institute of Art, the San Francisco Museum of Modern Art, New York’s Museum of Modern Art, and recently the Los Angeles Museum of Contemporary Art, among many others, have created groundbreaking interactive educational programming by hiring dedicated staff and devoting significant fiscal resources to their efforts. Generally, those institutions have relied on a balance of in-house staff and high-powered (but modestly compensated) outside programming and design firms.

Yet despite the relative lack of resources that college museums devote to their digital education efforts, the potential rewards of doing so are significant. In particular, I believe that college museums also have a vested interest in exploring an area of digital programming that remains largely untouched by their civic counterparts, namely, the creation of rich multimedia documentation and multilayered, interactive responses to exhibitions themselves, after they have opened. Such programming would focus not only on the basic content of the exhibition (i.e. the individual images and objects in it) but on the physical exhibition itself as a carefully considered form of content and utterance. Such programming would take full advantage of the completed exhibition as the arena for both documenting and interrogating the set of propositions, insights, and ideas expressed in its physical layout and checklist. It would survey curatorial, scholarly, and lay responses to the completed show, allowing insights gained in the final installation and post-facto contemplation of the exhibition to emerge over time. Finally, such an approach would offer real-time walk-throughs of the exhibition, as well as high-quality, 360-degree still images, providing future virtual visitors a strong, visceral sense of what it felt like to be in the galleries with the work, looking from one object to another, moving through space, and getting a sense of the way the curator used the building and its architecture.[2] Although simple and straightforward, this practice has rarely been explored due to the mandates and pressures governing digital education programs in large museums. With a few laudable recent exceptions,[3] major museums create interactive programs designed to provide visitors to upcoming exhibitions with background information on the basic content to be on view. They create their programs to be ready on opening day.
Once the exhibition is open, the harried staff moves on to the next project for the next show. In short, such institutions ignore the exhibition as a finished product and focus on its raw content, a practice that makes sense given their audiences and economics. A quick survey of museum websites demonstrates that few museums are even in the practice of posting extensive images of their shows or galleries online, regardless of the extensive databases of collection images they may maintain.

For college museums that seek to create new ways to encourage faculty to teach their content and bring classes to their galleries, the potential benefits of creating experientially gripping and idea-rich responses to exhibitions should be obvious: digital technologies can allow us to teach an exhibition after it closes, and that would be a fundamentally new step for the museum world.

The second thing I’d like to discuss is the potential relevance of museum-based teaching and learning to generations of students (and soon faculty) for whom the structured but non-linear, highly visual as well as verbal, multimodal information world offered by the cyberinfrastructure is second nature. Highly textual, the World Wide Web in particular is also routinely and compulsively visual. It is a domain that is designed. Pictures are used as building blocks in enterprises created to argue, inform, archive, entice, sell, and distract. Rarely do we now encounter a text-only website; instead, text-image juxtapositions prevail, and websites now typically offer a mix of static graphics, sound, and animated graphics or video clips. Effective graphic design, or “information design” if you will, is essential. Students today grow up in this world and live there. Significantly, their cyberworld is a social world of self-projection and at times fantasy (i.e. blogs, Facebook pages, and social gaming) as well as a realm of entertainment and research.

As higher education considers the “digital native,” “net generation” students now entering the academy, the question of how to teach what is variously referred to as visual literacy, information literacy, twenty-first-century literacy, or expanded literacy comes increasingly to the forefront. I share the conviction that unless colleges and universities find a way to expand their text-based notions of literacy, analysis, and critique to include the domains of the visual and the moving image, we are not equipping our students adequately to enter either the future academic world or the workplace. Quite simply, the tools that empower and govern human expression have changed, and the academy needs to decide how it will respond.

As I have argued elsewhere[4], museums can potentially play an intriguing role in fostering forms of visual literacy and expanded literacy suited to the digital, networked era. Like the internet, the museum space is structured, yet non-linear. You move through museum galleries laterally from object to object in a largely self-determined path, much like motion from webpage to webpage. Both experiences are highly but not exclusively visual. Along with looking, museum visits generally encompass reading, listening, talking to friends and family members or museum personnel, and making decisions about how long to linger in any given place. Museum visits, like many web visits, are infused by random user choices made within spatial structures that are highly designed and planned by their builders.

Teaching within the museum space forces faculty and students alike to make different choices about how to structure time, how to do research, and, one hopes, about how to present their ideas, analysis, and conclusions. In pushing the visual dimension of experience and analysis to the forefront, museum exhibitions of all kinds force participants to use their eyes and link what is seen to what is said and written.
Notions of proof and argument evolve in new ways when first-hand, three-dimensional visual artifacts rather than texts are the subject of analysis. For example, a professor I know begins a class by bringing her students to the museum and showing them everyday ceramics and pottery from the American southwest. Without the benefit of library research, she asks them to deduce everything they can about the people who produced the artifacts from the visual evidence in front of them, unaided by others’ insights. Allowing students to work with visual evidence similar to the material confronted by working archeologists, and forcing them to use only their eyes and brains, demands that they both look and think for themselves, expressing their own conclusions in their own words.

As another example of the intersection of visual and analytical learning in the museum environment, Molecules That Matter, a special exhibition on view this year at the Tang Museum, was originated by a longtime Skidmore organic chemistry professor, Ray Giguere. Investigating ten organic molecules that influenced twentieth-century history (aspirin, isooctane, penicillin, polyethylene, nylon, DNA, progestin, DDT, Prozac, and buckyball), the exhibition brings together a wide variety of artworks and objects of material culture with a set of huge, specially commissioned, scientifically accurate molecular models. Reaching into fields as diverse as women’s studies (progestin is the molecule responsible for oral contraception), economics, psychology, engineering, medicine and nutrition, technology, environmental studies, and of course art and art history, it offers a wealth of ways, visual and otherwise, for faculty and students to engage its subject matter. Crucially, the show seeks to function as a starting point for wide-ranging investigations, research projects, and responses. Far too broad to sum up the many topics it points to, Molecules That Matter offers specific, highly-stimulating and revealing artifacts as visual bait to lure non-scientists and future scientists alike to reconsider how organic chemistry runs through their everyday lives in unnoticed ways.

Working on an extended website for the show with a group of students, Susi Kerr, the Tang’s senior educator, Ray Giguere, the rest of the exhibition team, and I had to ask the students and ourselves again and again how we could not simply say but show the ideas we sought to convey. In both the museum and on the internet, words alone simply don’t entice or suffice. Furthermore, in both domains, not all visual experiences are created equal–some pictures, objects, and images are more powerful and academically appropriate than others, and learning to distinguish between them is a key skill that students (and first-time faculty curators) need to learn. I have also found that museum writing (for intro texts, extended object labels, and even catalogue essays for non-specialist audiences) has more in common with writing for the web than the traditional academic paper does. Museum writing is inherently public, for one thing, and meant to be read by people who can walk away the minute they lose interest. That said, all three forms of writing (museum, web, and academic) need to be succinct, grammatically correct, pleasingly well-crafted, and intellectually sound.

To sum up, the two propositions outlined here argue for (1) the importance of networked digital technologies to the particular mission of the college museum, and (2) the potential importance of the college museum in teaching forms of visual literacy suited to the internet era in innovative and appropriate ways. I take it as a given that museums and the materials they hold and display are valuable to their particular subject domains and academic disciplines. That should be obvious and beyond dispute, and for that reason alone college museums deserve a place on their campuses. However, if we are to play an even more essential and intriguing role in higher education, museums of all varieties must explore how we can function as a core aspect of the overall teaching effort of our institutions, and how we can regularly address multiple disciplines in our exhibitions. At that moment, our intersection with the cyberinfrastructure and the largely unexploited teaching potential of digital technologies takes on a new significance.
NOTES

[1] One exception I can think of is American Visions, The Roy L. Neuberger Collection, an excellent, early interactive CD-ROM published by SUNY Purchase in 1994. Tellingly, the art historian who worked on it was Peter Samis, who soon became head of interactive educational technologies at SFMOMA and pioneered our efforts to develop SFMOMA’s award-winning interactive programs.

[2] See, for example, the brilliant use of QuickTime VR in Columbia University’s Real?Virtual, Representing Architectural Time and Space, which stunningly documents Le Corbusier’s chapel of Notre-Dame-du-Haut at Ronchamp.

[3] New York MoMA’s recent Richard Serra retrospective was accompanied by an admirable video walk-through of the completed exhibition, narrated insightfully by the artist himself. In Los Angeles, the Museum of Contemporary Art created an extensive site that visually documents the WACK exhibition on the history of feminist art, and brings to bear the voices of many artists and scholars who spoke at the museum while the show was on view. Audio of the artists and other speakers was complemented by images of them with their audiences, and by a listserv allowing others to comment. Together, these programs brought the exhibition itself to life, adding texture, voice, and personality rarely seen in the “big museum” world.

[4] See “Thinking Spatially: New Literacy, Museums, and the Academy,” EDUCAUSE Review Online, January–February 2007, pp. 68–69.

Museums, Cataloging & Content Infrastructure: An Interview with Kenneth Hamma

by David Green, Knowledge Culture

Ken Hamma is a digital pioneer in the global museum community. A classics scholar, Hamma joined the Getty Trust in 1987 as Associate Curator of Antiquities for the Getty Museum. He has since had a number of roles there, including Assistant Director for Collections Information at the Getty Museum, Senior Advisor to the President for Information Policy and his current position, Executive Director for Digital Policy and Initiatives at the Getty Trust.

David Green: Ken, you are in a good position to describe the evolution of digital initiatives at the Getty Trust as you’ve moved through its structure. How have digital initiatives been defined at the Getty and how are they faring at the institutional level as a whole, as the stakes and benefits of full involvement appear to be getting higher?
Ken Hamma: “Being or becoming digital”–shorthand for the thousands of changes institutions like this go through as they adopt new information and communication technologies–has long been discussed at the Getty from the point of view of the technology. And it did once seem that applying technology was merely doing the same things with different tools when, in fact, we were starting to embark upon completely new opportunities. It also once seemed that the technology would be the most expensive part. Now we’ve learned it’s not. It’s content, development and maintenance, staff training, and change management that are the expensive bits.

Around 1990 it seemed to me (though I didn’t yet realize the impact it would have) that it was the Getty’s mission that would and should largely drive investments in becoming digital, and that it would require someone from the program side of the house to take more than a passing interest in it. I know that sounds impossibly obvious, but it wasn’t nearly so twenty years ago, when computers were seen by many as merely expensive typewriters and the potential of the network wasn’t seen yet at all. Needless to say, the interim has been one long learning curve with risks taken, mistakes made, and both successes and failures along the way. Now, we’ve just got to the point at the Getty where–with a modicum of good will–we can think across all programs with some shared sense of value for the future. We now have a working document outlining the scope and some of the issues for digital policy development at the institution; it covers things like the stewardship and dissemination of scholarship, digital preservation, and funding similar activities elsewhere. Within this scope, we’ll be considering our priorities, the costs and risks involved, and some specific issues such as intellectual property and scholarship, partnerships, and what kind of leadership role there might be for the Getty.

Do you see the Getty, or some other entity, managing to lead a project that might pull museums together on some of these issues?
There’s only a certain amount that can be done from inside one institution, and there are some fundamental changes that probably need to be made but can’t be made from within. One of the big problems about technology is its cost. For so many institutions it’s still just too expensive and too difficult. There’s a very high entry barrier–software license and maintenance fees as well as technology staff, infrastructure development and professional services–in short, the full cost of owning technology. The result isn’t just a management problem for museums but an opportunity cost. We’re falling behind as a community by not fully participating in the online information environment.

There was a technology survey in 2004 of museums and libraries that pointed out that although small museums and public libraries had made dramatic progress since 2001, they still lagged behind their larger counterparts.[1] While almost two-thirds of museums reported having some technology funding in the previous year, 60% said current funding did not meet technology needs and 66% had insufficiently skilled staff to support all their technology activities. This problem is complicated by a gap between museums’ community responsibilities and the interests of the commercial museum software providers–notably the vendors’ complete lack of interest in creating solutions for contributing to aggregate image collections. There was a similar gap between library missions and OPAC (Online Public Access Catalog) software until OCLC grew to fill that gap in the 1980s.

Can you imagine any kind of a blue-sky solution to this?
Well, imagine a foundation, for example, that took it upon itself to develop and license collection-management and collection-cataloging software as open source applications for institutional and individual collectors. It might manage the software as an integrated suite of web applications along with centralized data storage and other required infrastructure at a single point for the whole museum community. This would allow centralized infrastructure and services costs to be distributed across a large number of participating institutions rather than being repeated, as is the case today, at every institution. Museums could have the benefits of good cataloging and collection management at a level greater than most currently enjoy and at a cost less than probably any individual currently supports.

Managing this as a nonprofit service model could create cataloging and collection management opportunities that are not just faster, better and cheaper, but also imbued with a broader vision for what collecting institutions can do, both individually and as a community in a digital environment. If we could do this by providing open source applications as well as web services, it would build value for the community rather than secure market advantage for a software vendor. A service model like this could also assume much of the burden of dealing with highly variable to non-existent data contributions that have plagued previous attempts to aggregate art museum data. And I think it could do it by supplying consistent metadata largely by enabling more easily accessible and better cataloging tools.[2] This problem of aggregating museum data has a relatively long history and its persistence suggests that though current schemes are certainly more successful, what the community needs is a more systemic approach. One of the problems is that there just isn’t a lot of good museum data out there to be aggregated. So talking about what it would be like to have aggregated repositories other than those that are hugely expensive and highly managed (like ARTstor), it’s unlikely to happen anytime soon. There’s not enough there there to aggregate with good results.

Cataloging seems to be the key to this future, as far as museums’ resources are concerned. Would this scenario be a first step in producing some good common cataloging?
Well, yes. It’s not enough to say to institutions, “You have to be standards-compliant, you have to use thesauri, you have to use standards, you have to do this and do that.” There are a lot of institutions that aren’t doing anything and aren’t going to do things that are more expensive and time consuming. So it’s not going to help to say that collection managers should be doing this. They’re just not going to do it unless it’s easier and cheaper, or unless there’s an obvious payoff–and there isn’t one of those in the short term.

So such a project, if it were ever undertaken, would be about providing infrastructure, about providing tools?
Yes, as well as thinking about how we maintain those tools and how we provide services. Because most cultural heritage institutions don’t have IT departments and probably never will, how can we think about sharing what’s usually thought of as internal infrastructure? I mean, choose a small museum with a staff of three; you can’t say ‘you can’t have a finance guy because you need IT,’ or ‘you can’t have a director because you need to do cataloging.’ That’s just not going to happen.

There’s a related model that you have been working on that provides a technical solution both to cataloging and to distribution. If I’m right, it’s not about creating a single aggregated resource but rather about enabling others to create a range of different sources of aggregated content, all using metadata harvesting.
Yes, it’s still in its formative stages, but the essential idea is to put together a system that is lightweight, easily implemented by small institutions, doesn’t require huge cataloging overhead and that supports resource discovery. A problem today is that if you wanted to ask for, say, an online list of all Italian paintings west of the Mississippi, that presupposes that all collections with an Italian painting are participating. But we’re so far from that. It’s the rich and well-funded that continue to be visible and the others are largely invisible. So can we come up with a protocol and a data set that would allow for easy resource discovery that would have a low bar for cataloging and metadata production for unique works?

In this project, we’ve gone through a few rounds now, using the recently developed CDWA Lite as the data standard, mapping that to Dublin Core in the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). Dublin Core, as we’ve all learned, is a bit too generic, so we’ve applied some domain knowledge to it and have additionally included URL references for images. We’ve collaborated with ARTstor and have done a harvesting round with them. Getty’s paintings collection is in ARTstor not because we wrote it all to DVD and mailed it to New York, but because ARTstor harvested it from our servers. Just imagine we get to the point where all collections can be at least CDWA-Litely cataloged–say just nine fields for resource discovery. Then these can be made available through an exchange protocol like OAI-PMH, and then interested parties such as an ARTstor (who might even host an OAI server so not every collecting institution has to do that) could harvest them. If we could get that far, and we imagine that other aggregators like OCLC might aggregate the metadata even if they didn’t want the images, it could be completely open. The network would support collection access sharing and harvesting that would be limited only by the extent of the network. Any institution (or private collector) could make works available to the network so any aggregator could collect it. A slide librarian at a college, with desktop harvesting tools, could search, discover and gather high-quality images and metadata for educational use by the teachers in that school. Or perhaps intermediate aggregators would do this with value-added services like organizing image sets for Art 101 at a cost that might suggest a different end-user model.
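The harvesting flow described above–a museum exposes minimal records over OAI-PMH, and an aggregator collects them–can be sketched in a few lines of Python. This is a sketch under assumptions: the endpoint URL, the sample record, and the choice to carry the image URL in `dc:identifier` are all hypothetical, not the Getty’s actual mapping; only the request parameters (`verb=ListRecords`, `metadataPrefix`) and the XML namespaces follow the real OAI-PMH and Dublin Core specifications.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

# An OAI-PMH harvest request is just an HTTP GET with a verb and a
# metadata format; the base URL here is hypothetical.
base_url = "https://museum.example.edu/oai"
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
request_url = f"{base_url}?{urlencode(params)}"

# A minimal, hypothetical response: one record in simple Dublin Core.
sample_response = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Madonna and Child</dc:title>
          <dc:creator>Unknown, Italian</dc:creator>
          <dc:identifier>https://museum.example.edu/images/1234.jpg</dc:identifier>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

# Namespace prefixes used when querying the parsed tree.
ns = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

# The aggregator's side: parse the response and pull out the fields
# needed for resource discovery.
root = ET.fromstring(sample_response)
records = [
    {
        "title": rec.findtext(".//dc:title", namespaces=ns),
        "creator": rec.findtext(".//dc:creator", namespaces=ns),
        "image_url": rec.findtext(".//dc:identifier", namespaces=ns),
    }
    for rec in root.findall(".//oai:record", ns)
]

print(request_url)
print(records)
```

In practice an aggregator would issue the GET over the network, page through large result sets with the protocol’s `resumptionToken`, and harvest a richer CDWA Lite record rather than plain Dublin Core, but the request-and-parse shape is the same.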

How far away is this from happening?
The protocol exists and will likely very shortly be improved with the availability of OAI-ORE. The data set exists but is still under discussion. That will hopefully be concluded in the next months. And the data standards exist, along with cross collection guides, like CCO, that’s Cataloging Cultural Objects, on using them. The tools should not be too hard to create. The problem again is the institutional one, the usual one when we’re talking about content. Most museums are not willing to enter into such an open environment because they will want to know who is harvesting their collection. It’s the reaction that’s usually summed up by “we’re not sure we can let our images out.” These are those expected nineteenth-century attitudes about protecting content along with the late twentieth-century attitudes that have been foisted on the museum community about “the great digital potential”–generating revenue based on that content as long as they control it and don’t make it accessible. How sad.

The recent NSF/JISC Cyberscholarship Report[3] discusses the importance of content as infrastructure, and how any cyberscholarship in a particular discipline is grounded until that part of cyberinfrastructure is in place. Museums are clearly far behind in creating any such content infrastructure out of their resources. What will it take to get museums to contribute more actively to such an image content infrastructure? Is there a museum organization that could coordinate this or will it take a larger coordinating structure? Will museums be able to do this together or will they need some outside stimulus?
If it isn’t simply a matter of waiting for the next generation, I don’t really know. It would really be helpful if there were, for example, a museum association in this country that had been thoughtfully bringing these issues to the attention of the museum community, but that hasn’t been the case for the last twenty years. And museums are different from the library community with respect to content-as-cyberinfrastructure in that they’re always dealing with unique works. This changes two things: first, one museum can’t substitute a work in the content infrastructure for another one (the way in which a library can supply a book that another library cannot); and, secondly, for these unique works there’s a much greater sense of them as property (“it’s mine”). This, in a traditional mindset, raises the antenna for wanting to be a gatekeeper, not just to the work but even to information about it. You can see this in museums talking about revenue based on images of the works in their collections, or the need for museums to be watching over “the proper use” (whatever that is) of their images. Not that we don’t need to be mindful of many things, like appropriate use of works under copyright. But there is still the sense that there’s got to be something (financial) gained from these objects that are “mine,” whereas most of these collections are supported by public dollars, and there must be some public responsibility to make them freely available.

You’ve talked elsewhere about the “gatekeeper” mentality among many museum professionals, perhaps especially curators. How do you imagine the forward trajectory of this? How will this gatekeeper mentality play out?
Yes, it’s been very frustrating, but I think it is changing. Even over the past few years I think there’s been significant change in how people think about their gatekeeper role. Today–different from ten years ago–I would say curators are less and less gatekeepers, and directors are being caught off-guard by curators proposing greater openness of the sort that will take advantage of network potential. The Victoria & Albert Museum, the Metropolitan Museum and others are now making images available royalty-free for academic publishing.[4] And along with these changes there is a change in the tenor of the discussion. We want to keep the conversation going as much as possible in hopes that we can move toward a world where objects, especially those in the public domain, can become more fluid in this environment. Much of the needed shift in attitudes toward intellectual property can be summed up as focusing more on maintaining appropriate attribution for work than on asserting “ownership”–on saying, “it’s mine, you have to pay me for it.” If we’re honest we have to admit that there’s really not a lot of money in the whole system around these kinds of resources. In fact, the real value of these items lies in their availability–their availability for various audiences, but especially for continued scholarship and creativity.

That’s a good point. Not too long ago the Yale art historian Robert Nelson said in an interview here that whatever is available online is what will be used, what will create the new canon. He made the analogy to JSTOR. In teaching he notices that the articles he cites that are in JSTOR are the ones that get read; the others don’t.
Yes, that’s absolutely true. And it will take one museum or one major collecting institution to have the imagination to see that and to see that in addition to people coming into the gallery for a curated exhibition, that this other experience of network availability and use has extraordinary value. And if there were two or three really big collections available, literally available as high-quality public domain images, not licensed in any way, one could imagine there would be significant change in attitudes pretty quickly.

You’ve described the open quality of the digital environment as threatening to many in institutions. Could you elaborate a little on that?
The extent to which the opportunities bundled here for realizing mission in non-profits are perceived as threats derives largely from confusing traditional practice with the purpose of the institution. The perception of threat, I think, clearly has been decreasing over the last few years as we become more comfortable with changes (perhaps this is due to generational shift, I don’t know). It is decreasing also as we continue with wide-ranging discussions about those traditional practices, which were well suited to business two decades ago but act as inappropriately blunt instruments in the digital environment. These include, for example, the use of copyright to hold the public domain hostage in collecting institutions; notions of “appropriate” cataloging, especially for large-volume collections that are better suited to slower-paced physical access than to the fluidity of a digital environment; and assumptions that place-based mission continues alone, or would be in some way diminished by generous and less mediated online access.

In your ACLS testimony back in 2004 on the challenges for creating and adopting cyberinfrastructure, you argue that the most important work for us all ahead is not the technology or data structures but the social element: the human and institutional infrastructure. Is this the weakest link in the chain?
I’m not sure that I would still describe institutions and people as the weakest link, but rather as the least developed relative to technology and the opportunities it brings. This too seems to have changed since the start of the work of the ACLS Commission. We can do plenty with the technology we now have on hand, but we’ve frequently lacked the vision or will to do so. One of the most startling examples of this became visible several years ago when the Getty Foundation (the Grant Program) was awarding grants under the Electronic Cataloging Initiative. Many Los Angeles institutions received planning and implementation grants with varied results. One of the most successful would have been predicted by no one other, I suppose, than the hard-working and ingenious staff employed there: the Pacific Asia Museum. Greater-than-average success from an institution with, to all appearances, less capacity and fewer resources than other participants was not based on access to better software or on an IT manager who would only accept a platinum support package. It was based on the will and the imagination of the staff and the institution.

So would you cite that museum as one that is successfully redefining itself for a digital world?
Yes. You know, there are lots of museums that are doing really good work, but it’s going to take time and the results will show up eventually. If all the effort over the next ten years or so is informed by more open attitudes about making content more available–seeing content as cyberinfrastructure–then it will be all the better. It really is a question of attitude in institutions and a willingness to see opportunities. Almost never believe “we haven’t got the money to do it.” In scholarly communication there are millions of dollars going into print publications that, for example, have a print run of several hundred, for heaven’s sake. You just need to take money out of that system and put it into a much more efficient online publication or collection access system.

It’s about attitude and a willingness to invest effort. The Pacific Asia Museum is a good example. It doesn’t have the budget of the other large institutions in LA, and yet it was among the most successful in taking advantage of this opportunity from the Getty’s electronic cataloging initiative. They were very clear about the fact that they wanted to create a digital surrogate of everything in their collection, do some decent cataloging and documentation, and make it available. That just sounds so perfectly obvious. But the fact that so many institutions don’t seem to get something so basic, that don’t understand some aspect of that, is just completely astounding to me.
