Can We Promote Experimentation and Innovation in Learning as well as Accountability? Interview with Terrel Rhodes

by Randy Bass, Georgetown University

Editor’s Note: What does the learning revolution inherent in the expansion of social and digital media have to do with the national conversation around assessment and accountability? Faculty often fear that “assessment” (especially mandated assessment) will have a reductive effect, either by reducing the rich complexity of teaching and learning to simplistic metrics, or by limiting what’s being measured to lower-order skills that can easily be measured. Among those who experiment with new media technologies the tension is exacerbated, as student learning gains in new digital environments seem increasingly expansive, holistic and difficult to measure. How then might we find common ground between an impulse to get a more trenchant read on institutional effectiveness at inducing learning and the cultivation of innovation in teaching that higher education so badly needs?

The VALUE project comes into the middle of this tension, as it proposes to create frameworks (or metarubrics) that provide flexible criteria for making valid judgments about student work that might result from a wide range of assessments and learning opportunities, over time. In this interview, Terrel Rhodes, Director of the VALUE project and Vice President of the Association of American Colleges and Universities (AAC&U), describes the assumptions and goals behind the Project. He especially addresses how electronic portfolios serve those goals as the locus of evaluation by educators, providing frameworks for judgments tailored to local contexts but calibrated to “Essential Learning Outcomes,” with broad significance for student achievement. The aims and ambitions of the VALUE Project have the potential to move us further down the road toward a more systematic engagement with the expansion of learning. –Randy Bass

Randy Bass: What is VALUE? What problem is it trying to solve?
Terrel Rhodes: In short, the VALUE Project (Valid Assessment of Learning in Undergraduate Education) works to develop approaches to assessment based upon examples of work that students complete in their courses and save over time in an e-portfolio. The project collects and synthesizes best practices in assessing student work using rubrics developed by faculty members. One of the project’s core purposes is to identify common expectations for student achievement across a variety of institutions.

The project really grew out of the national conversation that was begun with the Essential Learning Outcomes (ELOs) articulated as part of AAC&U’s ten-year LEAP (Liberal Education and America’s Promise) initiative and developed through campus-community conversations (AAC&U 2007). There are fourteen ELOs, ranging from skills–perhaps more readily assessable–such as written communication or quantitative literacy, to broader abilities and dispositions, such as problem solving, critical thinking, and ethical reasoning. Also included among the ELOs were more abstract–but no less “essential”–learning goals such as civic engagement, intercultural knowledge, creative thinking, and integrative learning. (See a complete list and description of the Essential Learning Outcomes.)

What we were finding was that there was broad agreement about the value of these learning outcomes, but considerable lack of clarity and precedent for how to be accountable to them. That is, how could a campus or a program use one or more of these Essential Learning Outcomes as a driver for changes and improvement in practice, or even as a measure of how well current curricula were achieving these goals? People were asking, “If we wanted to take these learning outcomes seriously, how would we do that? Where would we look? How would we get results that might be comparative and valid?”

We were responding to the growing consensus that to achieve a high-quality education for all students, valid assessment data are needed to guide planning, teaching, and improvement. That was one core assumption. And it was clear that colleges and universities were interested in fostering and assessing many of these essential learning outcomes beyond those addressed by currently available standardized tests–or for that matter that are captured by student performance in individual courses.

We also started from some other assumptions, such as: that learning develops over time and should become more complex and sophisticated as students move through various pathways toward a degree; that good practice in assessment requires multiple assessments, over time; and that well-planned electronic portfolios provide excellent opportunities to collect meaningful data about student learning, from multiple assessments, across a broad range of learning outcomes. At the same time, the electronic portfolio process can serve to help guide student learning and build self-assessment capabilities. Ultimately, we believe that e-portfolios and the assessment of student work in them can better inform programs and institutions on how effectively they are helping students achieve their expected goals.

Say more about what kind of learning is being assessed. What kind of student performance gets looked at in the e-portfolios?
The project builds on a philosophy of learning assessment that privileges multiple expert judgments of the quality of student work over reliance on standardized tests administered to samples of students outside of their required courses. The VALUE project builds on the work campus faculty and staff have done in developing assessment rubrics to evaluate achievement of a broad range of Essential Learning Outcomes and in articulating the expectations and criteria for student learning at beginning through advanced levels of performance. The project explores how rubrics can be applied to the actual work students have done both in their required courses and co-curricular activities.

The initial reactions to the national accountability demands for indicators of student learning have resulted in calls to use tests that have some basic characteristics in common: they are in some way standardized; they result in a score or quantitative measurement that summarizes how well a group of students has performed; they test only samples of students at a given institution; they require additional costs for students or institutions to administer; they reflect a snapshot picture at one point in time; they provide an institutional rather than an individual score; and they lack high stakes for the students taking the exams.

It is ironic that just at the point when higher education research has finally developed a rich information base on effective practices that enhance learning, on the cognitive and neurobiological bases of knowing, and on technological advances that greatly expand our abilities to collect, preserve, and demonstrate complex, multi-faceted learning, we so willingly accept outmoded, snapshot, shorthand representations of the value of our educational outcomes and impact on student learning.

In contrast, the VALUE project responds to the need for multiple measures of multiple abilities and skills, many of which are not particularly well suited to snapshot standardized tests. The types of learning that employers and policy makers are calling for need to be demonstrated through cumulative, progressive work that students perform as they move along their educational pathways to graduation: rich, multifaceted representations of learning in curricular and co-curricular contexts, rather than artificial examinations divorced from applied contexts.

Why e-portfolios? How is the e-portfolio different from other kinds of assessments?
The evidence of learning collected in an e-portfolio creates a rich portrait of achievement for an individual and, with sampling and analysis from a collection of portfolios, can create a similar portrait of a program or an entire institution. Drawing directly from curriculum-embedded and co-curricular work, e-portfolios can represent multiple learning styles, modes of accomplishment, and the quality of work achieved by students.

Although it is not a direct objective of the Project, VALUE promotes wider use of e-portfolios for assessment without impairing the developmental and progressive dimensions of e-portfolios as spaces that students can own to represent themselves as learners and to make connections across their educational experience. We believe that e-portfolios, potentially, can foster and provide evidence of high levels of student learning, across a vast range of experiences, and across programs and institution-wide outcomes.

By gathering and disseminating student work through electronic portfolios, the same set of student performance information can be used at course, program and institutional levels for assessment purposes, and faculty can collaborate on assessing and responding to student progress. Student work from on and off campus and from all the institutions a student may have attended can be included in a single presentation of student accomplishment over time and space.

We also know, from twenty or more years of pioneering work with portfolios in higher education, that periodic reflections on learning by students are critical components of an education. Student reflections, along with self and peer assessments, guided by rubrics, help students to judge their own work as an expert would. These reflections and self-assessments all become part of the collection of work that gets evaluated in light of the Essential Learning Outcomes.

What are these rubrics or metarubrics? What are they supposed to do? What can’t they do?
All teachers use criteria for achievement, even if only implicitly. Many educators at all levels have created and make use of explicit “rubrics,” or scoring guides, with statements of expected levels of achievement using criteria vital to quality work in a chosen area. For VALUE, the criteria for the rubrics at the center of the project are determined in discussions among experts in the appropriate fields.

The VALUE project has collected rubrics from faculty and programs across the country designed to assess all of the Essential Learning Outcomes. Teams of cross-institutional faculty and staff have been assembled, bringing their own expertise to the process. They have examined the rubrics for the purpose of identifying and articulating the most commonly shared expectations or criteria for learning for each outcome and at progressively more sophisticated and complex levels of performance. This analysis has resulted in what we have been calling “metarubrics,” or shared learning expectations.

Creative Thinking Metarubric

Critical Thinking Metarubric

Integrative Learning Metarubric

The VALUE project is piloting the use of these rubrics by having faculty score actual student work collected in e-portfolios on twelve leadership campuses and additional partner campuses. (See a complete list of leadership campuses.)

Although e-portfolio assessment does not typically result in a simple number or score for students, programs, or institutions, it does result in shared judgments about the quality of student performance in terms of important learning outcomes. The use of rubrics is not new, nor are the methods for creating inter-rater reliability. The resulting e-portfolio scores and judgments are more detailed and nuanced than simple numeric scores, and more indicative of the types of learning expected. The examples of work upon which the assessments are based are what the students actually submitted in response to assignments and requirements of the curriculum (and co-curriculum) that comprised their educational program; therefore they reflect the students’ levels of motivation, focus, and investment in demonstrating their learning as exhibited on a day-to-day basis, i.e., the assessment data have face validity.
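The scoring process Rhodes describes–multiple faculty raters applying shared criteria at named performance levels, then checking inter-rater reliability–can be sketched informally. The criterion names, level labels, and scores below are hypothetical illustrations, not drawn from the actual VALUE rubrics; the agreement measure shown is simple exact agreement, the most basic of the reliability methods he alludes to:

```python
# Hypothetical sketch: a rubric as a set of criteria scored at ordered
# performance levels, plus a simple exact-agreement check between two raters.

LEVELS = ["benchmark", "milestone 1", "milestone 2", "capstone"]

def agreement_rate(scores_a, scores_b):
    """Fraction of rubric criteria on which two raters assign the same level."""
    assert scores_a.keys() == scores_b.keys()
    matches = sum(1 for c in scores_a if scores_a[c] == scores_b[c])
    return matches / len(scores_a)

# Two hypothetical raters scoring one student's portfolio on an
# illustrative critical-thinking rubric with four criteria.
rater_1 = {"evidence": "milestone 2", "analysis": "milestone 1",
           "context": "milestone 2", "conclusions": "capstone"}
rater_2 = {"evidence": "milestone 2", "analysis": "milestone 2",
           "context": "milestone 2", "conclusions": "capstone"}

print(agreement_rate(rater_1, rater_2))  # 0.75: agreement on 3 of 4 criteria
```

In practice, calibration sessions would use a chance-corrected statistic (such as Cohen’s kappa) rather than raw agreement, but the structure–criteria, levels, and comparison across raters–is the same.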

We hope that the VALUE project will be able to demonstrate several things:  that faculty across the country share fundamental expectations about student learning on all of the Essential Learning Outcomes deemed critical for student success in the 21st century; that rubrics can articulate these shared expectations; that the shared rubrics can be used and modified locally to reflect campus culture within this national conversation; and that the actual work of students should be the basis for assessing student learning and can more appropriately represent an institution’s learning results.

Specifically, how do student learning and student work get assessed? What is the relationship between these “metarubrics” (at a national level) and what actually happens at the local level?
From the collection of rubrics for each outcome, we have engaged teams of faculty and staff to examine the rubrics and to identify the criteria or expectations for learning that appear across multiple institutions. In essence, we have asked the teams to articulate shared expectations and criteria for each outcome. The purpose of this exercise is to demonstrate to ourselves, and to those outside the academy, that faculty across the country and at different types of institutions do have shared criteria for what student learning should look like from beginning or novice levels through advanced understandings and applications.

The shared general criteria are too broad to be useful for assessing specific student work at a course level, but the local rubrics developed for assessing student work are mirrored in these metarubrics that encapsulate the shared expectations of faculty and others for student performance. The local rubrics will use different terms and language, but the core criteria contained in the metarubric map onto these local rubrics, so that faculty and staff can use what they have developed that works for their purposes with their students, and at the same time show how what they and their students are doing fits within the core expectations for learning that are shared nationally. We can reduce these shared or common expectations to numbers, but we don’t have to, and we can therefore engage in a much more robust conversation about what and how well our students are mastering learning outcomes.

Various campuses have been taking the core criteria of the metarubrics and translating them into the language and context of their particular discipline or program when using the rubrics to assess their students’ work. Other campuses have been testing the metarubrics along with their previously developed local rubrics and comparing the results when used side by side to assess assignment products. We are in the process right now of gathering these types of feedback to modify the metarubrics and further refine the ability of the metarubrics to represent shared expectations that can be used on a variety of campuses and programs.

Where are they being used and tested? What are some examples of what test campuses are doing?
The metarubrics are being tested by faculty on twelve leadership campuses that have histories of using rubrics and e-portfolios to assess student work. The twelve leadership campuses represent large and small, public and private, two and four year institutions, and regions of the country. Each of these campuses uses student e-portfolios in one form or another to have students capture and present examples of the work they have done in response to assignments embedded in the curriculum and co-curriculum at their institutions.

We have relied upon the established processes on these campuses for testing the metarubrics. In many instances, campus faculty have used their local rubrics alongside the metarubrics to compare the two. No campus has piloted all of the rubrics, but all rubrics have been piloted among the campuses collectively. Based on the piloting, the rubric teams have revised the metarubrics. In total, there will be three iterations of piloting and redrafting for each metarubric during the VALUE project process. Final drafts will be available in the summer of 2009.

In addition, almost sixty other campuses have requested permission to pilot test one or more of the rubrics with student work on their respective campuses (not all of these campuses are using e-portfolios of student work). On every campus, though, faculty members and student services colleagues are using the metarubrics to see how useful they are in assessing student work on the respective learning outcomes.

A lot of work with new media technologies involves student work that doesn’t fit traditional assessments. How might VALUE be useful for understanding new kinds of learning?
One of the things that we have learned through the research on student learning is that newer generations of students are exhibiting a variety of learning styles. As everyone knows, current students are much more technologically savvy than earlier generations; they use and expect to use the internet, audio and video sources, social networking modes, etc. Many of our students do not perceive learning as a linear process attuned to traditional reading and writing–hyperlinking and networked learning are more commonly apparent in the classroom. Couple this with the fact that most student learning occurs outside of the classroom, and we have an environment in which we need to encompass a wider variety of modes for students to demonstrate their learning processes and achievements. By definition this forces us to encompass audio and video, Web 2.0, hard copy, and virtual learning.

The e-portfolio allows us to bring all of these, and other, modes of learning and demonstration of learning into the collection of evidence we use to assess student learning in the full complexity and variety of its existence. We have tried to encourage our rubric development teams to write rubrics that are not bound by the printed page conception of learning, but applicable and encompassing of other modes of performance.

Are there campuses using the VALUE rubrics to look at non-traditional kinds of learning?
Several campuses already have their students incorporate non-traditional modes of demonstrating their learning in the student e-portfolios. Portland State University has students include videos of community-based work, performances, presentations to government boards, or interviews in their e-portfolios to demonstrate communication skills, civic engagement, working in teams, etc. Alverno College has all of their students record oral presentations to show the growth and development of these abilities as they move through the curriculum. LaGuardia Community College has their students deeply engaged in visual representations of their learning through art work, e-portfolio design, etc. as a way to communicate their learning to family and communities outside the academy who are often not accustomed to the text-heavy traditions of higher education. Bowling Green State University, St. Olaf College and the University of Michigan have students incorporate connections outside the classroom, whether they are in co-curricular activities or community-based learning related to the curriculum.

Often we perceive a tension between the desire to assess student learning and the interest in experimentation with new approaches to learning. Assessment of recognizable outcomes and innovation often seem at odds. Might the work of the VALUE project help address that tension?

We certainly hope so. The development of the metarubrics and their pilot testing on campuses was designed to create a shared set of standards that could be used for assessing, or judging, more traditional modes or demonstrations of learning, as well as Web 2.0, live performances or other types of learning. The outcomes for learning can be demonstrated in many ways. In the past, some have been too quick to conclude or declare that certain types of learning cannot be measured. The reality that we all face is that when we begin to evaluate learning, we are always grasping at and relying upon indicators of learning.

Learning of the essential outcomes does not occur in a vacuum or in the ether, it occurs through content and knowledge bases, and therefore will vary depending on the knowledge base on which it rests. Part of the reason we have different disciplines and interdisciplinary programs, is that different knowledge sets and ways of knowing result in learning outcomes being demonstrated in different ways. But in the deconstruction of the demonstrated learning, we tend to find similarity in the core components or criteria of learning, e.g. for critical thinking.

Just as we learn from our research and from our colleagues, we also learn from our students. Innovation and creativity are part of what we all look for in our students’ learning–it tends to be the ultimate learning outcome that we try to capture in many ways, e.g. capstone courses and projects, senior recitals, e-portfolio graduation reflections on work, etc. Having shared expectations or standards for learning outcomes is in no way in conflict with innovation. Our limitations are often due to lack of knowledge and comfort in using newer technologies to capture and represent the learning we seek in our students.

How could a campus make these viable? How would they be useful to start a conversation or provide a framework for discussion around student learning?

Our experience at AAC&U in working with faculty on campuses across the country is that faculty are typically eager to have permission to talk about and to focus on student learning. Once you get beyond complaints that teaching is not rewarded adequately, and so on, faculty embrace discussing learning and teaching. So, there is no difficulty in getting faculty interested in talking about the subject. The biggest barrier is often a lack of awareness about options for assessing learning and what it would take for the individual faculty member to adapt what they know and are familiar with to some new environment or process.

Part of the purpose in selecting the VALUE leadership campuses was to identify a diverse set of campuses that are using e-portfolios and rubrics in different ways, illustrating how faculty and institutions can see themselves beginning, expanding, or enhancing what they are doing to assess student learning. By broadening our work to include campuses that are not using e-portfolios, we also wanted to demonstrate how similar approaches can be undertaken in the absence of the investment in e-portfolios. Increasingly, the investment in e-portfolios is becoming less and less of an obstacle for campuses, since there are free Web tools that students can use to construct e-portfolios.

Essentially, we are finding that campuses are recognizing that student learning is something that the entire campus community is engaged with; each person on the campus participates in the learning, but no one is responsible for all of the learning. By creating and articulating shared learning expectations, we are helping faculty and others on campus see how they can contribute to student learning for essential outcomes; we help students become better judges of their own learning progress; and we create the evidence we can use to communicate to other audiences exactly what it is that our students are learning and what they can do with that learning.

By experimenting with e-portfolios and Web technology, we expand the robustness for capturing learning and the opportunities for students to apply their learning in “real world” situations, which employers, civic leaders and policymakers are calling for. E-portfolios also reflect the attendance patterns of so many of our students who attend multiple institutions (often at the same time) as they move through their educational careers. Their learning is shared in ways we often overlook–different faculty and colleagues in different institutions, perhaps in different states, and different spans of time. The sharing of rubrics, of expectations for learning, perhaps most importantly allows our students to have a much clearer picture of what their learning should look like. They can use the rubrics to frame the demonstration of their learning in an e-portfolio when transferring among institutions, when applying for a job, or for graduate school. The rubrics allow students to better assess their own strengths and weaknesses in areas of learning.

Having been a faculty member on several campuses for over twenty years, I know that using rubrics and e-portfolios does not have to create more work–it requires working differently, shifting my time and focus a bit–but it is richer and more rewarding than what I used to struggle with in trying to communicate my expectations for learning and how students could more readily succeed in meeting those expectations. There is a transparency and communication ability that enriches the conversations both with students and with colleagues.


Participatory Learning and the New Humanities: An Interview with Cathy Davidson

by Randy Bass and Theresa Schlafly

Cathy Davidson is Ruth F. DeVarney Professor of English and John Hope Franklin Humanities Institute Professor of Interdisciplinary Studies at Duke University. Davidson is co-founder of HASTAC (pronounced “haystack”: Humanities, Arts, Science, Technology Advanced Collaboratory) and co-director of the Digital Media and Learning Competition, funded by the John D. and Catherine T. MacArthur Foundation. Davidson talks here with Randy Bass. Interview and related materials edited by Randy Bass and Theresa Schlafly.

Bass: “Participatory Learning,” featured as a theme in the MacArthur Foundation Digital Media and Learning Competition, is defined this way:

Participatory Learning includes the ways in which new technologies enable learners (of any age) to contribute in diverse ways to individual and shared learning goals. Through games, wikis, blogs, virtual environments, social network sites, cell phones, mobile devices, and other digital platforms, learners can participate in virtual communities where they share ideas, comment upon one another’s projects, and plan, design, advance, implement, or simply discuss their goals and ideas together. Participatory learners come together to aggregate their ideas and experiences in a way that makes the whole ultimately greater than the sum of the parts.1

Why did “participatory learning” become important as an organizing theme for the DML Competition?
Davidson: Last year this competition was more wide open–one category was just innovation, another was knowledge networking. As we looked back after the competition was over, we found among the winning proposals a cluster of exciting projects that were all looking at this new, digitally enhanced form of learning. We thought it would be interesting to do a more specialized competition on participatory learning this year and see what we came up with. We were especially interested in a form of interactive learning where the whole is much greater than the sum of the parts.

This builds on a method that HASTAC has, since we first began in 2002, been calling “collaboration by difference.” If you read much of the management literature, it’s almost always fundamentally about collaboration where you have shared goals, and shared methods or shared areas of expertise. We became interested in this much looser way of learning, a kind of mash-up learning, where people may or may not share credentials–some people might be credentialed, some people might not–and where people might have radically different training: a humanist and an artist and a cancer specialist might be talking about things together, but the artist might be a cancer survivor who has in fact educated her- or himself more than many doctors on the diverse ways that cancer might be cured. What happens if you put all of those people in conversation? What new insights emerge from interactions where the protocols for success are not scripted in advance?

We were also very interested in a third area of participatory learning: the global dimension. We’re piloting an international competition this year. In globally interactive learning, participants may not even share ideas about the basic epistemology of learning. What we’re interested in is how people can use existing digital tools–the social utility sites, social networking sites, something that looks like what some people are calling Web 2.0–to aggregate a range of responses from people who might not have anything else in common except that they’re all participating on the same site. Someone might wander in and wander off and not even be part of any pre-existing community, yet might have something interesting to share.

We’re very interested in the outcomes that happen when you don’t know the outcomes that might happen. We went back and forth over the definition of participatory learning many times. For example, we put “problem-solving” in, and then we took it out, put it in, took it out. We decided not to include it because we didn’t want to limit learning to the utilitarian. We wanted learning to be as visionary, creative, theoretical, or abstract as anybody’s imagination. Problem-solving is one thing you can do through this accretive way of learning, but we were afraid that if we put problem-solving in there, ninety percent of applications would be about problem-solving, rather than thinking in the broadest, most interesting ways about what you can do when you’re in a community with people that you may know but that you may not know. What happens if you leave your community open and invite the whole world in? In other words, when a community gets together and is talking about things, it not only defines the original problem or goal, but the goal itself might change dramatically over the course of the project. We wanted to allow for the free flow of thinking that may or may not end up solving “a problem.”

That does feel very much like the way knowledge work, or even creative work, often gets done in the world. There is a kind of fluidity to it.
The fluidity needs to happen from beginning to end. You cannot separate creative design of new technologies from critical thinking about the use, the application, the cost, the environmental impact, and the intellectual property issues, as well as all of the issues of race, class, gender, sexuality, nationalism, religion, and region so important to humanistic study today. All of those issues have to be thought through at the same time that you’re thinking about designing technology. That’s why the Digital Media and Learning Initiative is such an interdisciplinary project. Often the people who are most skilled at making technology are not the most skilled at thinking critically about it. The people who are most skilled at thinking critically about these issues might not be the most skilled at the aesthetics and the kinesthetics of design. And so we all need to be working together. This is that model of “collaboration by difference.” We might not have anything in common except what we know to be the case about our one contribution to something. But collaboratively and collectively we can yield something more interesting at the end. But all those things have to be thought through together. What we found last year with the winners of the first competition is often they worked in teams. It might be a musician who also was proficient in computer science working with a computer scientist who loved music. The Princeton Laptop Orchestra is what I’m thinking of here. People are working across divides which seem very distant but when they actually start working out problems together it turns out they might not be so distant after all. Again the whole can be larger than the sum of the parts.

Yet, all of this feels very different from the way we educate people–let alone how research and scholarship has traditionally unfolded in the humanities.
I know. To me it’s one of the tragedies of the so-called information age. Here we have this astonishing new way that people are making knowledge together. As educators we should all be vibrating with happiness at this moment! Here are millions of people, typically unpaid, with no ulterior motive, for profit or otherwise, who are validating what we do as a profession with what they do in their spare time as a passion. That seems to suggest that all of us overworked, underpaid teachers have it right, that in fact there is something about humanity that likes to learn, and likes to share its learning, and likes to participate. That’s incredible! Every time I read some professor grousing about Wikipedia–that it’s not reliable, it’s not credentialed, etc.–I say sure, of course, but what reference work is perfect? What we may give up in some instances in expertise we more than make up for in scope. We have to have some skepticism about the products of participatory learning–skepticism is what we do as a profession. But, my God, you’re talking about billions of contributions that people are making for free to world knowledge in so many languages, from so many different traditions of knowledge-making, and on a scale that the world has never seen before. I guess part of me just doesn’t understand why this isn’t the most exciting time for all of us in our profession. Why aren’t we figuring out ways that we can use this exciting intellectual moment to bolster our mission in the world, our methods in the world, our reach in the world, our understanding of what we do and what we have to offer our students in the world? It just feels like we’re in an age where we educators should be the thought leaders and instead we’re futzing around the edges. Our profession’s lack of excitement and leadership in all the issues surrounding the information age baffles me.

Is that an objective of HASTAC, to get beyond “futzing around the edges”?

Yes. Exactly. The Mellon Foundation did a wonderful thing–they invited the directors of all these humanities institutes to New York back in 2002. Harriet Zuckerman, Senior Vice President of the Mellon Foundation, invited me to talk to the directors of all these other institutes about what we were doing. This is when I was the co-director and co-founder of the John Hope Franklin Humanities Institute at Duke along with our dean, the literary scholar Karla F. C. Holloway. The Franklin Humanities Institute was at the epicenter of the much larger intellectual crossroads called the Franklin Center for Interdisciplinary and International Studies. It wasn’t an isolated, hermit-like space but was in the center of the newest, most active intellectual space on campus. We were the new kids on the block, and we had this new idea that knowledge was important enough to be shared, not just among humanists but with all academics and with the general public. We were designing technologies to make our knowledge as public as possible. And we were holding weekly public forums, with a free lunch and free parking (crucial technologies!), to make even the most specialized knowledge available, accessible, and urgent.

This was 2002 and a lot of folks there had the attitude: “We’re humanists now, we have to fight technology.” But a couple of us felt the opposite. We were trying to say, “Wait, it’s the information age! This is our era! This is what we’ve been waiting for! The humanities finally are central. We should be the voice of the information age! We have historical knowledge, we have critical tools, we know what information is, we have whole fields dedicated to understanding what knowledge and information are in this age. Why isn’t this our moment?” It was a great meeting. After we left, many of us resolved that the heads of as many humanities institutes as possible should come together to start a new organization that would be not digital humanities in the sense of archiving and tools, but a new way of thinking about the human issues that are touched by absolutely every aspect of technology. If we were going to design tools, they should be tools that would help, in the larger sense, to promote thinking, and sharing of ideas, and learning together.

So that’s the HASTAC origin story. We didn’t have the term participatory learning back then, of course–that’s a relatively new term. But social learning, creatively designing tools, and thinking critically about the role of technology in human life and in all aspects of society, were what we were pushing from the very beginning.

This shift, it seems, is not just about “digital humanities” but about the humanities in general. In a piece you published in PMLA this year, you called it “Humanities 2.0,” writing:

Humanities 2.0 is distinguished from monumental, first-generation, data-based projects not just by its interactivity but also by an openness about participation grounded in a different set of theoretical premises, which decenter knowledge and authority. Additional concepts decentered by Web 2.0 epistemologies include authorship, publication, refereeing, collaboration, participation, customizing, interdisciplinarity, credentialing, expertise, norms, training, mastery, hierarchy, taxonomy, professionalism, rigor, excellence, standards, and status.2

Where in particular do you see resistance in the humanities around the idea of participatory learning? 
I think it butts up against a number of issues. One is hierarchy and credentialing. If we’re going to be thinking about participatory or social learning, what does that do to the idea of expertise? I personally don’t think it really undermines it, but many of the formal ways that we evaluate good work–mainly peer review–will undergo a significant transformation, or at least an expansion. As I said in the piece in PMLA,

The very concept of peer review needs to be defined and interrogated. We use the term as if it were self-explanatory and unitary, and yet who does and does not count as a peer is complex and part of a subtle and often self-constituting (and circular) system of accrediting and credentialing (i.e., “good schools” decide what constitutes a “good school”). We peer-review in different modes in different circumstances. (I’ve known some kind teachers to be savage conference respondents–and vice versa.) Humanities 2.0 peer review extends and makes public the various ways in which we act as professionals judging one another and contributing to one another’s work, whether subtly or substantively.3

David Theo Goldberg and I wrote a draft of a book called The Future of Learning Institutions in a Digital Age that’s been up for the last year on the Institute for the Future of the Book’s collaborative website, so that any human in the world can give us feedback and make comments on our book and on our ideas. As I wrote in the PMLA piece, it’s a little scary to have “track changes” available to the world, to anybody who has an internet connection and wants to register. Most of us don’t put our work up to that kind of scrutiny when it’s in draft form. That’s pretty terrifying. So I have some empathy for people who have these resistances to Humanities 2.0.

Despite how terrifying it is to lay your work open like that, you say it has been worth it. To quote the PMLA piece again:

Is this new process worth the trouble? Immeasurably. The project has exposed us to bibliographies from many different fields, to the specific uses of terminologies (and their histories) within fields. It has been one of the most fluidly interdisciplinary exchanges that I have yet experienced. It has also taught me how one’s words can signal meanings one didn’t intend. Reader response is humbling; it is also illuminating. So much of what passes in our profession for response is actually restatement of one’s original premises. In an interactive collaborative exchange, one often gains a better sense of assumptions unfolding, a process that helps make one’s unstated premises visible (especially to oneself).4

That seems like the “peer review” version of what you called earlier “collaboration by difference.” Do you see this becoming common practice?  
I don’t think it is yet clear how much radical reorganization people in the humanities and social sciences want to do. If you carry through the conclusions of social and participatory learning, you come to deep issues that our fields may not wish to interrogate; we pass on these assumptions, often unspoken ones, from generation to generation. I’m teaching a class next semester called “The Early American Novel and Other Fictions.”  Talking about this course, I commented recently on a blog that every term in that title has to be interrogated, because that is what English studies are based on. Early–periodization. American–nation. Novel–genre. So, periodization, genre, and nation are the pillars of how we post job offers, how we recruit people to English departments, how we define our field, how we define specializations.5

Can you say more about how participatory learning potentially destabilizes disciplinary categories? Is it because it reorganizes expertise?
Let’s stay with the example of English departments and their reliance on periodization, genre, and nationalism. Personally, I’m not sure that any of those categories is relevant anymore, to the intellectual world we live in today or to the ways most of us do research. We are always reaching back, no matter what our field, to other sources, earlier examples, and we are constantly casting about in contemporary theory for constructs that help us to see our field more clearly. Ideas rarely have genres and rarely have national borders. Most of us know that.  Yet to redefine what is important in a productive way to the field itself requires enormous upheaval. It isn’t easy to redefine your field and interrogate its most basic structuring principles. It requires a lot of work and results in a lot of acrimony and often the result is backlash that lands you back where you started, but with irreparably bruised, battered, and bitter colleagues. That’s one reason people create new interdisciplinary fields or even virtual organizations such as HASTAC. It is far easier to start new interdisciplinary movements from scratch with like-minded individuals than to try to change existing disciplines from inside. I always believe that if the new field succeeds, if it generates intellectual excitement, then it will feed back into and change the traditional discipline in a far more productive way, in the end, than engaging in departmentally-based attempts at disciplinary reform. And if the excitement is elsewhere, and departments dig in their heels and refuse to respond to it, their enrollments inevitably shrink, and they shrink into irrelevance. So be it.  Those are choices that disciplines make.

What if we turn from the humanities as a profession to the “classroom.” Should we be teaching students how to be effective participatory learners? How do we cultivate critical participatory learners or participatory knowledge creators?
I think that students are fabulous at participatory learning outside the classroom. When they are in the classroom, at any institution of higher learning, they have succeeded their entire lives by excelling in a hierarchical model of learning of the kind that Ichabod Crane would be quite familiar with. To switch to the flickr “this photo sucks” kind of learning in an educational setting where–at least metaphorically–you’re used to sitting in rows, looking straight ahead to the teacher, handing in your work on time, getting your A from the teacher, doing what’s necessary to get that A, passing your PSATs, passing your SATs with flying colors, taking after-school cram school in order to do better on your SATs: after a lifetime of such preparation, it’s really hard to switch modes. I mean, we’ve been training kids from infancy.

We know that even two-year-olds recognize when they are in “teaching situations.” Infant developmental studies show that when you address toddlers in teaching mode, they sit straighter, their pupils dilate, they turn their heads less. By the time they are 18, they think education is this posture of attention to superiors who have knowledge to impart to them–the whole hierarchy. Kids who are coming into college now were born around 1990 or 1991. So we’ve had a whole generation not just trained in Web 2.0, but also in the fact that once you enter the schoolroom, Web 2.0 is over. It’s not easy to teach them how to integrate the participatory learning from their social interactions and online extracurricular life into an educational setting that, structurally, remains entirely Ichabodian. You can’t exactly say “Participate freely or I’ll smack you with this (institutional) yardstick!” Right now, for most students, the Internet’s openness is like a dirty secret you’re not allowed to talk about in front of your teachers. The whole system of credentialing, grading, evaluating, writing recommendations, all of that, is antithetical to true participatory learning formats and learning communities. Higher education has never figured out if its primary goal is learning or if its primary goal is training citizens for elite positions of class power and leadership. The whole system of ranking (among institutions and among students) is based on “distinctions,” as Bourdieu would say. Participatory learning, especially when it is anonymous, contests the bases and even the sanctity of many of those distinctions.

Do you think it would be possible, either within the HASTAC network or outside of it, to have some kind of thriving community among higher education faculty that would actually help us understand what we are learning, help people make sense of where participatory pedagogies are going?
Yes, I think a lot of it is happening already, even if it is around the edges. As I’ve said, change happens from the edge and then moves back into the center so this is as it should be, although I wish it were happening far faster. A lot of new networks are being formed, such as Classroom 2.0 which is mostly for high school teachers. Or, for example, Savage Minds is this great collaborative blog in anthropology that a number of young scholars have started which is getting enough attention that some people within the cultural anthropology establishment have even worried about it, asking, Hey, how come you are making pronouncements? What entitles you? Who gave you permission? What gives you the right to comment on anthropology? Every field needs the equivalent of Savage Minds. And that is happening, more and more.

Within HASTAC, we have an exciting new program which gives intellectual leadership not just to junior faculty but to graduate and even some undergraduate students and some practitioners in the field. We asked board members to nominate up to six students per institution as HASTAC Scholars and to support each of them with a very modest fellowship ($300 per student). The selection was rigorous, and so the director of the program, Erin Gentry Lamb (who is herself a doctoral student at Duke), wrote each HASTAC Scholar an impressive letter signaling to them, their chairs, their deans, and future employers that they have been chosen to be the intellectual leaders of a new field. We now have fifty-six HASTAC Scholars representing twenty-one institutions. They can blog any time they want about what’s happening at their institutions, what’s happening around the world, what’s happening in their intellectual lives. Every two or three weeks a HASTAC Scholar also hosts an online forum, typically using SEESMIC (a vlog-to-vlog format) as well as blogs with discussion boards. We’ve had HASTAC Scholars forums on teaching in Second Life and other metaverses, on fair use, on academic electronic publishing, and on the role of history in the study of new media. A HASTAC Scholar also co-hosted a forum with Howard Rheingold (Smart Mobs), and over 6,000 people tuned in to that forum on participatory learning, many of them participating. The HASTAC Scholars themselves model the excitement with their own work across many different fields.

What were the key themes of that Forum? Where did he locate learning issues in relation to participatory learning?
In that Forum, Rheingold talked about how we should think about social media environments as where today’s students live, and how he has observed that “student-led collaborative inquiry, and some student involvement in the selection and application of the texts to that inquiry, enlists their enthusiasm in ways that even very good lectures and excellent texts and otherwise excellent class discussions don’t.”  In that session, he emphasized the importance of helping students to develop “meta-skills” of critical inquiry around these media. For example, he thinks some of his most effective teaching happens when he doesn’t lay out the connections in the material too clearly, leaving the students to develop what he calls “the meta-skill of path-finding.” Or, and this goes back to our discussion on peer review, he talks about how the responsibility for questioning the authority of the text belongs not to the publisher but to the readers. He also describes the “meta-skill of developing an individual voice in a collaborative environment.”6

That sounds like the application of participatory learning to the project of educational transformation itself!

Yes, that’s the point. With these HASTAC Scholar forums, we have the most exciting group of  undergraduates and graduate students putting their interests out there, and showing their professors and advisors how much interest there is in these new intellectual areas. What we’re doing is saying, Let’s jump ahead by going directly to the students to see what their interests are and let’s support those interests in every way we can. Let’s see if we can’t push education in a Web 2.0 way through a network we’re creating from the students on up instead of from the top down. But we certainly give them a safety net in the fact that they are nominated by scholars who are among the most respected in the country. We don’t want young scholars to have to fight this fight; we want to be able to support their future by exemplifying what they contribute rather than “plea bargaining” for it. In other words, instead of trying to preach to people who aren’t converted yet, we’re trying to build strength and networks and solidarity and credentialing and refereeing and respectability for the people who are there, on the assumption that if something’s really exciting, people gravitate to it. We are positive that being an active and visible presence in the HASTAC Scholars program will be an asset when these students are pursuing their careers. What will be exciting is when, a few years out, we turn to these assistant professors and have them nominate their best students as HASTAC Scholars.

HASTAC is a virtual community of about 1700 members. It is voluntary and very loose. No dues. If you participate, you’re likely to be put on the Steering Committee.  It is what people want it to be, and decentralization is key. We advertise one another’s projects and work and, if we do the advertising, then the home institution credits HASTAC as one of the contributors to the project. Other viral communities are springing up. At present, I think this is the right way to go. Maybe that will change but, at present, it seems as if it would be exactly wrong to try to capture the flux.  It’s better, I think, to try to ride this moment of transition as the Information Age changes just about every aspect of social interaction, political organization, intellectual exchange, and, more slowly but surely, education. Personally, I think it would be wrong to institutionalize because institutions move far more slowly than the Information Age. We live in a time where we all need to relax a little and accept the fact that we live in one of the world’s great, epistemic eras of communication and information and intellectual transformation. We cannot stop it.  And I, for one, wouldn’t want to. The best we can do, as true intellectuals, is for each of us to work to understand how what we are doing best capitalizes upon, helps us all to understand, and in other ways appreciates the fact that we live in one of the most exciting and challenging ages in recent human history. As we HASTAC’ers keep saying, this is not the age of technology. It is the age of information. We educators, we human and social scientists, need to accept that this is our age and take up the challenge.


1. HASTAC Initiative, “Digital Media and Learning Competition.” [return to text]
2. Cathy N. Davidson, “Humanities 2.0: Promise, Perils, Predictions,” PMLA 123, no. 3 (May 2008): 711-712. [return to text]
3. Davidson, 711. [return to text]
4. Davidson, 712-713. [return to text]
5. Cathy Davidson, “This is Your Brain on the Internet” (blog entry, Sept. 8, 2008), and “Youth in the Humanities Fourth Great Internet Age” (blog entry, Sept. 19, 2008). [return to text]
6. HASTAC Scholars Discussion, “HASTAC welcomes Howard Rheingold for a discussion on participatory learning” (Aug. 24, 2008). [return to text]

The Difference that Inquiry Makes: A Collaborative Case Study on Technology and Learning, from the Visible Knowledge Project

This collection of essays from the Visible Knowledge Project is edited by Randy Bass and Bret Eynon, who served together as the Project’s Co-Directors and Principal Investigators. The Visible Knowledge Project was a collaborative scholarship of teaching and learning project exploring the impact of technology on learning, primarily in the humanities. In all, about seventy faculty from twenty-two institutions participated in the Visible Knowledge Project over five years. Participating campuses included six research universities (Vanderbilt University, the University of Alabama, Georgetown University, the University of Southern California, Washington State University, and the Massachusetts Institute of Technology), four comprehensive public universities (Pennsylvania’s Millersville University, California State University (CSU)–Monterey Bay, CSU Sacramento, Ohio’s Youngstown State University, and participants from several four-year colleges in the City University of New York system, including City College, Lehman, and Baruch), and three community colleges (two from CUNY–Borough of Manhattan Community College and LaGuardia Community College–and California’s Cerritos College). In addition to campus-based teams, a number of independent scholars participated from a half dozen other institutions, such as Arizona State and Lehigh University.

The project began in June 2000 and concluded in October 2005. We engaged in several methods for online collaboration to supplement our annual institutes, including an adaptation of the digital poster-tool created by the Knowledge Media Lab (Carnegie Foundation), asynchronous discussion, and web-conferencing. For more detailed information, see the VKP galleries and archives. You can find PDF files formatted for printing attached at the end of each article.

Capturing the Visible Evidence of Invisible Learning

This is a portrait of the new shape of learning with digital media, drawn around three core concepts: adaptive expertise, embodied learning, and socially situated pedagogies. These findings emerge from the classroom case studies of the Visible Knowledge Project, a six-year project engaging almost 70 faculty from 21 different institutions across higher education. Examining the scholarly work of VKP faculty across practices and technologies, it highlights key conceptual findings and their implications for pedagogical design.  Where any single classroom case study yields a snapshot of practice and insight, collectively these studies present a framework that bridges from Web 1.0 to Web 2.0 technologies, building on many dimensions of learning that have previously been undervalued if not invisible in higher education.

Reading the Reader

Many teachers wonder what happens (or doesn’t happen) when students read text. What knowledge do students need, gain, or seek when reading? Drawing on VKP’s early emphasis on technology experimentation, Sharona Levy adapted a proven annotation-based reading method from paper to computer. Using the comment feature in Word, students’ reading processes became more transparent, explicit, and traceable, allowing her to diagnose gaps in understanding and to encourage effective reading strategies.

Close Reading, Associative Thinking, and Zones of Proximal Development in Hypertext

How can we teach students to slow down their reading process and move beyond surface-level comprehension? Patricia O’Connor’s Appalachian Literature students co-constructed hypertexts which capture the connections readers make among assigned texts, reference documents, and multimedia sources. These hypertexts became more than artifacts of student work: they became collaborative, exploratory spaces where implicit literary associations become explicit.

Inquiry, Image, and Emotion in the History Classroom

With increased online access to historical sources, will students “read history” differently among such artifacts as text, image, or video? Questioning his own assumptions of students’ abilities to analyze historical sources, Peter Felten conducted pedagogical investigations to understand student interpretation of a variety of sources. Designing the use of visual artifacts in the classroom helped students learn not only how to interrogate and interpret primary sources, but also how to construct original arguments about history. Students’ understanding of history deepened while they became emotionally engaged with the material.

From Looking to Seeing: Student Learning in the Visual Turn

Rather than simply using primary source images as illustrations for his course on Power, Race, and Culture in the U.S. City, David Jaffee wanted to teach his students how to interpret visual texts as a historian would. By paying close attention to his students’ readings of images, Jaffee was able to develop ways to scaffold their analysis, teaching them how to move beyond “looking” at isolated images to “seeing” historical context, connection and complexity.

Engaging Students as Researchers through Internet Use

Effective habits of research begin early and should be practiced often. Unearthing discoveries, making connections, and evaluating judiciously are research traits valued by Taimi Olsen in her first-year composition course. These habits should not be confined to the library: Olsen argues that applying them in online archives hones students’ abilities as expert researchers.

Trace Evidence: How New Media Can Change What We Know About Student Learning

Clicker technology, often used in large-enrollment science courses, works well when every question has a single right answer. Lynne Adrian wanted to find out whether clickers could be used in disciplines which raise more questions than answers, and how illuminating the gray areas between “right” and “wrong” could help her students think critically about American studies. She found that the technology allowed her to preserve traces of the otherwise ephemeral class discussions, enabling her to analyze the types of questions she was asking in class and to track their effects on students’ written work throughout the semester.

Shaping a Culture of Conversation: The Discussion Board and Beyond

What happens when the discussion board goes from being just an assignment to a springboard for intellectual community? Foreseeing many benefits to cultivating discussion among his English students, Ed Gallagher worked to develop frameworks to articulate why discussion is not only central to the learning process in the classroom but also beyond its walls. A higher level of critical analysis, reflection, and a synthesis of multiple perspectives turned class discussions into artful conversations.

The Importance of Conversation in Learning and the Value of Web-based Discussion Tools

In this essay Heidi Elmendorf and John Ottenhoff discuss the central role that intellectual communities should play in a liberal education and the value of conversation for students, and they explore the ways in which web-based conversational forums can best be designed to fully support these ambitious learning goals. Coming from very different fields (biology and English literature) and different course contexts (a microbiology course for non-majors and a Shakespeare seminar), they nonetheless discover shared core values and design issues by looking closely at the discourse produced in online discussions. Centrally, they connect what they identify as expert-like behavior to the complexities of intellectual development in conversational contexts.

Why Sophie Dances: Electronic Discussions and Student Engagement with the Arts

Paula Berggren struggled to engage her students in critical thinking about unfamiliar art forms, until she posed a simple question on the class’s online discussion board: “Why do people dance?” She found that the students’ responses, rather than being just less-polished versions of what they might write in formal essays, warranted close analysis in their own right. In subsequent teaching, Berggren continues to incorporate some version of a middle space for student work, which not only increases students’ engagement but also allows her to observe and document their thought processes.

Connecting the Dots: Learning, Media, Community

Sometimes the research question you ask isn’t the one you end up answering. Elizabeth Stephen recounts how a debate about the use of films in a freshman seminar led to an experiment in forming a community of scholars which could be sustained over time and across distances. Creating online spaces for students in this group to share their reflections with one another strengthened the ties among them, while allowing Stephen to analyze the multiple elements, both academic and social, which made this a successful learning community.

Focusing on Process: Exploring Participatory Strategies to Enhance Student Learning

Confronting the challenge of improving student writing in a large sociology class, Juan José Gutiérrez developed a software-based peer review process. He required students to evaluate one another’s papers based on specific criteria and to provide constructive feedback. He found that not only did this process help with the logistics of paper-grading, but it also allowed him to adapt his teaching to address specific concerns indicated by qualitative and quantitative analysis of the peer reviews.

Theorizing Through Digital Stories: The Art of “Writing Back” and “Writing For”

Discovering how digital stories engage students in critical, theoretical frameworks lives at the center of Rina Benmayor’s work. In her course, Latina Life Stories, Benmayor asked each student to tell his or her own life story digitally and then situate the story within a theoretical context. While this process engaged students in creative theorizing, it also allowed her to document methods for recognizing the quality of student work, resulting in a flexible and intuitive rubric she could use beyond this experience.

Video Killed the Term Paper Star? Two Views

Two instructors from separate disciplines discuss what happens when alternative multimedia assignments replace traditional papers. Peter Burkholder found that the level of engagement changed dramatically in his history courses, while Anne Cross discovered new avenues for talking about sensitive subjects in sociology. Together, both professors explore the advantages and opportunities of video assignments that challenge students to synthesize information in critical and innovative ways.

Producing Audiovisual Knowledge: Documentary Video Production and Student Learning in the American Studies Classroom

Traditionally, academic institutions have segregated multimedia production from disciplinary study. Bernie Cook wondered what his American Studies students would learn from working collaboratively to produce documentary films based on primary sources, and what he in turn might find out about their learning in the process. Students created documentary films on local history, and wrote reflections on their creative and critical process. Not only did students report tremendous engagement with the topics and sources for their projects, they also indicated satisfaction at being able to screen their work for an audience. By allowing his students to become producers of content, Cook enables them to participate fully in the intellectual work of American Studies and Film Studies.

Multimedia as Composition: Research, Writing, and Creativity

Viet Thanh Nguyen reflects on a three-year experiment in assigning multimedia projects in courses designed around the question “How do we tell stories about America?” Determined to integrate multimedia conceptually into his courses, rather than tacking it onto existing syllabi, Nguyen views multimedia as primarily a pedagogical strategy and secondarily a set of tools. Exploring challenges and opportunities for both students and teachers in using multimedia, he outlines principles for teaching with multimedia, and concludes that, while not for everyone, multimedia can potentially create a transformative learning experience.

Looking at Learning, Looking Together: Collaboration across Disciplines on a Digital Gallery

What does it mean for two community college colleagues, teaching in very different disciplines, to work together on a Scholarship of Teaching and Learning (SoTL) project?  What happens when they join together to examine their students’ work, their individual teaching practice, and the possibilities for collaborative research?  And what do they learn when they undertake an electronic publication of that work in a digital gallery?

“It Helped Me See a New Me”: ePortfolio, Learning and Change at LaGuardia Community College

What happens if we shift the focus of our teaching and learning innovations from a single classroom to an entire institution? What new kinds of questions and possibilities emerge? Can an entire college break boundaries, moving from a focus on “what teachers teach” to a focus on “what students learn?” Can we think differently about student learning if we create structures that enable thousands of students to use new media tools to examine their learning across courses, disciplines, and semesters? Bret Eynon explores these questions as he analyzes the college-wide ePortfolio initiative at LaGuardia Community College. Studying individual portfolios and focus group interviews, he also examines quantitative outcomes data on engagement and retention to better consider ePortfolio’s impact on student learning.

From Narrative to Database: Multimedia Inquiry in a Cross-Classroom Scholarship of Teaching and Learning Study

Michael Coventry and Matthias Oppermann draw on their work with student-produced digital stories to explore how the protocols surrounding particular new media technologies shape the ways we think about, practice, and represent work in the scholarship of teaching and learning. The authors describe the Digital Storytelling Multimedia Archive, an innovative grid they designed to represent their findings, after considering how the technology of delivery could impact practice and interpretation. This project represents an intriguing synthesis of digital humanities and the scholarship of teaching and learning, raising important questions about the possibilities for analyzing and representing student learning in Web 2.0 environments.

Multimedia in the Classroom at USC: A Ten Year Perspective

Does multimedia scholarship add academic value to a liberal arts education? How do we know? Looking back at the history of the Honors Program in Multimedia Scholarship at USC, Mark Kann draws on his own teaching experience, discussions with other faculty members, and the university’s curriculum review process to explore these questions. He describes the process of developing the program’s academic objectives and assessment criteria, and the challenges of gathering evidence for his intuitions about the effects of multimedia scholarship. Finally, Kann reports on the program’s first student cohort and looks ahead to the future of multimedia at USC.

Capturing the Visible Evidence of Invisible Learning

by Randy Bass and Bret Eynon

Note: This is a synthesis essay for the Visible Knowledge Project (VKP), a collaborative project engaging seventy faculty at twenty-one institutions in an investigation of the impact of technology on learning, primarily in the humanities. As a matter of formatting for the Academic Commons space, this essay is divided into three parts: Part I (overview of the project, areas of inquiry, introduction to findings); Part II (discussion of findings, with a focus on adaptive expertise and embodied learning); Part III (discussion of findings continued, with a focus on socially situated learning, and conclusion). A full-text version of this essay is available as a pdf document here.
Here, in this forum as part of Academic Commons, the essay complements eighteen case studies on teaching, learning, and new media technologies. Together the essay and studies constitute the digital volume “The Difference that Inquiry Makes: A Collaborative Case Study of Learning and Technology, from the Visible Knowledge Project.” For more information about VKP, see the VKP galleries.

Déjà 2.0
Facebook. Twitter. Social media. YouTube. Viral marketing. Mashups. Second Life. PBWikis. Digital Marketeers. FriendFeed. Flickr. Web 2.0. Approaching the second decade of the twenty-first century, we’re riding an unstoppable wave of digital innovation and excitement. New products and paradigms surface daily. New forms of language, communication, and style are shaping emerging generations. The effect on culture, politics, economics and education will be transformative. As educators, we have to scramble to get on board, before it’s too late.

Wait a minute. Haven’t we been here before? Less than a decade ago, we rode the first wave of the digital revolution–email, PowerPoint, course web pages, digital archives, listservs, discussion boards, etc. As teachers and scholars, we dove into what is now called Web 1.0, trying out all sorts of new systems and tools. Some things we tried were fabulous. Others, not so much. Can we learn anything from that experience? What insights might we garner that could help us navigate Web 2.0? How can we separate the meaningful from the trivial? How do we decide what’s worth exploring? What do we understand about the relationship of innovations in technology and pedagogy? What can we learn about effective ways to examine, experiment, evaluate, and integrate new technologies in ways that really do advance learning and teaching?
The teaching and research effort of the Visible Knowledge Project (VKP) could be a valuable resource as we consider these questions. Active from 2000 to 2005, VKP was an unusual collective effort to initiate and sustain a discipline-based examination of the impact of new digital media on education. A network of around seventy faculty from twenty U.S. colleges, primarily from American history and culture studies departments, gathered not only to experiment with new technologies in their teaching, but also to document and study the results of their inquiries, using the tools of the scholarship of teaching and learning. In this collaborative and synoptic case study, under the title The Difference that Inquiry Makes, we try to capture and make sense of the visible evidence of this relatively invisible learning as it emerged over five (and more) years of collaborative classroom inquiry. We share participants’ reports on key elements of the VKP inquiry, and integrate their reports into a framework that can help us learn from this experience as we navigate a fast-changing educational landscape.

Invisible Learning
What do we mean by “invisible learning?” We use this phrase to mean at least two things. First, it points us to what Sam Wineburg, in his book Historical Thinking and Other Unnatural Acts, talked about as “intermediate processes,” the steps in the learning process that are often invisible but critical to development.1 All too often in education, we are focused only on final products: the final exam, the grade, the perfect research paper, mastery of a subject. But how do we get students from here to there? What are the intermediate stages that help students develop the skills and habits of master learners in our disciplines? What kinds of scaffolding enable students to move forward, step by step? How do we, as educators, recognize and support the slow process of progressively deepening students’ abilities to think like historians and scholars? In VKP, from the beginning, we tested our conviction that digital media could help us to shine new light on–to make visible–and to pay new attention to these crucial stages in student learning.

Second, by invisible learning we also mean the aspects of learning that go beyond the cognitive to include the affective, the personal, and issues of identity. Cognitive science has made great strides in recent years, scanning the brain and understanding everything from synapses and neurons to perception and memory. Educators are still struggling to grasp the implications of this research for teaching and learning. However, perhaps because it is less “scientific,” higher education has paid considerably less attention to (and is even less well prepared to deal with) the role of the affective in learning and its relationship to the cognitive. How does emotion shape engagement in the learning process? How do we understand risk-taking? Community? Creativity? The relationship between construction of knowledge and the reconstruction of identity? In VKP we explored the ways that digital tools and processes surfaced the interplay between the affective and the cognitive, the personal and the academic.

Visible Evidence
Education at all levels has largely taken on faith that if teachers teach, students will learn–what could be seen as a remarkable, real-life version of “If you build it, they will come.” In recent years, calls for greater accountability have produced a new emphasis on standardized testing as the only appropriate way to assess whether students are learning. Meanwhile, growing numbers of faculty in higher education have taken a different approach, engaging in the scholarship of teaching and learning–using the tools of scholarship to study their own classrooms–to deepen their understanding of the learning process and its relationship to teacher practice. Spurred by the ideas of Ernest Boyer and Lee Shulman of the Carnegie Foundation for the Advancement of Teaching, faculty from many disciplines have posed research questions about student learning, gathered evidence from their classrooms, and gone public with their findings in countless conference presentations, course portfolios, and scholarly journals. This movement, with its focus on classroom-based evidence, provided key tools and language for the Visible Knowledge Project. It allowed VKP faculty to study the impact of new technologies on learning and teaching, and it also helped us frame questions about problems and practice, inquiry and expertise that remain critical as we move into a new phase of technological innovation and change.2

The Visible Knowledge Project
The Visible Knowledge Project emerged in 2000 from the juxtaposition of these two powerful yet largely distinct trends in higher education–the scholarship of teaching and learning movement and the initial eruption of networked digital technologies into the higher education classroom. Responding to a dynamic combination of need and opportunity, faculty engaged in multi-year teaching and learning research projects, examining and documenting the ways the use of new media was reshaping their own teaching and patterns of student learning. Participating faculty came from a wide range of institutions, from community colleges and private liberal arts colleges to research universities; from Georgetown and USC to Youngstown State, the University of Alabama, and the City University of New York (CUNY). Meeting on an annual basis, and interacting more frequently in virtual space, we formed research questions spanning a broad spectrum, shared ideas about research strategies, discussed emerging patterns in our evidence, and formulated our findings. The digital resources used ranged from Blackboard and PowerPoint to interactive online archives and Movie Maker Pro. The VKP galleries provide a wealth of background information, including lists of participants, regular newsletters, and reports from more than thirty participants, as well as a number of related resources and meta-analyses.3

The VKP ethos was formed by a belief in the value of messiness, of unfolding complexity, of adventurous, participant-driven inquiry that would inform the nature of the collective conversation. A few scientists and social scientists entered the group and helped create exciting projects, but the vast majority of the participants were from the fields of history, literature, women’s studies and other humanist disciplines. While technology was key to our raison d’être, our inquiries often evolved to focus on issues of pedagogy that transcended individual technologies. We wanted to learn about teaching, to learn about learning. We wanted to go beyond “best practice” and “what worked” to get at the questions about why and how things worked–or didn’t work. In some cases, we went further, rethinking our understanding of what it meant for something to “work.” Our questions were evolving, shaped by the exigencies of time and funding as well as our on-going exchange and new technological developments. We struggled with ways to nuance and realize our inquiries, to come up with workable methods and evidence that matched our changing and, we hoped, increasingly sophisticated questions.

Over the course of the Project, we found that participants’ teaching experiments started to group in three areas:

  1. Reading–Engaging ideas through sources/texts: As VKP took shape at the end of the twentieth century, the great museums, universities, and research libraries of this country were mounting their collections on the Web. Web sites such as the American Memory Collection of the Library of Congress vastly expanded the availability of archival source materials online. It was a time, as Cathy Davidson put it recently, of digitally-driven “popular humanism.”4 Responding to this opportunity, VKP’s historians and culture studies faculty explored the effectiveness of active reading strategies using primary sources, both textual and visual, for building complex thinking. Introducing students to the process of inquiry, faculty tested combinations of pedagogy and technology designed to help students “slow down” their learning, interpret challenging texts and concepts, and engage in higher order disciplinary and interdisciplinary practices. For example, Susan Butler, teaching an introductory history survey at Cerritos College, had her students examine primary sources on different facets of the Trail of Tears, made available online by the Great Smoky Mountains National Park, PBS, and the Cherokee Messenger; as students grappled with perspective and the evolving definition of democracy in America, Butler examined evidence of the ways that scaffolded learning modules incorporating online primary sources could expand students’ capacity for critical analysis. Meanwhile, Sherry Linkon at Youngstown State used online archives to help students in her English course create research papers that contextualized early twentieth-century immigrant novels.
And Peter Felten at Vanderbilt integrated online texts, photographs and videos into a history course on the 1960s, analyzing the ways students did–or didn’t–apply critical thinking skills to visual evidence. Across the board, the focus was less on “searching” and “finding” than on analyzing, understanding, and applying evidence to address authentic problems rooted in the discipline. Testing innovative strategies, faculty asked students to model the intellectual behaviors of disciplinary experts, focusing earlier and more effectively on the learning dimensions that characterize complex thinking. (For sample projects addressing these questions, see )
  2. Dialogue–Discussion and writing in social digital environments: As VKP faculty moved into the world of Blackboard and Web-CT, they explored ways that discussion and social writing in online environments can foster learning. Projects explored strategies for using online communication to make the intermediate processes of learning more visible and to provide opportunities for students to develop personal and academic voice. For example, Mills Kelly, teaching a Western Civilization survey at Virginia’s George Mason University, focused on the possibilities of using online tools, including the WebCT discussion board and a special GMU Web Scrapbook, as tools for enhancing collaborative learning. Meanwhile, Ed Gallagher at Lehigh University tested the impact of his detailed and creative guidelines for students in prompting more interactive and substantial discussion in an online context. In general, carefully structured online discussion environments provided students and faculty a context in which to think socially; they also allowed discussion participants to document, retrieve, and reflect on earlier stages of the learning process. This ability to “go meta” offered a new way for students and faculty to engage more deeply with disciplinary content and method. Highlighting the scaffolding strategies that might maximize student learning, these projects gathered evidence of learning that reflected the social and affective dimensions of these digitally-based pedagogical practices. (For sample projects, see )
  3. Authorship–Multimedia construction as experiential learning: As multimedia authoring became easier to master in these years, faculty became interested not only in creating multimedia presentations and Web sites; they also sought to develop ways to put these tools into the hands of students. Many VKP scholar-teachers were guided by the constructivist notion that learning deepens when students make knowledge visible through public products. In the projects clustered here, student authorship takes place in various multimedia genres of the early twenty-first century, including digital stories and digital histories, Web sites and PowerPoint essays, historically-oriented music videos, electronic portfolios and other historical and cultural narratives. The emergent pedagogies explored by these scholar-teachers involve multiple skills, points of view, and collaborative activities (including peer critique). For example, Patricia O’Connor had her Appalachian literature students at Georgetown University create Web pages about Dorothy Allison’s Bastard Out of Carolina, annotating particular phrases and creating links to historical sources and images, while she investigated the ways that “associative thinking” shaped students’ ability to make nuanced speculations about literary texts.
    Meanwhile, Tracey Weis at Pennsylvania’s Millersville University and several faculty at California State University at Monterey Bay gathered evidence on the cognitive and emotional impact of student construction of short interpretative “films,” or what we came to call “digital stories.” Examining the qualities of student learning evidenced through such assignments, these projects spotlight issues of assessment and the need to move beyond the narrowly cognitive quiz and the critical research essay to find ways to value creativity, design, affect, and new modes of expressive complexity. (For sample projects, see )

Naturally, these three areas of classroom practice–critically engaging primary sources, social dialogue, and multimedia authorship–converged in all kinds of ways. Some of the richest and most intriguing projects engaged students in a scaffolded process of collaborative research and writing, laying the groundwork for multimedia-enhanced performances of their learning. Our fluid categories were defined and redefined by the creativity of our faculty as they experimented within them.

The key to faculty innovations in VKP was not merely trying new teaching strategies but looking closely at the artifacts of student work that emerged from them, not only in traditional summative products such as student writing, but in new kinds of artifacts that captured the intermediate and developmental moments along the way. What did these artifacts look like? They included video evidence of students working in pairs on inquiry questions, as well as student-generated Web archives and research logs; they included careful analysis of discussion threads in online spaces and student reflections on collaborative work; they included not only new forms of multimedia storytelling but evidence of their authoring process through interviews and post-production reflections about their intentions and their learning. One of the consequences emerging from these new forms of evidence was that, as faculty looked more closely and systematically at evidence of learning processes, those processes started to look more complex than ever. The impact of transparency, at least at first, seemed to be complexity, which can be unsettling in many ways.

Pieces of Insight
This phenomenon had a significant impact on the kinds of findings and claims that emerged from this work. We set out looking for answers (“what is the impact of technology on learning?”) and what we mostly found were limited claims about impact, new ways of looking at student learning, and often dynamic new questions. In fact, the VKP projects followed a pattern typical in faculty inquiry.  Whatever the question that initiates the inquiry, it often changes and deepens into something else. For example, Lynne Adrian (University of Alabama) started off investigating the role of personal response systems (“clickers”) in a large enrollment Humanities course to see if the use of concept questions would increase student engagement, but was soon led to reflect much more interestingly on the purpose of questions in class and the very nature of the questions she had been asking for more than twenty years. Similarly, Joe Ugoretz (Borough of Manhattan Community College), in an early inquiry, hoped to study the benefits of a free-form discussion space in an online literature course, but got frustrated because the students would frequently digress and stray off topic; finally it occurred to him that the really interesting inquiry lay in learning more about the nature of digressions themselves, considering which were productive and which were not. The changing nature of questions, and the limited nature of claims, is not a flaw of faculty inquiry but its very nature. John Seely Brown describes the inevitable way that we build knowledge around teaching: “We collect small fragments of data and struggle to capture context from which this data was extracted, but it is a slow process. Context is sufficiently nuanced that complete characterizations of it are extremely difficult. As a result, education experiments are seldom definitive, and best practices are, at best, rendered in snapshots for others to interpret.”5

Here is where the power of collaborative inquiry came into play. That is, what emerged from each individual classroom project was a piece of insight, a unique local and limited vision of the relationship between teaching and learning that yet contributed to some larger aggregated picture. We had, in the microcosm of the Visible Knowledge Project, created our own “teaching commons” in which individual faculty insights pooled together into larger meaningful patterns.6 Each of these snapshots is interesting in itself; together they composite into something larger and significant. What follows below is our effort at putting together the snapshots to create a composite image in which we recognize new patterns of learning and implications for practice.

A Picture of New Learning: Cross-Cutting Findings

Collectively, what emerged from this work was an expansive picture of learning. Although we started out with questions about technology, early on it became clear that the questions were no longer merely about the “impact of tools” on learning; the emergent findings compelled us to confront the very nature of what we recognized as learning, which in turn fed back into what we were looking for in our teaching. Over the years, faculty experienced iterative cycles of innovation in their teaching practice, of reflection on an increasingly expansive range of student learning, and of experimentation shaped by the deepening complexity (and at times befuddlement) that emerged from trying to read the evidence of that learning. From this spiral of activity developed a research framework with broad implications for the now-emergent Web 2.0 technologies. We have come to articulate this range of cross-cutting findings under the headings of three types of learning: adaptive, embodied, and socially situated.

Briefly, by adaptive learning we mean the skills and dispositions that students acquire which enable them to be flexible and innovative with their knowledge, what David Perkins calls a “flexible performance capability.”7 An emphasis on adaptive capacities in student learning emerged naturally from our foundational focus on visible intermediate processes. What became visible were the intermediate intellectual moves that students make in trying to work with difficult cultural materials or ideas, illuminating how novice learners progress toward expertise or expert-like thinking in these contexts.

Our recognition of the embodied nature of learning emerged from this increased attention to intermediate processes–the varied forms of invention, judgment, reflection–when we realized that we were no longer accounting for simply cognitive activities. Many manifestations of the affective dimension of learning opened up in this intermediate space informed by new media, whether it was the way that students drew on their personal experience in social dialogue spaces, or the sensual and emotional dimensions of working with multimedia representations of history and culture. In these intermediate spaces, dimensions of affect such as motivation and confidence loomed large as well. We have come to think of this expansive range of learning as embodied, in that it pointed us to the ways that knowledge is experienced through the body as well as the mind, and how intellectual and cognitive thinking are embodied by whole learners and scholars.

Just as this new learning is embodied, so too is it socially situated. Influenced by the range of work on situated learning, communities of practice, and participatory learning, our work with new technologies continuously brought us to see the impact that new forms of engagement through media had on students’ stance toward learning. This effect was not merely a sense of heightened interest due to the novelty of new forms of social learning. Rather, what we were seeing was evidence of the ways that multimedia authoring, for example, constructed for students a salient sense of audience and public accountability for their work; this, in turn, had an impact on nearly every aspect of the authoring process–visible in the smallest and largest compositional decisions. The socially situated nature of learning became a summative value, capturing what Seely Brown calls “learning to be,” beyond mere knowledge acquisition to a way of thinking, acting, and a sense of identity.

These three ways of looking at pedagogies–as adaptive, embodied, and socially situated–together help constitute a composite portrait of new learning. Each helps us focus on a different dimension of complex learning processes: adaptive pedagogies emphasizing the developmental stages linking learning to disciplines; embodied pedagogies focusing on how the whole person as learner engages in learning; and socially situated learning focusing on the role of context and audience. In this sense, the dimensions are overlapping and reinforcing in any particular set of practices. For example, consider Patricia O’Connor’s work making use of Web authoring tools to lead students to engage in close reading of print fiction. Calling the activity “hypertext amplification,” O’Connor asks students to make increasingly sophisticated “associational” connections, to move from novice reading encounters with texts to more expert ones. She wants them to experience “associational thinking” on multiple levels, from the personal and emotional to the definitional and critical. Ultimately, students’ ability to engage fully along a continuum of expert practice is shaped by their knowledge that their Web pages will be public, and their presentations to their peers a social act. All three key dimensions are in play in her teaching practices, as in so many of the case studies coming out of VKP.

Nevertheless, we believe it is a valuable exercise to slow down and look closely at each of these three areas, and to begin making sense of how each dimension might be better understood for its shaping influence on learning. We explore each of these areas more fully below.

A Note on Findings
Because faculty inquiry lives at the boundary of theory and practice, we have chosen to present the findings in two forms: as conceptual findings (representing the way theory informed practice, and vice versa) and design findings (representing some of the key claims on practice made by these concepts and values about learning). As a further response to the challenge of representing collective findings in a messy research environment, we also present each area with a set of “tags,” keywords that help associate the findings with various trajectories. Finally, at the end of each finding description we link to several relevant case studies within this volume.

[jump to Part II]

1. Sam Wineburg, Historical Thinking and Other Unnatural Acts (Philadelphia: Temple University Press, 2001). [return to text]
2. Many good resources exist on the scholarship of teaching. Two essential resources can be found at the Carnegie Foundation for the Advancement of Teaching and the Scholarship of Teaching and Learning tutorial at Indiana University, Bloomington. [return to text]
3. In all, more than seventy faculty from twenty-two institutions participated in the Visible Knowledge Project over five years. Participating campuses included six research universities (Vanderbilt University, the University of Alabama, Georgetown University, the University of Southern California, Washington State University, and the Massachusetts Institute of Technology), four comprehensive public universities (Pennsylvania’s Millersville University, California State University (CSU)–Monterey Bay, CSU Sacramento, and Ohio’s Youngstown State University, as well as participants from several four-year colleges in the City University of New York system, including City College, Lehman, and Baruch), and three community colleges (two from CUNY–Borough of Manhattan Community College and LaGuardia Community College–and California’s Cerritos College). In addition to campus-based teams, a number of independent scholars participated from a half dozen other institutions, such as Arizona State and Lehigh University. The project began in June 2000 and concluded in October 2005. We engaged in several methods for online collaboration to supplement our annual institutes, including an adaptation of the digital poster tool created by the Knowledge Media Lab (Carnegie Foundation), asynchronous discussion, and Web-conferencing. For more detailed information, see the VKP galleries and archives. [return to text]
4. Cathy N. Davidson, “Humanities 2.0: Promise, Perils, Predictions,”  PMLA 123, no. 3 (May 2008): 711. [return to text]
5. John Seely Brown, “Foreword,” in Opening Up Education: The Collective Advancement of Education through Open Technology, Open Content, and Open Knowledge (Cambridge: MIT Press, 2008). [return to text]
6. For a broader discussion of the “teaching commons,” see Pat Hutchings and Mary Huber, The Advancement of Learning: Building the Teaching Commons (San Francisco: Jossey-Bass, 2005). [return to text]
7. David Perkins, “What is Understanding?” in Teaching for Understanding: Linking Research with Practice, ed. Martha Stone Wiske (San Francisco: Jossey-Bass, 1998), 39-58. [return to text]

New Media Technologies and the Scholarship of Teaching and Learning: A Brief Introduction to this Issue of Academic Commons

by Randy Bass, Georgetown University

A Bridge to Know-ware
Higher education traditionally has found few systematic ways to build and share knowledge about teaching and learning. It is not surprising, then, that there has been relatively little interaction between those most interested in new technologies and those invested in the scholarship of teaching and learning. Of course there are examples where the two communities intersect, sometimes for robust conversations. Yet much of this talk stays at the level of individual experimentation and focuses on teaching and classroom practice, with very little attention paid to learning. For whatever reason, the quantity and quality of those conversations are far less than we might hope, given the social impact of new technologies and the growing urgency of conversations around active learning, accountability, and assessment.

So, how do we make any headway in a landscape where applied knowledge about learning is inchoate, where forms of learning are expanding in ways higher education is poorly situated to accommodate, and where technological contexts are shifting rapidly and radically? We need, in short, to merge a culture of inquiry into teaching and learning with a culture of experimentation around new media technologies. Our ability to make the best use of any technologies to improve education hinges ultimately on the reciprocal capacities to bring our powers of inquiry to bear on educational technologies, as well as to bring the power of new technologies to bear on our methods of inquiry and our representation of knowledge about teaching practice.

Slowing Down and Looking at Learning
In this issue of Academic Commons we take up these questions by looking at the possibilities for building knowledge around teaching and learning in a rapidly changing technological landscape. Through articles, case studies, interviews and roundtables, the January 2009 issue of Academic Commons explores the continuity of learning issues from Web 1.0 to 2.0 technologies, from online discussion tools, hypertext and multimedia authoring to emergent forms of electronic portfolios, blogs, social networking tools, and virtual reality environments. We take these up in the context of a dual challenge: to understand better the changing nature of learning in new media environments and the potential of new media environments to make learning–and faculty insights into teaching–visible and usable.
The issue opens with a bundled set of essays that form a synoptic case study of the Visible Knowledge Project (VKP), a five-year project looking at the impact of technology on learning, primarily in the humanities, through the lens of the scholarship of teaching and learning.  These case studies explore the ways that faculty inquiring into their students’ learning deepened and complicated their understanding of technology-enhanced teaching. Out of these classroom-based insights emerged a set of findings that constitute a research framework, clustering especially around three broad areas:

  1. Learning for adaptive expertise: the role of new media in making visible the thinking processes intrinsic to the development of expert-like abilities and dispositions in novice learners;
  2. Embodied learning: the impact of new media technologies on the expansion of learning strategies that engage affective as well as cognitive dimensions, renewed forms of creativity and the sensory experience of new media, and the importance of identity and experience as the foundation of intellectual engagement; and
  3. Socially situated learning: the role of social dimensions of new media in creating conditions for authentic engagement and high impact learning.

These broad areas of learning provide a bridge from earlier technology innovation to current new media technologies. They also serve as a way of seeing the capacities of new social media in light of the learning issues intrinsic to disciplinary and interdisciplinary ways of knowing. In this sense, they provide a framework for understanding this period of transformation as one (as Michael Wesch puts it in this issue) where we are shifting from “teaching subjects to subjectivities.” This expansive conception of learning challenges us, then, to cope not merely with technological change but with shifts that are essentially social and intellectual. As Wesch observes in his commentary on the meaning of these changes, “Nothing good will come of these technologies if we do not first confront the crisis of significance and bring relevance back into education. In some ways these technologies act as magnifiers. If we fail to address the crisis of significance, the technologies will only magnify the problem by allowing students to tune out more easily and completely.”

The six additional vision pieces in this issue all provide different lenses onto this transformation. Two pieces–one by Kathleen Yancey and another that is the transcript of the closing session at the ePortfolio conference at LaGuardia Community College in April 2008–look specifically at the current practices and potential of ePortfolios to provide a site that serves both the needs of students to represent themselves and their learning through an integrative digital space and the needs of institutions to find better ways to understand the progress of student learning and intellectual development. A key element in this transformation is shifting the unit of analysis from the learner in a single course to the learner over time, inside and outside the classroom. What does this shift imply for the ways we understand learning and development? If we accept this new learning paradigm, what kinds of accommodations do we need to make in our approaches to the curriculum, the classroom, the role of faculty, and the empowerment of learners?

Other pieces in this issue consider similar shifts. For example, in a sampling excerpt from their book Opening Up Education: The Collective Advancement of Education Through Open Technology, Toru Iiyoshi and M. S. Vijay Kumar look at the potential of “open content, opening technology and open knowledge” to transform higher education. “We must develop not only the technical capability but also the intellectual capacity for transforming tacit pedagogical knowledge into commonly usable and visible knowledge: by providing incentives for faculty to use (and contribute to) open education goods, and by looking beyond institutional boundaries to connect a variety of settings and open source entrepreneurs” (Iiyoshi and Kumar, coming in February).

Confronting our Biases
Yet it seems all too clear that higher education is mostly unprepared to make the most of this new potentiality–of open education and an expansive conception of learning. Gathering and sharing knowledge about educational effectiveness is tricky in an environment in which we rush on to the “next new thing,” as new media pedagogies (as with other emergent pedagogies) often lead to forms of learning that do not neatly fit into traditional frameworks of disciplinary learning and cognitive and critical skills. These new forms of learning–including emotional and affective dimensions, capacities for risk-taking and uncertainty, creativity and invention, blurred boundaries between personal and public expression, and the importance of self-identity to the development of disciplinary understanding–traditionally have been invisible in higher education. As Bret Eynon and I point out in our synthesis essay for the Visible Knowledge Project, “when the invisible becomes visible it is often disruptive,” although usually in productive and generative ways.
That theme of generative disruption runs throughout the pieces in this issue, nowhere more than in Cathy Davidson’s interview about “participatory learning and the new humanities,” where her celebration of the potential of “Humanities 2.0” is counterbalanced by entrenched reluctance to rethink basic practices in our fields, especially around the ways we recognize expertise, collaboration, and creativity. As Davidson puts it (in ways that could speak for most of the authors here),

I guess part of me just doesn’t understand why this isn’t the most exciting time for all of us in our profession and why we aren’t figuring out ways that we can use this to bolster our mission in the world, our methods in the world, our reach in the world, our understanding of what we do and what we have to offer our students in the world? It just feels like we’re in an age where we educators should be the thought leaders and so many of us are futzing around the edges, and I don’t get it.

In this issue of Academic Commons we take the disconnection between experimentation with new media technologies and conversations about learning as a presenting symptom of what Davidson calls “futzing around the edges.” That is, we can only futz because we do not have a vocabulary or a tradition for engaging with learning in meaningful communal ways. In this environment it is especially important to flank classroom-based inquiry with institutional learning, where we can put into practice wide-scale views of learning outcomes as textured as those of faculty who look at learning in their own classrooms. Many of the pieces in this issue provide a starting point for these connections, whether looking at the best institutional practices around electronic portfolios (see Roundtable), or the aspirations of a national project developing flexible rubrics for evaluating the intellectual work of students over time and through diverse intellectual products (“Can We Bridge an Expansive View of Student Learning with Institutional Learning? The VALUE Project Thinks we Can, and Here’s How,” an interview with Terrel Rhodes, coming in February), or the visionary specifications for a flexible repository for the scholarship of teaching and learning, linking local expertise to collective wisdom (Tom Carey, John Rakestraw, and Jennifer Meta Robinson, “Expanding the Teaching Commons in Web 2.0: A New Vision for a Scholarship of Teaching and Learning Repository,” coming in February).

From the local to the virtual, from classroom innovation to “opening up education,” this issue of Academic Commons seeks to make a modest contribution to these questions and our collective endeavor toward addressing them. What binds these case studies and vision pieces together are the aggregated bonds of the three dimensions of learning emerging from the VKP framework: expertise, embodiment, and the social. If we could bridge our incipient sense of meaning for these dimensions in student learning with the full social embodiment of our collective expertise as educators, then we would indeed have a bridge to the future.

Acknowledgements: In putting together this issue I want to thank the supervising editors, Mike Roy and John Ottenhoff, for the invitation and opportunity. I also want to thank Lisa Gates, managing editor of Academic Commons, for her infinite patience and skill in working with such complicated and multi-faceted content. Many thanks to Pat Hutchings of the Carnegie Foundation for the Advancement of Teaching for her support through the years, and especially her reading of the synthesis essay for this volume. I also want to thank my longtime collaborator, Bret Eynon, for his intellectual and spiritual companionship throughout the process; many thanks also to current and former colleagues at the Center for New Designs in Learning and Scholarship (CNDLS) and the Visible Knowledge Project who worked on dimensions of this issue, especially Theresa Schlafly, Susannah McGowan, Eddie Maloney, and John Rakestraw. -RB

Return to Table of Contents for the January 2009 Issue of Academic Commons

In addition to the articles listed in the Table of Contents, the following are forthcoming:

  • Opening Up Education: The Remix, by Toru Iiyoshi and Vijay Kumar. Excerpts from the book Opening Up Education: The Collective Advancement of Education through Open Technology, Open Content, and Open Knowledge, editors Toru Iiyoshi and M.S. Vijay Kumar (Coming in February)
  • Tom Carey, John Rakestraw, and Jennifer Meta Robinson, Expanding the Teaching Commons in Web 2.0: A New Vision for a Scholarship of Teaching and Learning Repository (Coming in February)
  • Can We Bridge an Expansive View of Student Learning with Institutional Learning? The VALUE Project Thinks we Can, and Here’s How, an Interview with Terrel Rhodes (Coming in February)