Learning Outcomes Related to the Use of Personal Response Systems in Large Science Courses

by Jolee West, Wesleyan University


The use of Personal Response Systems, or polling technology, has been receiving wider attention within academia and in the popular press. While neither the technology nor the pedagogical goals are new, general knowledge and implementation of course-related polling appear to have recently reached a critical threshold. Between 2004 and 2005, implementations by “early adopters”[1] began to influence the “early majority” in earnest, resulting in wider visibility of the technology. This trend is illustrated by the increasing number of references to “clickers” and “personal response systems” on the EDUCAUSE website from 2004 to the present, as well as by a recent spate of newspaper and e-zine articles.[2]

Many institutions, including community colleges, liberal arts colleges, and large research universities, have now adopted Personal Response Systems (i.e., polling technology) in their larger lecture courses across the curriculum. For example, MIT, the University of Massachusetts-Amherst, Harvard, Yale, Brown, the University of Virginia, Vanderbilt, and Duke have all implemented personal response systems for larger physics and/or biology lecture courses. A number of implementations took place under the auspices of granting programs, such as the Pew Center for Academic Transformation and the Davis Educational Foundation’s Creating Active Learning Through Technology, which focus on the economics of teaching large lecture courses and the transformation of these typically passive learning-style courses into active learning experiences for students.

But as is often the case in the adoption of new instructional technologies, arguments for adoption rarely rest on published analyses demonstrating improvements in learning outcomes. Commonly, such assessments simply have not been performed. Nevertheless, in researching the technology for my own institution, I searched hard for learning outcome studies. I found that data abound on student satisfaction with personal response systems: whether the systems made class more interesting, improved attendance, and the like.[3] But reports of learning outcomes are few and far between. What follows is a discussion of four references I found reporting learning outcome analyses related to the use of interactive engagement pedagogical methods in large science courses. Only in the last two cases are personal response systems specifically mentioned. But as we will see, the technology is not really the star of this show; not surprisingly, it is the pedagogy that takes center stage.

A controlled study by Ebert-May et al. shows that student confidence in their knowledge of course materials is significantly increased in courses taught using interactive engagement methods over those taught by traditional lecture: “Results from the experimental lectures at NAU suggest that students who experienced the active-learning lecture format had significantly higher self-efficacy and process skills than students in the traditional course. A comparison of mean scores from the self-efficacy instrument indicated that student confidence in doing science, in analyzing data, and in explaining biology to other students was higher in the experimental lectures (N = 283, DF = 3, 274, P < 0.05).”[4]

A large study by Hake of 63 introductory physics courses taught with traditional methods versus interactive engagement (IE) methods examined student learning outcomes using a commonly applied pre- and post-test design based on the Halloun-Hestenes Mechanics Diagnostic test and the Force Concept Inventory. The study, which included 6,542 students, concluded that “A plot of average course scores on the Hestenes/Wells problem-solving Mechanics Baseline test versus those on the conceptual Force Concept Inventory show a strong positive correlation with coefficient r = + 0.91. Comparison of IE and traditional courses implies that IE methods enhance problem-solving ability. The conceptual and problem-solving test results strongly suggest that the use of IE strategies can increase mechanics-course effectiveness well beyond that obtained with traditional methods [original emphasis].”[5]
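Studies in this tradition typically summarize pre- and post-test results as Hake’s average normalized gain, &lt;g&gt; = (post − pre) / (100 − pre): the fraction of the possible improvement a class actually achieves. A minimal sketch of that arithmetic follows; the class averages are invented for illustration and are not Hake’s data.

```python
# Hake's average normalized gain: the fraction of the possible
# improvement actually realized between pre- and post-test.
# Scores are class-average percentages (e.g., on the Force Concept Inventory).

def normalized_gain(pre_avg: float, post_avg: float) -> float:
    """<g> = (post - pre) / (100 - pre), per Hake (1998)."""
    return (post_avg - pre_avg) / (100.0 - pre_avg)

# Illustrative (made-up) class averages:
traditional = normalized_gain(45.0, 56.0)
interactive = normalized_gain(45.0, 72.0)

print(f"traditional <g> = {traditional:.2f}")
print(f"interactive <g> = {interactive:.2f}")
```

In Hake’s data, traditionally taught courses clustered around a normalized gain of roughly 0.2, while interactive engagement courses averaged roughly twice that.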

The Pew Center for Academic Transformation has been interested in examining the transformation of courses from passive to active learning experiences through controlled experiments. One of its beneficiaries, the University of Massachusetts-Amherst, conducted a two-year study of courses redesigned for use of a Personal Response System. The Office of Academic Planning and Assessment at the University of Massachusetts concluded that in these courses “[attendance] in the redesigned sections was consistently high, and students performed consistently better on the new problem-centered exams compared to the old exams based on recall of facts.”[6]

Lastly, a recent study by Kennedy and Cutts examined actual per-student response data over the course of a single semester. In-class questions were of two types: one asked students to self-assess their study habits, and the other focused on course content. These data were analyzed against end-of-semester and end-of-year exam performance using cluster analysis and MANOVA. Their investigation showed that students who participated more frequently in use of the personal response system, and who were frequently correct in their responses, performed better on formal assessments. Students who responded infrequently, even when they did so correctly, nevertheless performed poorly on formal assessments, suggesting that level of involvement during class is positively correlated with better learning outcomes.[7]
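The shape of that analysis can be sketched in a few lines. Kennedy and Cutts used cluster analysis and MANOVA rather than the fixed cutoffs below, and the student records and thresholds here are invented purely for illustration:

```python
from collections import defaultdict

# Made-up per-student clicker records:
# (fraction of questions answered, fraction correct when answering, exam %)
students = [
    (0.90, 0.80, 82), (0.85, 0.75, 78),  # frequent, often correct
    (0.20, 0.90, 55), (0.30, 0.80, 58),  # infrequent but correct
    (0.90, 0.40, 65),                    # frequent, often incorrect
    (0.25, 0.30, 48),                    # infrequent, often incorrect
]

def group(participation: float, correctness: float) -> str:
    """Crude two-way grouping standing in for their cluster analysis."""
    p = "high-participation" if participation >= 0.5 else "low-participation"
    c = "often-correct" if correctness >= 0.5 else "often-incorrect"
    return f"{p}/{c}"

# Average exam score per group, mirroring the reported pattern:
# low participation predicts weaker exam performance even when
# the answers given were correct.
exam_by_group = defaultdict(list)
for part, corr, exam in students:
    exam_by_group[group(part, corr)].append(exam)

for name, scores in sorted(exam_by_group.items()):
    print(name, sum(scores) / len(scores))
```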

To sum up, what my search found was that where data exist, they are positive in supporting not just the use of personal response systems but, more specifically, the pedagogy associated with the use of these systems. These studies suggest that better learning outcomes are really the result of changes in pedagogical focus, from passive to active learning, and not of the specific technology or technique used. This is an important caveat for interested faculty: the technology is not a magic bullet. Without a focused, well-planned transformation of the large lecture format and pedagogical goals, the technology provides no advantage. If the manner in which the technology is implemented in class is neither meaningful nor interesting to students, participation lapses. Ultimately, what these studies demonstrate is that student participation is key to positive learning outcomes.


    1. See E. M. Rogers, Diffusion of Innovations (New York: Collier Macmillan, 1983).

    2. C. Dreifus, “Physics Laureate Hopes to Help Students Over the Science Blahs,” New York Times (Nov. 1, 2005), http://www.nytimes.com/2005/11/01/science/01conv.html?ex=1132376400&en=c13349a4a1f8cf78&ei=5070&oref=login; Alorie Gilbert, “New for Back-to-school: ‘Clickers,'” CNET’s News.com (2005), http://news.com.com/New+for+back-to-school+clickers/2100-1041_3-5819171.html?tag=html.alert; Natalie P. McNeal, “Latest Campus Clicks a Learning Experience,” The Miami Herald (Oct. 17, 2005), http://www.miami.com/mld/miamiherald/news/12920758.htm.

    3. Steven R. Hall, Ian Waitz, Doris R. Brodeur, Diane H. Soderholm, and Reem Nasr, “Adoption of Active Learning in a Lecture-based Engineering Class,” IEEE Conference (Boston, MA, 2005), http://fie.engrng.pitt.edu/fie2002/papers/1367.pdf; S. W. Draper and M. I. Brown, “Increasing Interactivity in Lectures Using an Electronic Voting System,” Journal of Computer Assisted Learning 20 (2004): 81-94, http://www.blackwell-synergy.com/links/doi/10.1111/j.1365-2729.2004.00074.x/full/; Ernst Wit, “Who Wants to Be… The Use of a Personal Response System in Statistics Teaching,” MSOR Connections 3(2) (2003), http://ltsn.mathstore.ac.uk/newsletter/may2003/pdf/whowants.pdf.

    4. Diane Ebert-May, Carol Brewer, and Sylvester Allred, “Innovation in Large Lectures–Teaching for Active Learning,” BioScience 47 (1997): 601-607, 604.

    5. Richard R. Hake, “Interactive-engagement Versus Traditional Methods: a Six-thousand-student Survey of Mechanics Test Data for Introductory Physics Courses,” American Journal of Physics 66 (1998): 64-74, http://www.physics.indiana.edu/~sdi/ajpv3i.pdf, 18.

    6. Office of Academic Planning and Assessment, University of Massachusetts, Amherst, Faculty Focus on Assessment, v.3(2) (Spring 2003), http://www.umass.edu/oapa/oapa/publications/faculty_focus/faculty_focus_spring2003.pdf, 2.

    7. G. E. Kennedy and Q. I. Cutts, “The Association Between Students’ Use of an Electronic Voting System and their Learning Outcomes,” Journal of Computer Assisted Learning 21(4) (2005): 260-268, http://www.blackwell-synergy.com/doi/pdf/10.1111/j.1365-2729.2005.00133.x

Interactive Engagement with Classroom Response Systems

by S. Raj Chaudhury, Christopher Newport University



Instructor Name:

S. Raj Chaudhury

Course Title:

Introductory science for non-majors


Institution:

Christopher Newport University

What is the overall aim of the course?:

Introductory science courses for non-majors are often among the larger courses taught by liberal arts colleges, and while they fulfill “breadth” requirements within core curricula, their very size and nature often pose a challenge for properly assessing student learning and engagement with the material. Even though such courses often stress “finding the right answer,” I’m interested in generating discussion among students and engendering a sense of shared inquiry.

Course design and scope of the project:

Physical science courses for non-majors have been a special interest of mine over the last several years, from classes that follow a “studio” model with integrated lecture and laboratory to large lecture-only courses where student interest, attendance, and motivation are all open to question. Much of this work was completed at Norfolk State University (a Historically Black University), and I am continuing it at Christopher Newport University, also a state institution in Virginia. I have been especially interested in exploring classroom response systems (aka “clickers”) to promote understanding of the material, collaboration, and metacognitive awareness.

Incorporation of Technology:

Interactive handheld response systems (“clickers”) lie at the heart of my approach. The instructor poses multiple-choice questions to the class, and each student responds anonymously using a device that looks like a TV remote control. Once all responses are received, a histogram of the results appears on the screen. Ideally, the chosen question will generate a bi-modal distribution. The instructor then asks students to engage in Peer Instruction: “turn to your neighbor and try to convince them to change their answer to yours.” This period usually lasts 60-90 seconds. As the buzz in the room subsides, the instructor polls the class again using the same question. Depending on the outcome of this second poll, the instructor may choose to revisit the topic, clarify a point, or simply proceed with the lesson. A 50-minute lecture broken into three segments of ten minutes of direct instruction, each followed by one or two “clicker” questions, keeps students engaged and provides the instructor with useful formative assessment data.
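The polling loop above is easy to caricature in code. The sketch below tallies votes into the histogram the system displays and applies a simple rule of thumb for deciding whether to revisit a topic; the 70% threshold and the poll data are made up for illustration and are not part of any clicker vendor’s software:

```python
from collections import Counter

def tally(responses):
    """Count votes per answer choice, as the system's histogram would."""
    return Counter(responses)

def needs_revisit(responses, correct, threshold=0.7):
    """Illustrative rule of thumb: re-teach the point if fewer than
    `threshold` of students chose the correct answer after peer discussion."""
    counts = tally(responses)
    return counts[correct] / len(responses) < threshold

# Made-up poll data for a question with choices A-D (correct answer: C)
first_poll  = list("CACBDCCABC")   # roughly bimodal first vote
second_poll = list("CCCBCCCACC")   # after 60-90 seconds of peer discussion

print(tally(first_poll))
print("revisit?", needs_revisit(second_poll, "C"))
```

In practice the instructor reads the histogram and the room rather than a fixed cutoff, but the formative-assessment logic is the same: the second poll tells you whether the peer discussion moved the class toward understanding.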

Lessons Learned:

Response systems are, in my opinion, excellent tools for scholars of teaching and learning because of their data generation capabilities. Obtaining evidence from pencil-and-paper student work can take an inquiring teacher long hours of grading; with response systems, the work goes into creating excellent questions once, since they can be reused, and the data are automatically generated and stored by the system. Even though I teach introductory science, where there is often an emphasis on “finding the right answer,” I use the system principally to generate discussion among students and to engender a sense of shared inquiry, with the assessment data shared in real time by the students and the instructor. This approach is applicable across many disciplines, wherever there are lectures that could be made more interactive.

As students and instructor view a histogram of results together, they connect, in a powerful way, around the material – creating a pathway for the development of students’ metacognitive skills in a manner not easily possible without the technology.

References, links:

There are several classroom response systems available – both in Higher Education and in the Secondary School markets. I am currently using the CPS (Classroom Performance System) from e-Instruction. Their website is a good place to start. Research on response systems has been growing – especially in the Physics Education field, where papers from Eric Mazur’s group at Harvard and the U-Mass Amherst group are well regarded. A number of presentations at the Carnegie Colloquium and meetings such as AAC&U have focused on implementations of response systems. I shall be developing an online poster on my usage of the CPS system at CNU in Fall 2005. Links will be available from my home page.

Measured Results:

While I have always received positive anecdotal feedback from students regarding the use of response systems in my classes at Norfolk State University, much of my attention there was focused on encouraging other faculty members to adopt an approach of pursuing interactive engagement in their lectures using Peer Instruction. We used the Personal Response System (PRS) technology, now sold as Interwrite PRS. Its data aggregation facilities were primitive and made it hard to store and analyze data from multiple class sessions. This year at CNU I have been pursuing a very systematic strategy of storing the results of each session (using the CPS technology). I have asked students to comment on the Student Evaluation forms about the effectiveness of using technology for learning in their course (introductory physics for non-science majors). I hope to receive this feedback sometime in the Spring semester. In the meantime, my data suggest that many physics misconceptions have been identified with a CPS question, addressed through a short instructional sequence, and then assessed with follow-up questions that allowed students to demonstrate increased understanding of the topic. Most recently, this happened during the study of thermal energy, in differentiating the temperature of an object from its specific heat capacity.

NOTES & IDEAS: Using Blogs to Teach Philosophy

by Linda E. Patrik, Union College


Students taking their first philosophy course often express surprise when encouraged to use “I” in their papers. Unlike academic writing in most other disciplines, philosophical writing frequently and strongly states the “I” because philosophers have to develop and defend their own positions. They cannot weasel out of taking responsibility for their views, and thus the assertion of the “I” means that they are willing to stand or fall with their expressed position.

This is one reason why blogs are so effective for teaching students how to debate in philosophy. Blogs were initially developed as online diaries, and most college students still associate blogs with their own inward monologues. The blog medium softens students’ resistance to using the philosophical “I” in their writing, since they are accustomed to bloggers expressing their own views and taking personal responsibility for such. Blogs bridge the personal “I” of a diary with the philosophical “I” of an argument offered in public debate. Once these public debates are posted online, the ease of using “I” — and meaning it — makes students more confident that they are capable of having their own views.

The effectiveness of blogs for philosophical debate increases when each student has his or her own blog. It is better to give each student a blog than to have all students participate in a single blog; not only do students write more, but they argue more creatively. When each student’s blog competes with other students’ blogs, students become attentive to which blogs attract and generate the most interesting and heated debates in the course. They scan the various blogs posted by other students and keep returning to the blogs that have the best debates. When commenting on others’ blogs, students not only aim to make their points in those debates but seek to entice readers back to their own blogs. Students spend more time and thought on their individual blogs in order to keep them popular, and they also take care when commenting on others’ blogs because they want reciprocal visits to their own.

When students each have a blog for posting their positions on philosophical issues, they not only develop a sense of personal responsibility and confidence in their work, but they also unlock their creativity. Some blog software allows them to select the graphic design and format of their blog; many blog programs allow them to include photos, images, video clips and audio files to personalize their blogs.

Creativity in blogs is not limited to graphics. Students learn to write hypertext and even techno-text[1] papers in their blogs. In the philosophy course on Cyberfeminism that I taught last spring, students posted all of their writing in their blogs. In addition to papers, they wrote summaries of reading assignments and posted commentaries on debate issues raised in class. As they gained familiarity with blogging, they began to experiment with links, images, video, and sound as digital enhancements for their posted written work. Over the course of the term, students gained new inspiration from visiting others’ blogs on a regular basis—the experiments attempted by one or two began to spur the others on to try something new. A few students created complex, multimedia forms of techno-text as the final project in their blog; as the term ended, students not only visited one another’s blog but celebrated these virtuoso multimedia creations. Who would have thought that philosophical writing could have images and video? Hegel’s old metaphor for philosophy—the owl of Minerva flying at dusk—was inadequate for the kind of philosophical writing posted by the most creative students: philosophical writing supported by rainbow colors and complex imagery.

Philosophical creativity involves raising the most thought-provoking questions and defending one’s own answers to such questions. Blogging encourages creativity in philosophical debate, especially when each student has his or her own blog, because it allows for fairly spontaneous expression of ideas and it invites students to journey out of their blogs into the blogworld established by another. In order to debate with one another, students in my Cyberfeminism course posted their own position on an issue on their own blog and then visited one another’s blog to find others’ positions on the issue. They posted their responses to others’ positions either on their own blog or on other students’ blogs. The more technically-adventurous included links to one another’s blogs in their own blog’s discussion of an issue.

Several course management programs have a discussion medium that is similar to a blog, but most of these programs require students to participate in a single shared space (e.g., Blackboard’s Discussion Board). The professor sets up the questions for discussion and debate, and then asks students to log in and comment on the questions. Course chat rooms are also a common online venue, but they lack the individual character and control that separate student blogs have. The advantages of grading individual blogs outweigh the ease of grading discussions gathered in one blog or one chat room: each student learns to write for a public beyond the professor, and students can more easily compare their online work to that of others. Grades for individual blogs also make more sense to students than do grades on what they have contributed to a common blog or chat room.

In sum, the advantages of individual student blogs for philosophical writing are personal responsibility, confidence in one’s own view, debate excitement, and creativity. The blog medium allows for dialogue and debate, which are essential to philosophical thinking, and the digital enhancements possible in blogs allow for new directions in philosophical expression.

[1] N. Katherine Hayles’ concept of techno-text is that of a digitally enhanced text that reflects back upon its own electronic medium. (Writing Machines, MIT Press, 2002)

Open Access to Scholarship: An Interview with Ray English

by Michael Roy, Middlebury College


What is the open access movement?
Open access is online access to scholarly research that’s free of any charge to libraries or to end-users, and also free of most copyright and licensing restrictions. In other words, it’s scholarly research that is openly available on the Internet. Open access primarily relates to the scholarly research journal literature–works that have no royalties and that authors freely give away for publication without any expectation of monetary reward.

The open access movement is international in scope, and includes faculty and other researchers, librarians, IT specialists, and publishers. There has been especially strong interest from faculty in scientific fields, but open access applies to all disciplines. The movement has gained great impetus in recent years through proclamations on open access, endorsements from major research funding agencies, the advent of new major open access publishers, and through the growth of author self-archiving and author control of copyright.

Are there different forms of open access?
Open access journals and author self-archiving are the two fundamental strategies of the open access movement. Open access journals make their full content available on the Internet without access restrictions. They cover publication costs through various business models, but what they have in common is that they generate revenue and other support prior to the process of publication. Open access journals are generally peer-reviewed and they are, by definition, published online in digital form, though in some instances they may also produce print versions. Author self-archiving involves an author making his or her work openly accessible on the Internet. That could be on a personal website, but a preferable way is in a digital repository maintained by a university or in a subject repository for a discipline. I should point out that author self-archiving is fully in harmony both with copyright and with the peer review process. It involves the author retaining the right to make an article openly accessible. Authors clearly have that right for their preprints (the version that is first submitted to a journal) – but they also can retain that right for post-prints (the version that has undergone peer review and editing).

Do journals generally allow authors to archive their work in that way?
A very large percentage of both commercial and non-profit journals do allow authors to make post-prints of their works openly accessible in institutional or disciplinary archives. There tend to be more restrictions on the final published versions (usually the final pdf file), but many publishers allow that as well. An interesting site that keeps track of that is SHERPA in the United Kingdom.

Why is open access important for higher education?
Open access is one strategy – and actually the most successful strategy so far – for addressing dysfunctions in the system of scholarly communication. That system is in serious trouble. High rates of price increase for scholarly journals (particularly in scientific fields), stagnant library budgets, journal cancellations, declining library monograph acquisitions, university presses in serious economic trouble, and increasing corporate control of journal publishing by a small number of international conglomerates that have grown in size through repeated mergers and acquisitions – those are all symptoms of the problem. Scholars have lost control of a system that was meant to serve their needs; more importantly, they are also losing access to research. Open access has extraordinary potential for overcoming the fundamental problem of access to scholarship. It is a means of reasserting control by the academy over the scholarship that it produces and of making that scholarship openly available to everyone – at anytime and from virtually any place on the globe.

Why does open access matter to liberal arts colleges in particular?
It is especially important for liberal arts colleges because of the access issue. Liberal arts college libraries have smaller budgets, compared to the research universities. While even the major research libraries cannot afford all of the journals that they need, the lack of access is an even bigger problem in the liberal arts college realm. Faculty at many liberal arts colleges are expected to be active researchers and independent study is also a hallmark of a liberal arts college undergraduate education. So the lack of access to journal literature can be even more problematic in the liberal arts college framework than it is for the research universities.

Are there other benefits to open access?
There are many benefits, but the main one that I would point out relates to the growing body of research that demonstrates how open access increases research impact. A number of studies have shown that articles that are made openly accessible have a research impact that is several times larger than that of articles that are not openly accessible. Authors will get larger readership and more citations to their work if they make it openly accessible.

And what about disadvantages?
Well, one of the main objections to open access journals relates to the fact that most of them are new and don’t have the prestige factor of older established journals. So, younger faculty who are working for tenure may not want to publish in open access journals, particularly if they can publish in traditional subscription journals that are high in prestige and impact. That’s not as much of a concern for tenured faculty, though, and some open access journals are becoming especially successful and prestigious. Titles published by the Public Library of Science are a great example of that. Prestige and tenure considerations don’t come into play for self-archiving. All authors can exert control over copyright and can make their work openly accessible in a repository, and that will definitely benefit both themselves and the research process generally.

What about the business viability of open access journals?
As I mentioned, there are a variety of business models that support open access publishing. They include institutional sponsorship, charging authors’ fees, and generating revenue from advertising or related services. Business models will differ depending upon the discipline and the particular circumstances of a journal. In the sciences, where there is a tradition of grant support, charging authors’ fees is very feasible. Both the Public Library of Science (the most prominent nonprofit open access publisher) and BioMed Central (the most prominent commercial open access publisher) are great examples of that. In humanities fields, by contrast, there is very little grant support for research, but publishing is also less costly, so open access there is likely to be fostered primarily through institutional sponsorship. Open access publishing is inherently less expensive than traditional subscription or print publishing. There are virtually no distribution costs and no costs related to maintaining subscriptions, licensing, or IP access control. There are also a number of open source publishing software systems that support the creation of new open access journals. I’m amazed by how many new peer-reviewed open access journals are appearing all the time. One way to get a sense of that is to go from time to time to the Directory of Open Access Journals. As of right now there are almost 2,000 titles listed. Just six months ago there were about 1,450.

Are there over 500 new titles in the last six months, or are there 1,000 new titles, and 500 titles that went out of business? Should faculty who don’t have tenure worry about publishing in journals that might no longer exist when they come up for tenure?
I’m not aware of any conclusive data on the failure rate for open access journals (or new subscription journals, for that matter). A new study that will be published in January indicates that about 9% of the titles listed in the Directory of Open Access Journals have published no articles since 2003. Those titles are still available online, so it’s hard to say if the journals have actually ceased. In addition, a small percentage of titles in the directory (about 3%) were inaccessible during the study. The reasons for those titles being offline are not clear; some may have failed, but some may just be inaccessible temporarily. A significant percentage of open access journals are from well-established publishers and some individual titles have been in existence for a decade or longer. At the same time, a large majority of open access titles are from smaller, more independent contexts – they are produced by non-profit organizations, academic departments, or leaders in a field. Since they are relatively new, their viability isn’t proven yet. So it could be advantageous for untenured faculty to publish in some open access journals, but that may not be the case for a lot of open access titles.

What’s the hottest current issue related to open access?
I think it’s the issue of taxpayer-funded research. Both in this country and abroad there is increasing interest in making publicly-funded scientific research openly accessible. We saw the beginnings of that with the National Institutes of Health policy that was instituted last year, and I think we will soon see a broad national debate about the advisability of this for all U.S. government agencies. The United Kingdom is moving toward a comprehensive policy of mandating open access to all government-funded research.

What is your role in the open access movement?
I have been a member of the steering committee of SPARC (the Scholarly Publishing and Academic Resources Coalition) since its inception. SPARC, which is a coalition of academic and research libraries, has been a prominent advocate for open access. I have also played a leading role in the scholarly communications program of the Association of College & Research Libraries. I chaired a task force that recommended the ACRL scholarly communications initiative and I have been chair of the ACRL Scholarly Communications Committee since it was established. Being involved with both SPARC and ACRL has put me in the middle of a number of these issues for the past several years.

How does open access fit into your role as library director at Oberlin?
We have been doing ongoing work at the campus level to build faculty awareness of scholarly communications issues and also to support open access in concrete ways. We have taken out institutional memberships to major open access journals and we’ve encouraged faculty to publish in open access journals in instances where that made sense for them. I have also been involved as a steering committee member with the creation of a statewide institutional repository that OhioLINK is developing. When that repository system is implemented we will be working very actively with our faculty on the question of author control of copyright and self-archiving.

What are some concrete things that faculty, librarians, and other stakeholders can do to help?
Faculty have great power in the system of scholarly communication (as editors, reviewers, and authors), so they are in the best position to bring about change. They can assert control over their copyright, archive their research openly, and publish in open access journals, among other things. The role of librarians and IT staff necessarily needs to be more educational in nature. They can become informed about these issues and then work with faculty and other researchers to bring about fundamental change. There is a good summary of a lot of these issues, along with concrete suggestions for what faculty, librarians, and others can do, in the ACRL Scholarly Communications Tool Kit.

The Create Change website is another great resource.

Other than Academic Commons, what is your personal favorite open access publication?
My favorite one, for obvious professional reasons, is D-Lib Magazine. It publishes a variety of content – articles, news, commentary, conference reports – related to the development of digital libraries. They’ve had a number of important pieces on open access and scholarly communications issues.

Faculty as Authors of Online Courses: Support and Mentoring

by Deborah Cotler and Gail Matthews-DeNatale, Monmouth University


Our Present Context: How Did We Get Here?

Only a few years ago, if you had polled Simmons College administrators, faculty, students, and even technology staff members, the consensus would have been that “online” learning is not relevant to the mission of our institution. A “small university” with a liberal arts undergraduate program and four graduate schools, Simmons’ culture is “high touch” and personalized. To the uninitiated, distance learning seemed antithetical to our institutional mission and philosophy of learning.

Along with thousands of other institutions of higher education, our views have changed as we have become increasingly sophisticated in our understanding of the tremendous potential for online learning. Today we offer hybrid courses, three fully-online certificate programs, and an online degree program in Physical Therapy. The School of Library Science is a member of WISE, a national network of schools providing online courses in information science. A number of other fully-online and hybrid programs are in development, including courses within the College of Arts and Sciences. Not only do pioneering faculty teach online at Simmons, those in the so-called “second wave” are also developing hybrid and fully-online courses.

Our current challenge is to ensure the development of online learning that engages learners in the open-ended, inquiry-based learning that we believe is at the heart of a liberal arts education. We are finding that excellent professors whose face-to-face teaching is grounded in a liberal arts approach to learning may sometimes encounter difficulties when they take their teaching into the digital realm.

Our experience also suggests that the distinction between “pioneer” and “second wave” faculty is spurious. These labels distract from the insights and unique talents that a particular faculty member can contribute to a project. People don’t fit neatly into categories – they aren’t exclusively pioneers or second wave. Some faculty who are “second wave” in relationship to technology can be pedagogical “pioneers.” To realize the promise of online learning, we believe that academic technologists must learn how to collaborate with good teachers – even when technology isn’t a professor’s strong suit. Conversely, faculty members need help in learning how to work in partnership with academic technologists.

Good professors excel at engaging groups of students face-to-face, but few are prepared to develop courses online. In addition, their pedagogy is often implicit – developed and fine-tuned over the years through trial and error. Paul Hagner writes:

It is a basic fact that many of the best teachers possess natural communication and information management abilities that, for many of them, are simply assumed rather than the product of intensive self-examination.  Since one requirement for transformation is coming to grips with how the new technologies can enhance learning objectives, a problem results in that many successful teachers have never engaged in this form of articulation and self-examination.[1]

Faculty members and academic administrators who are new to e-learning are likely to overlook or even eschew logistical details that technologically-adventurous professors easily think through, grapple with, and resolve. Likewise, tech-savvy faculty may be undeterred by technical glitches, but have tremendous difficulty conceptualizing online offerings that are pedagogically progressive and grounded in inquiry.

Given this context, it is vitally important for those of us who are involved in academic technology to help faculty and administrators develop understandings and capabilities they may not realize they need.[2] And we may also need to step back and question our own pedagogical assumptions about the role that technology should/can play in teaching and learning at liberal arts institutions.

Just as a good teacher knows how to tailor a course to suit a particular group of learners, academic technologists need to develop a framework of support customized to meet the complex and variable needs of mainstream faculty, a support framework that is also congruent with the culture of the institution. In the same way that an ethnographer takes time to become steeped in the culture of a given community, we need to listen, observe, and thoughtfully assess faculty members’ perspectives and needs.[3]

To deepen our understanding of the range of their perspectives and needs, we interviewed several of our faculty collaborators, including:

Mary Jane Treacy, who directs the Honors Program in the College of Arts and Sciences at Simmons College. In fall 2004 we worked with Mary Jane to help her develop her first hybrid course for graduating seniors. As part of a year-long fellowship, we are currently collaborating with her to integrate ePortfolio work across all years of the Honors Program and curriculum.

Vicki Bacon, who chairs the Counselor Education program at Bridgewater State College and is an adjunct faculty member at Simmons. She developed and teaches a fully-online course in Sports Psychology. Of the three faculty members we interviewed, Vicki had the greatest difficulty making the transition to teaching online. Our work with her is featured in a case study later in this article. We are grateful to Vicki for allowing us to write up the problems she encountered as a case study through which others can learn.

Robert (Bob) Goldman, who is a Mathematics Professor in the College of Arts and Sciences at Simmons College. He has developed two online courses, the most recent of which is “Webstat,” a fully-online statistics course.

What Are The Concerns of Mainstream Faculty?

When asked about their preliminary concerns in developing an online course, our three interviewees responded similarly. Bob and Mary Jane were apprehensive about loss of control and quality in their teaching. They also expressed fear of failure (see “Preliminary Concerns” video).

Vicki wasn’t initially concerned. Because her ability as a classroom teacher is her “greatest strength,” it didn’t occur to her that she might have difficulty teaching online. Like Bob, she doubted the medium – whether a course like hers could succeed online. But she didn’t anticipate that distance learning would set in motion a process that required her to rethink how she teaches her subject.

Online Authoring: What’s Different?

Online course development challenges faculty to become explicit about their teaching because e-courses force them to “put it in writing” (or into multimedia). Yet few first-time online professors – and even fewer academic administrators – recognize the course development process as an act of multimedia authorship.

According to Doug Brent, good courses are “like a story in an oral society … created and recreated each year in the complex guided interaction that occurs around [a] constellation of texts.”[4] When courses are offered over the Web, the posting of a session is a distinct act of authorship that precedes student and faculty interaction with the material. The “course” reads as a musical score to be followed (and hopefully improvised upon) by course participants and facilitators. Each “class” is an enactment, or performance, of this score, varying from semester to semester according to learners’ needs. The course score must be carefully composed in advance with attention to:

  • tone (desired approach and interpersonal dynamics);
  • part (expectations for how students will interact with the material and with each other);
  • timing (a realistic assessment of how long each task will take); and
  • flow (how each component connects, furthers goals, and contributes to the learning experience as a whole).

As faculty members become immersed for the first time in the writing-intensive process of course development, they struggle to understand the genre. What constitutes a “session” or a “lesson”? Lacking sufficient orientation, they tend to misapply familiar formats: cryptic lesson plan notes, PowerPoint slides that lack the speaker’s narrative, or lengthy academic articles. Faculty need guidance in developing a mental template for online learning that suits their personality, discipline, and pedagogical philosophy.

The collaborative dimension of online course development also requires faculty to become accustomed to a different pace and working style. With the exception of team-taught courses, most faculty members develop lesson plans on their own, using an idiosyncratic process that involves little or no interaction with others.

But for mainstream faculty who do not do their own technical implementation, online course development inevitably involves the give and take of working with a team of instructional designers and technologists. Ideally, team members are full collaborators with the faculty member. Instead of viewing others on the team as technicians who are solely responsible for “putting the course online,” the faculty member needs to learn how to partner with people who possess professional perspectives, skills, and abilities. The work of educational technologists may be a heretofore invisible dimension of the process for the faculty member. For example, instructional designers, expert in web-based course design, implementation, and assessment, may suggest approaches that feel counter-intuitive to those who have never taught online. In addition, the technical implementation of course materials takes time, requiring faculty to adhere to deadlines that are well in advance of those that would be needed for a face-to-face course. According to Bob Goldman:

I’ve gotten used to working with the team that is preparing the course. I think that’s worked out well. I now know that I have to give them a lot of lead time. I know what they can do, and what they can’t do. And I’m now able to work within that framework much better than I was before.

Online Course Authorship Requires Faculty to Develop a New Skill Set

Assuming that online courses are a new genre of writing, what’s entailed in this type of authorship? In addition to asking our three interviewees about their preliminary concerns, we also asked them to tell us what they think first-time authors of online courses need to know (see “What First Timers Need to Know” video).

In reflecting on our interview data and on our own experiences working with faculty, we believe that faculty need support in developing the following understandings and capabilities:

Understand How to Author a Coherent, Integrated Learning Experience: Most faculty members are unaware of the explanations they provide “in the moment” when they teach face-to-face. Their first stab at translating sessions for online delivery reads like a set of lesson notes. For many, this is a necessary first step – putting the broad strokes in writing. When asked to flesh out the session, the second draft will often read like cookbook directions – with some clarifying details and the desired sequence of activity (“First, do this. Then, read that. Finally, do this.”). But for the course to be a gratifying learning experience, sessions need a narrative dimension, the textual equivalent of verbal orientation and context setting. Sessions also need to be revised and polished in a manner usually reserved for print publications.

Understand What Needs To Be Composed in Advance and What Can Be Improvised: In a face-to-face setting, the teacher goes to class with a repertoire of strategies, discussion questions, and other resources jotted down in her lesson notes (or in her head). If students do not connect with one approach, she can improvise. In developing an online course, first timers have difficulty distinguishing between materials that need to be incorporated into the course text and things that can be communicated in impromptu announcements and discussion posts.

Understand the Emotional Needs of Online Learners: In the face-to-face classroom, good teachers know how to use subtle gestures and tone of voice to set an emotional tone that is conducive to learning. In preparing a course for delivery online, faculty are often inattentive to issues of tone. They need to learn how to use words, color, and images to communicate that their course welcomes intellectual risk-taking, inquiry, and deep thought.

Understand How To Keep Students Engaged and Oriented: Perhaps the most difficult challenge for faculty is to develop online sessions that are both explicit and engaging. Well-crafted sessions address the metacognitive dimension of learning. For example, callout boxes can be used to help learners see how discrete activities connect up with larger learning goals.

Faculty members who are new to teaching online often focus on the limitations of the medium – overlooking types of learning that can only take place “at a distance.” For example, instead of doing all coursework online, students can get up from their computers to do activities around their homes and communities in geographically diverse settings. They can then report back. Within a relatively short time frame class members can benefit from information or stories that peers have gathered from across the country or even the world. Groups can compare, contrast, analyze, debate, and synthesize their experiences into a multi-dimensional understanding of the topic.

Understand How The Course Looks and Feels From The Students’ Perspective: In the face-to-face setting, there are numerous cues about how a session is going – students’ body language and questions indicate when the learning is off course. But in an online course, serious problems can go unnoticed and compromise student learning. For this reason, we ask first-time course developers to solicit feedback through frequent formative assessment surveys. While the problems with a given session are still fresh in students’ minds, we use the following three questions at the end of each learning module:

  • How many hours did you spend working on this module?
  • What are your suggestions for improving this module? Please also fill us in on any problems you encountered with the technology, directions, or organization of materials.
  • Considering the objectives for this module, what do you think is the most important thing you learned? What questions remain?

The three-question format helps us disentangle technical and pedagogical glitches. Some things can be fixed in the moment. Student engagement intensifies when students realize that their input results in on-the-fly course revisions. Other issues are duly noted and “fixed” in the next “edition” of the course.

This skill set serves as the framework we use in consultation with faculty. But what does it look like in action? The following case study serves as an example.

Case Study

In 2003, Simmons launched a fully-online certificate program in Nutrition. Sports Psychology, taught by Professor Vicki Bacon, is one course in the program.

Well-regarded by her students and by others in her field, Vicki prides herself on her ability to “walk into a classroom, quickly size up the dynamic and mold the classroom experience accordingly.” Her courses are pedagogically progressive and take a liberal arts approach to health science learning. She makes extensive use of novels (A River Runs Through It), films (“Fearless”), community-based interviews, and case studies. Course discussions are shaped by open-ended questions that have no clear answer – queries that are thoughtfully designed to engage students in inquiry, reflection, and critical thinking.

Vicki’s class was first taught live on Simmons’ campus and then piloted online. Modifications were made in response to formative assessment and the course was taught a second time online in spring of 2005.

Challenges: The Sports Psychology course faced a number of barriers to success in its online debut. This was the department’s first foray into distance learning. Other departments had already taken the plunge into web-based distance learning, but in the absence of an institutional mechanism for intentional information-sharing, communication among faculty and departmental administrators about distance learning took place only on an ad hoc basis.

Other challenges involved gaps in support at the institutional level. Academic Technology was in the process of hiring two full-time instructional designers to work with faculty, but at the time that Vicki was authoring a first draft of her course there was insufficient support in place. In retrospect, all involved acknowledged the need for more training, modeling, and guidance prior to the course development phase.

In addition, both the department and Vicki assumed that the project entailed “putting the course online.” In reality, as Vicki noted during her interview, online course development involves rethinking fundamental aspects of oneself as a teacher and how best to engage students in learning.

Finally, as someone who had never taken or facilitated an online course before, Vicki found it difficult to know what was required of her. Perhaps her biggest challenge was learning how to teach in a context in which she was unable to “read” the expressions and reactions of her students. While her skill at reading a room served her quite well in the classroom environment, her reliance on that skill hindered her ability to author course materials that anticipated the needs of virtual students.

As mentioned previously, online course development constitutes a new genre of writing for most academics – both the process and the product differ from their previous experiences authoring books, scholarly articles, book reviews, or even email messages and PowerPoint presentations. The text Vicki produced for the pilot version was skeletal. The outline was explicit, but the narrative that helps students connect the dots was noticeably absent. This is not unusual for a first-time online course author. All three faculty members interviewed for this article mentioned that translating “lecture notes” into a coherent online learning experience for students was one of their biggest hurdles.

Predictably, the course got off to a bumpy start. Course modules pointed students to articles, case studies, and lecture notes, but failed to set the context for learning. Participation lagged – students submitted the required work, but learning and the level of engagement stagnated. Vicki expressed frustration that the students were failing to “take it to the next level.” She was concerned that these students’ discussions, reflections, and questions were not indicative of the type of learning she usually observes in her classes – conceptual understanding and insight did not seem to build from one module to the next.

Weekly formative assessment, gathered through WebCT surveys, confirmed what was already evident; students were not engaged, they didn’t come away from the modules having grasped the key concepts, and they were often confused about what they should be doing.

Intervention and Revisions: Fortunately, as these challenges unfolded, Simmons College was expanding its infrastructure for faculty support. As the newly hired instructional designers, we made it one of our first tasks to provide Vicki with the guidance and support she needed to succeed. In addition to face-to-face consultation and coaching, we introduced her to the literature about best practices in online teaching.

The Evolution of an Activity: The following example presents the evolution of one assignment, illustrating how we worked with Vicki to turn it into a successful experience of learning through inquiry.

The genogram assignment required students to use Inspiration software to construct a diagram of their own family’s roles and dynamics. The purpose of this assignment was to help students examine their family history and reflect on potential “hot buttons” that might impede their ability to work with a client.

Pilot Version: Directions for the assignment, in the first iteration of the course, read as follows:

You should complete construction on your family genogram this week. In the discussion forum, first post about your experience developing your own genogram. Given your experience, what do you think is the genogram’s value for client assessment? Then, review your classmates’ posts and post at least one reply to another thread.

Implementation of the assignment, together with formative assessment, quickly revealed that students were struggling. Because there was no on-site demonstration, it took students longer to learn how to use the software. Because students weren’t explicitly told to attach their genogram files to their posts, they couldn’t understand details in peer comments on the experience and had no basis for comparative discussion. Because this was the first week of the class and community norms were still in flux, they felt awkward sharing personal details about family dynamics. Finally, because the assignment guidelines and discussion prompt were vague, the discussion fell flat.

The following are typical student comments from formative assessment surveys conducted during the pilot:

“Things are too scattered around.”  “I was confused with this module.”  “I tried to develop a conversation … and until the last day received little to no feedback.”

As an “on the fly” change in response to formative assessment, Vicki decided to extend the discussion into a second week – this time encouraging students to post their genograms. But at best this was damage control – before the course was offered again, Vicki worked with Deborah Cotler to revise and reformat the entire course, including the genogram assignment.

Online Course Revised:  After analyzing students’ formative feedback, Deborah and Vicki realized that the goals for the assignment were unclear – both for the students and for Vicki. For example, the stated goal was for students to identify prior life experiences that might affect their ability to work with clients on certain issues. But the assignment’s discussion prompt also asked students to consider the value of using client genograms as a tool for assessment.

Deborah asked Vicki to describe how she would teach the assignment in the context of a face-to-face class. Vicki said that she would probably begin the discussion by focusing on what students learned by doing their own genograms and then ask follow-up questions to extend the conversation to cover the value of genograms in a sports psychology context. But in the online context, absent facilitation in the moment, presenting both discussion topics at once resulted in confusion about the assignment.

Deborah worked with Vicki to hone the assignment to make the rationale, process for implementation, and expectations explicit. They also moved the genogram assignment to the third week of the class, allowing time for community-building before asking students to disclose personal family information. Comments made during the second round of formative assessment indicate dramatic improvement:

“I learned to look at the possible conflicts I can have with patients because of their beliefs and lifestyles. I did realize this before, but this module made me focus and think about the possibility of this happening in my clinical practice.”

“Another great application of our learning to real life. It’s great to apply this knowledge to a real person and see how it actually fits in real time. My confidence about applying this to my patients outside of this class is growing.”

At the end of the semester, course evaluation comments were equally gratifying:

“Dr. Bacon was the best facilitator in my entire Simmons College online experience. She was extremely insightful and provided food for thought in several of the modules. It was encouraging that she responded to all the modules. This gave us a feedback as to that we are on the right track.”

Support for Developers of Online Learning: What’s Helpful?

To get a better understanding of what we were already doing “right,” we asked during faculty interviews which aspects of our support had been most helpful (see “What Helped?” video). Based on this feedback and our own observations, we offer the following suggestions:

Establish optimal conditions for dialogue. Before you begin working in depth with a faculty member, point them to the literature that informs your approach to online pedagogy. We find that when faculty members come to the table with a foundational understanding of the principles that guide your approach, the dialogue starts at a much more productive level.

Articulate goals for student understanding and skill development. By identifying learning goals at the outset of the project – and frequently reassessing these – you will ensure that course materials and activities support the desired learning.

Clarify how students will learn. Brainstorm ideas for what students will do or experience to further their understanding of course concepts. Identify, in advance, the artifacts of learning (discussion posts, work samples, chat logs, etc.) that will provide the professor with insight into students’ learning needs and progress toward goals. Help faculty stay cognizant of the fact that, in the online environment, you can never be too explicit in writing up assignment directions – but that doesn’t mean that assignments need to take an objectivist approach to learning. The assignment tasks need to be crystal clear, but the process of enacting those tasks – projects, research, discussion, reflection, etc. – will ideally engage students in constructivist meaning-making.

Work with faculty as writers. The most critical turning point for many faculty members is the moment they recognize this effort as an act of authorship. Suggest a process for authorship and help them develop a consistent format for session modules. Model a sequence for authorship that begins with analysis of students’ ideas. For example, instead of beginning with “what I want to say,” begin with “what are common student misconceptions, and where do students tend to struggle?” Then develop the course with these patterns of need in mind. Help them reflect on the desired class culture, or sense of community, and what needs to be included in the course to achieve that dynamic.

Work with faculty as revisers. Just as an author would never write an academic paper without multiple rounds of revisions, a course author must be prepared to revise the course based on feedback from others. Offer to be a reviewer. Encourage the faculty member to solicit peers as additional reviewers.

Final Words

Collaborating with course authors can be an immensely satisfying experience. When the pieces fall into place and an online course runs well, the result is intensely generative. Rather than increasing the distance between faculty and students, web-enhanced learning, faculty are discovering, engenders the type of personalized learning that is at the heart of Simmons’ mission. According to Mary Jane Treacy,

I have never learned as much about a group of students in all my years at Simmons College. I am just amazed by what I know about them – and also amazed by how they’re coming together, getting close, but also bumping elbows, and how they’re getting closer to me. It feels very, very good.  It’s the right thing to do.

At Simmons, we have had the pleasure of enjoying many such positive partnerships. It is our hope that the suggestions and experiences we have detailed will assist you in your own consultative work with liberal arts faculty.

[1] Paul Hagner, “Faculty Engagement and Support in the New Learning Environment,” EDUCAUSE Review (September/October 2000), 31.

[2] Though beyond the scope of this article, a set of suggested guiding questions we developed for administrators and faculty involved in developing online programs is available at http://my.simmons.edu/services/technology/ptrc/pdf/educause04_handouts.pdf.

[3] Video clips from interviews we conducted in preparation for this article are available online at http://my.simmons.edu/services/technology/ptrc/resources/articles.shtml.

[4] Doug Brent, “Teaching as Performance in the Electronic Classroom,” First Monday 10, no. 4 (2005): 60, http://firstmonday.org/issues/issue10_4.

Taking Culture Seriously: Educating and Inspiring the Technological Imagination

Posted December 12th, 2005 by Anne Balsamo, University of Southern California

Introduction:  On the Relationship of Technology and Culture

Ignorance costs.

Cultural ignorance — of language, of history, and of geo-political contexts — costs real money.

Microsoft learned this lesson the hard way. A map of India included in the Windows 95 OS represented a small territory in a different shade of green from the rest of the country. The territory is, in fact, strongly disputed between the Kashmiri people and the Indian government; but Microsoft designers inadvertently settled the dispute in favor of one side. Assigning the territory (roughly 8 pixels in size on the digital map) a different shade of green signified that the territory was definitely not part of India. The product was immediately banned in India and Microsoft had no choice but to recall 200,000 copies. With the release of another version of its famous operating system, Microsoft again learned the cost of cultural ignorance. A Spanish-language version of the Windows XP OS marketed to Latin American consumers presented users with three options to identify gender: “non-specified,” “male,” or “bitch.” In a different part of the world, with yet a different product, Microsoft was again forced to recall several thousand units. In this case the recall became necessary when the Saudi Arabian government took offense at the use of a Koran chant as a soundtrack element in a Microsoft video game. The reported estimate of lost revenue from these blunders was in the millions of dollars.[1]

These examples illustrate the very real ways in which cultural ignorance costs money and good will in the big business of technological innovation. In this case, several seemingly insignificant details incorporated into state-of-the-art digital applications not only resulted in the recall of several widely distributed products and damage to a global brand, but also demonstrated a grand failure of multicultural intelligence within the ranks of a multinational corporation.

Although it is tempting to deploy these examples as a contribution to the popular pastime of Microsoft bashing, that response is neither creative nor particularly insightful. Rather, I use the examples of the costliness of a multinational corporation’s cultural blunders to assert that the process of technological innovation must take culture seriously. Moreover, I argue that the process of technological innovation is not solely about the design and development of new products or services, but rather is the very process that creates the cultures that we inhabit around the globe.

Technology is not an epiphenomenon of contemporary culture, but rather is deeply intertwined with the conditions of human existence across the globe. Although we are now more than a century past the dawn of the industrial age, the global distribution of the benefits of industrialism, i.e., basic health and subsistence-level resources, remains disturbingly uneven. In considering the significant loss of life due to recent hurricanes in the southern U.S., it is clear that the demarcation between rich and poor does not map simply onto the division between the global North and South. The tragedy revealed a wide-scale ignorance of the reality of the technological situation of people living in those regions. Evacuation orders were not only late in coming, they addressed only those who were already technologically endowed with the means to flee to safer ground, i.e., the automobile, or who had access to other technological resources, such as planes, trains, or buses. When lives are at stake, which is often the case with the deployment of large-scale or new technologies, it is ethically imperative that the technological imagination explicitly consider cultural, social, and human consequences. This imagination must be trained to imagine the unimaginable – that is, to actively imagine unintended consequences.

When developing new technologies, culture needs to be taken into consideration at an even more basic level: as the foundation upon which the technological imagination is formed in the first place. I define the technological imagination as a character of mind and creative practice of those who use, analyze, design, and develop technologies.[2] It is a quality of mind that grasps the doubled nature of technology: as determining and determined, as both autonomous of and subservient to human goals. This imagination embraces the possibility of multiple and contradictory effects. This is the quality of mind that enables people to think with technology, to transform what is known into what is possible, and to evaluate the consequences of such creation from multiple perspectives.

The Interdisciplinary Education of the Technological Imagination

Every discipline within the contemporary university has been transformed by the development of new technologies, whether technology now becomes an “object” of study, as in the humanities and legal studies; a tool of knowledge production, as in the social and medical sciences; or a domain of new disciplinary knowledge, as in the engineering sciences, cinema, and communication studies. This means that every discipline within the university has something important to contribute to the development of new technologies.

Universities need to actively educate and inspire researchers, teachers, and students to develop a robust technological imagination. This is an educated “quality of mind” that is by nature thoroughly interdisciplinary. To understand technology deeply one needs to apprehend it from multiple perspectives: the historical, the social, and the cultural, as well as the technical, the instrumental, and the material. We must develop interdisciplinary research and educational programs that enact and teach skills of creative synthesis of the important insights from a range of disciplines in the service of producing incisive critique of what has already been done. From this critique emerges the understanding of what is to be done. In this formulation, the traditional role of criticism is expanded. No longer an end in itself, criticism of what has already been done is a step in the process of determining what needs to be done differently in the future. Our educational programs need to teach skills of critical thinking that lead to creative proposals for doing things differently. Then we need to teach students methods for doing things differently with technology: how technologies are built, how they are implemented, how they are reproduced, and how they affect cultural arrangements. This is the foundation of innovative research and new knowledge production. This is the work of the university-educated technological imagination.

Figure 1: How the university contributes to significant cultural change through the development of new technologies

Educational programs that seek to develop a robust technological imagination must include training in 1) the history of technology, 2) critical frameworks for assessing technology and identifying effects, 3) creative and methodological use of technological tools, 4) pedagogical activities and exercises that create new technological applications, devices, and services, 5) architectural and virtual spaces for social exchange and creative production, and 6) international studies and policy analysis that provide appropriate cultural and institutional contexts of assessment of effects. This is the necessary multidisciplinary foundation for the development of new technologies.

Moreover, there is a category of technology—what might be labeled technologies of literacy—that serves as the stage for the elaboration, reproduction, performance, and dissemination of culture across the globe. Technologies of literacy include the development of pedagogical methods for educating literate citizens who not only understand the technologies already available, but who will be equipped with the intellectual foundation and habits of mind to respond to and use the new technologies that will become commonplace in the future. This is a crucial dimension of the education of life-long learners. Thus, these educational programs must experiment with and develop innovative pedagogies that engage multiple intelligences: the social, cultural, and emotional, as well as the cognitive and the technical. Furthermore, these pedagogies must utilize the full range of new technologies that enable multiple modes of expression in the production of educational materials and educational output: visual, textual, aural, corporeal, and spatial. In this way these programs both draw on new technological literacies and engage faculty and students in the creation of the literacies of the future.

In a research context, the manifestation of this imagination comes through the collaboration of faculty and researchers from different disciplines working together on projects of social and cultural significance to create human-centric technologies. The output of their research may take several forms: innovative technological devices, applications, research monographs, presentations, demonstrations, performances, and installations. The guiding strategy for all these research projects is that they “take culture seriously.” Culture serves as both the context for the formulation of the research problem in the first place, and as the domain within which significant technological developments will unfold. In this way, this kind of technology-based research understands its ethical dimensions and acknowledges its ethical responsibilities.

To do this right, we need to ground these interdisciplinary efforts in new ways of thinking about technology. We need a new educational philosophy that can guide our efforts to create “original synners”—students who can synthesize information from multiple perspectives.[3] We need to develop new institutional structures for research and new pedagogies that support the development of the technological imagination and inspire its practical application. We need new analytical frameworks that enable us to imagine the multiple consequences of the deployment of new technologies. I also argue that we need to specify the ways in which all of us within the university are accountable for the future of technological development. Designers and engineers need to address their cultural responsibilities.  Humanists and social scientists must contribute creative direction as well as critical analyses. In an effort to suggest a starting point for new multidisciplinary collaborative applied technology-based research projects that take culture seriously, I offer the following three broad questions:
What are the most pressing cultural issues within the US and across the world?
All technologies rearrange culture. We know that new technologies are especially useful in facilitating interactions among people from different cultures. How is the project of cultural reproduction served by new technologies? How will current as well as traditional cultural memories be preserved over time? How should we choose what to forget? What role does narrative play in the technological reproduction of culture? How is narrative itself a technology of culture? What new narrative devices/applications need to be developed to aid the reproduction of culture? The use of new digital devices for entertainment and pleasure yields contradictory effects. While some people in the developed world enjoy an expanded range of mobility, enabled by the development of mobile communication devices, others become more sedentary and confined within a limited orbit. Through the use of global telecommunication networks, people can expand their global awareness through virtual visits. What are the cultural possibilities and consequences of virtual mobility? What is the future of embodied play and entertainment? What implications does this have for the design of playgrounds, digitally-augmented performance spaces, and the development of creative toys? What are the implications of virtual tourism for the reproduction of privilege and mobility? What are the cultural possibilities of technologically-augmented reality?

What are the literacies of the 21st century?
Literacy is a technological phenomenon. The development of new technologies of communication and of expression not only influences but demands the development of new literacies. These literacies do not compete with traditional print-based literacies, but rather build on and complement them. Current undergraduate students will become the next generation of scholars and researchers who will go on to develop new technologies of literacy, new genres and devices of cultural expression, and new forms of scholarship and research. How will we prepare them for this important cultural work? What technologies can be developed to teach basic literacy? What new kinds of reading devices will be useful in the future? How will our educational materials need to change to address the many kinds of literacy that will be required of future generations: reading, writing, digital, technological, multimedia? What will the textbook of the future look like? What are the possibilities of multi-player distributed gaming for the development of educational experiences?

What will scholarship look like in 10-15 years? 
Interdisciplinary collaborations and research provoke the need to develop new forms of scholarship, publications and other modes of cultural outreach. These new forms in turn offer an opportunity to experiment with modes of expression made possible by the development of new digital technologies. In the process, new forms of knowledge production emerge. New forms of scholarship will require the development of new authoring and publishing tools. We already know that authoring and designing are merging; what kinds of digital authoring environments are needed to support scholarship across the curriculum? Collaborative scholarship is a global phenomenon: how can social networking applications be used for scholarly and educational purposes? These social networking applications facilitate communication among scholars and lay people, thus offering a stage for the forging of radically new collaborations for the production of knowledge. Traversing the binary distinction between “scholar” and “amateur” promises to transform the educational scene within the university, effectively opening up the university to the world in unprecedented ways. How can the communication of scholarship and new research be enhanced through the development of multilingual digital applications, widely distributed digital archives, and new collaboration platforms?  What are the stages for knowledge transfer from the university to the broader public, which now includes so-called “amateurs” who are also actively engaged in new knowledge construction (through the development of folksonomies, for example)?

A trained technological imagination is the critical foundation required by the next generation of technologically and culturally literate scholars, scientists, engineers, humanists, teachers, artists, policy makers, leaders, and global citizens. Creating research programs and new curricula that explicitly address the education of the technological imagination is the way in which the university will contribute to significant cultural change.

Instead of a Bridge, How about a Collaboratory?

In 1959, when C.P. Snow first described the gulf between the sciences and the humanities as a “two culture” problem, he implored educators to find ways to bridge the divide.[4] He took pains not to blame one side or the other for the failure to communicate because he believed that neither “the scientists” nor the “literary intellectuals” had an adequate framework for addressing significant world problems. In the intervening half-century since the publication of Snow’s manifesto there have been several attempts to bridge the “two culture” divide. While some of these attempts resulted in spectacular failures (“The Science Wars” of the early- to mid-1990s), others represent modest but on-going interventions (The Society for Literature, Science and the Arts).[5] Science and Technology Studies (STS) programs are noteworthy academic programs that train students to investigate the cultural and social implications of science and technology. Few if any of these programs or institutional experiments have successfully brought humanists, social scientists, scientists, and engineers together—as peers—to collaborate on the production of new applied research that results in the creation of new technologies. Future attempts to bridge the two cultures will be of limited success as long as these groups of scholars continue to see themselves as standing on opposite sides of the divide, or if the groups continue to regard each other as hierarchically advantaged or disadvantaged. I believe that the time is right to take up Snow’s challenge once again, not to work on building bridges per se, but rather to create a new place for the practice of multidisciplinary, collaborative technology-based research.

In 1989, a professor at the University of Virginia coined the term collaboratory to describe a new institutional structure for collaborative research. As of Fall 2005, there are dozens of collaboratories around the world, most of which are virtual spaces that utilize digital network technologies to support the collaboration among researchers at distant physical locations. Many of these collaboratories are actually collaborations among laboratories located around the world, where the individual laboratories are (presumably) still organized in the typical fashion around a single PI’s research or a single topic.

To date the collaboratories that involve humanities scholars focus almost exclusively on humanities computing research, where the projects involve the development and use of a high-end digital infrastructure for digitizing, archiving and searching specialized collections of historic materials, most typically books, manuscripts, and images. While these efforts and others such as the various “digital library” projects are absolutely necessary and valuable, they represent only one vector of research that unites the humanistic with the technological.

In 2002, a group of humanities program directors formed a virtual collaboratory called HASTAC: The Humanities, Arts, Science and Technology Advanced Collaboratory designed to promote the development of humane technologies and technological humanism.[6] The programs participating in HASTAC each have attempted to create some sort of institutional space for collaborative research involving humanists and technologists. The efforts include humanities computing programs as well as interdisciplinary humanities institutes that have a particular focus on science and technology.

Inspired by HASTAC discussions and meetings, I assert that there is a critical need to create physical collaboratories that bring humanists, artists, media producers and technologists together to build human-centric technologies. This requires a physical space where researchers from multiple disciplines work together as peers to design, prototype, and actually fabricate new technologies. In combining the critical methods of the humanities and social sciences with innovative engineering/design methods such as rapid prototyping and user-centered design, these collaborators will create innovative methodologies. Thus, the research output includes not simply new technology-based projects and demonstrations, but also insights into the nature of interdisciplinary collaboration and the creation of new methodologies for collaboration. Instead of a single PI, the business of the collaboratory would be coordinated by a representative group of researchers whose interests span the disciplinary spectrum: humanities, social and cognitive sciences, arts, engineering and sciences. As participants in this collaboratory, researchers from various disciplines each bring something important to the collaborations:

Special role of the humanist: Contributes expertise in the assessment and critique of the ethical, social, and practical affordances of new technologies; provides expertise on the process of meaning-making which is central to the development of successful new technologies; provides appropriate historical contextualization.

Special role of the social and cognitive scientist: Contributes expertise in the assessment of social impact and in the analysis of institutional, policy, and global effects of the development and deployment of new technologies; addresses the cognitive impact of new technologies; provides methods for analyzing social uses.

Special role of the technologist: Contributes expertise in the innovation of new devices and applications; provides analytical skills in the assessment of problem formation and solution design; demonstrates methods of design, creation, and prototyping; recommends specific tools, processes, and materials.

Special role of the scientist: Contributes expertise in the development of new theoretical possibilities; provides methodologies for assessing and evaluating implementation efforts, and for formulating possible (theoretical) outcomes; develops experiments with new materials; contributes understanding about environmental impacts and waste management.

Special role of the artist: Contributes expertise in the performance, expression, and demonstration of technological insights; provides skills in different modes of engagement: the tactile, the visual, the kinesthetic, and the aural.

The goal is to create space for the constitution of a research community that collaborates on technology-based projects that take culture seriously. While it is tempting to offer a list of suggested projects, this would undermine one of the critical components of the collaborative effort. While any participant can suggest a project, the project must be, in effect, adopted by the community. This is to say that there needs to be consensus that a project is important to pursue. This, of course, is the basis of all good research; but it is rare that humanists, artists, and social scientists have a voice in this kind of evaluation of technology-based research projects. It is rarer still that they have peer status as researchers who will design, build, and fabricate new technologies. This is one of the important innovations of such a collaboratory. The output of these research projects might include typical research monographs, but also possibly public demonstrations, new pedagogical technologies, and new technologies of literacy. All the collaborators will serve as important “technology-translators” who can help make the meaning of new technologies more accessible to a wider public, both within and outside of the academy.

The social engineering of this endeavor is a crucial element of its success. The price of admission to this collaboratory is an individual’s commitment to embrace collaborative work. A key requirement of the research participants is that they work against the facile division of labor that would have the humanists doing the “critique,” the technologists doing the building, and the artists offering art direction. While there is a special role to be played by each participant, all must be willing, indeed eager, to learn new skills, new analytical frameworks, new methods, and new practices. A personal commitment to life-long learning is the foundation for these collaborations. Each participant must be willing to uphold the ethical foundation of multidisciplinary work: intellectual flexibility, intellectual generosity, intellectual confidence, and intellectual humility. Only by doing so will the collaborations result in the kind of work where the sum is greater than the parts, and where the technological imagination can be freely exercised and employed to create futures that are desirable for all people around the world, not just for those who are already privileged and technologically empowered.

Excerpted from Chapter 1: The Technological Imagination Revisited; Designing Culture: A Work of the Technological Imagination,  Anne Balsamo, Duke University Press, forthcoming.

[1].   “How eight pixels cost Microsoft millions,” Jo Best, c|net News.com.  http://news.com.com/How+eight+pixels+cost+Microsoft+millions/2100-1014_3-5316664.html.

[2].   The resonance with C.W. Mills’ notion of the “Sociological Imagination” is intentional here. C. Wright Mills, The Sociological Imagination (London: Oxford UP, 1959). See also:  Michel Benamou, 1980. “Notes on the Technological Imagination,” in Teresa De Lauretis, Andreas Huyssen, and Kathleen Woodward, eds. The Technological Imagination: Theories and Fictions. (Madison, WI: Coda Press, pp: 65-75).

[3].  This is an explicit reference to Pat Cadigan’s novel, Synners (New York: HarperCollins, 1991). For a more complete discussion of the education of original synners, see “Engineering Cultural Studies: The Postdisciplinary Adventures of Mindplayers, Fools, and Others,” in Science + Culture: Doing Cultural Studies of Science, Technology and Medicine, eds. Sharon Traweek and Roddey Reid (New York: Routledge, 2000: 259-274).

[4].  C.P. Snow, The Two Cultures: and a Second Look (New York: Cambridge University Press, 1963).

[5].  http://slsa.press.jhu.edu

[6].  http://www.hastac.org

Technology as Epistemology

Posted December 12th, 2005 by Peter Schilling, Amherst College

Early in the 20th century Gertrude Stein wrote that America was the oldest country because it was the first to arrive at the new Century. Today’s students have formed their habits of mind by interacting with information that is digital and networked. They are, in a way, older than their teachers, whose relationships with information are governed by earlier generations of technology. There is more. Not only do our students possess skills and experiences that previous generations do not, but the very neurological structures and pathways they have developed as part of their learning are based on the technologies they use to create, store, and disseminate information. Importantly, these pathways and the categories, taxonomies, and other tools they use for thinking are different from those used by their teachers.


To say that “new technology is changing the way we think” is as obvious as it is ambiguous. While it may be popular, and accurate, to complain that Microsoft Word’s grammar checker has a greater influence on American English than any teacher, curriculum, or book, I would like to consider the relationship between technology and thinking explicitly in the context of education, where the mission is to help students learn to think.

Let us start with the role that patterns and categories play in learning and knowing. Although the patterns and categories we use are never perfect ways of creating meaning, they influence the way we think, remember, and anticipate information. For instance, in biology we divide the world into domain, kingdom, phylum, class, order, family, genus, and species, which, at its final category, is a division based on the ability to reproduce sexually. For this reason, we have the families Canidae and Felidae, dogs and cats. If, in our world of cloning and other forms of assisted reproduction, we instead divided the world primarily by means of locomotion, dogs and cats would both be in one group as digitigrades. (I suspect that, no matter how we categorize them, the digitigrade with the longer nose and floppy ears would still chase the digitigrade that purrs and flicks its tail.) In addition, the particular way we learn information, as well as when in our lives we learn it, creates specific neural pathways (or patterns) in our brains. Once the patterns and pathways become too familiar or set, however, we become less adept at seeing information that does not fit the pattern. At times we may even start adding phantom data to fill in gaps. It is very important to keep this in mind.

All of our cognitive tools help us perceive our world and sort the flood of information that continually flows across our senses. We regularly filter and winnow this information in order to focus, group, and extract meaning. If our brain and senses did not do this, we would be overwhelmed by our inability to differentiate foreground from background.

In the photo of the two dogs on the log, we can differentiate the dogs from the woods that surround them. We have a sense of the field of vision in the photo, and perceive that one of the dogs is standing closer to the viewer than the other. We know that the trees are wood and the dogs are not. We also know that this is a photo on a computer screen and that it is unlikely that either dog will start chasing a squirrel.

Neurologist and author Oliver Sacks described the cognitive and neurological development of a man, blind since childhood, who regains his sight in his 50s. The once blind man, Virgil, cannot do all of the things with the dog photo we described above. Sacks shows that, for Virgil, information does not follow the same neural pathways that it does for other, sighted adults. However, once Virgil can feel a scene with his hands, such as the contents of a room or a person’s face, he can then describe the information that he sees. So, while his eyes function properly, his brain has developed strategies and pathways for processing information that do not accommodate visual data.[1]

Time and experience train our senses to interpret information. They also lead to the development of a facility (or opportunity, from an illusionist’s point of view) to fill in information not available to our senses. Optical illusions are perhaps the most widely-known demonstration of this kind of learned behavior. Our mind fills in or adds information so that we can perceive depth, relationships, and other data not actually present in an image or scene.

The mind also fills in such things as context and informs our understanding by, for instance, drawing on our familiarity with the tools of information creation and dissemination. So, while patterns and categories are necessary for us to sort through information and find meaning, once we have created our categories and patterns, they can be hard to put aside. In these cases, we cannot see familiar information apart from the categories or meanings we have associated with it.

Much has been said and written about the importance of categories and patterns for thinking. The National Research Council has reported on “research demonstrating that when a series of events are presented in a random sequence, people reorder them…. the mind creates categories for processing information. . . . the mind imposes structure on the information available from experience.”[2]

It is problematic when we lose sight of the constructs we bring to our interaction with the data around us, but it is hard not to. What Nietzsche said about metaphors holds equally true for our use of patterns to help formulate meaning.

What, then, is truth? A mobile army of metaphors, metonyms, and anthropomorphisms—in short, a sum of human relations which have been enhanced, transposed, and embellished poetically and rhetorically, and which after long use seem firm, canonical, and obligatory to a people: truths are illusions about which one has forgotten that this is what they are; metaphors which are worn out and without sensuous power; coins which have lost their pictures and now matter only as metal, no longer as coins.[3]

The patterns and categories we use can constrict our ability to understand new things. For instance, Salman Rushdie points out in Midnight’s Children, a novel about Indian culture, that any people whose word for “yesterday” is the same as their word for “tomorrow” cannot be said to have a firm grip on time,[4] yet academics studying Rushdie’s novel are tempted to develop a timeline of the events of the story. Similarly, the U.S. publisher of Gabriel García Márquez’s One Hundred Years of Solitude has added a family tree to its English-language edition of the novel,[5] perhaps missing the point that in a book where twenty-one characters have the same name, the concept of individual identity is not really key for understanding.

Similarly, we tend to use known patterns to help us learn, or manage new information. Context and what we know affects the ways in which we establish meaning, such that if one were to have come across this image of the saffron gates in New York’s Central Park anytime before February of 2005, one would likely have assumed that Photoshop had been used to create the image. But after February 2005, this would not be the reaction. The geese in the foreground are, now, as likely the result of work with Photoshop as the gates themselves.

For centuries, humans have used various technologies to help manage data, whether it was Incan knots or Egyptian hieroglyphs. The introduction of new technologies, therefore, is an important part of the context in which we set meaning for new information. For this reason, although we have had stories about three-headed dogs in our culture from Cerberus to Fluffy, today most viewers of this photo of a three-headed dog will (hopefully) immediately consider it a product of image-editing software.

Education has the contradictory tasks of teaching us to work within patterns, but also to think beyond them. If we are not careful, disciplinary thinking can slip into rote formalism or a mere act of classifying data with established taxonomies. For instance, students exposed for years to narrative will likely incorporate narrative patterns into the way they anticipate information. Consider, for instance, this Hyundai commercial. Try stopping the video every few seconds and narrating the unfolding scene yourself. Although there is no dialogue, you will probably notice that you can tell a fairly detailed story on your own.

The same phenomenon of filling in information gaps occurs when we try to proofread our own writing (by which I mean to plead forgiveness for any errors in this text. . .).

The claims I am making can be overstated, however. For instance, we may recall reports in the popular press about research, attributed to Cambridge University, showing our ability to recognize words when all letters other than the first and last are jumbled. Nevertheless, after the press coverage, many easily refuted the research, showing, among other things, that it was not done at Cambridge, does not work for all languages, does not work when all the letters are capitals, does not work when letters are simply removed, etc.

That said, the way we learn, when we learn, and the technologies we use to learn all influence what we know as well as the neural pathways we use when accessing our knowledge. Researchers such as Wayne Reeves have emphasized the differences in the ways that experts and novices in a given area or topic solve problems and react to information.[6]

As part of a well-known 1965 study on thought and choice in chess, de Groot noticed that, when a chess master, a proficient chess player, and a novice were shown a chessboard for five seconds with all pieces on it in mid-game, the master could recall the position of sixteen pieces, the proficient player eight pieces, and the novice four. When all were given a second five-second look at the same board, each doubled the number of pieces and locations they could recall. However, when the same subjects were shown a board with all the pieces randomly placed, each could recall pieces and positions only at the level of the novice.[7]

Analogous studies have been done with mathematicians, physicists, and historians, though the emphasis was on the ways in which experts and novices approach information differently. In short, experts can chunk information in ways novices cannot, and they can access and apply appropriate overarching principles, laws, and methods to the new data, which, again, novices cannot.

Research conducted by Eleanor Maguire of University College London has shown that London taxi drivers have an enlarged region of the posterior hippocampus. This region is believed to be associated with “spatial navigation” and serves as a “memory bank” for the spatial representation of the complex maze of streets in the city of London, England. There is a positive correlation between the number of years on the job and the size of the posterior hippocampal region.[8] Additional research conducted at UCLA in 2001 by Arthur L. Brody, Lewis R. Baxter, Jr., and colleagues demonstrated that psychotherapy (talk therapy) changed the brains of subjects in much the same way as psychotropic medication.[9]

In 2003, research at the University of Rochester demonstrated that action video games, such as single-player shooters, train the brain to better process certain types of visual information. Students who played video games for as little as two weeks showed a greater facility for seeing and processing multiple stimuli in their peripheral vision.[10]

As reported in Nature in 2004, a neural pattern has also been associated with language learners. According to Andrea Mechelli, a neuroscientist at University College London, “[t]he grey matter in this region increases in bilinguals relative to monolinguals — this is particularly true in early bilinguals who learned a second language early in life . . . . The degree is correlated with the proficiency achieved.”[11] Learning another language after 35 years of age also alters the brain, but the change is not as pronounced as in early learners. Mechelli said their research “reinforces the idea that it is better to learn early rather than late because the brain is more capable of adjusting or accommodating new languages by changing structurally. This ability of the brain decreases with time.”[12]

But what happens when the content of one’s expertise, developed over years of study and research using one generation of technology, gets separated from the tools now used to generate and disseminate information within that content area? The following QTVR versions of a chessboard may prove disorienting for those who, while masters of chess, are novices to QTVR.

Chess Example 1
Chess Example 2

Not only do today’s novices use technologies unavailable at the time their teachers were becoming masters, but the quantity and types of information students need to assess have also expanded exponentially. Part of this shift in learning brought about by today’s digital, networked information results from the fact that we now often work, share, and search at the level of data as opposed to the level of conclusions, narratives, catalogs, or indices. That is, students are no longer limited to browsing a card catalogue to find just those books that their college library had the resources to purchase, that Library of Congress subject terms described as addressing a particular topic, and that a publishing house had selected for publication by an author who had created a narrative by sorting and synthesizing years’ worth of research into a comprehensible whole. They can use search and collaboration tools to get at the primary source data as well as a wider variety of studies of that data. In so doing, they can wade through and remove four levels of filters between themselves and the information.

What it means to master a field of study has changed. Rather than developing an encyclopedic knowledge of all the literature on a single topic, today’s students need to know how to find, evaluate, and contextualize information in numerous different formats on more interdisciplinary topics; they also need to know how to locate and use the underlying data, as well as the technology to sort and present it. To teach the history of the English language today, for instance, an instructor would most likely want to train students to use popular Geographic Information Systems (GIS) to create data layers of audio files demonstrating the pronunciation of Old English and Old Norse town names, point data for each town’s location, and data relating to the slope and aspect of northwestern Britain, informed throughout by knowledge of the military technology of pre-Norman England. Reading a book or listening to a lecture on the topic is no longer sufficient. An educated person today knows how to access and use the appropriate tools and the appropriate data, and understands the abilities and limitations of each. It is likely that the ways in which they know these things, as well as the ways in which they go about finding, assimilating, and representing information, utilize specific areas of their brains. Photoshop and other such tools change the way we process visual data.

Epistemology and epistemological inquiry have a long history, arching from superstition toward what Gurvitch called the “social frameworks of knowledge.”[13] Technology has always been present as an essential component of how we think, of our thinking about our thinking, and of what we teach. When the technology changes, as it is changing now, its role becomes all the more evident. For the new generation of thinkers, knowledge includes del.icio.us and other forms of immediate and readily available folksonomies. Colleges continue to push writing as the skill students must have to be articulate thinkers. Yet they risk stagnation in an epistemological eddy if they do not also appreciate digital video production, database programming, or even the underlying functionality of MediaWiki as necessary for developing the cognitive abilities to create and share knowledge.

As educators, we can discuss the ways in which learning changes the brain. Following Nietzsche, we can also reason that it is hard to change our patterns and categories of thought. Nevertheless, we must perceive our own technology-dependent constructs in order to integrate the valuable information and skills we have developed over a lifetime with the new tools now used to create and share knowledge.


  1. Oliver Sacks, “To See and Not See,” An Anthropologist on Mars (Vintage Books, 1995), 108-152.
  2. Committee on Learning Research and Educational Practice, National Research Council, et al., How People Learn: Brain, Mind, Experience, and School: Expanded Edition (National Academies Press, 2000), http://www.nap.edu/books/0309070368/html/124.html – /125.html.
  3. Friedrich Nietzsche, “On Truth And Lie in an Extra-Moral Sense,” The Portable Nietzsche, trans. Walter Kaufmann (New York: Penguin Books, 1982), 46-47.
  4. Salman Rushdie, Midnight’s Children (New York: Avon Books, 1980), 123.
  5. Gabriel Garcia Marquez, One Hundred Years of Solitude, trans. Gregory Rabassa (New York: Harper and Row Publishers, 1970).
  6. See Wayne Reeves, Learner-Centered Design: A Cognitive View of Managing Complexity in Product, Information, and Environmental Design (Sage Publications, Inc., 1999).
  7. See Adriaan de Groot, Thought and Choice in Chess (The Hague: Mouton De Gruyter, 1965).
  8. Eleanor Maguire et al., “Navigation-Related Structural Change in the Hippocampi of Taxi Drivers,” Proceedings of the National Academy of Sciences 97, no. 8 (April 11, 2000): 4398-4403, http://www.pnas.org/cgi/content/full/97/8/4398.
  9. Arthur L. Brody, Sanjaya Saxena, Paula Stoessel, Laurie A. Gillies, Lynn A. Fairbanks, Shervin Alborzian, Michael E. Phelps, Sung-Cheng Huang, Hsiao-Ming Wu, Matthew L. Ho, Mai K. Ho, Scott C. Au, Karron Maidment, and Lewis R. Baxter, Jr., “Regional Brain Metabolic Changes in Patients With Major Depression Treated With Either Paroxetine or Interpersonal Therapy,” Archives of General Psychiatry 58, no. 7 (2001): 631-640, http://archpsyc.ama-assn.org/cgi/content/abstract/58/7/631.
  10. “Altered perception: The science of video gaming,” Currents (University of Rochester, 2003), http://www.rochester.edu/pr/Currents/V31/V31SI/story04.html.
  11. Mechelli et al., “Neurolinguistics: Structural plasticity in the bilingual brain,” Nature 431 (14 October 2004), 757. Abstract at: http://www.nature.com/nature/journal/v431/n7010/abs/431757a.html.
  12. Ibid.
  13. See Georges Gurvitch, The Social Frameworks of Knowledge, trans. Margaret A. Thompson and Kenneth A. Thompson, with an introductory essay by Kenneth A. Thompson (New York: Harper & Row, 1971).

Interactive Reading, Early Modern Texts and Hypertext: A Lesson from the Past

Posted December 12th, 2005

by Tatjana Chorney, Saint Mary’s University

Over the past decade, the increasing presence of hypermedia environments in the lives of a growing number of readers and learners has contributed to a change in the definition of “text.” However, we still do not have adequate ways of speaking about the implications of the gradual extension of the notion of text—an entity existing usually in print, with clearly defined borders and presenting information in a highly structured manner—to e-text or hypertext, a much more fluid concept, whose borders are not at all clearly defined, and whose manner of presenting information is non-linear. Because “hypertext is a mental process, as well as a digital tool,”[1] one of the larger cultural implications arising from this change in the meaning of text concerns the role of the reader. Text in print implies and, to a certain degree, constructs a passive reader, one who is often a “receptacle” of information. Hypertext is shaping an appropriative reader who interacts with the text and is involved in knowledge construction.

Although this shift in the position of the reader in many ways arises from the new technology, the manner of active reading in which the reader is empowered to construct meaning and to change the “original” text is at least as old as the early modern period.[2] The Renaissance reader was accustomed to applying “alien” texts to new purposes in a method of appropriative reading that was a consequence of the Renaissance technique of collecting commonplaces.

Increasing our historical awareness of Renaissance reading habits will not only help us avoid technological determinism but will also extend our awareness of the current changes in the definition of text and the concomitant shift in the nature of reading and knowledge management. This in turn will inform our pedagogy by increasing our ability to relate to a student body whose reading and learning habits are already a product of the “digital age” and continue to be shaped by the new medium. Many of our students are “Net-generation” learners whose minds are accustomed to bite-sized bits of information that can be easily transferred, manipulated and appropriated into different contexts and integrated into different “wholes.”[3]

To begin, interactive reading can be defined as a process in which readers have control over the texts they are reading. This control enables them to influence the nature of the reading process in that they are able and free to participate actively in the construction of meaning of whatever they are reading. Renaissance reading habits and those fostered by the hypertext environment (which has become synonymous with the Internet) are similar with regard to four broad issues: 1. non-linearity; 2. a protean sense of text and its functions; 3. affinity with oral models of communication; and 4. a changing concept of authorship.[4]

In my work on the manuscript circulation of John Donne’s poetry, I have come across a number of records revealing the extent to which 17th-century anonymous readers—those who did not belong to Donne’s coterie composed mainly of friends and patrons—interacted with the texts they were copying into their own manuscript compilations. A compelling and generally overlooked aspect of English manuscript collections and commonplace books from the 16th and 17th centuries is that most of the writers of the texts they contain, such as Donne, Jonson, King, Herrick, and others, have been identified by bibliographers and textual scholars only centuries after their compilation, and not by those who copied the poems. A large number of poems in these collections appear without any indication as to the author’s identity (whether known or unknown to the scribe or owner at the time of recording); texts are often untitled or retitled (at least, with titles different from the ones we have come to associate with them); they also often appear in fragments, and these are sometimes blended seamlessly into other fragments or entire poems.

Single lines, like line 24 from Donne’s “The Dreame” (“That love is weake, where feare’s as strong as hee”) were taken out as sententiae with aphoristic value. The last four lines of Donne’s “The Bracelet,” appear recorded in the Fullman MS as a new short poem (Bodleian MS CCC) bearing the new title “A Creditor.” The excerpted lines were treated as poetic “commonplaces,” “generally applicable ideas, precepts and images pointing to or illustrating universal truths.”[5] These ideas could be used later to adorn or enhance formal arguments as well as informal discussions to increase the copia or eloquence of the reader/writer.[6]

Formal and conceptual reworking were not uncommon either. Donne’s “A valediction: forbidding mourning” found in a mid-seventeenth-century anthology of poetry, shows that the collector, under a different title, converted Donne’s nine tetrameter quatrains into five pentameter six-line stanzas, each ending on a rhymed couplet after “the first four lines replicated the alternating rhymes of the ‘original.’” This is not simply a “version” of Donne’s poem, but a “major reworking,” “done with the creative freedom that collectors and imitators in the system of manuscript transmission felt free to exercise.”[7]

The interactive tradition of reading in the Renaissance is not confined to the fluid manuscript environment. During the Reformation in 16th-century Italy, the Dominican Giovanni Rubeo was in the habit of copying passages, and sometimes entire pages, from the works of Bucer, Zwingli, and Calvin, inserting them later into his own sermons, while Michel de Montaigne, in his Essais, claimed, “I only speak others in order better to speak myself.”[8]

These examples indicate that early modern readers assumed three functions or roles: they were readers, but the reading process implied that each reader was also, in the words of Henry King, “both the Scribe and the Author.”[9]

Interactive reading in the Renaissance was part of the characteristic model of learned reading based on the intellectual technique of collecting “commonplaces.” A reader read texts in order to “extract quotations and examples from them, then note down the more striking passages for easy retrieval or indexing,” or for later use either in writing or in speaking. This “reference” style of reading is symbolized by the reading wheel, “a vertical wheel turned with the help of a system of gears permitting the readers to keep a dozen or so books, placed on individual shelves, open before them at one time.”[10] Reading for “linear narrative” is here replaced by reading multi-linearly, or for points of interest that can later be arranged into a new “narrative” according to individual needs and contexts.

The reading of texts in manuscript also emphasized a “communal” sense of textuality.[11] Manuscript culture especially was orally inflected and “conversational” because writer and audience knowingly participated in a form of “publication as performance.”[12] For the 17th-century poet Katherine Philips, for example, and for many others like her, setting poetry in print “wrested” the texts out of their natural, fluid manuscript environment, in which they were closer to the living word, and fixed them in ways that stood oddly immutable.[13]

The appropriative treatment of and approach to various texts implies a cultural attitude to writing and reading similar to the one articulated by some twentieth-century reader-response theories, or the reader shaped by the hypertext environment. In all three models, readers are seen as having a co-creative role. It is in the idea of the “living” text open to transformations, and in the approach to reading as a creative and re-creative engagement with the text, that past and present resemble one another.

The experience of reading texts in hypertext, the best known example of which is the World Wide Web, is very similar to the experience of reading with the help of a “reading wheel.” It encourages reading not for “linear narrative” but for points of interest, empowering readers to shape and control the reading process by selecting and reading only those parts of texts that are memorable or relevant to them. Similar to the past model, here author and reader often have in common the knowledge of “publication as performance.” Authors “conceive of their works so that readers have many choices along the reading path; they are invited to transform and contribute to the texts, which in turn transforms the literary work into a more open-ended experience.”[14] This approach to writing and reading in hypertext allows the modern reader to assume the three functions mentioned earlier with regard to Renaissance readers, that of reader, “scribe” (one who transcribes or copies the texts of others), and author.

The experience of composing and reading poems in hypertext, as recorded by poet Stephanie Strickland, echoes my description of reading in the Renaissance and captures the spirit of the shift from a print-oriented textuality to hyper-textuality: “When a set of poems is composed in or into hypertext, the space in which they exist literally opens up, [r]eleased from the printed page into this floating space, readers are often uneasy. What is the poem?…Only slowly does one assimilate the truth that one may return each time differently.”[15] “Returning each time differently” encapsulates one of the dominant aspects of hypertext. It is a format that does not depend on a print-informed sense of “original narrative as only context.” It allows for “multiple entrances and exits” from a text. As “wherever the reader plunges in, we find a beginning,” linearity becomes “a quality of the individual reader’s experience.”[16]

As a personal-public pastiche, just like the manuscript environment in the Renaissance, the Internet questions the boundaries between authorship and readership. In his hyperpoem, “Medical Notes of an Illegal Doctor,” poet Alexis Kirke invites readers to envision the poem as a space for social dialogue, and “mutate” the poem by entering their changes in the section “text to be added or changed,” which will after a few clicks, transform the initial poem.[17] The reader is here invited to author as he reads, by adding new text with new links and titles. While reading in the Renaissance was described as “poaching,”[18] reading in hypertext is described in very similar terms, as “welding,” “where the meanings extracted—decontextualized—from different parts of the text can be crafted—re-contextualized”— into something new.[19]

In acknowledging and validating the polysemic nature of language and of human expression and experience, hypertext is linked with “orality” and the idea of the “living word.”[20] Internet-based communication tools such as email, IRC (Internet Relay Chat), forums, and synchronous conferencing illustrate well the association between spoken and written language. Many users, and especially “digital natives,” those who do not know life without computers, treat the interactions enabled by these communicative spaces informally. They use written expression almost as verbal communication, and their texts bear informal “oral” markers in their lack of punctuation and capitalization and in their use of emoticons, whose nature and meaning are modeled on body language.

On-line scholarly essays, too, often function on an interactive principle in that their basic structure subverts the idea that readers have to read in an order intended by the author. They are most frequently organized episodically; the content is broken into relatively short units held together by loosely related ideas, each with a different title and each connected to the next or the previous one with a link. For instance, Kaplan’s recent essay on “politexts” emphasizes an “out of order” reading paradigm: “There are a number of ways to read this essay, none of which will exactly replicate the text of the talk I gave. Take chances with your choices.”[21] This aspect of reading in hypertext will gradually lead to the development of different argumentation strategies, and generally a different sense of narrative structure.[22]

Hypertext thus offers an alternative to what Lyotard calls “the tyranny of coherence,” and indicates that the thinking modality it encourages is “closer to the way the mind works.”[23] It also compels us to reconsider the nature of text in essential ways. By encouraging a “piecemeal” approach to composition and reading, it reeducates us into a form of the “commonplace” tradition of reading and information management. Interactive reading reminds us that knowledge can be transmitted not only through self-referential, extended narratives emphasizing closure, but also as “collections of ideas that can arrange themselves into a kaleidoscope of hierarchical and associative patterns—each pattern meeting the needs of one class of readers [and writers] on one occasion.”[24]

We seem to be experiencing a form of convergence in reading paradigms with past models. However, while the interactive model of reading in the Renaissance was a product of a wider cultural attitude to texts and the world, the interaction enabled by hypertext, and its implications, are often perceived as running against and threatening most cultural and institutionalized notions about texts, reading, and education based largely on print models.

My main point in placing the past and present sense of text and reading experience side by side, therefore, is: 1) to draw attention to the cognitive aspect involved in managing and understanding information, and 2) to make explicit the major assumptions that govern interactive reading in any context.

And while one may be a student who in her spare time reads novels from back to front, or middle to back, that same student, placed in a traditional learning environment, will soon realize that there may be no legitimate or readily articulated context for her quirky reading habit. Traditional education emphasizes submission to authority, often rote memorization (more frequent in disciplines other than English studies), and what Freire called the “banking concept of education,” in which learned teachers deposit knowledge into passive students, implicitly inculcating conformity.[25] This is likely one of the reasons why, as is often heard in academic teaching circles, it is proverbially difficult to “get your students to talk,” and why so many pedagogical seminars are held on that very topic. Becoming a student or a teacher who engages in multiple forms of interactive practice, and who honors the results of those practices, does in many cases itself require practice.

The new model of education calls for multi-linear problem solving and an “interactive” and “participatory” workforce. In a recent article, Andrea Leskes, vice president for education and quality initiatives of the Association of American Colleges and Universities (AAC&U), reminds us that economic globalization, “fueled by the transformative power of modern communications,” poses particular local challenges for institutions of higher learning across the world. The so-called “Greater Expectations Report,” formulated by the Association in 2000, examines the changing role of the academy and liberal studies in the 21st century.[26] The report stipulates that the central aim of a global liberal education is to respond to a world characterized by change and interconnection by preparing students to be integrative thinkers. As integrative global thinkers, students would be able to take a more active part in their learning and then transfer easily what they learn from one context to another. Integrative learning is based on an essential flexibility in how we conceive of knowledge creation and management, which in turn allows for integrating apparently unrelated or disparate ideas and methods into new and unforeseen paradigms, contexts, and unities. Studying the dynamic of interactive reading is thus not only a look back on past practice, but also a model for studying integrative teaching and learning in a global world, and a way of responding to the perceived “lack of clarity of purpose in undergraduate education” as the outcome of “escalating demands created by changes in both the campus experience and the emergence of high-technology industries and applications.”[27]


1. Paul Gilster, Digital Literacy (New York: John Wiley, 1997), 136.

2. The Renaissance inherited the tradition of collecting commonplaces from Antiquity and from the Middle Ages. Thus, the idea of reading as interaction with the aim of remodelling and reusing the whole or parts of a given material is very old. However, as Walter Ong reminds us, it was the Renaissance humanists who distinguished themselves particularly in this practice, and who formulated the contemporary theory of education based on the commonplace technique. Erasmus and his followers “broke down virtually the whole of classical antiquity into these bite-size snippets or sayings (adages or proverbs, and apothegms or more learned sayings), which could then be introduced into discourse as they stood or be imitated.” See Ong, The Presence of the Word: Some Prolegomena for Cultural and Religious History (New Haven: Yale UP, 1967), 62-3. Also see Ong, Interfaces of the Word: Studies in the Evolution of Consciousness and Culture (Cornell UP, 1977) and Orality and Literacy: The Technologizing of the Word (Methuen, 1982), and Ann Moss, Printed Commonplace-Books and the Structuring of Renaissance Thought (Oxford: Clarendon Press, 1996).

3. See D. P. Tackaberry, “The Digital Sound Sampler: Weapon of the Technological Pirate or Pallet of the Modern Artist?,” Entertainment Law Review 87 (1990); Thomas Schumacher, “’This is Sampling Sport’: Digital Sampling, Rap Music and the Law in Cultural Production,” Media, Culture and Society 17 (1995): 253-273; John Perry Barlow, “The Economy of Ideas: A Framework for Rethinking Patents and Copyrights in the Digital Age,” Wired (March 1994); B. R. Seecof, “Scanning Into the Future of Copyrightable Images: Computer-Based Image Processing Poses Present Threat,” High Technology Law Journal 5 (1990): 371-400; Ronald Deibert, Parchment, Printing, and Hypermedia: Communications in World Order Transformation (New York: Columbia University Press, 1997); Michael Rogers and David Starrett, “Techped: Don’t Be Left in the E-Dust,” National Teaching and Learning Forum Newsletter 14.5 (2005), online version accessible at http://www.ntlf.com/.

4. While my claims here are made in relation to a past model of reading characteristic of Western Europe and records of reading habits gleaned from English manuscript collections, the full range of changes brought about by the new technology with regard to the process of reading and the social attitude toward textuality, and their similarities to various, other past models of reading is an emerging area of study.

5. Peter Beal, “Notions in Garrison: The Seventeenth-Century Commonplace Book,” New Ways of Looking at Old Texts: Papers of the Renaissance English Text Society, 1985-1991, ed. Speed Hill (Binghamton: MRTS in conjunction with the Renaissance English Text Society, 1993), 135.

6. In Harley Rawlinson MS (British Library, Harley MS 3991). See Peter Beal, Index of English Literary Manuscripts 1450-1625, Vol. 1 (London: Mansell, 1980), 332.

7. Arthur Marotti, Manuscript, Print and the Renaissance Lyric (Ithaca: Cornell UP, 1995), 152-3.

8. See Jean-Francois Gilmont, “Protestant Reformations and Reading,” The History of Reading in the West, eds. Guglielmo Cavallo and Roger Chartier, trans. Lydia Cochrane (U of Massachusetts P, 1999), 231; Terence Cave, “Mimesis of Reading in the Renaissance,” Mimesis: From Mirror to Method, Augustine to Descartes, eds. John Lyons and Steven Nichols, Jr. (Hanover: UP of New England, 1982), 156. See also Cave, “Problems of Reading in the Renaissance,” Montaigne: Essays in Memory of Richard Sayce, eds. I.W.F. Maclean and I.D. McFarlane (Oxford: Clarendon P, 1982), and The Cornucopian Text: Problems of Writing in the French Renaissance (Oxford: Clarendon, 1979).

9. Cited in Margaret Crum, “Notes on the Physical Characteristics of Some Manuscripts of the Poems of Donne and Henry King,” The Library, 5.16 (1961), 121.

10. Guglielmo Cavallo and Roger Chartier, eds., A History of Reading in the West, trans. Lydia G. Cochrane (Amherst: U of Massachusetts P, 1999), 29.

11. A fascinating example of this “communal” aspect of Renaissance textuality is the manuscript of De Doctrina Christiana, a theological treatise ascribed to John Milton, but whose actual composition as it stands today is the work of at least a few others who added or changed sections of the text without clearly indicating their involvement in this “co-authoring” of Milton’s text. See Gordon Campbell, Thomas N. Corns, John K. Hale, David I. Holmes, and Fiona J. Tweedie, “The Provenance of De Doctrina Christiana,” Milton Quarterly 31.3 (1997): 67-119.

12. After McLuhan, there have been many very useful discussions of the conversational, social dimension of the manuscript culture in the Renaissance, especially with regard to Donne. See, for example, Harold Love, The Culture and Commerce of Texts: Scribal Publication in Seventeenth-Century Culture (Amherst: U of Massachusetts P, 1993); Arthur Marotti, John Donne, Coterie Poet (Madison: U of Wisconsin P, 1986); Ted-Larry Pebworth, “John Donne, Coterie Poetry and the Text as Performance,” Studies in English Literature 29 (1989): 61-75.

13. Margaret Ezell, Social Authorship and the Advent of Print (Baltimore: The Johns Hopkins UP, 1999), 53-4.

14. Eduardo Kac, “Holopoetry, Hypertext, Hyperpoetry,” originally published in Holographic Imaging and Materials (Proc. SPIE 2043), ed. Tung H. Jeong (Bellingham, WA: SPIE, 1993). Accessible at: http://www.ekac.org/Holopoetry.Hypertext.html.

15. Stephanie Strickland, talk given at Hamline University, St. Paul, MN, April 10, 1997. Accessible at: http://www.altx.com/ebr/ebr5/strick.htm.

16. Ingrid Hoofd, “Aristotle’s Poetics: some affirmations and critiques.” Accessible at: http://www.cyberartsweb.org/cpace/ht/hoofd3.

17. The poem can be accessed at: http://wings.buffalo.edu/epc/ezines/brink/brink02/medical.html.

18. Gilmont, “Protestant Reformations and Reading,” 231 (see note 8).

19. Andreas Luco (1999), whose Website features numerous other discussions dealing with the relationships between critical theory and cyberspace: http://www.cyberartsweb.org/cpace/theory/luco/Hypersign/Play.html.

20. I am aware of the host of possible models of communication enabled through the Internet, including the variety of combinations among text, sound and image content. I cannot help but notice that the association between image and text in particular is a very interesting “comeback” of the emblem tradition. This, however, is a different topic; here I am concerned primarily with text-based hypertext.

21. Nancy Kaplan, “Politexts, Hypertexts, and Other Cultural Formations of the Late Age of Print,” Computer-Mediated Communication Magazine 2.3 (1994): 3.

22. See George P. Landow, Hypertext: The Convergence of Contemporary Critical Theory and Technology (Baltimore: Johns Hopkins UP, 1992), and Hypertext 2.0: The Convergence of Contemporary Critical Theory and Technology (Baltimore: Johns Hopkins UP, 1997).

23. Jean Mason, “From Gutenberg’s Galaxy to Cyberspace: the Transforming Power of Electronic Hypertext,” (Diss. McGill U, 2000), accessible at: https://tspace.library.utoronto.ca/citd/JeanMason/about.html. Mason’s work is one of the very few discussions that aims to examine hypertext and its implications with regard to pedagogical practice and long held assumptions about literacy and creativity.

24. Jay David Bolter, Writing Space: The Computer, Hypertext, and the History of Writing (Hillsdale, NJ: Erlbaum, 1991), 87.

25. Douglas Kellner, “Technological Transformation, Multiple Literacies, and the Re-Visioning of Education,” 3, accessible at <http://www.gseis.ucla.edu/faculty/kellner>

26. The Report is part of the AAC &U online publications, and can be accessed at: http://www.greaterexpectations.org/.

27. Ibid.