The Difference that Inquiry Makes: A Collaborative Case Study on Technology and Learning, from the Visible Knowledge Project

This collection of essays from the Visible Knowledge Project is edited by Randy Bass and Bret Eynon, who served together as the Project’s Co-Directors and Principal Investigators. The Visible Knowledge Project was a collaborative scholarship of teaching and learning project exploring the impact of technology on learning, primarily in the humanities. In all, about seventy faculty from twenty-two institutions participated in the Visible Knowledge Project over five years. Participating campuses included six research universities (Vanderbilt University, the University of Alabama, Georgetown University, the University of Southern California, Washington State University, and the Massachusetts Institute of Technology); four comprehensive public universities (Pennsylvania’s Millersville University, California State University (CSU)–Monterey Bay, CSU Sacramento, and Ohio’s Youngstown State University), along with participants from several four-year colleges in the City University of New York (CUNY) system, including City College, Lehman, and Baruch; and three community colleges (two from CUNY, Borough of Manhattan Community College and LaGuardia Community College, and California’s Cerritos College). In addition to campus-based teams, a number of independent scholars participated from a half dozen other institutions, such as Arizona State University and Lehigh University.

The project began in June 2000 and concluded in October 2005. We used several methods of online collaboration to supplement our annual institutes, including an adaptation of the digital poster tool created by the Knowledge Media Lab (Carnegie Foundation), asynchronous discussion, and web conferencing. For more detailed information, see the VKP galleries and archives. You can find PDF files formatted for printing attached at the end of each article.

Capturing the Visible Evidence of Invisible Learning

This is a portrait of the new shape of learning with digital media, drawn around three core concepts: adaptive expertise, embodied learning, and socially situated pedagogies. These findings emerge from the classroom case studies of the Visible Knowledge Project, a six-year project engaging almost 70 faculty from 21 different institutions across higher education. Examining the scholarly work of VKP faculty across practices and technologies, it highlights key conceptual findings and their implications for pedagogical design.  Where any single classroom case study yields a snapshot of practice and insight, collectively these studies present a framework that bridges from Web 1.0 to Web 2.0 technologies, building on many dimensions of learning that have previously been undervalued if not invisible in higher education.

Reading the Reader

Many teachers wonder what happens (or doesn’t happen) when students read text. What knowledge do students need, gain, or seek when reading? Through VKP’s early emphasis on technology experimentation, Sharona Levy adapted a proven reading method of annotation from paper to computer. Through using the comment feature in Word, students’ reading processes became more transparent, explicit, and traceable, allowing her to diagnose gaps in understanding and to encourage effective reading strategies.

Close Reading, Associative Thinking, and Zones of Proximal Development in Hypertext

How can we teach students to slow down their reading process and move beyond surface-level comprehension? Patricia O’Connor’s Appalachian Literature students co-constructed hypertexts that capture the connections readers make among assigned texts, reference documents, and multimedia sources. These hypertexts became more than artifacts of student work; they became collaborative, exploratory spaces where implicit literary associations become explicit.

Inquiry, Image, and Emotion in the History Classroom

With increased online access to historical sources, will students “read history” differently among such artifacts as text, image, or video? Questioning his own assumptions of students’ abilities to analyze historical sources, Peter Felten conducted pedagogical investigations to understand student interpretation of a variety of sources. Designing the use of visual artifacts in the classroom helped students learn not only how to interrogate and interpret primary sources, but also how to construct original arguments about history. Students’ understanding of history deepened while they became emotionally engaged with the material.

From Looking to Seeing: Student Learning in the Visual Turn

Rather than simply using primary source images as illustrations for his course on Power, Race, and Culture in the U.S. City, David Jaffee wanted to teach his students how to interpret visual texts as a historian would. By paying close attention to his students’ readings of images, Jaffee was able to develop ways to scaffold their analysis, teaching them how to move beyond “looking” at isolated images to “seeing” historical context, connection and complexity.

Engaging Students as Researchers through Internet Use

Effective habits of research begin early and should be practiced often. Unearthing discoveries, making connections, and evaluating judiciously are research traits Taimi Olsen values in her first-year composition course. These habits should not be confined to the library: Olsen argues that applying them in online archives hones students’ abilities as expert researchers.

Trace Evidence: How New Media Can Change What We Know About Student Learning

Clicker technology, often used in large-enrollment science courses, works well when every question has a single right answer. Lynne Adrian wanted to find out whether clickers could be used in disciplines which raise more questions than answers, and how illuminating the gray areas between “right” and “wrong” could help her students think critically about American studies. She found that the technology allowed her to preserve traces of the otherwise ephemeral class discussions, enabling her to analyze the types of questions she was asking in class and to track their effects on students’ written work throughout the semester.

Shaping a Culture of Conversation: The Discussion Board and Beyond

What happens when the discussion board goes from being just an assignment to a springboard for intellectual community? Foreseeing many benefits to cultivating discussion among his English students, Ed Gallagher worked to develop frameworks that articulate why discussion is central to the learning process not only in the classroom but also beyond its walls. A higher level of critical analysis, reflection, and a synthesis of multiple perspectives turned class discussions into artful conversations.

The Importance of Conversation in Learning and the Value of Web-based Discussion Tools

In this essay Heidi Elmendorf and John Ottenhoff discuss the central role that intellectual communities should play in a liberal education and the value of conversation for our students, and they explore the ways in which web-based conversational forums can best be designed to fully support these ambitious learning goals. Coming from very different fields (biology and English literature) and different course contexts (a microbiology course for non-majors and a Shakespeare seminar), they nonetheless discover shared core values and design issues by looking closely at the discourse produced in online discussions. Centrally, they connect what they identify as expert-like behavior to the complexities of intellectual development in conversational contexts.

Why Sophie Dances: Electronic Discussions and Student Engagement with the Arts

Paula Berggren struggled to engage her students in critical thinking about unfamiliar art forms, until she posed a simple question on the class’s online discussion board: “Why do people dance?” She found that the students’ responses, rather than being just less-polished versions of what they might write in formal essays, warranted close analysis in their own right. In subsequent teaching, Berggren continues to incorporate some version of a middle space for student work, which not only increases students’ engagement but also allows her to observe and document their thought processes.

Connecting the Dots: Learning, Media, Community

Sometimes the research question you ask isn’t the one you end up answering. Elizabeth Stephen recounts how a debate about the use of films in a freshman seminar led to an experiment in forming a community of scholars which could be sustained over time and across distances. Creating online spaces for students in this group to share their reflections with one another strengthened the ties among them, while allowing Stephen to analyze the multiple elements, both academic and social, which made this a successful learning community.

Focusing on Process: Exploring Participatory Strategies to Enhance Student Learning

Confronting the challenge of improving student writing in a large sociology class, Juan José Gutiérrez developed a software-based peer review process. He required students to evaluate one another’s papers based on specific criteria and to provide constructive feedback. He found that not only did this process help with the logistics of paper-grading, but it also allowed him to adapt his teaching to address specific concerns indicated by qualitative and quantitative analysis of the peer reviews.

Theorizing Through Digital Stories: The Art of “Writing Back” and “Writing For”

Discovering how digital stories engage students in critical, theoretical frameworks lives at the center of Rina Benmayor’s work. Through her course, Latina Life Stories, Rina asked each student to tell his or her own life story digitally and then situate the story within a theoretical context. While this process engaged students in creative theorizing, it also allowed her to document methods for recognizing the quality of student work, resulting in a flexible and intuitive rubric to use beyond this experience.

Video Killed the Term Paper Star? Two Views

Two instructors from separate disciplines discuss what happens when alternative multimedia assignments replace traditional papers. Peter Burkholder found the level of engagement to change dramatically in his history courses while Anne Cross experienced new avenues for talking about sensitive subjects in sociology. Together, both professors explore the advantages and opportunities for video assignments that challenge students to synthesize information in critical and innovative ways.

Producing Audiovisual Knowledge: Documentary Video Production and Student Learning in the American Studies Classroom

Traditionally, academic institutions have segregated multimedia production from disciplinary study. Bernie Cook wondered what his American Studies students would learn from working collaboratively to produce documentary films based on primary sources, and what he in turn might find out about their learning in the process. Students created documentary films on local history, and wrote reflections on their creative and critical process. Not only did students report tremendous engagement with the topics and sources for their projects, they also indicated satisfaction at being able to screen their work for an audience. By allowing his students to become producers of content, Cook enables them to participate fully in the intellectual work of American Studies and Film Studies.

Multimedia as Composition: Research, Writing, and Creativity

Viet Thanh Nguyen reflects on a three-year experiment in assigning multimedia projects in courses designed around the question “How do we tell stories about America?” Determined to integrate multimedia conceptually into his courses, rather than tacking it onto existing syllabi, Nguyen views multimedia as primarily a pedagogical strategy and secondarily a set of tools. Exploring challenges and opportunities for both students and teachers in using multimedia, he outlines principles for teaching with multimedia, and concludes that, while not for everyone, multimedia can potentially create a transformative learning experience.

Looking at Learning, Looking Together: Collaboration across Disciplines on a Digital Gallery

What does it mean for two community college colleagues, teaching in very different disciplines, to work together on a Scholarship of Teaching and Learning (SoTL) project?  What happens when they join together to examine their students’ work, their individual teaching practice, and the possibilities for collaborative research?  And what do they learn when they undertake an electronic publication of that work in a digital gallery?

“It Helped Me See a New Me”: ePortfolio, Learning and Change at LaGuardia Community College

What happens if we shift the focus of our teaching and learning innovations from a single classroom to an entire institution? What new kinds of questions and possibilities emerge? Can an entire college break boundaries, moving from a focus on “what teachers teach” to a focus on “what students learn?” Can we think differently about student learning if we create structures that enable thousands of students to use new media tools to examine their learning across courses, disciplines, and semesters? Bret Eynon explores these questions as he analyzes the college-wide ePortfolio initiative at LaGuardia Community College. Studying individual portfolios and focus group interviews, he also examines quantitative outcomes data on engagement and retention to better consider ePortfolio’s impact on student learning.

From Narrative to Database: Multimedia Inquiry in a Cross-Classroom Scholarship of Teaching and Learning Study

Michael Coventry and Matthias Oppermann draw on their work with student-produced digital stories to explore how the protocols surrounding particular new media technologies shape the ways we think about, practice, and represent work in the scholarship of teaching and learning. The authors describe the Digital Storytelling Multimedia Archive, an innovative grid they designed to represent their findings, after considering how the technology of delivery could impact practice and interpretation. This project represents an intriguing synthesis of digital humanities and the scholarship of teaching and learning, raising important questions about the possibilities for analyzing and representing student learning in Web 2.0 environments.

Multimedia in the Classroom at USC: A Ten Year Perspective

Does multimedia scholarship add academic value to a liberal arts education? How do we know? Looking back at the history of the Honors Program in Multimedia Scholarship at USC, Mark Kann draws on his own teaching experience, discussions with other faculty members, and the university’s curriculum review process to explore these questions. He describes the process of developing the program’s academic objectives and assessment criteria, and the challenges of gathering evidence for his intuitions about the effects of multimedia scholarship. Finally, Kann reports on the program’s first student cohort and looks ahead to the future of multimedia at USC.

TK3: A Tool to (Re)Compose

by Virginia Kuhn, University of Southern California

For a copy of this review in TK3, please click here. If you need to download the TK3 reader, please click here. The download is quick and easy.

Ever since reading George Landow’s Hypertext 2.0 in the mid-1990s, I have been leery of application-specific theory, at least when it is not acknowledged as such. Hypertext 2.0 includes copious references to StorySpace, a software program that allows one to create hypertext and to view and manipulate nodes in various spatial arrangements. These references are not accompanied by full disclosure of Landow’s role in the program’s development; it was developed by academics, many of whom work with Landow. Though it is not clear whether he is a developer himself, there is undoubtedly a reciprocal relationship between him and Eastgate Systems, Inc., the group that created and distributes the program. As I scanned the index entry for StorySpace, noticing that it is longer than the one for the World Wide Web, I felt slightly duped at having shelled out $200+ for this rather limited program, particularly when the Eastgate site was hawking Landow’s book. Now, however, I find myself in a curiously similar position. That is to say, much of the scholarly work I have done lately centers on a software program with which I have been teaching for several semesters and in which I created my dissertation. So while my reasons for this partiality ought to be evident if I have done my job in this review, I still feel the need to state my bias from the outset. Fair warning, then: I have a rather solid (though non-monetary) investment in this platform. And perhaps it is time for academics to be intimately involved in software development, as I believe Landow was with StorySpace, as I am with TK3; if we educators do not help to shape the tools we use, they will be shaped for us and will, no doubt, be imposed upon us by corporate entities such as Microsoft.

When I first encountered TK3, I was teaching at a large urban university and was looking for a way to have students engage with emergent digital technologies; this required them not only to consume or analyze multimodal “texts” but to create them as well. Bob Stein, the founder of NightKitchen, TK3’s developer, delivered the keynote at the 2003 Computers & Writing Conference held at Purdue; I was there facilitating a pre-conference workshop on technology training for new teachers. Another pre-conference activity was a collaborative e-book, Digital Publishing F5 | Refreshed, created by Bob Stein, David Blakesley, and thirty other intellectuals—both publishers and academics—interested in the scholarship, pedagogy, and publishing potentials of emergent technologies. Indeed, since the New London Group’s 1996 manifesto calling for educators to attend to “multiliteracy,” writing scholars have paid increasing attention to interrogating reading and writing with new media.

I was keenly interested in TK3’s pedagogical possibilities, since it promised to allow all students, regardless of their technological prowess, to create digital texts. TK3 seemed an alternative to web page construction: I could not count on students knowing HTML—indeed, there is no baseline for students’ technological proficiency—and I did not want (nor did I have the time or expertise) to teach programming, so I was stymied. I had begun requiring students to compose visual compositions, but I had to allow a range of vehicles for completing their projects, and this was problematic for many reasons. Although there are seemingly numerous programs that allow multimedia authoring, none of them was adequate for my needs, as I shall explain.

I tried TK3 in a course I taught called Writing in Cyberspace, and soon began requiring it in classes ranging from Media Literacy to Multicultural America. This program, meant to allow people with few computer skills to create electronic “books” that include verbal text, still images, sound and video clips, as well as web links and internal jumps, is by far the best of its type, particularly in terms of making meaning in several registers (aural, filmic, imagistic, textual). Its next version, Sophie, will improve upon its precursor and answer the pedagogical needs of those committed to teaching their students a wide range of literate practices; many of the changes came from feedback from those who have used the program. With TK3, one can easily become a new media producer and exhibit what the New London Group sees as “competent control” of new media texts. Indeed, the ability to contribute to the discourse of new media helps to foster the sort of advanced literacy appropriate to university-level studies; without this sort of transformative use, there is very little liberatory potential for literacy.

When I have mandated a particular program in a class, my students and I perform a careful review of the background of its developers. First, I want students to be aware of the programming choices behind the hardware and software used in class; secondly, I want students to confront the content and design choices necessary to create such “texts” themselves. As such, the provenance of the program—the maker of the platform and also the use for which it was made and has been subsequently deployed—is indeed germane.

Throughout The Critical Theory of Technology, Andrew Feenberg argues that technology is ultimately ambivalent and can be used for disparate social and political ends. Even so, an inherent ideology guides programming choices—the surveillance features of many office-networked programs are a good example, since there is no technical reason for such features to exist. Thus, we must view interaction with technology not as an exchange between a person and a machine but rather as a dialogue between a human user and a human programmer.[1] No curricular materials, be they textbooks, software, or hardware, are politically innocent or ideologically neutral; thus, it is important to teach students not only to critically “read” such “texts” but also to use them in a way that demonstrates self-consciousness and an awareness of both the possibilities and the limitations of constructing meaning in a given environment. I firmly believe that we, as academics, cannot afford simply to use platforms created by corporations, or those produced for some other purpose, in university-level education. We have to step up to the plate and answer for the ideological implications inherent in the vehicles we require students to use, as well as remain attuned to the specifics of the projects we assign to be carried out in those vehicles.

Figure 1. Screen shot showing the “find” feature which lists not only the search term but also the words immediately before and after. The list is also hot so one click brings the reader to its reference.

Although I mostly use TK3’s authoring function in class (I have students create TK3 books as their course projects), the reader functions are at least as compelling and suggest many teaching strategies. They take full advantage of the encyclopedic nature of the computer. A rich “find” function allows one to search for words or phrases; the results are listed with a few words on either side of the searched term to give a better sense of the context in which each instance occurs. Further, the resulting list is “hot,” so that one click takes you to the place where the word occurs. The reader can highlight text, add sticky notes, or produce a notebook into which she can copy excerpts that import complete with working citations. All of these markings are catalogued in a “marks” history; again, this list is hot. For a particularly good example of teacherly comments and directions for students’ access to them, see Cheryl Panelli’s web page describing how her students ought to download and read her comments on their work. As a pedagogical tool, TK3 allows the teacher to approach the student text with the sort of seriousness with which she would approach a published text. As teachers, we are conditioned to engage with published writing via a highlighter while we approach student writing with a red pen; TK3’s reader functions help to dismantle this dichotomy. Reader functions aside, the ability to “write” new media is what I find most compelling about TK3.

As the editors of Literacy: A Critical Sourcebook point out, computers call attention to “the relation between technology and literacy in an especially insistent fashion,” and yet this relationship has always been a concern, from the technology of writing itself to other vehicles of textual and pictographic delivery and dissemination.[2] In Chaucer’s day, they note, the act of reading was becoming more widespread, but the act of writing was left to experts. Further, as several essays in this anthology note, the liberatory potential of literacy cannot be realized without the ability to write; for instance, speaking of cultural critic bell hooks’s account of her education at the feet of her black female teachers, Lisa Delpit notes that hooks learned that she was “capable of not only participating in the mainstream, but of redirecting its currents.”[3] This is why scholars such as Gregory Ulmer call for teachers to move students from literacy to “electracy,” a neologism he adopts to describe the abilities necessary to be conversant in digital technologies. Seeing digital technologies as inherently image-based, Ulmer teaches students to use both poststructuralist theory and memory work (psychoanalytic theory) to investigate the ways in which cultural structures shape their identities and desires. Students represent that work via web pages produced in response to a series of exercises that help make them “egents” in an electrate world. I subscribe to Ulmer’s methodology, and yet I cannot, as I said, count on my students to be able to code web pages. Even if students were to use a variety of web composing programs such as FrontPage or Netscape’s Composer, there would be less opportunity to interrogate individual programs and, without a lot of training, the pages would be very rudimentary. TK3 simply gives the author far more control over the interplay between word and image as well as sound and video.

Several aspects of the program are worth mentioning.

Book Metaphor: Because TK3 retains the book metaphor (while not being a slave to it), it does not sacrifice print literacy for bells and whistles. Unlike PowerPoint, for instance, there is no limit to the amount of text that fits on a screen. As a result, an author is not encouraged to reduce complex ideas to the bullet points so commonly found on PowerPoint slides. In addition, images may either be added into the “stream” of the text, anchored between words and moving with them, or added independently with the text wrapping around them. The latter feature adds visual elegance to the page even as it allows for more complexity in the relationship between text and image. One can teach print and visual literacy in a contrastive way, using word and image to inform each other and to highlight the potentials and limitations of each as a semiotic system. The text-friendliness of TK3 is crucial for those of us who do not wish to sacrifice print literacy even as we endeavor to enhance it with other forms of expression afforded by digital technologies. Edward Tufte argues that PowerPoint’s limited text capability causes the oversimplification of complex ideas, and he cites the breakdown of information among NASA engineers and business managers that ultimately led to the Columbia disaster.[4] While I believe Tufte overstates the case, it is important to remember that PowerPoint and Keynote are presentation programs, meant to be accompanied by a human presenter who supplies more detailed information. The Columbia incident is more an example of using the wrong tool, or at least of relying on PowerPoint alone to convey critical information. This is, indeed, rocket science. The problem occurs when such programs are used to produce stand-alone texts, such as those associated with college-level assignments. By contrast, TK3 may be used as a presentation aid but can also include complex ancillary documents.

Figure 2. Stuart Pate’s student project. He did not make use of the more linear book metaphor, and yet its availability in TK3 is important in that it does not abandon print literacy in favor of image and sound.

Media Resources: This feature is the key to the program’s brilliance. TK3 can act as a holding tank. One can import multiple types of files including jpegs, movie files, text files, sound clips and web links. From here the author simply drags the assets into her book and manipulates them as she wishes. The media resources feature can be filled with a wide range of resources, allowing one to “write” with numerous modes of expression in a very deliberate way. Thus, the program fosters self-conscious decisions about both form and content. Further, since TK3 uses its own reader, the resulting text is not dependent on the look of various web browsers that might change the look of the e-book. Though the Internet can be used as a method of dissemination, the books retain the integrity of the author’s creation.

Complex Annotation: A stunning feature of the program is its ability to annotate. One can create both simple and complex annotations. A simple annotation is one in which a pop-up box contains a single resource: a word or an image. It may be attached to a word or an image in the main body of the book via an “annobeam.” With its transparent annobeam, this annotation conveys further information without sending the reader to a different page, from which she may not be able to return, even as it visually shows its origin without covering the rest of the page or disrupting the main body. The complex annotation takes this capacity to a higher level. A complex annotation is one that includes more than one media element. Similar to a mini-book, it can house film and sound clips, text and image, and links to other annotations or to outside web sources. Thus, one can create a complex annotation inside a complex annotation ad infinitum, enacting the sort of depth that characterizes the pinnacle of hypertext as Jerome McGann describes it in “The Rationale of Hypertext.”[5]

Though the term “depth” is bandied about frequently in the discourse of hypertextual structures, McGann offers the smartest explication of what it actually might mean and how such capacity can exceed the work of the bound book. This capacity for depth of analysis is instantiated in TK3’s annotation features.

Using the example of the scholarly or critical edition, a staple of literary studies, McGann describes the inherent shortcoming of using a book to analyze a book due to the one-to-one ratio:

 Brilliantly conceived, these works are nonetheless infamously difficult to read and use. Their problems arise because they deploy a book form to study another book form. This symmetry between the tool and its subject forces the scholar to invent analytic mechanisms that must be displayed and engaged at the primary reading level — e.g., apparatus structures, descriptive bibliographies, calculi of variants, shorthand reference forms, and so forth. The critical edition’s apparatus, for example, exists only because no single book or manageable set of books can incorporate for analysis all of the relevant documents. In standard critical editions, the primary materials come before the reader in abbreviated and coded forms.

The problems grow more acute when readers want or need something beyond the semantic content of the primary textual materials — when one wants to hear the performance of a song or ballad, see a play, or look at the physical features of texts. Facsimile editions answer to some of these requirements, but once again the book form proves a stumbling block in many cases. Because the facsimile edition stands in a one-to-one relation to its original, it has minimal analytic power — in sharp contrast to the critical edition. Facsimile editions are most useful not as analytic engines, but as tools for increasing access to rare works.[6]

Programmatic Support: My approach to digital technologies rests on the following premises:

  1. Literacy includes both reading and writing (decoding and encoding).
  2. Digital technologies change what it means to read and to write.
  3. We are on the verge of a new semiotic system that is rooted in print and image but exceeds them.
  4. Literacy cannot be unhooked from its materiality.
  5. Technology is never politically innocent, nor ideologically neutral.

Given these guiding beliefs, TK3’s developer is a strong asset to the program. The Institute for the Future of the Book was formed, with two large grants, one from the Mellon Foundation and one from the MacArthur Foundation, to create the next generation of tools for digital scholarship. The Institute has launched many projects in addition to overseeing the development of the next generation of TK3, which will be called Sophie. Due out within months, Sophie will incorporate feedback and suggestions from the teachers, students, writers, and artists who have used TK3. It will be open source and networked, both of which will encourage user enhancement and wider use. The people who staff the Institute for the Future of the Book are smart, supportive, and responsive to the needs of the educators they serve; they are involved in numerous projects at the forefront of digital technologies, including digital textbooks, academic blogs, conversations with authors of prominent new books, and the archiving of art and photography. Their blog, if:book, is widely read and cited in many e-zines and on other blogs. Their work is collaborative and not driven by revenue or other corporate concerns; as such they are great intellectual partners. They are a strong feature of TK3, a program that allows writers both to theorize and to enact the types of literacies necessary for life in the wired world of the 21st century.

Figure 3. The opening screen of the Institute for the Future of the Book site; the Institute will release TK3’s next version, Sophie.


1. Andrew Feenberg, Critical Theory of Technology (Oxford: Oxford UP, 1992), 183.

2. Ellen Cushman et al., Literacy: A Critical Sourcebook (Boston: Bedford/St. Martin’s, 2001), 4.

3. Lisa Delpit, “The Politics of Teaching Literate Discourse,” in Literacy: A Critical Sourcebook (Boston: Bedford/St. Martin’s, 2001), 552.

4. Edward Tufte, “PowerPoint Does Rocket Science: Assessing the Quality and Credibility of Technical Reports” (Home Page, Selected Writings, 6 September 2005). Tufte concludes that technical reports are superior to PowerPoint for conveying complex information, but this seems obvious, since PowerPoint is presentation software and does not pretend to be a reporting format.

5. Jerome McGann, “The Rationale of Hypertext” (1995), screen 2.

6. Ibid.

Blakesley, David, et al. Digital Publishing F5 | Refreshed. West Lafayette: Parlor Press, 2003.

Landow, George. Hypertext 2.0: The Convergence of Contemporary Critical Theory and Technology. Baltimore: Johns Hopkins UP, 1997.

New London Group. Multiliteracies: Literacy Learning and the Design of Social Futures. New York: Routledge, 1999.

Ulmer, Gregory. Internet Invention: From Literacy to Electracy. New York: Longman, 2002.

Taking Culture Seriously: Educating and Inspiring the Technological Imagination

Posted December 12th, 2005 by Anne Balsamo, University of Southern California

Introduction:  On the Relationship of Technology and Culture

Ignorance costs.

Cultural ignorance — of language, of history, and of geo-political contexts — costs real money.

Microsoft learned this lesson the hard way. A map of India included in the Windows 95 OS represented a small territory in a different shade of green from the rest of the country. The territory is, in fact, strongly disputed between the Kashmiri people and the Indian government; but Microsoft designers inadvertently settled the dispute in favor of one side. Assigning the territory (roughly 8 pixels in size on the digital map) a different shade of green signified that the territory was definitely not part of India. The product was immediately banned in India, and Microsoft had no choice but to recall 200,000 copies. With the release of another version of its famous operating system, Microsoft again learned the cost of cultural ignorance. A Spanish-language version of the Windows XP OS marketed to Latin American consumers presented users with three options to identify gender: “non-specified,” “male,” or “bitch.” In a different part of the world, with yet a different product, Microsoft was again forced to recall several thousand units. In this case the recall became necessary when the Saudi Arabian government took offence at the use of a Koran chant as a soundtrack element in a Microsoft video game. The reported estimate of lost revenue from these blunders was in the millions of dollars.[1]

These examples illustrate the very real ways in which cultural ignorance costs money and good will in the big business of technological innovation. In this case, several seemingly insignificant details incorporated into state-of-the-art digital applications not only resulted in the recall of several widely distributed products and damage to a global brand, but also demonstrated a grand failure of multicultural intelligence within the ranks of a multinational corporation.

Although it is tempting to deploy these examples as a contribution to the popular pastime of Microsoft bashing, that response is neither creative nor particularly insightful. Rather, I use the examples of the costliness of a multinational corporation’s cultural blunders to assert that the process of technological innovation must take culture seriously. Moreover, I argue that the process of technological innovation is not solely about the design and development of new products or services, but rather is the very process that creates the cultures that we inhabit around the globe.

Technology is not an epiphenomenon of contemporary culture, but rather is deeply intertwined with the conditions of human existence across the globe. Although we are now more than a century past the dawn of the industrial age, the global distribution of the benefits of industrialism, i.e., basic health and subsistence-level resources, remains disturbingly uneven. In considering the significant loss of life due to recent hurricanes in the southern U.S., it is clear that the demarcation between rich and poor does not map simply onto the division between the global North and South. The tragedy revealed a wide-scale ignorance of the reality of the technological situation of people living in those regions. Evacuation orders were not only late in coming; they also addressed only those who were already technologically endowed with the means to flee to safer ground, i.e., the automobile, or those who had access to other technological resources, such as planes, trains, or buses. When lives are at stake, as is often the case with the deployment of large-scale or new technologies, it is ethically imperative that the technological imagination explicitly consider cultural, social, and human consequences. This imagination must be trained to imagine the unimaginable—that is, to actively imagine unintended consequences.

When developing new technologies, culture needs to be taken into consideration at even a more basic level: as the foundation upon which the technological imagination is formed in the first place. I define the technological imagination as a character of mind and creative practice of those who use, analyze, design and develop technologies.[2] It is a quality of mind that grasps the doubled-nature of technology: as determining and determined, as both autonomous of and subservient to human goals. This imagination embraces the possibility of multiple and contradictory effects. This is the quality of mind that enables people to think with technology, to transform what is known into what is possible, and to evaluate the consequences of such creation from multiple perspectives.

The Interdisciplinary Education of the Technological Imagination

Every discipline within the contemporary university has been transformed by the development of new technologies, whether technology now becomes an “object” of study, as in the humanities and legal studies; a tool of knowledge production, as in the social and medical sciences; or a domain of new disciplinary knowledge, as in the engineering sciences, cinema, and communication studies. This means that every discipline within the university has something important to contribute to the development of new technologies.

Universities need to actively educate and inspire researchers, teachers and students to develop a robust technological imagination. This is an educated “quality of mind” that is by nature thoroughly interdisciplinary. To understand technology deeply one needs to apprehend it from multiple perspectives: the historical, the social, the cultural, as well as the technical, instrumental and the material. We must develop interdisciplinary research and educational programs that enact and teach skills of creative synthesis of the important insights from a range of disciplines in the service of producing incisive critique of what has already been done. From this critique emerges the understanding of what is to be done. In this formulation, the traditional role of criticism is expanded. No longer an end in itself, criticism of what has already been done is a step in the process of determining what needs to be done differently in the future. Our educational programs need to teach skills of critical thinking that lead to creative proposals for doing things differently. Then we need to teach students methods for doing things differently with technology: how technologies are built, how they are implemented, how they are reproduced and how they affect cultural arrangements. This is the foundation of innovative research and new knowledge production. This is the work of the university-educated technological imagination.

Figure 1: How the university contributes to significant cultural change through the development of new technologies

Educational programs that seek to develop a robust technological imagination must include training in 1) the history of technology, 2) critical frameworks for assessing technology and identifying effects, 3) creative and methodological use of technological tools, 4) pedagogical activities and exercises that create new technological applications, devices, and services, 5) architectural and virtual spaces for social exchange and creative production, and 6) international studies and policy analysis that provide appropriate cultural and institutional contexts of assessment of effects. This is the necessary multidisciplinary foundation for the development of new technologies.

Moreover, there is a category of technology—what might be labeled technologies of literacy—that serves as the stage for the elaboration, reproduction, performance, and dissemination of culture across the globe. Technologies of literacy include the development of pedagogical methods for educating literate citizens who not only understand the technologies already available, but who will be equipped with the intellectual foundation and habits of mind to respond to and use the new technologies that will become commonplace in the future. This is a crucial dimension of the education of life-long learners. Thus these educational programs must experiment with and develop innovative pedagogies that engage multiple intelligences: the social, cultural, and emotional, as well as the cognitive and the technical. Furthermore, these pedagogies must utilize the full range of new technologies that enable multiple modes of expression in the production of educational materials and educational output: visual, textual, aural, corporeal, and spatial. In this way these programs both draw on new technological literacies and engage faculty and students in the creation of the literacies of the future.

In a research context, the manifestation of this imagination comes through the collaboration of faculty and researchers from different disciplines working together on projects of social and cultural significance to create human-centric technologies. The output of their research may take several forms: innovative technological devices, applications, research monographs, presentations, demonstrations, performances, and installations. The guiding strategy for all these research projects is that they “take culture seriously.” Culture serves as both the context for the formulation of the research problem in the first place, and as the domain within which significant technological developments will unfold. In this way, this kind of technology-based research understands its ethical dimensions and acknowledges its ethical responsibilities.

To do this right, we need to ground these interdisciplinary efforts in new ways of thinking about technology. We need a new educational philosophy that can guide our efforts to create “original synners”—students who can synthesize information from multiple perspectives.[3] We need to develop new institutional structures for research and new pedagogies that support the development of the technological imagination and inspire its practical application. We need new analytical frameworks that enable us to imagine the multiple consequences of the deployment of new technologies. I also argue that we need to specify the ways in which all of us within the university are accountable for the future of technological development. Designers and engineers need to address their cultural responsibilities.  Humanists and social scientists must contribute creative direction as well as critical analyses. In an effort to suggest a starting point for new multidisciplinary collaborative applied technology-based research projects that take culture seriously, I offer the following three broad questions:
What are the most pressing cultural issues within the US and across the world?
All technologies rearrange culture. We know that new technologies are especially useful in facilitating interactions among people from different cultures. How is the project of cultural reproduction served by new technologies? How will current as well as traditional cultural memories be preserved over time? How should we choose what to forget? What role does narrative play in the technological reproduction of culture? How is narrative itself a technology of culture? What new narrative devices/applications need to be developed to aid the reproduction of culture? The use of new digital devices for entertainment and pleasure yields contradictory effects. While some people in the developed world enjoy an expanded range of mobility, enabled by the development of mobile communication devices, others become more sedentary and confined within a limited orbit. Through the use of global telecommunication networks people can expand their global awareness through virtual visits. What are the cultural possibilities and consequences of virtual mobility? What is the future of embodied play and entertainment? What implications does this have for the design of playgrounds, digitally-augmented performance spaces, and the development of creative toys? What are the implications of virtual tourism for the reproduction of privilege and mobility? What are the cultural possibilities of technologically-augmented reality?

What are the literacies of the 21st century?
Literacy is a technological phenomenon. The development of new technologies of communication and expression not only influences but demands the development of new literacies. These literacies do not compete with traditional print-based literacies, but rather build on and complement them. Current undergraduate students will become the next generation of scholars and researchers who will go on to develop new technologies of literacy, new genres and devices of cultural expression, and new forms of scholarship and research. How will we prepare them for this important cultural work? What technologies can be developed to teach basic literacy? What new kinds of reading devices will be useful in the future? How will our educational materials need to change to address the many kinds of literacy that will be required of future generations: reading, writing, digital, technological, multimedia? What will the textbook of the future look like? What are the possibilities of multi-player distributed gaming for the development of educational experiences?

What will scholarship look like in 10-15 years? 
Interdisciplinary collaborations and research provoke the need to develop new forms of scholarship, publications and other modes of cultural outreach. These new forms in turn offer an opportunity to experiment with modes of expression made possible by the development of new digital technologies. In the process, new forms of knowledge production emerge. New forms of scholarship will require the development of new authoring and publishing tools. We already know that authoring and designing are merging; what kinds of digital authoring environments are needed to support scholarship across the curriculum? Collaborative scholarship is a global phenomenon: how can social networking applications be used for scholarly and educational purposes? These social networking applications facilitate communication among scholars and lay people, thus offering a stage for the forging of radically new collaborations for the production of knowledge. Traversing the binary distinction between “scholar” and “amateur” promises to transform the educational scene within the university, effectively opening up the university to the world in unprecedented ways. How can the communication of scholarship and new research be enhanced through the development of multilingual digital applications, widely distributed digital archives, and new collaboration platforms?  What are the stages for knowledge transfer from the university to the broader public, which now includes so-called “amateurs” who are also actively engaged in new knowledge construction (through the development of folksonomies, for example)?

A trained technological imagination is the critical foundation required by the next generation of technologically and culturally literate scholars, scientists, engineers, humanists, teachers, artists, policy makers, leaders, and global citizens. Creating research programs and new curricula that explicitly address the education of the technological imagination are the ways in which the university will contribute to significant cultural change.

Instead of a Bridge, How about a Collaboratory?

In 1959, when C.P. Snow first described the gulf between the sciences and the humanities as a “two culture” problem, he implored educators to find ways to bridge the divide.[4] He took pains not to blame one side or the other for the failure to communicate because he believed that neither “the scientists” nor the “literary intellectuals” had an adequate framework for addressing significant world problems. In the intervening half-century since the publication of Snow’s manifesto there have been several attempts to bridge the “two culture” divide. While some of these attempts resulted in spectacular failures (“The Science Wars” of the early- to mid-1990s), others represent modest but ongoing interventions (the Society for Literature, Science and the Arts).[5] Science and Technology Studies (STS) programs are noteworthy academic efforts that train students to investigate the cultural and social implications of science and technology. Few if any of these programs or institutional experiments have successfully brought humanists, social scientists, scientists, and engineers together—as peers—to collaborate on the production of new applied research that results in the creation of new technologies. Future attempts to bridge the two cultures will be of limited success as long as these groups of scholars continue to see themselves as standing on opposite sides of the divide, or if the groups continue to regard each other as hierarchically advantaged or disadvantaged. I believe that the time is right to take up Snow’s challenge once again, not to work on building bridges per se, but rather to create a new place for the practice of multidisciplinary, collaborative technology-based research.

In 1989, a professor at the University of Virginia coined the term collaboratory to describe a new institutional structure for collaborative research. As of Fall 2005, there are dozens of collaboratories around the world, most of which are virtual spaces that utilize digital network technologies to support the collaboration among researchers at distant physical locations. Many of these collaboratories are actually collaborations among laboratories located around the world, where the individual laboratories are (presumably) still organized in the typical fashion around a single PI’s research or a single topic.

To date the collaboratories that involve humanities scholars focus almost exclusively on humanities computing research, where the projects involve the development and use of a high-end digital infrastructure for digitizing, archiving and searching specialized collections of historic materials, most typically books, manuscripts, and images. While these efforts and others such as the various “digital library” projects are absolutely necessary and valuable, they represent only one vector of research that unites the humanistic with the technological.

In 2002, a group of humanities program directors formed a virtual collaboratory called HASTAC: The Humanities, Arts, Science and Technology Advanced Collaboratory, designed to promote the development of humane technologies and technological humanism.[6] The programs participating in HASTAC have each attempted to create some sort of institutional space for collaborative research involving humanists and technologists. The efforts include humanities computing programs as well as interdisciplinary humanities institutes that have a particular focus on science and technology.

Inspired by HASTAC discussions and meetings, I assert that there is a critical need to create physical collaboratories that bring humanists, artists, media producers and technologists together to build human-centric technologies. This requires a physical space where researchers from multiple disciplines work together as peers to design, prototype, and actually fabricate new technologies. In combining the critical methods of the humanities and social sciences with innovative engineering/design methods such as rapid prototyping and user-centered design, these collaborators will create innovative methodologies. Thus, the research output includes not simply new technology-based projects and demonstrations, but also insights into the nature of interdisciplinary collaboration and the creation of new methodologies for collaboration. Instead of a single PI, the business of the collaboratory would be coordinated by a representative group of researchers whose interests span the disciplinary spectrum: humanities, social and cognitive sciences, arts, engineering and sciences. As participants in this collaboratory, researchers from various disciplines each bring something important to the collaborations:

Special role of the humanist: Contributes expertise in the assessment and critique of the ethical, social, and practical affordances of new technologies; provides expertise on the process of meaning-making which is central to the development of successful new technologies; provides appropriate historical contextualization.

Special role of the social and cognitive scientist: Contributes expertise in the assessment of social impact and in the analysis of institutional, policy, and global effects of the development and deployment of new technologies; addresses the cognitive impact of new technologies; provides methods for analyzing social uses.

Special role of the technologist: Contributes expertise in the innovation of new devices and applications; provides analytical skills in the assessment of problem formation and solution design; demonstrates methods of design, creation, and prototyping; recommends specific tools, processes, and materials.

Special role of the scientist: Contributes expertise in the development of new theoretical possibilities; provides methodologies for assessing and evaluating implementation efforts, and for formulating possible (theoretical) outcomes; develops experiments with new materials; contributes understanding about environmental impacts and waste management.

Special role of the artist: Contributes expertise in the performance, expression, and demonstration of technological insights; provides skills in different modes of engagement: the tactile, the visual, the kinesthetic, and the aural.

The goal is to create space for the constitution of a research community that collaborates on technology-based projects that take culture seriously. While it is tempting to offer a list of suggested projects, this would undermine one of the critical components of the collaborative effort. While any participant can suggest a project, the project must be, in effect, adopted by the community. This is to say that there needs to be consensus that a project is important to pursue. This, of course, is the basis of all good research; but it is rare that humanists, artists, and social scientists have a voice in this kind of evaluation of technology-based research projects. It is rarer still that they have peer status as researchers who will design, build, and fabricate new technologies. This is one of the important innovations of such a collaboratory. The output of these research projects might include typical research monographs, but also possibly public demonstrations, new pedagogical technologies, and new technologies of literacy. All the collaborators will serve as important “technology-translators” who can help make the meaning of new technologies more accessible to a wider public, both within and outside of the academy.

The social engineering of this endeavor is a crucial element of its success. The price of admission to this collaboratory is an individual’s commitment to embrace collaborative work. A key requirement of the research participants is that they work against the facile division of labor that would have the humanists doing the “critique,” the technologists doing the building, and the artists offering art direction. While there is a special role to be played by each participant, all must be willing, indeed eager, to learn new skills, new analytical frameworks, new methods, and new practices. A personal commitment to life-long learning is the foundation for these collaborations. Each participant must be willing to uphold the ethical foundation of multidisciplinary work: intellectual flexibility, intellectual generosity, intellectual confidence, and intellectual humility. Only by doing so will the collaborations result in the kind of work where the sum is greater than the parts, and where the technological imagination can be freely exercised and employed to create futures that are desirable for all people around the world, not just for those who are already privileged and technologically empowered.

Excerpted from Chapter 1: The Technological Imagination Revisited; Designing Culture: A Work of the Technological Imagination,  Anne Balsamo, Duke University Press, forthcoming.

[1]. Jo Best, “How eight pixels cost Microsoft millions,” c|net.

[2]. The resonance with C. Wright Mills’ notion of the “sociological imagination” is intentional here. C. Wright Mills, The Sociological Imagination (London: Oxford UP, 1959). See also: Michel Benamou, “Notes on the Technological Imagination,” in Teresa De Lauretis, Andreas Huyssen, and Kathleen Woodward, eds., The Technological Imagination: Theories and Fictions (Madison, WI: Coda Press, 1980), 65-75.

[3]. This is an explicit reference to Pat Cadigan’s novel Synners (New York: HarperCollins, 1991). For a more complete discussion of the education of original synners, see “Engineering Cultural Studies: The Postdisciplinary Adventures of Mindplayers, Fools, and Others,” in Science + Culture: Doing cultural studies of science, technology and medicine, eds. Sharon Traweek and Roddey Reid (New York: Routledge, 2000), 259-274.

[4].  C.P. Snow, The Two Cultures: and a Second Look (New York: Cambridge University Press, 1963).