Taking Culture Seriously: Educating and Inspiring the Technological Imagination

Posted December 12th, 2005 by Anne Balsamo, University of Southern California

Introduction:  On the Relationship of Technology and Culture

Ignorance costs.

Cultural ignorance — of language, of history, and of geo-political contexts — costs real money.

Microsoft learned this lesson the hard way. A map of India included in the Windows 95 OS represented a small territory in a different shade of green from the rest of the country. The territory is, in fact, strongly disputed between the Kashmiri people and the Indian government; but Microsoft designers inadvertently settled the dispute in favor of one side. Assigning the territory (roughly eight pixels in size on the digital map) a different shade of green signified that the territory was definitely not part of India. The product was immediately banned in India, and Microsoft had no choice but to recall 200,000 copies. With the release of another version of its famous operating system, Microsoft again learned the cost of cultural ignorance. A Spanish-language version of the Windows XP OS marketed to Latin American consumers presented users with three options to identify gender: “non-specified,” “male,” or “bitch.” In a different part of the world, with yet another product, Microsoft was again forced to recall several thousand units. In this case the recall became necessary when the Saudi Arabian government took offense at the use of a Koran chant as a soundtrack element in a Microsoft video game. The reported estimate of lost revenue from these blunders was in the millions of dollars.[1]

These examples illustrate the very real ways in which cultural ignorance costs money and good will in the big business of technological innovation. In these cases, seemingly insignificant details incorporated into state-of-the-art digital applications not only resulted in the recall of several widely distributed products and damage to a global brand, but also demonstrated a grand failure of multicultural intelligence within the ranks of a multinational corporation.

Although it is tempting to deploy these examples as a contribution to the popular pastime of Microsoft bashing, that response is neither creative nor particularly insightful. Rather, I use the examples of the costliness of a multinational corporation’s cultural blunders to assert that the process of technological innovation must take culture seriously. Moreover, I argue that the process of technological innovation is not solely about the design and development of new products or services, but rather is the very process that creates the cultures that we inhabit around the globe.

Technology is not an epiphenomenon of contemporary culture, but rather is deeply intertwined with the conditions of human existence across the globe. Although we are now more than a century past the dawn of the industrial age, the global distribution of the benefits of industrialism, i.e., basic health and subsistence-level resources, remains disturbingly uneven. In considering the significant loss of life due to recent hurricanes in the southern U.S., it is clear that the demarcation between rich and poor does not map simply onto the division between the global North and South. The tragedy revealed a wide-scale ignorance of the reality of the technological situation of people living in those regions. Evacuation orders were not only late in coming; they addressed only those who were already technologically endowed with the means to flee to safer ground, i.e., the automobile, or those who had access to other technological resources, such as planes, trains, or buses. When lives are at stake, which is often the case with the deployment of large-scale or new technologies, it is ethically imperative that the technological imagination explicitly consider cultural, social, and human consequences. This imagination must be trained to imagine the unimaginable—that is, to actively imagine unintended consequences.

When developing new technologies, culture needs to be taken into consideration at an even more basic level: as the foundation upon which the technological imagination is formed in the first place. I define the technological imagination as a character of mind and creative practice of those who use, analyze, design, and develop technologies.[2] It is a quality of mind that grasps the doubled nature of technology: as determining and determined, as both autonomous of and subservient to human goals. This imagination embraces the possibility of multiple and contradictory effects. This is the quality of mind that enables people to think with technology, to transform what is known into what is possible, and to evaluate the consequences of such creation from multiple perspectives.

The Interdisciplinary Education of the Technological Imagination

Every discipline within the contemporary university has been transformed by the development of new technologies, whether technology now becomes an “object” of study, as in the humanities and legal studies; a tool of knowledge production, as in the social and medical sciences; or a domain of new disciplinary knowledge, as in the engineering sciences, cinema, and communication studies. This means that every discipline within the university has something important to contribute to the development of new technologies.

Universities need to actively educate and inspire researchers, teachers and students to develop a robust technological imagination. This is an educated “quality of mind” that is by nature thoroughly interdisciplinary. To understand technology deeply one needs to apprehend it from multiple perspectives: the historical, the social, the cultural, as well as the technical, instrumental and the material. We must develop interdisciplinary research and educational programs that enact and teach skills of creative synthesis of the important insights from a range of disciplines in the service of producing incisive critique of what has already been done. From this critique emerges the understanding of what is to be done. In this formulation, the traditional role of criticism is expanded. No longer an end in itself, criticism of what has already been done is a step in the process of determining what needs to be done differently in the future. Our educational programs need to teach skills of critical thinking that lead to creative proposals for doing things differently. Then we need to teach students methods for doing things differently with technology: how technologies are built, how they are implemented, how they are reproduced and how they affect cultural arrangements. This is the foundation of innovative research and new knowledge production. This is the work of the university-educated technological imagination.


Figure 1: How the university contributes to significant cultural change through the development of new technologies

Educational programs that seek to develop a robust technological imagination must include training in 1) the history of technology, 2) critical frameworks for assessing technology and identifying effects, 3) creative and methodological use of technological tools, 4) pedagogical activities and exercises that create new technological applications, devices, and services, 5) architectural and virtual spaces for social exchange and creative production, and 6) international studies and policy analysis that provide appropriate cultural and institutional contexts of assessment of effects. This is the necessary multidisciplinary foundation for the development of new technologies.

Moreover, there is a category of technology—what might be labeled technologies of literacy—that serves as the stage for the elaboration, reproduction, performance, and dissemination of culture across the globe. Technologies of literacy include the development of pedagogical methods for educating literate citizens who not only understand the technologies already available, but who will be equipped with the intellectual foundation and habits of mind to respond to and use the new technologies that will become commonplace in the future. This is a crucial dimension of the education of life-long learners. Thus these educational programs must experiment with and develop innovative pedagogies that engage multiple intelligences: the social, cultural, and emotional, as well as the cognitive and the technical. Furthermore, these pedagogies must utilize the full range of new technologies that enable multiple modes of expression in the production of educational materials and educational output: visual, textual, aural, corporeal, and spatial. In this way these programs both draw on new technological literacies and engage faculty and students in the creation of the literacies of the future.

In a research context, the manifestation of this imagination comes through the collaboration of faculty and researchers from different disciplines working together on projects of social and cultural significance to create human-centric technologies. The output of their research may take several forms: innovative technological devices, applications, research monographs, presentations, demonstrations, performances, and installations. The guiding strategy for all these research projects is that they “take culture seriously.” Culture serves as both the context for the formulation of the research problem in the first place, and as the domain within which significant technological developments will unfold. In this way, this kind of technology-based research understands its ethical dimensions and acknowledges its ethical responsibilities.

To do this right, we need to ground these interdisciplinary efforts in new ways of thinking about technology. We need a new educational philosophy that can guide our efforts to create “original synners”—students who can synthesize information from multiple perspectives.[3] We need to develop new institutional structures for research and new pedagogies that support the development of the technological imagination and inspire its practical application. We need new analytical frameworks that enable us to imagine the multiple consequences of the deployment of new technologies. I also argue that we need to specify the ways in which all of us within the university are accountable for the future of technological development. Designers and engineers need to address their cultural responsibilities.  Humanists and social scientists must contribute creative direction as well as critical analyses. In an effort to suggest a starting point for new multidisciplinary collaborative applied technology-based research projects that take culture seriously, I offer the following three broad questions:
What are the most pressing cultural issues within the US and across the world?
All technologies rearrange culture. We know that new technologies are especially useful in facilitating interactions among people from different cultures. How is the project of cultural reproduction served by new technologies? How will current as well as traditional cultural memories be preserved over time? How should we choose what to forget? What role does narrative play in the technological reproduction of culture? How is narrative itself a technology of culture? What new narrative devices/applications need to be developed to aid the reproduction of culture? The use of new digital devices for entertainment and pleasure yields contradictory effects. While some people in the developed world enjoy an expanded range of mobility, enabled by the development of mobile communication devices, others become more sedentary and confined within a limited orbit. Through the use of global telecommunication networks, people can expand their global awareness through virtual visits. What are the cultural possibilities and consequences of virtual mobility? What is the future of embodied play and entertainment? What implications does this have for the design of playgrounds, digitally-augmented performance spaces, and the development of creative toys? What are the implications of virtual tourism for the reproduction of privilege and mobility? What are the cultural possibilities of technologically-augmented reality?

What are the literacies of the 21st century?
Literacy is a technological phenomenon. The development of new technologies of communication and of expression not only influences but demands the development of new literacies. These literacies do not compete with traditional print-based literacies, but rather build on and complement them. Current undergraduate students will become the next generation of scholars and researchers who will go on to develop new technologies of literacy, new genres and devices of cultural expression, and new forms of scholarship and research. How will we prepare them for this important cultural work? What technologies can be developed to teach basic literacy? What new kinds of reading devices will be useful in the future? How will our educational materials need to change to address the many kinds of literacy that will be required of future generations: reading, writing, digital, technological, multimedia? What will the textbook of the future look like? What are the possibilities of multi-player distributed gaming for the development of educational experiences?

What will scholarship look like in 10-15 years? 
Interdisciplinary collaborations and research provoke the need to develop new forms of scholarship, publications and other modes of cultural outreach. These new forms in turn offer an opportunity to experiment with modes of expression made possible by the development of new digital technologies. In the process, new forms of knowledge production emerge. New forms of scholarship will require the development of new authoring and publishing tools. We already know that authoring and designing are merging; what kinds of digital authoring environments are needed to support scholarship across the curriculum? Collaborative scholarship is a global phenomenon: how can social networking applications be used for scholarly and educational purposes? These social networking applications facilitate communication among scholars and lay people, thus offering a stage for the forging of radically new collaborations for the production of knowledge. Traversing the binary distinction between “scholar” and “amateur” promises to transform the educational scene within the university, effectively opening up the university to the world in unprecedented ways. How can the communication of scholarship and new research be enhanced through the development of multilingual digital applications, widely distributed digital archives, and new collaboration platforms?  What are the stages for knowledge transfer from the university to the broader public, which now includes so-called “amateurs” who are also actively engaged in new knowledge construction (through the development of folksonomies, for example)?

A trained technological imagination is the critical foundation required by the next generation of technologically and culturally literate scholars, scientists, engineers, humanists, teachers, artists, policy makers, leaders, and global citizens. Creating research programs and new curricula that explicitly address the education of the technological imagination is the way in which the university will contribute to significant cultural change.

Instead of a Bridge, How about a Collaboratory?

In 1959, when C.P. Snow first described the gulf between the sciences and the humanities as a “two culture” problem, he implored educators to find ways to bridge the divide.[4] He took pains not to blame one side or the other for the failure to communicate because he believed that neither “the scientists” nor the “literary intellectuals” had an adequate framework for addressing significant world problems. In the intervening half-century since the publication of Snow’s manifesto there have been several attempts to bridge the “two culture” divide. While some of these attempts resulted in spectacular failures (“The Science Wars” of the early- to mid-1990s), others represent modest but on-going interventions (the Society for Literature, Science and the Arts).[5] Science and Technology Studies (STS) programs are noteworthy academic efforts that train students to investigate the cultural and social implications of science and technology. Few if any of these programs or institutional experiments have successfully brought humanists, social scientists, scientists, and engineers together—as peers—to collaborate on the production of new applied research that results in the creation of new technologies. Future attempts to bridge the two cultures will be of limited success as long as these groups of scholars continue to see themselves as standing on opposite sides of the divide, or if the groups continue to regard each other as hierarchically advantaged or disadvantaged. I believe that the time is right to take up Snow’s challenge once again, not to work on building bridges per se, but rather to create a new place for the practice of multidisciplinary, collaborative technology-based research.

In 1989, a professor at the University of Virginia coined the term collaboratory to describe a new institutional structure for collaborative research. As of Fall 2005, there are dozens of collaboratories around the world, most of which are virtual spaces that utilize digital network technologies to support the collaboration among researchers at distant physical locations. Many of these collaboratories are actually collaborations among laboratories located around the world, where the individual laboratories are (presumably) still organized in the typical fashion around a single PI’s research or a single topic.

To date the collaboratories that involve humanities scholars focus almost exclusively on humanities computing research, where the projects involve the development and use of a high-end digital infrastructure for digitizing, archiving and searching specialized collections of historic materials, most typically books, manuscripts, and images. While these efforts and others such as the various “digital library” projects are absolutely necessary and valuable, they represent only one vector of research that unites the humanistic with the technological.

In 2002, a group of humanities program directors formed a virtual collaboratory called HASTAC: the Humanities, Arts, Science and Technology Advanced Collaboratory, designed to promote the development of humane technologies and technological humanism.[6] The programs participating in HASTAC have each attempted to create some sort of institutional space for collaborative research involving humanists and technologists. The efforts include humanities computing programs as well as interdisciplinary humanities institutes that have a particular focus on science and technology.

Inspired by HASTAC discussions and meetings, I assert that there is a critical need to create physical collaboratories that bring humanists, artists, media producers and technologists together to build human-centric technologies. This requires a physical space where researchers from multiple disciplines work together as peers to design, prototype, and actually fabricate new technologies. In combining the critical methods of the humanities and social sciences with innovative engineering/design methods such as rapid prototyping and user-centered design, these collaborators will create innovative methodologies. Thus, the research output includes not simply new technology-based projects and demonstrations, but also insights into the nature of interdisciplinary collaboration and the creation of new methodologies for collaboration. Instead of a single PI, the business of the collaboratory would be coordinated by a representative group of researchers whose interests span the disciplinary spectrum: humanities, social and cognitive sciences, arts, engineering and sciences. As participants in this collaboratory, researchers from various disciplines each bring something important to the collaborations:

Special role of the humanist: Contributes expertise in the assessment and critique of the ethical, social, and practical affordances of new technologies; provides expertise on the process of meaning-making which is central to the development of successful new technologies; provides appropriate historical contextualization.

Special role of the social and cognitive scientist: Contributes expertise in the assessment of social impact and in the analysis of institutional, policy, and global effects of the development and deployment of new technologies; addresses the cognitive impact of new technologies; provides methods for analyzing social uses.

Special role of the technologist: Contributes expertise in the innovation of new devices and applications; provides analytical skills in the assessment of problem formation and solution design; demonstrates methods of design, creation, and prototyping; recommends specific tools, processes, and materials.

Special role of the scientist: Contributes expertise in the development of new theoretical possibilities; provides methodologies for assessing and evaluating implementation efforts, and for formulating possible (theoretical) outcomes; develops experiments with new materials; contributes understanding about environmental impacts and waste management.

Special role of the artist: Contributes expertise in the performance, expression, and demonstration of technological insights; provides skills in different modes of engagement: the tactile, the visual, the kinesthetic, and the aural.

The goal is to create space for the constitution of a research community that collaborates on technology-based projects that take culture seriously. While it is tempting to offer a list of suggested projects, this would undermine one of the critical components of the collaborative effort. While any participant can suggest a project, the project must be, in effect, adopted by the community. This is to say that there needs to be consensus that a project is important to pursue. This, of course, is the basis of all good research; but it is rare that humanists, artists, and social scientists have a voice in this kind of evaluation of technology-based research projects. It is rarer still that they have peer status as researchers who will design, build, and fabricate new technologies. This is one of the important innovations of such a collaboratory. The output of these research projects might include typical research monographs, but also possibly public demonstrations, new pedagogical technologies, and new technologies of literacy. All the collaborators will serve as important “technology-translators” who can help make the meaning of new technologies more accessible to a wider public, both within and outside of the academy.

The social engineering of this endeavor is a crucial element of its success. The price of admission to this collaboratory is an individual’s commitment to embrace collaborative work. A key requirement of the research participants is that they work against the facile division of labor that would have the humanists doing the “critique,” the technologists doing the building, and the artists offering art direction. While there is a special role to be played by each participant, they must all be willing — indeed, eager — to learn new skills, new analytical frameworks, new methods, and new practices. A personal commitment to life-long learning is the foundation for these collaborations. Each participant must be willing to uphold the ethical foundation of multidisciplinary work: intellectual flexibility, intellectual generosity, intellectual confidence, and intellectual humility. Only by doing so will the collaborations result in the kind of work where the whole is greater than the sum of its parts, and where the technological imagination can be freely exercised and employed to create futures that are desirable for all people around the world, not just for those who are already privileged and technologically empowered.

Excerpted from Chapter 1: The Technological Imagination Revisited; Designing Culture: A Work of the Technological Imagination,  Anne Balsamo, Duke University Press, forthcoming.

Footnotes:
[1].   “How eight pixels cost Microsoft millions,” Jo Best, c|net News.com.  http://news.com.com/How+eight+pixels+cost+Microsoft+millions/2100-1014_3-5316664.html.

[2].   The resonance with C. Wright Mills’ notion of the “Sociological Imagination” is intentional here. C. Wright Mills, The Sociological Imagination (London: Oxford UP, 1959). See also: Michel Benamou, “Notes on the Technological Imagination,” in Teresa De Lauretis, Andreas Huyssen, and Kathleen Woodward, eds., The Technological Imagination: Theories and Fictions (Madison, WI: Coda Press, 1980), 65-75.

[3]  This is an explicit reference to Pat Cadigan’s novel, Synners (New York: HarperCollins, 1991). For a more complete discussion of the education of original synners, see: “Engineering Cultural Studies: The Postdisciplinary Adventures of Mindplayers, Fools, and Others,” in Science + Culture: Doing Cultural Studies of Science, Technology and Medicine, eds. Sharon Traweek and Roddey Reid (New York: Routledge, 2000), 259-274.

[4].  C.P. Snow, The Two Cultures: and a Second Look (New York: Cambridge University Press, 1963).

[5]  http://slsa.press.jhu.edu

[6]  http://www.hastac.org

Technology as Epistemology

Posted December 12th, 2005 by Peter Schilling, Amherst College

Early in the 20th century, Gertrude Stein wrote that America was the oldest country because it was the first to arrive at the new century. Today’s students have formed their habits of mind by interacting with information that is digital and networked. They are, in a way, older than their teachers, whose relationships with information are governed by earlier generations of technology. There is more. Not only do our students possess skills and experiences that previous generations do not, but the very neurological structures and pathways they have developed as part of their learning are based on the technologies they use to create, store, and disseminate information. Importantly, these pathways and the categories, taxonomies, and other tools they use for thinking are different from those used by their teachers.

[Image: UserFriendly comic strip. http://www.userfriendly.org indicates use in this manner is not an infringement of the creator’s intellectual property.]

To say that “new technology is changing the way we think” is as obvious as it is ambiguous. While it may be popular, and accurate, to complain that Microsoft Word’s grammar checker has a greater influence on American English than any teacher, curriculum, or book, I would like to consider the relationship between technology and thinking explicitly in the context of education, where the mission is to help students learn to think.

Let us start with the role that patterns and categories play in learning and knowing. Although the patterns and categories we use are never perfect ways of creating meaning, they influence the way we think, remember, and anticipate information. For instance, in biology we divide the world into domain, kingdom, phylum, class, order, family, genus, and species, which, at its final category, is a division based on the ability to reproduce sexually. For this reason, we have the families of Canidae and Felidae, dogs and cats. If, in our world of cloning and other forms of assisted reproduction, we were instead to divide the world primarily by means of locomotion, dogs and cats would both be in one group as digitigrades. (I suspect that, no matter how we categorize them, the digitigrade with the longer nose and floppy ears would still chase the digitigrade that purrs and flicks its tail.) In addition, the particular way we learn information, as well as when in our lives we learn, creates specific neural pathways (or patterns) in our brains. Once the patterns and pathways become too familiar or set, however, we become less adept at seeing information that does not fit the pattern. At times we may even start adding phantom data to fill in gaps. It is important to keep this in mind throughout what follows.
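
To make the point concrete, consider a minimal sketch (my own illustration with invented example data, not the essay’s) of how the grouping we perceive depends entirely on the classification key we choose:

    from collections import defaultdict

    # Invented example data: the same three animals described two ways.
    animals = [
        {"name": "dog",  "family": "Canidae",  "locomotion": "digitigrade"},
        {"name": "cat",  "family": "Felidae",  "locomotion": "digitigrade"},
        {"name": "bear", "family": "Ursidae",  "locomotion": "plantigrade"},
    ]

    def group_by(items, key):
        """Group item names by the value of the chosen classification key."""
        groups = defaultdict(list)
        for item in items:
            groups[item[key]].append(item["name"])
        return dict(groups)

    print(group_by(animals, "family"))      # dogs and cats in separate groups
    print(group_by(animals, "locomotion"))  # dogs and cats share one group

Nothing about the animals changes between the two calls; only the category does, and with it the pattern we are prepared to see.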

All of our cognitive tools help us perceive our world and sort the flood of information that continually flows across our senses. We regularly filter and winnow this information in order to focus, group, and extract meaning. If our brain and senses did not do this, we would be overwhelmed by our inability to differentiate foreground from background.

In the photo of the two dogs on the log, we can differentiate the dogs from the woods that surround them. We have a sense of the field of vision in the photo, and perceive that one of the dogs is standing closer to the viewer than the other. We know that the trees are wood and the dogs are not. We also know that this is a photo on a computer screen and that it is unlikely that either dog will start chasing a squirrel.

Neurologist and author Oliver Sacks described the cognitive and neurological development of a man, blind since childhood, who regained his sight in his 50s. The once-blind man, Virgil, could not do with the dog photo all of the things described above. Sacks shows that, for Virgil, information does not follow the same neural pathways that it does for other, sighted adults. However, once Virgil can feel a scene with his hands, such as the contents of a room or a person’s face, he can then describe the information that he sees. So, while his eyes function properly, his brain has developed strategies and pathways for processing information that do not accommodate visual data.[1]

Time and experience train our senses to interpret information. They also lead to the development of a facility (or opportunity, from an illusionist’s point of view) to fill in information not available to our senses. Optical illusions are perhaps the most widely-known demonstration of this kind of learned behavior. Our mind fills in or adds information so that we can perceive depth, relationships, and other data not actually present in an image or scene.

The mind also fills in such things as context and informs our understanding by, for instance, utilizing our familiarity with the tools of information creation and dissemination. So, while patterns and categories are necessary for us to sort through the information to find meaning, once we have created our categories and patterns, they can be hard to put aside. In these cases, one cannot see familiar information without the categories or meaning with which we have associated it.

Much has been said and written about the importance of categories and patterns for thinking. The National Research Council has reported on “research demonstrating that when a series of events are presented in a random sequence, people reorder them…. the mind creates categories for processing information. . . . the mind imposes structure on the information available from experience.”[2]

It is problematic when we lose sight of the constructs we bring to our interaction with the data around us, but it is hard not to. What Nietzsche said about metaphors holds equally true for our use of patterns to help formulate meaning.

What, then, is truth? A mobile army of metaphors, metonyms, and anthropomorphisms—in short, a sum of human relations which have been enhanced, transposed, and embellished poetically and rhetorically, and which after long use seem firm, canonical, and obligatory to a people: truths are illusions about which one has forgotten that this is what they are; metaphors which are worn out and without sensuous power; coins which have lost their pictures and now matter only as metal, no longer as coins.[3]

The patterns and categories we use can constrict our ability to understand new things. For instance, Salman Rushdie points out in Midnight’s Children, a novel about Indian culture, that any people whose word for “yesterday” is the same as their word for “tomorrow” cannot be said to have a firm grip on time,[4] yet academics studying Rushdie’s novel are tempted to develop a timeline of the events of the story. Similarly, the U.S. publisher of Gabriel García Márquez’s One Hundred Years of Solitude has added a family tree to the English-language edition of the novel,[5] perhaps missing the point that in a book where twenty-one characters have the same name, the concept of individual identity is not really key for understanding.

Similarly, we tend to use known patterns to help us learn or manage new information. Context and what we know affect the ways in which we establish meaning, such that if one had come across this image of the saffron gates in New York’s Central Park anytime before February of 2005, one would likely have assumed that Photoshop had been used to create the image. But after February 2005, this would not be the reaction. The geese in the foreground are now as likely the result of work with Photoshop as the gates themselves.

For centuries, humans have used various technologies to help manage data, whether it was Incan knots or Egyptian hieroglyphs. The introduction of new technologies, therefore, is an important part of the context in which we set meaning for new information. For this reason, although we have had stories about three-headed dogs in our culture from Cerberus to Fluffy, today most viewers of this photo of a three-headed dog will (hopefully) immediately consider it a product of image-editing software.

Education has the contradictory tasks of teaching us to work within patterns but also to think beyond them. If we are not careful, disciplinary thinking can slip into rote formalism or a mere act of classifying data with established taxonomies. For instance, students exposed for years to narrative will likely incorporate narrative patterns into the way they anticipate information. Consider, for instance, this Hyundai commercial. Try stopping the video every few seconds and narrating the unfolding scene yourself. Although there is no dialogue, you will probably notice that you can tell a fairly detailed story on your own.

The same phenomenon of filling in information gaps occurs when we try to proofread our own writing (by which I mean to plead forgiveness for any errors in this text. . .).

These claims can be overstated, however. For instance, we may recall reports in the popular press about research at Cambridge University that purported to show our ability to recognize words when all letters other than the first and last are jumbled. After the press releases, however, many easily refuted the research, showing, among other things, that it was not done at Cambridge, does not work for all languages, does not work when all the letters are capitals, does not work when letters are simply removed, and so on.
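
The transformation the meme described is easy to reproduce. Here is a minimal sketch (my own illustration of the claim, not code from the purported study) that keeps each word’s first and last letters fixed and shuffles the interior; it ignores punctuation and capitalization, two of the simplifications the critics pointed out:

    import random

    def jumble(word):
        """Shuffle a word's interior letters, keeping first and last in place."""
        if len(word) <= 3:
            return word
        interior = list(word[1:-1])
        random.shuffle(interior)
        return word[0] + "".join(interior) + word[-1]

    sentence = "According to research the order of interior letters hardly matters"
    print(" ".join(jumble(word) for word in sentence.split()))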

That said, the way we learn, when we learn, and the technologies we use to learn all influence what we know as well as the neural pathways we use when accessing our knowledge. Researchers such as Wayne Reeves have emphasized the differences in the ways that experts and novices in a given area or topic solve problems and react to information.[6]

As part of a well-known 1965 study on thought and choice in chess, de Groot noticed that, when a chess master, a proficient chess player, and a novice were shown a chessboard for five seconds with all pieces on it in mid-game, the master could recall the positions of sixteen pieces, the proficient player eight, and the novice four. When all were shown the same board for a second five-second look, they doubled the number of pieces and locations they could recall. However, when the same subjects were shown a board with all the pieces randomly placed, each could recall pieces and positions only at the level of the novice.[7]

Analogous studies have been done with mathematicians, physicists, and historians, though the emphasis was on the ways in which experts and novices approach information differently. In short, experts can chunk information in ways novices cannot, and they can access and apply appropriate overarching principles, laws, and methods to new data, which, again, novices cannot.

Research conducted by Eleanor Maguire of University College London has shown that London taxi drivers have an enlarged region of the posterior hippocampus. This region is believed to be associated with “spatial navigation” and serves as a “memory bank” for the spatial representation of the complex maze of streets in the city of London. There is a positive correlation between the number of years on the job and the size of the posterior hippocampal region.[8] Additional research conducted by Lewis R. Baxter et al. of UCLA in 2001 demonstrated that psychotherapy (talk therapy) changed the brains of subjects in much the same way as psychotropic medication.[9]

In 2003, research at the University of Rochester demonstrated that action video games, such as single-player shooters, train the brain to better process certain types of visual information. Students who played video games for as little as two weeks had a greater facility for seeing and processing multiple stimuli in their peripheral vision.[10]

As reported in Nature in 2004, a neural pattern has also been associated with language learners. According to Andrea Mechelli, a neuroscientist at University College London “[t]he grey matter in this region increases in bilinguals relative to monolinguals — this is particularly true in early bilinguals who learned a second language early in life . . . . The degree is correlated with the proficiency achieved.”[11] Learning another language after 35 years of age also alters the brain but the change is not as pronounced as in early learners. Mechelli said their research “reinforces the idea that it is better to learn early rather than late because the brain is more capable of adjusting or accommodating new languages by changing structurally. This ability of the brain decreases with time.”[12]

But what happens when the content of one’s expertise, developed over years of study and research using one generation of technology, gets separated from the tools now used to generate and disseminate information within that content area? The following QTVR versions of a chessboard may prove disorienting for those who, while masters of chess, are novices to QTVR.

Chess Example 1:
Chess Example 2:

Not only do today’s novices use technologies unavailable at the time their teachers were becoming masters, but the quantity and types of information students need to assess have also expanded exponentially. Part of this shift in learning brought about by today’s digital, networked information results from the fact that we now often work, share, and search at the data level as opposed to the level of conclusions, narratives, catalogs, or indices. That is, students are not limited to browsing a card catalogue to find just those books that their college library had the resources to purchase, that were described with Library of Congress subject terms as addressing a particular topic, and that a publishing house had selected for publication from an author who had created a narrative by sorting and synthesizing years’ worth of research into a comprehensible whole. They can use search and collaboration tools to get at the primary source data as well as a wider variety of studies of the data. By so doing, they can strip away four levels of filters between themselves and the information.

What it means to master a field of study has changed. Rather than developing an encyclopedic knowledge of all literature on a single topic, today’s students need to know how to find, evaluate, and contextualize information in numerous different formats on more interdisciplinary topics; they also need to know how to locate and use the underlying data as well as the technology to sort and present it. To teach the history of the English language today, for instance, an instructor would most likely want to train students to use popular Geographic Information Systems (GIS): to create data layers of audio files demonstrating the pronunciation of Old English and Old Norse town names, point data for each town’s location, and data relating to the slope and aspect of northwestern Britain, informed by knowledge of the military technology of pre-Norman England. Reading a book or listening to a lecture on the topic is no longer sufficient. An educated person today knows how to access and use appropriate tools and appropriate data, and understands the abilities and limitations of each. It is likely that the way in which they know these things — as well as the ways in which they go about finding, assimilating, and representing information — utilizes specific areas of their brains. Photoshop and other such tools change the way we process visual data.
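
Returning to the GIS example above, here is one hedged illustration of what such a data layer might look like. The town coordinates are approximate, the audio file names are hypothetical, and GeoJSON is used only as a convenient, widely supported format, not one the essay prescribes:

    import json

    # Hypothetical layer: point features for Danelaw town names, each
    # carrying an audio file of a reconstructed Old Norse pronunciation.
    towns = [
        ("Whitby",  -0.615, 54.486, "whitby_old_norse.mp3"),
        ("Grimsby", -0.078, 53.567, "grimsby_old_norse.mp3"),
    ]

    layer = {
        "type": "FeatureCollection",
        "features": [
            {
                "type": "Feature",
                "geometry": {"type": "Point", "coordinates": [lon, lat]},
                "properties": {"name": name, "pronunciation_audio": audio},
            }
            for name, lon, lat, audio in towns
        ],
    }

    print(json.dumps(layer, indent=2))  # ready to load into most GIS tools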

Epistemology and epistemological inquiry have a long history, arching from superstition toward what Gurvitch called the “social frameworks of knowledge.”[13] Technology has always been present as an essential component of how we think, of our thinking about our thinking, and of what we teach. When the technology changes, as it is changing now, its role becomes all the more evident. For the new generation of thinkers, knowledge includes del.icio.us and other forms of immediate and readily-available folksonomies. Colleges continue to push writing as the skill students must have to be articulate thinkers. Yet they risk stagnation in an epistemological eddy if they do not also appreciate digital video production, database programming, or even the underlying functionality of MediaWiki as necessary for developing the cognitive abilities to create and share knowledge.
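
For readers who have not used del.icio.us, a folksonomy can be sketched in a few lines (the bookmarks and tags below are invented, and this is a conceptual sketch rather than the actual del.icio.us system): a shared vocabulary emerges bottom-up from aggregating users’ free-form tags, rather than top-down from a predefined taxonomy.

    from collections import Counter

    # Invented example: (user, url, tags) triples as a folksonomy's raw data.
    bookmarks = [
        ("alice", "http://example.edu/syllabus", ["teaching", "literacy"]),
        ("bob",   "http://example.edu/syllabus", ["pedagogy", "literacy"]),
        ("carol", "http://example.org/gis-lab",  ["gis", "teaching"]),
    ]

    # The emergent vocabulary: tag frequencies aggregated across all users.
    tag_counts = Counter(tag for _, _, tags in bookmarks for tag in tags)
    print(tag_counts.most_common())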

As educators, we can discuss the ways in which learning changes the brain. Following Nietzsche, we can also reason that it is hard to change our patterns and categories of thought. Nevertheless, we must perceive our own technology-dependent constructs in order to integrate the valuable information and skills we have developed over a lifetime with the new tools now used to create and share knowledge.

NOTES

  1. Oliver Sacks, “To See and Not See,” An Anthropologist on Mars (Vintage Books, 1995), 108-152.
  2. National Research Council Committee on Learning Research and Educational Practice et al., How People Learn: Brain, Mind, Experience, and School: Expanded Edition (National Academies Press, 2000), http://www.nap.edu/books/0309070368/html/124.html.
  3. Friedrich Nietzsche, “On Truth And Lie in an Extra-Moral Sense,” The Portable Nietzsche, trans. Walter Kaufman (New York: Penguin Books, 1982), 46-47.
  4. Salman Rushdie, Midnight’s Children (New York: Avon Books, 1980), 123.
  5. Gabriel Garcia Marquez, One Hundred Years of Solitude, trans. Gregory Rabassa (New York: Harper and Row Publishers, 1970).
  6. See Wayne Reeves, Learner-Centered Design: A Cognitive View of Managing Complexity in Product, Information, and Environmental Design (Sage Publications, Inc., 1999).
  7. See Adriaan de Groot, Thought and Choice in Chess (The Hague: Mouton De Gruyter, 1965).
  8. Eleanor Maguire et al., “Navigation-related structural change in the hippocampi of taxi drivers,” Proceedings of the National Academy of Sciences 97, no. 8 (April 11, 2000): 4398-4403, http://www.pnas.org/cgi/content/full/97/8/4398.
  9. Arthur L. Brody, MD; Sanjaya Saxena, MD; Paula Stoessel, PhD; Laurie A. Gillies, PhD; Lynn A. Fairbanks, PhD; Shervin Alborzian, BS; Michael E. Phelps, PhD; Sung-Cheng Huang, PhD; Hsiao-Ming Wu, PhD; Matthew L. Ho, BS; Mai K. Ho; Scott C. Au, BS; Karron Maidment, RN; Lewis R. Baxter, Jr, MD, “Regional Brain Metabolic Changes in Patients With Major Depression Treated With Either Paroxetine or Interpersonal Therapy,” Archives of General Psychiatry 58, no. 7 (2001): 631-640, http://archpsyc.ama-assn.org/cgi/content/abstract/58/7/631.
  10. “Altered perception: The science of video gaming,” Currents (University of Rochester, 2003), http://www.rochester.edu/pr/Currents/V31/V31SI/story04.html.
  11. Mechelli et al., “Neurolinguistics: Structural plasticity in the bilingual brain,” Nature 431 (14 October 2004): 757. Abstract at: http://www.nature.com/nature/journal/v431/n7010/abs/431757a.html.
  12. Ibid.
  13. See Georges Gurvitch, The Social Frameworks of Knowledge, trans. Margaret A. Thompson and Kenneth A. Thompson, with an introductory essay by Kenneth A. Thompson (New York: Harper & Row, 1971).

Interactive Reading, Early Modern Texts and Hypertext: A Lesson from the Past

Posted December 12th, 2005

by Tatjana Chorney, Saint Mary’s University

Over the past decade, the increasing presence of hypermedia environments in the lives of a growing number of readers and learners has contributed to a change in the definition of “text.” However, we still do not have adequate ways of speaking about the implications of the gradual extension of the notion of text—an entity existing usually in print, with clearly defined borders and presenting information in a highly structured manner—to e-text or hypertext, a much more fluid concept, whose borders are not at all clearly defined, and whose manner of presenting information is non-linear. Because “hypertext is a mental process, as well as a digital tool,”[1] one of the larger cultural implications arising from this change in the meaning of text concerns the role of the reader. Text in print implies and, to a certain degree, constructs a passive reader, one who is often a “receptacle” of information. Hypertext is shaping an appropriative reader who interacts with the text and is involved in knowledge construction.

Although this shift in the position of the reader in many ways arises from the new technology, the manner of active reading in which the reader is empowered to construct meaning and to change the “original” text is at least as old as the early modern period.[2] The Renaissance reader was accustomed to applying “alien” texts to new purposes in a method of appropriative reading that was a consequence of the Renaissance technique of collecting commonplaces.

Increasing our historical awareness of Renaissance reading habits will not only help us avoid technological determinism but will also extend our awareness of the current changes in the definition of text and the concomitant shift in the nature of reading and knowledge management. This in turn will inform our pedagogy by increasing our ability to relate to a student body whose reading and learning habits are already a product of the “digital age” and continue to be shaped by the new medium. Many of our students are “Net-generation” learners whose minds are accustomed to bite-sized bits of information that can be easily transferred, manipulated and appropriated into different contexts and integrated into different “wholes.”[3]

To begin, interactive reading can be defined as a process in which readers have control over the texts they are reading. This control enables them to influence the nature of the reading process: they are able and free to participate actively in the construction of the meaning of whatever they are reading. Renaissance reading habits and those fostered by the hypertext environment (which has become synonymous with the Internet) are similar with regard to four broad issues: 1. non-linearity; 2. a protean sense of text and its functions; 3. affinity with oral models of communication; and 4. a changing concept of authorship.[4]

In my work on the manuscript circulation of John Donne’s poetry, I have come across a number of records revealing the extent to which 17th-century anonymous readers—those who did not belong to Donne’s coterie composed mainly of friends and patrons—interacted with the texts they were copying into their own manuscript compilations. A compelling and generally overlooked aspect of English manuscript collections and commonplace books from the 16th and 17th centuries is that most of the writers of the texts they contain, such as Donne, Jonson, King, Herrick, and others, have been identified by bibliographers and textual scholars only centuries after their compilation, and not by those who copied the poems. A large number of poems in these collections appear without any indication as to the author’s identity (whether known or unknown to the scribe or owner at the time of recording); texts are often untitled or retitled (at least, with titles different from the ones we have come to associate with them); they also often appear in fragments, and these are sometimes blended seamlessly into other fragments or entire poems.

Single lines, like line 24 from Donne’s “The Dreame” (“That love is weake, where feare’s as strong as hee”) were taken out as sententiae with aphoristic value. The last four lines of Donne’s “The Bracelet,” appear recorded in the Fullman MS as a new short poem (Bodleian MS CCC) bearing the new title “A Creditor.” The excerpted lines were treated as poetic “commonplaces,” “generally applicable ideas, precepts and images pointing to or illustrating universal truths.”[5] These ideas could be used later to adorn or enhance formal arguments as well as informal discussions to increase the copia or eloquence of the reader/writer.[6]

Formal and conceptual reworking were not uncommon either. Donne’s “A valediction: forbidding mourning” found in a mid-seventeenth-century anthology of poetry, shows that the collector, under a different title, converted Donne’s nine tetrameter quatrains into five pentameter six-line stanzas, each ending on a rhymed couplet after “the first four lines replicated the alternating rhymes of the ‘original.’” This is not simply a “version” of Donne’s poem, but a “major reworking,” “done with the creative freedom that collectors and imitators in the system of manuscript transmission felt free to exercise.”[7]

The interactive tradition of reading in the Renaissance is not confined to the fluid manuscript environment. During the Reformation in 16th-century Italy, the Dominican Giovanni Rubeo was in the habit of copying passages and sometimes entire pages from the works of Bucer, Zwingli, and Calvin, inserting them later into his own sermons, while Michel de Montaigne, in his Essais, claimed, “I only speak others in order better to speak myself.”[8]

These examples indicate that early modern readers assumed three functions or roles: they were readers, but the reading process implied that each reader was also, in the words of Henry King, “both the Scribe and the Author.”[9]

Interactive reading in the Renaissance was part of the characteristic model of learned reading based on the intellectual technique of collecting “commonplaces.” A reader read texts in order to “extract quotations and examples from them, then note down the more striking passages for easy retrieval or indexing,” or for later use either in writing or in speaking. This “reference” style of reading is symbolized in the reading wheel, “a vertical wheel turned with the help of a system of gears permitting the readers to keep a dozen or so books, placed on individual shelves, open before them at one time.”[10] Reading for “linear narrative” is here replaced by reading multi-linearly, or for points of interest that can later be arranged into a new “narrative” according to individual needs and contexts.

The reading of texts in manuscript also emphasized a “communal” sense of textuality.[11] Manuscript culture especially was orally-inflected and “conversational” because writer and audience knowingly participated in the form of “publication as performance.”[12] For 17th-century poet Katherine Philips, for example, and for many others like her, poetry set in print “wrested” the texts out of their natural, fluid manuscript environment in which they were closer to the living word, and set them in ways that stood oddly fixed and immutable.[13]

The appropriative treatment of and approach to various texts implies a cultural attitude to writing and reading similar to the one articulated by some twentieth-century reader-response theories, or the reader shaped by the hypertext environment. In all three models, readers are seen as having a co-creative role. It is in the idea of the “living” text open to transformations, and in the approach to reading as a creative and re-creative engagement with the text, that past and present resemble one another.

The experience of reading texts in hypertext, the best known example of which is the World Wide Web, is very similar to the experience of reading with the help of a “reading wheel.” It encourages reading not for “linear narrative” but for points of interest, empowering readers to shape and control the reading process by selecting and reading only those parts of texts that are memorable or relevant to them. Similar to the past model, here author and reader often have in common the knowledge of “publication as performance.” Authors “conceive of their works so that readers have many choices along the reading path; they are invited to transform and contribute to the texts, which in turn transforms the literary work into a more open-ended experience.”[14] This approach to writing and reading in hypertext allows the modern reader to assume the three functions mentioned earlier with regard to Renaissance readers, that of reader, “scribe” (one who transcribes or copies the texts of others), and author.

The experience of composing and reading poems in hypertext, as recorded by poet Stephanie Strickland, echoes my description of reading in the Renaissance and captures the spirit of the shift from a print-oriented textuality to hyper-textuality: “When a set of poems is composed in or into hypertext, the space in which they exist literally opens up, [r]eleased from the printed page into this floating space, readers are often uneasy. What is the poem?…Only slowly does one assimilate the truth that one may return each time differently.”[15] “Returning each time differently” encapsulates one of the dominant aspects of hypertext. It is a format that does not depend on a print-informed sense of “original narrative as only context.” It allows for “multiple entrances and exits” from a text. As “wherever the reader plunges in, we find a beginning,” linearity becomes “a quality of the individual reader’s experience.”[16]

As a personal-public pastiche, just like the manuscript environment in the Renaissance, the Internet questions the boundaries between authorship and readership. In his hyperpoem “Medical Notes of an Illegal Doctor,” poet Alexis Kirke invites readers to envision the poem as a space for social dialogue and to “mutate” the poem by entering their changes in the section “text to be added or changed,” which will, after a few clicks, transform the initial poem.[17] The reader is here invited to author as he reads, by adding new text with new links and titles. While reading in the Renaissance was described as “poaching,”[18] reading in hypertext is described in very similar terms, as “welding,” “where the meanings extracted—decontextualized—from different parts of the text can be crafted—re-contextualized”—into something new.[19]
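
The mechanics of such reader-authored mutation can be suggested with a small sketch (my own conceptual illustration, not Kirke’s actual implementation; the node texts are placeholders): each node of the poem holds text and links, and a reader’s contribution grafts a new node onto an existing one, opening a new reading path.

    # Conceptual sketch of a mutable hyperpoem (placeholder texts, not Kirke's).
    poem = {
        "start": {"text": "opening stanza goes here", "links": []},
    }

    def mutate(poem, parent_id, new_id, new_text):
        """Attach a reader-contributed node and link it from its parent."""
        poem[new_id] = {"text": new_text, "links": []}
        poem[parent_id]["links"].append(new_id)

    mutate(poem, "start", "reader-1", "a reader's added stanza goes here")
    print(poem["start"]["links"])  # the poem now offers a new reading path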

In acknowledging and validating the polysemic nature of language and human expression and experience, hypertext is linked with “orality” and the idea of the “living word.”[20] Internet-based communication tools such as email, IRC (Internet relay chat), forums, and synchronous conferencing illustrate well the association between spoken and written language. Many users, and especially “digital natives,” those who do not know life without computers, treat the interactions enabled by these communicative spaces informally. They use written expression almost as verbal communication, and their texts bear informal “oral” markers in the lack of punctuation and capitalization and in the use of emoticons, whose nature and meaning are modeled on body language.

On-line scholarly essays, too, often function on an interactive principle in that their basic structure subverts the idea that readers have to read in an order intended by the author. They are most frequently organized episodically; the content is broken into relatively short units held together by loosely related ideas, each with a different title and each connected to the next or the previous one with a link. For instance, Kaplan’s recent essay on “politexts” emphasizes an “out of order” reading paradigm: “There are a number of ways to read this essay, none of which will exactly replicate the text of the talk I gave. Take chances with your choices.”[21] This aspect of reading in hypertext will gradually lead to the development of different argumentation strategies, and generally a different sense of narrative structure.[22]

Hypertext thus offers an alternative to what Lyotard calls “the tyranny of coherence,” and indicates that the thinking modality it encourages is “closer to the way the mind works.”[23] It also compels us to reconsider the nature of text in essential ways. By encouraging a “piecemeal” approach to composition and reading, it reeducates us into a form of the “commonplace” tradition of reading and information management. Interactive reading reminds us that knowledge can be transmitted not only through self-referential, extended narratives emphasizing closures, but also as “collections of ideas that can arrange themselves into a kaleidoscope of hierarchical and associative patterns—each pattern meeting the needs of one class of readers [and writers] on one occasion.”[24]

We seem, then, to be experiencing a convergence of present reading paradigms with past models. However, while the interactive model of reading in the Renaissance was the product of a wider cultural attitude toward texts and the world, the interaction enabled by hypertext, and its implications, are often perceived as running against, and threatening, cultural and institutionalized notions about texts, reading, and education that are based largely on print models.

My main point in placing the past and present senses of text and reading experience side by side, therefore, is twofold: 1) to draw attention to the cognitive aspect involved in managing and understanding information, and 2) to make explicit the major assumptions that govern interactive reading in any context.

And while one may be a student who in her spare time reads novels from back to front, or from middle to back, that same student placed in a traditional learning environment will soon realize that there may be no legitimate or readily articulated context for her quirky reading habit. Traditional education emphasizes submission to authority, often rote memorization (more frequent in disciplines other than English studies), and what Freire called the “banking concept of education,” in which learned teachers deposit knowledge into passive students, implicitly inculcating conformity.[25] This is likely one reason why, as is often heard in academic teaching circles, it is proverbially difficult to “get your students to talk,” and why so many pedagogical seminars are held on that very topic. Becoming a student or a teacher who engages in multiple forms of interactive practice, and who honors the results of that practice, does in many cases require practice.

The new model of education calls for multi-linear problem solving and an “interactive,” “participatory” workforce. In a recent article, Andrea Leskes, vice-president for education and quality initiatives at the Association of American Colleges and Universities (AAC&U), reminds us that economic globalization, “fueled by the transformative power of modern communications,” poses particular local challenges for institutions of higher learning across the world. The so-called “Greater Expectations” report, formulated by the Association in 2000, examines the changing role of the academy and of liberal studies in the 21st century.[26] The report stipulates that the central aim of global liberal education is to respond to a world characterized by change and interconnection by preparing students to be integrative thinkers. As integrative global thinkers, students would take a more active part in their learning and transfer easily what they learn from one context to another. Integrative learning rests on an essential flexibility in how we conceive of knowledge creation and management, which in turn allows apparently unrelated or disparate ideas and methods to be integrated into new and unforeseen paradigms, contexts, and unities. Studying the dynamic of interactive reading is thus not only a look back at past practice, but also a model for studying integrative teaching and learning in a global world, and a way of responding to the perceived “lack of clarity of purpose in undergraduate education” that is the outcome of “…escalating demands created by changes in both the campus experience and the emergence of high-technology industries and applications.”[27]

NOTES

1. Paul Gilster, Digital Literacy (New York: John Wiley, 1997), 136.

2. The Renaissance inherited the tradition of collecting commonplaces from Antiquity and from the Middle Ages. Thus, the idea of reading as interaction with the aim of remodelling and reusing the whole or parts of the given material is very old. However, as Walter Ong reminds us, it was the Renaissance humanists who distinguished themselves particularly in this practice, and who formulated the contemporary theory of education based on the commonplace technique. Erasmus and his followers “broke down virtually the whole of classical antiquity into these bite-size snippets or sayings (adages or proverbs, and apothegms or more learned sayings), which could then be introduced into discourse as they stood or be imitated.” See Ong, The Presence of the Word: Some Prolegomena for Cultural and Religious History (New Haven: Yale UP, 1967), 62-3. Also see Ong, Interfaces of the Word: Studies in the Evolution of Consciousness and Culture (Ithaca: Cornell UP, 1977) and Orality and Literacy: The Technologizing of the Word (London: Methuen, 1982), and Ann Moss, Printed Commonplace-Books and the Structuring of Renaissance Thought (Oxford: Clarendon Press, 1996).

3. See D. P. Tackaberry, “The Digital Sound Sampler: Weapon of the Technological Pirate or Palette of the Modern Artist?,” Entertainment Law Review 87 (1990); Thomas Schumacher, “‘This is Sampling Sport’: Digital Sampling, Rap Music and the Law in Cultural Production,” Media, Culture and Society 17 (1995): 253-273; John Perry Barlow, “The Economy of Ideas: A Framework for Rethinking Patents and Copyrights in the Digital Age,” Wired (March 1994); B. R. Seecof, “Scanning Into the Future of Copyrightable Images: Computer-Based Image Processing Poses Present Threat,” High Technology Law Journal 5 (1990): 371-400; Ronald Deibert, Parchment, Printing, and Hypermedia: Communications in World Order Transformation (New York: Columbia UP, 1997); and Michael Rogers and David Starrett, “Techped: Don’t Be Left in the E-Dust,” National Teaching and Learning Forum Newsletter 14.5 (2005), online version accessible at: http://www.ntlf.com/.

4. While my claims here are made in relation to a past model of reading characteristic of Western Europe, and to records of reading habits gleaned from English manuscript collections, the full range of changes brought about by the new technology with regard to the process of reading and the social attitude toward textuality, and their similarities to various other past models of reading, is an emerging area of study.

5. Peter Beal, “Notions in Garrison: The Seventeenth-Century Commonplace Book,” New Ways of Looking at Old Texts: Papers of the Renaissance English Text Society, 1985-1991, ed. W. Speed Hill (Binghamton: MRTS in conjunction with the Renaissance English Text Society, 1993), 135.

6. In British Library, Harley MS 3991. See Peter Beal, Index of English Literary Manuscripts 1450-1625, Vol. 1 (London: Mansell, 1980), 332.

7. Arthur Marotti, Manuscript, Print and the Renaissance Lyric (Ithaca: Cornell UP, 1995), 152-3.

8. See Jean-François Gilmont, “Protestant Reformations and Reading,” A History of Reading in the West, eds. Guglielmo Cavallo and Roger Chartier, trans. Lydia Cochrane (Amherst: U of Massachusetts P, 1999), 231; Terence Cave, “Mimesis of Reading in the Renaissance,” Mimesis: From Mirror to Method, Augustine to Descartes, eds. John Lyons and Steven Nichols, Jr. (Hanover: UP of New England, 1982), 156. See also Cave, “Problems of Reading in the Renaissance,” Montaigne: Essays in Memory of Richard Sayce, eds. I.W.F. Maclean and I.D. McFarlane (Oxford: Clarendon P, 1982), and The Cornucopian Text: Problems of Writing in the French Renaissance (Oxford: Clarendon, 1979).

9. Cited in Margaret Crum, “Notes on the Physical Characteristics of Some Manuscripts of the Poems of Donne and Henry King,” The Library, 5.16 (1961), 121.

10. Guglielmo Cavallo and Roger Chartier, eds., A History of Reading in the West, trans. Lydia G. Cochrane (Amherst: U of Massachusetts P, 1999), 29.

11. A fascinating example of this “communal” aspect of Renaissance textuality is the manuscript of De Doctrina Christiana, a theological treatise ascribed to John Milton, but whose actual composition as it stands today is the work of at least a few others, who added or changed sections of the text without clearly indicating their involvement in this “co-authoring” of Milton’s text. See Gordon Campbell, Thomas N. Corns, John K. Hale, David I. Holmes, and Fiona J. Tweedie, “The Provenance of De Doctrina Christiana,” Milton Quarterly 31.3 (1997): 67-119.

12. After McLuhan, there have been many very useful discussions of the conversational, social dimension of the manuscript culture in the Renaissance, especially with regard to Donne. See, for example, Harold Love, The Culture and Commerce of Texts: Scribal Publication in Seventeenth-Century Culture (Amherst: U of Massachusetts P, 1993); Arthur Marotti, John Donne, Coterie Poet (Madison: U of Wisconsin P, 1986); Ted-Larry Pebworth, “John Donne, Coterie Poetry and the Text as Performance,” Studies in English Literature 29 (1989): 61-75.

13. Margaret Ezell, Social Authorship and the Advent of Print (Baltimore: The Johns Hopkins UP, 1999), 53-4.

14. Eduardo Kac, “Holopoetry, Hypertext, Hyperpoetry,” originally published in Holographic Imaging and Materials (Proc. SPIE 2043), ed. Tung H. Jeong (Bellingham, WA: SPIE, 1993). Accessible at: http://www.ekac.org/Holopoetry.Hypertext.html.

15. Stephanie Strickland, talk given at Hamline University, St. Paul, MN, April 10, 1997. Accessible at: http://www.altx.com/ebr/ebr5/strick.htm.

16. Ingrid Hoofd, “Aristotle’s Poetics: some affirmations and critiques.” Accessible at: http://www.cyberartsweb.org/cpace/ht/hoofd3.

17. The poem can be accessed at: http://wings.buffalo.edu/epc/ezines/brink/brink02/medical.html.

18. Gilmont, “Protestant Reformations and Reading,” 231 (see note 8).

19. Andreas Luco (1999). Luco’s Website features numerous other discussions of the relationships between critical theory and cyberspace: http://www.cyberartsweb.org/cpace/theory/luco/Hypersign/Play.html.

20. I am aware of the host of possible models of communication enabled through the Internet, including the variety of combinations among text, sound and image content. I cannot help but notice that the association between image and text in particular is a very interesting “comeback” of the emblem tradition. This, however, is a different topic; here I am concerned primarily with text-based hypertext.

21. Nancy Kaplan, “Politexts, Hypertexts, and Other Cultural Formations of the Late Age of Print,” Computer-Mediated Communication Magazine 2.3 (1994): 3.

22. See George P. Landow, Hypertext: The Convergence of Contemporary Critical Theory and Technology (Baltimore: Johns Hopkins UP, 1992), and Hypertext 2.0: The Convergence of Contemporary Critical Theory and Technology (Baltimore: Johns Hopkins UP, 1997).

23. Jean Mason, “From Gutenberg’s Galaxy to Cyberspace: The Transforming Power of Electronic Hypertext” (Diss. McGill U, 2000), accessible at: https://tspace.library.utoronto.ca/citd/JeanMason/about.html. Mason’s work is one of the very few discussions that aim to examine hypertext and its implications for pedagogical practice and long-held assumptions about literacy and creativity.

24. Jay David Bolter, Writing Space: The Computer, Hypertext, and the History of Writing (Hillsdale, NJ: Erlbaum, 1991), 87.

25. Douglas Kellner, “Technological Transformation, Multiple Literacies, and the Re-Visioning of Education,” 3. Accessible at: http://www.gseis.ucla.edu/faculty/kellner.

26. The Report is part of the AAC&U online publications, and can be accessed at: http://www.greaterexpectations.org/.

27. Ibid.
