Writely

 by Bryan Alexander, Center for Educational Technology, NITLE

The slow rise of wikis as popular authoring tools produced a series of commercial ventures, such as SocialText. Recently a new group of Web 2.0-oriented, wiki-like services has appeared. Writely is a good example of these. It offers an easy-to-access, clean-looking, collaborative writing space, much like JotSpot Live and Writeboard. Users can quickly set up a web page focused on a single document.
Unlike most wiki implementations, and contrary to the old wiki ethos, Writely excludes all potential editors save those invited by the creator via email. Editing is advanced for a wiki-style tool, with multiple versions and rollback. Contributors are identifiable, unlike on most wikis. Menus are unobtrusive, dropping down via AJAX.
From an information services support perspective, Writely has several advantages. As a hosted site, it looks relatively noncommercial. For backup and migration purposes, Writely exports content in several formats, including HTML and Word. In terms of training, the creation and editing window is a clear, simple WYSIWYG editor, easier to get into and more recognizable to non-specialists than many wiki implementations. Writely can also ingest uploaded documents in common formats, such as Word.
Disadvantages are fairly evident. Like most Web 2.0 products, it’s in beta. As with any externally hosted service, academic users are at the mercy of someone else’s business decisions. For Mac users, Safari has issues and may not be supported. Overall, however, the ease of use and solid export options make Writely worth the experiment.

TK3: A Tool to (Re)Compose

by Virginia Kuhn, University of Southern California

This review is also available as a TK3 book, which requires the free TK3 Reader; the download is quick and easy.

Ever since reading George Landow’s Hypertext 2.0 in the mid-1990s, I have been leery of application-specific theory, at least when it is not acknowledged as such. Hypertext 2.0 includes copious references to StorySpace, a software program that allows one to create hypertext and to view and manipulate nodes in various spatial arrangements. These references are not accompanied by full disclosure of Landow’s role in the program’s development; it was developed by academics, many of whom work with Landow. Though it is not clear whether he is a developer himself, there is undoubtedly a reciprocal relationship between him and Eastgate Systems, Inc., the group that created and distributes the program. As I scanned the index entry for StorySpace, noticing that it is longer than the one for the World Wide Web, I felt slightly duped at having shelled out $200+ for this rather limited program, particularly when the Eastgate site was hawking Landow’s book. Now, however, I find myself in a curiously similar position. That is to say, much of the scholarly work I have done lately centers on a software program with which I have been teaching for several semesters and in which I created my dissertation. So while my reasons for this partiality ought to be evident if I have done my job in this review, I still feel the need to state my bias from the outset. Fair warning, then: I have a rather solid (though non-monetary) investment in this platform. And perhaps it is time for academics to be intimately involved in software development, as I believe Landow was with StorySpace and as I am with TK3; if we educators do not help to shape the tools we use, they will be shaped for us and will, no doubt, be imposed upon us by corporate entities such as Microsoft.

When I first encountered TK3, I was teaching at a large urban university and was looking for a way to have students engage with emergent digital technologies; this required them not only to consume and analyze multimodal “texts” but to create them as well. Bob Stein, the founder of NightKitchen, TK3’s developer, delivered the keynote at the 2003 Computers & Writing Conference held at Purdue; I was there facilitating a pre-conference workshop on technology training for new teachers. Another of the pre-conference activities was a collaborative e-book, Digital Publishing F5 | Refreshed, created by Bob Stein, David Blakesley, and thirty other intellectuals—both publishers and academics—interested in the scholarship, pedagogy, and publishing potentials of emergent technologies. Indeed, since the New London Group’s 1996 manifesto calling for educators to attend to “multiliteracy,” writing scholars have paid increasing attention to interrogating reading and writing with new media.

I was keenly interested in TK3’s pedagogical possibilities, since it promised to allow all students, regardless of their technological prowess, to create digital texts. TK3 seemed an alternative to web page construction: I could not count on students knowing HTML—indeed, there is no baseline for students’ technological proficiency—and I did not want (nor did I have the time or expertise) to teach programming, so I had been stymied. I had begun requiring students to compose visual compositions, but I had to allow a range of vehicles for completing their projects, and this was problematic for many reasons. Although there are seemingly numerous programs that allow multimedia authoring, none of them was adequate for my needs, as I shall explain.

I tried TK3 in a course I taught called Writing in Cyberspace, and soon began requiring it in classes ranging from Media Literacy to Multicultural America. This program, meant to allow people with few computer skills to create electronic “books” that include verbal text, still images, sound and video clips, as well as web links and internal jumps, is by far the best of its type, particularly in terms of making meaning in several registers (aural, filmic, imagistic, textual). Its next version, Sophie, will improve upon its precursor and answer the pedagogical needs of those committed to teaching their students a wide range of literate practices; many of the changes come from feedback from those who have used the program. With TK3, one can easily become a new media producer and exhibit what the New London Group sees as “competent control” of new media texts. Indeed, the ability to contribute to the discourse of new media helps to foster the sort of advanced literacy appropriate to university-level studies; without this sort of transformative use, there is very little liberatory potential for literacy.

Whenever I mandate a particular program in a class, my students and I perform a careful review of the background of its developers. First, I want students to be aware of the programming choices behind the hardware and software used in class; second, I want students to confront the content and design choices necessary to create such “texts” themselves. As such, the provenance of the program—who made the platform, for what use it was made, and how it has subsequently been deployed—is indeed germane.

Throughout Critical Theory of Technology, Andrew Feenberg argues that technology is ultimately ambivalent and can be used for disparate social and political ends. Ambivalent though it may be, however, an inherent ideology guides programming choices—the surveillance features of many office-networked programs are a good example, since there is no technical reason for such features to exist. Thus, we must view interaction with technology not as an exchange between a person and a machine but rather as a dialogue between a human user and a human programmer.[1] No curricular materials, be they textbooks, software, or hardware, are politically innocent or ideologically neutral; thus, it is important to teach students not only to critically “read” such “texts” but also to use them in a way that demonstrates self-consciousness and an awareness of both the possibilities and the limitations of constructing meaning in a given environment. I firmly believe that we, as academics, cannot afford simply to use platforms created by corporations, or produced for some other purpose, in university-level education. We have to step up to the plate and answer for the ideological implications inherent in the vehicles we require students to use, while remaining attuned to the specifics of the projects we assign to be carried out in those vehicles.

Figure 1. Screen shot showing the “find” feature, which lists not only the search term but also the words immediately before and after it. The list is also “hot,” so one click brings the reader to the reference.

Although I mostly use TK3’s authoring function in class (I have students create TK3 books as their course projects), the reader functions are at least as compelling and offer many teaching strategies. They take full advantage of the encyclopedic nature of the computer. A rich “find” function allows one to search for words or phrases; the results are listed with a few words on either side of the searched term, giving a better sense of the context in which each instance occurs. Further, the resulting list is “hot,” so that one click takes you to the place where the word occurs. The reader can highlight text, add sticky notes, or keep a notebook into which she can copy excerpts that import complete with working citations. All of these markings are catalogued in a “marks” history; again, this list is hot. For a particularly good example of teacherly comments, and of directions for students on accessing them, see Cheryl Panelli’s web page describing how her students ought to download and read her comments on their work.

As a pedagogical tool, TK3 allows the teacher to approach the student text with the sort of seriousness with which she would approach a published text. As teachers, we are conditioned to engage with published writing via a highlighter, while we approach student writing with a red pen. TK3’s reader functions help to dismantle this dichotomy.

Reader functions aside, the ability to “write” new media is what I find most compelling about TK3. As the editors of Literacy: A Critical Sourcebook point out, computers call attention to “the relation between technology and literacy in an especially insistent fashion,” and yet this relationship has always been a concern, from the technology of writing itself to other vehicles of textual and pictographic delivery and dissemination.[2] In Chaucer’s day, they note, the act of reading was becoming more widespread, but the act of writing was left to experts. Further, as several essays in the anthology note, the liberatory potential of literacy cannot be actualized without the ability to write; for instance, speaking of cultural critic bell hooks’s account of her education at the feet of her black female teachers, Lisa Delpit notes that hooks learned that she was “capable of not only participating in the mainstream, but of redirecting its currents.”[3]

This is why scholars such as Gregory Ulmer call for teachers to move students from literacy to “electracy,” a neologism he adopts to describe the abilities necessary to be conversant in digital technologies. Seeing digital technologies as inherently image-based, Ulmer teaches students to use both poststructuralist theory and memory work (psychoanalytic theory) to investigate the ways in which cultural structures shape their identities and desires. Students represent that work via web pages produced in response to a series of exercises that help make them “egents” in an electrate world. I subscribe to Ulmer’s methodology, and yet I cannot, as I said, count on my students to be able to code web pages. Even if students were to use web-composing programs such as FrontPage or Netscape’s Composer, there would be less opportunity to interrogate individual programs and, without a lot of training, the pages would be very rudimentary. TK3 simply gives the author far more control over the interplay between word and image, as well as sound and video.
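Returning for a moment to the “find” function: the short sketch below is a Python illustration of my own (not TK3’s actual code; the function name and the three-word window are invented for demonstration) of how a reader application might list each hit with a few words of context and a “hot” offset a user interface could jump to.

    def find_in_context(text, term, window=3):
        """List each occurrence of `term` with a few words on either
        side, plus its word offset so a UI can make the result 'hot'."""
        words = text.split()
        hits = []
        for i, w in enumerate(words):
            if w.strip('.,;:!?"').lower() == term.lower():
                left = " ".join(words[max(0, i - window):i])
                right = " ".join(words[i + 1:i + 1 + window])
                hits.append((i, "... " + left + " [" + w + "] " + right + " ..."))
        return hits

    sample = ("In Chaucer's day the act of reading was becoming more "
              "widespread, but the act of writing was left to experts.")
    for offset, line in find_in_context(sample, "act"):
        print(offset, line)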

There are several aspects of the program worth mentioning and they are as follows.

Book Metaphor: Because TK3 retains the book metaphor (without being a slave to it), it does not sacrifice print literacy for bells and whistles. Unlike PowerPoint, for instance, it places no limit on the amount of text that fits on a screen. As a result, an author is not encouraged to reduce complex ideas to the bullet points so commonly found on PowerPoint slides. In addition, images may either be added into the “stream” of the text, anchored between words and moving with them, or added independently, with the text wrapping around them. This latter feature adds visual elegance to the page even as it allows for more complexity in the relationship between text and image. One can teach print and visual literacy in a contrastive way, using word and image to inform each other and to highlight the potentials and limitations of each as a semiotic system. The text-friendliness of TK3 is crucial for those of us who do not wish to sacrifice print literacy even as we endeavor to enhance it with other forms of expression afforded by digital technologies. Edward Tufte argues that PowerPoint’s limited text capability causes the oversimplification of complex ideas, citing the breakdown of information among NASA engineers and managers that ultimately contributed to the loss of the shuttle Columbia.[4] While I believe Tufte overstates the case, it is important to remember that PowerPoint and Keynote are presentation programs, meant to be accompanied by a human presenter who supplies more detailed information. The Columbia incident is more an example of using the wrong tool, or at least of relying on PowerPoint alone to convey critical information. This is, indeed, rocket science. The problem occurs when such programs are used to produce stand-alone texts, such as those associated with college-level assignments. By contrast, TK3 may be used as a presentation aid but can also include complex ancillary documents.

Figure 2. Stuart Pate’s student project. He did not make use of the more linear book metaphor, and yet its availability in TK3 is important in that the program does not abandon print literacy in favor of image and sound.

Media Resources: This feature is the key to the program’s brilliance. TK3 can act as a holding tank: one can import multiple types of files, including JPEGs, movie files, text files, sound clips, and web links. From there the author simply drags the assets into her book and manipulates them as she wishes. The media resources feature can be filled with a wide range of resources, allowing one to “write” with numerous modes of expression in a very deliberate way. Thus, the program fosters self-conscious decisions about both form and content. Further, since TK3 uses its own reader, the resulting text does not depend on various web browsers, which might otherwise change the look of the e-book. Though the Internet can be used as a method of dissemination, the books retain the integrity of the author’s creation.

Complex Annotation: A stunning feature of the program is the ability to annotate. One can create both simple and complex annotations. A simple annotation is one in which a pop-up box contains only one resource: a word or an image. It may be attached to a word or an image in the main body of the book with an “annobeam.” With its transparent annobeam, such an annotation conveys further information without sending the reader to a different page (from which she may not be able to return), even as it visually shows its origin without covering the rest of the page or disrupting the main body. The complex annotation takes this capacity to a higher level. A complex annotation is one that includes more than one media element. Similar to a mini-book, it can house film and sound clips, text and image, and also links to other annotations or to outside web sources. Thus, one can create a complex annotation inside a complex annotation ad infinitum, enacting the sort of depth that characterizes the pinnacle of hypertext as Jerome McGann describes it in “The Rationale of Hypertext.”[5]

Though the term “depth” is bandied about frequently in the discourse of hypertextual structures, McGann offers the smartest explication of what it actually might mean and of how such capacity can exceed the work of the bound book. TK3’s annotation features instantiate this capacity for depth of analysis.

Using the example of the scholarly or critical edition, a staple of literary studies, McGann describes the inherent shortcoming of using a book to analyze a book, a shortcoming that arises from the one-to-one ratio between tool and subject:

 Brilliantly conceived, these works are nonetheless infamously difficult to read and use. Their problems arise because they deploy a book form to study another book form. This symmetry between the tool and its subject forces the scholar to invent analytic mechanisms that must be displayed and engaged at the primary reading level — e.g., apparatus structures, descriptive bibliographies, calculi of variants, shorthand reference forms, and so forth. The critical edition’s apparatus, for example, exists only because no single book or manageable set of books can incorporate for analysis all of the relevant documents. In standard critical editions, the primary materials come before the reader in abbreviated and coded forms.

The problems grow more acute when readers want or need something beyond the semantic content of the primary textual materials — when one wants to hear the performance of a song or ballad, see a play, or look at the physical features of texts. Facsimile editions answer to some of these requirements, but once again the book form proves a stumbling block in many cases. Because the facsimile edition stands in a one-to-one relation to its original, it has minimal analytic power — in sharp contrast to the critical edition. Facsimile editions are most useful not as analytic engines, but as tools for increasing access to rare works.[6]
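As a thumbnail of what such nesting looks like structurally, here is a minimal sketch (my own Python illustration, not TK3’s actual data model; the class and method names are invented) of an annotation that can contain other annotations to arbitrary depth.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Annotation:
        """A simple annotation holds a single media element; a complex
        one holds several, possibly including further Annotations."""
        media: List[str] = field(default_factory=list)         # text, image, clip...
        children: List["Annotation"] = field(default_factory=list)

        def depth(self):
            """Levels of annotation-within-annotation at this node."""
            return 1 + max((c.depth() for c in self.children), default=0)

    # An annobeam note on a word, itself annotated with a film clip:
    inner = Annotation(media=["film clip"])
    outer = Annotation(media=["explanatory text", "image"], children=[inner])
    print(outer.depth())   # -> 2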

Programmatic Support: There are a number of premises under which I approach digital technologies and they are as follows:

  1. Literacy includes both reading and writing (decoding and encoding).
  2. Digital technologies change what it means to read and to write.
  3. We are on the verge of a new semiotic system that is rooted in print and image but exceeds them.
  4. Literacy cannot be unhooked from its materiality.
  5. Technology is never politically innocent, nor ideologically neutral.

Given these guiding beliefs, TK3’s developer is a strong asset to the program. The Institute for the Future of the Book was formed, after receiving two large grants (one from the Mellon Foundation and one from the MacArthur Foundation), to create the next generation of tools for digital scholarship. The Institute has launched many projects in addition to overseeing the development of the next generation of TK3, which will be called Sophie. Due out within months, Sophie will incorporate feedback and suggestions from the teachers, students, writers, and artists who have used TK3. It will be open source and networked, both of which will encourage user enhancement and wider use. The people who staff the Institute for the Future of the Book are smart, supportive, and responsive to the needs of the educators they serve; they are involved in numerous projects at the forefront of digital technologies, including digital textbooks, academic blogs, conversations with authors of prominent new books, and the archiving of art and photography. Their blog, if:book, is widely read and cited in many e-zines and on other blogs. Their work is collaborative and not committed to revenue and other corporate concerns; as such, they are great intellectual partners. They are a strong feature of TK3, a program that allows writers to both theorize and enact the types of literacies necessary for life in the 21st-century, wired world.

Figure 3. The opening screen of the website of the Institute for the Future of the Book, which will release TK3’s next version, Sophie.

Notes

1. Andrew Feenberg, Critical Theory of Technology (Oxford: Oxford UP, 1992), 183.

2. Ellen Cushman et al., Literacy: A Critical Sourcebook (Boston: Bedford/St. Martin’s, 2001), 4.

3. Lisa Delpit, “The Politics of Teaching Literate Discourse,” in Literacy: A Critical Sourcebook (Boston: Bedford/St. Martin’s, 2001), 552.

4. Edward Tufte, “PowerPoint Does Rocket Science: Assessing the Quality and Credibility of Technical Reports” (Home Page, Selected Writings, 6 September 2005), http://www.edwardtufte.com/bboard/q-and-a-fetchmsg?msg_id=0001yB&topic_id=1. Tufte concludes that technical reports are superior to PowerPoint for conveying complex information, but this seems obvious, since PowerPoint is presentation software and does not pretend to be a reporting format.

5. Jerome McGann, “The Rationale of Hypertext,” http://www.iath.virginia.edu/public/jjm2f/rationale.html, 1995, screen 2.

6. Ibid.

Bibliography
Blakesley, David, et al. Digital Publishing F5 | Refreshed. West Lafayette: Parlor Press, 2003. http://www.parlorpress.com/digital.html

Landow, George. Hypertext 2.0: The Convergence of Contemporary Critical Theory and Technology. Baltimore: Johns Hopkins UP, 1997.

New London Group. Multiliteracies: Literacy Learning and the Design of Social Futures. New York: Routledge, 1999.

Ulmer, Gregory. Internet Invention: From Literacy to Electracy. New York: Longman, 2002.

Learning Outcomes Related to the Use of Personal Response Systems in Large Science Courses

by Jolee West, Wesleyan University

 

The use of Personal Response Systems, or polling technology, has been receiving wider attention within academia and in the popular press. While neither the technology nor the pedagogical goals are new, general knowledge and implementation of course-related polling appear to have recently reached a critical threshold. Between 2004 and 2005, implementations by “early adopters”[1] began to seriously influence the “early majority,” resulting in wider visibility of the technology. This trend is illustrated by the increasing number of references to “clickers” and “personal response systems” on the EDUCAUSE website from 2004 to the present, as well as by a recent spate of newspaper and e-zine articles.[2]

Many institutions, including community colleges, liberal arts colleges, and large research universities, have now adopted Personal Response Systems (i.e., polling technology) in their larger lecture courses across the curriculum. For example, MIT, the University of Massachusetts-Amherst, Harvard, Yale, Brown, the University of Virginia, Vanderbilt, and Duke have all implemented personal response systems for larger physics and/or biology lecture courses. A number of implementations took place under the auspices of granting programs, such as the Pew Center for Academic Transformation and the Davis Educational Foundation’s Creating Active Learning Through Technology, which focus on the economics of teaching large lecture courses and on transforming these typically passive-learning courses into active learning experiences for students.

But as is often the case in the adoption of new instructional technologies, arguments for adoption rarely rest on published analyses demonstrating improvements in learning outcomes. Commonly, such assessments simply have not been performed. Nevertheless, in researching the technology for my own institution, I searched hard for learning outcome studies. I found that data abound on student satisfaction with personal response systems: whether the systems made class more interesting, improved attendance, and the like.[3] But reports of learning outcomes are few and far between. What follows is a discussion of four references I found that report learning outcome analyses related to the use of interactive engagement pedagogical methods in large science courses. Only in the last two cases are personal response systems specifically mentioned. But as we will see, the technology is not really the star of this show; not surprisingly, it is the pedagogy that takes center stage.

A controlled study by Ebert-May et al. shows that student confidence in their knowledge of course materials is significantly increased in courses taught using interactive engagement methods over those taught by traditional lecture: “Results from the experimental lectures at NAU suggest that students who experienced the active-learning lecture format had significantly higher self-efficacy and process skills than students in the traditional course. A comparison of mean scores from the self-efficacy instrument indicated that student confidence in doing science, in analyzing data, and in explaining biology to other students was higher in the experimental lectures (N = 283, DF = 3, 274, P < 0.05).”[4]

A large study by Hake of 62 introductory physics courses taught with traditional methods versus interactive engagement (IE) methods examined student learning outcomes using a commonly applied pre- and post-test design based on the Halloun-Hestenes Mechanics Diagnostic test and the Force Concept Inventory. The study, which included 6,542 students, concluded that “A plot of average course scores on the Hestenes/Wells problem-solving Mechanics Baseline test versus those on the conceptual Force Concept Inventory show a strong positive correlation with coefficient r = + 0.91. Comparison of IE and traditional courses implies that IE methods enhance problem-solving ability. The conceptual and problem-solving test results strongly suggest that the use of IE strategies can increase mechanics-course effectiveness well beyond that obtained with traditional methods [original emphasis].”[5]

The Pew Center for Academic Transformation has been interested in examining the transformation of courses from passive to active learning experiences through controlled experiments. One of its beneficiaries, the University of Massachusetts-Amherst, conducted a two-year study of courses redesigned around a Personal Response System. The university’s Office of Academic Planning and Assessment concluded that in these courses “[attendance] in the redesigned sections was consistently high, and students performed consistently better on the new problem-centered exams compared to the old exams based on recall of facts.”[6]

Lastly, a recent study by Kennedy and Cutts examined actual per-student response data over the course of a single semester. In-class questions were of two types: one asked students to self-assess their study habits, and the other focused on course content. These data were analyzed against end-of-semester and end-of-year exam performance using cluster analysis and MANOVA. The investigation showed that students who participated more frequently in the personal response system, and who were frequently correct in their responses, performed better on formal assessments. Students who responded infrequently, even when they did so correctly, nevertheless performed poorly on formal assessments, suggesting that the level of involvement during class is positively correlated with better learning outcomes.[7]

To sum up, my search found that where data exist, they support not just the use of personal response systems but, more specifically, the pedagogy associated with the use of these systems. These studies suggest that better learning outcomes are really the result of a change in pedagogical focus, from passive to active learning, and not of the specific technology or technique used. This is an important caveat for interested faculty: the technology is not a magic bullet. Without a focused, well-planned transformation of the large lecture format and its pedagogical goals, the technology provides no advantage. If the manner in which the technology is used in class is neither meaningful nor interesting to students, participation lapses. Ultimately, what these studies demonstrate is that student participation is key to positive learning outcomes.

Notes:

1. See E. M. Rogers, Diffusion of Innovations (New York: Collier Macmillan, 1983).

2. C. Dreifus, “Physics Laureate Hopes to Help Students Over the Science Blahs,” New York Times (Nov. 1, 2005), http://www.nytimes.com/2005/11/01/science/01conv.html?ex=1132376400&en=c13349a4a1f8cf78&ei=5070&oref=login; Alorie Gilbert, “New for Back-to-school: ‘Clickers,'” CNET News.com (2005), http://news.com.com/New+for+back-to-school+clickers/2100-1041_3-5819171.html?tag=html.alert; Natalie P. McNeal, “Latest Campus Clicks a Learning Experience,” The Miami Herald (Oct. 17, 2005), http://www.miami.com/mld/miamiherald/news/12920758.htm.

3. Steven R. Hall, Ian Waitz, Doris R. Brodeur, Diane H. Soderholm, and Reem Nasr, “Adoption of Active Learning in a Lecture-based Engineering Class,” IEEE Conference (Boston, MA, 2005), http://fie.engrng.pitt.edu/fie2002/papers/1367.pdf; S. W. Draper and M. I. Brown, “Increasing Interactivity in Lectures Using an Electronic Voting System,” Journal of Computer Assisted Learning 20 (2004): 81-94, http://www.blackwell-synergy.com/links/doi/10.1111/j.1365-2729.2004.00074.x/full/; Ernst Wit, “Who Wants to Be… The Use of a Personal Response System in Statistics Teaching,” MSOR Connections 3(2) (2003), http://ltsn.mathstore.ac.uk/newsletter/may2003/pdf/whowants.pdf.

4. Diane Ebert-May, Carol Brewer, and Sylvester Allred, “Innovation in Large Lectures–Teaching for Active Learning,” BioScience 47 (1997): 601-607, at 604.

5. Richard R. Hake, “Interactive-engagement Versus Traditional Methods: A Six-thousand-student Survey of Mechanics Test Data for Introductory Physics Courses,” American Journal of Physics 66 (1998): 64-74, http://www.physics.indiana.edu/~sdi/ajpv3i.pdf, 18.

6. Office of Academic Planning and Assessment, University of Massachusetts, Amherst, Faculty Focus on Assessment 3(2) (Spring 2003), http://www.umass.edu/oapa/oapa/publications/faculty_focus/faculty_focus_spring2003.pdf, 2.

7. G. E. Kennedy and Q. I. Cutts, “The Association Between Students’ Use of an Electronic Voting System and their Learning Outcomes,” Journal of Computer Assisted Learning 21(4) (2005): 260-268, http://www.blackwell-synergy.com/doi/pdf/10.1111/j.1365-2729.2005.00133.x.

Incorporating Blogging in a Free Speech Course: Lessons Learned

by David Reichard, California State University Monterey Bay

 

Details

Instructor Name:

David A. Reichard

Course Title:

Free Speech and Responsibility

Institution:

California State University Monterey Bay

What is the overall aim of the course?:

This course meets the Relational Communication outcome required for all Human Communication majors (an integrated humanities degree). It also fulfills concentrations in Pre-Law, Practical & Professional Ethics, and Journalism. The goal is for students to grapple with ethical and effective ways to communicate through intensive study of free speech.

Course design and scope of the project:

Forty students take this semester-long course, meeting in four separate seminars. This allows them to work closely with the same group on a consistent basis. After seminar, the class reconvenes as a larger group, drawing connections among the discussions, and mapping our observations and questions on the board.

Incorporation of Technology:

At first, each seminar had its own weblog, or blog, using Movable Type and hosted on a server; I was the administrator of all the blogs, and students were authors. The next time I offered the course, students created individual blogs using Blog-City, a free commercial blogging host. I blogged too.

Lessons Learned:

First, blogging should connect with course content, and students should see that connection; otherwise it seems like busywork. Second, the more creative freedom students had, the more they embraced blogging. With less faculty “control,” student blogs became individualized: perfect for free speech, though perhaps not for other kinds of curricula.

References, links:

Free Speech and Responsibility course website, including course outline, materials, assignments, and links to blogs.

I began this research as a 2003-2004 scholar with the Carnegie Academy for the Scholarship of Teaching and Learning (CASTL). Here’s the project snapshot.

Professor Barbara Ganley’s blog about blogging in teaching and learning—she is one of the most insightful academic bloggers I have encountered.

Weblogs in Higher Education–a good portal to blogging in education.

Measured Results:

Blogs were important spaces for students to record notes, pose questions, and organize their ideas before and after seminar. However, some students found blogging repetitive or a poor substitute for face-to-face interaction. These findings emerged from observation of seminars, reading student blogs, and reviewing mid-term and final assessments (which specifically addressed blogging). Most importantly, students wrote essays analyzing their own and other students’ blogs. These essays provided invaluable “meta” analysis of student learning in the course. Significantly, students described blogs as providing a public record of their own learning, making their process as learners visible to themselves and others.

Interactive Engagement with Classroom Response Systems

by S. Raj Chaudhury, Christopher Newport University

 

Details

Instructor Name:

S. Raj Chaudhury

Course Title:

Introductory science for non-majors

Institution:

Christopher Newport University

What is the overall aim of the course?:

Introductory science courses for non-majors are often among the larger courses taught at liberal arts colleges, and while they fulfill “breadth” requirements within core curricula, their very size and nature often pose a challenge for properly assessing student learning and engagement with the material. Even though such courses often stress “finding the right answer,” I am interested in generating discussion among students and engendering a sense of shared inquiry.

Course design and scope of the project:

Physical science courses for non-majors have been a special interest of mine over the last several years, from classes that follow a “studio” model with integrated lecture and laboratory to large lecture-only courses where student interest, attendance, and motivation are all open to question. Much of this work was completed at Norfolk State University (a Historically Black University), but I am continuing it at Christopher Newport University, also a state institution in Virginia. I have been especially interested in exploring classroom response systems (aka “clickers”) to promote understanding of the material, collaboration, and metacognitive awareness.

Incorporation of Technology:

Interactive handheld response systems (“clickers”) lie at the heart of my approach. The instructor poses multiple choice questions to the class, and each student responds anonymously using a device that looks like a TV remote control. Once all responses are received, a histogram of the results appears on the screen. Ideally, the chosen question will generate a bi-modal distribution. The instructor then asks students to engage in Peer Instruction: “turn to your neighbor and try to convince them to change their answer to yours.” This period usually lasts 60-90 seconds. As the buzz in the room subsides, the instructor polls the class again using the same question. Depending on the outcome of this second poll, the instructor may choose to revisit the topic, clarify a point, or simply proceed with the lesson. A 50-minute lecture broken into three segments of 10 minutes of direct instruction, each followed by one or two “clicker” questions, keeps students engaged and provides the instructor with useful formative assessment data.
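To make that flow concrete, here is a minimal sketch in Python of the decision logic just described (my own illustration, not vendor code from any clicker system; the function names and the 30%/70% thresholds are invented for demonstration).

    from collections import Counter

    def tally(responses):
        """Count anonymous clicker responses, e.g. ['A', 'C', 'A', ...]."""
        return Counter(responses)

    def next_step(responses, correct, low=0.30, high=0.70):
        """Suggest the instructor's next move after a poll: below `low`
        correct, revisit the topic; above `high`, move on; in between,
        run 60-90 seconds of Peer Instruction, then re-poll."""
        frac = tally(responses)[correct] / max(len(responses), 1)
        if frac < low:
            return "revisit the topic, then re-poll"
        if frac > high:
            return "proceed with the lesson"
        return "peer discussion (60-90 seconds), then re-poll"

    # A roughly bi-modal first poll: 18 A's, 14 C's, 4 B's.
    poll = ["A"] * 18 + ["C"] * 14 + ["B"] * 4
    print(tally(poll))            # histogram data to display
    print(next_step(poll, "A"))   # -> peer discussion, then re-poll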

Lessons Learned:

Response systems are, in my opinion, excellent tools for scholars of teaching and learning because of their data generation capabilities. Obtaining evidence from pencil-and-paper student work can take an inquiring teacher long hours of grading; with response systems, the work goes into creating excellent questions, and only once, since questions can be reused and the data are automatically generated and stored by the system. Even though I teach introductory science, where there is often an emphasis on “finding the right answer,” I use the system principally to generate discussion among students and to engender a sense of shared inquiry, with the assessment data shared in real time by students and instructor. This approach is applicable across many disciplines, wherever there are lectures that could be made more interactive.

As students and instructor view a histogram of results together, they connect, in a powerful way, around the material – creating a pathway for the development of students’ metacognitive skills in a manner not easily possible without the technology.

References, links:

There are several classroom response systems available, in both the higher education and secondary school markets. I am currently using the CPS (Classroom Performance System) from e-Instruction; their website is a good place to start. Research on response systems has been growing, especially in the physics education field, where papers from Eric Mazur’s group at Harvard and the UMass Amherst group are well regarded. A number of presentations at the Carnegie Colloquium and at meetings such as AAC&U have focused on implementations of response systems. I shall be developing an online poster on my use of the CPS system at CNU in Fall 2005; links will be available from my home page.

Measured Results:

While I have always received positive anecdotal feedback from students regarding the use of response systems in my classes at Norfolk State University, much of my attention there was focused on encouraging other faculty members to pursue interactive engagement in their lectures using Peer Instruction. We used the Personal Response System (PRS) technology, now sold as Interwrite PRS. Its data aggregation facilities were primitive and made it hard to store and analyze data from multiple class sessions. This year at CNU I have been pursuing a very systematic strategy of storing the results of each session (using the CPS technology). I have asked students to comment on their student evaluation forms about the effectiveness of the technology in their course (introductory physics for non-science majors) for learning, and I hope to receive this feedback sometime in the spring semester. In the meantime, my data suggest that many physics misconceptions can be identified with a CPS question, addressed through a short instructional sequence, and then assessed with follow-up questions that allow students to demonstrate increased understanding of the topic. Most recently, this happened during the study of thermal energy, in differentiating the temperature of an object from its specific heat capacity.

NOTES & IDEAS: Using Blogs to Teach Philosophy

by Linda E. Patrik, Union College

 

Students taking their first philosophy course often express surprise when encouraged to use “I” in their papers. Unlike academic writing in most other disciplines, philosophical writing frequently and strongly states the “I” because philosophers have to develop and defend their own positions. They cannot weasel out of taking responsibility for their views, and thus the assertion of the “I” means that they are willing to stand or fall with their expressed position.

This is one reason why blogs are so effective for teaching students how to debate in philosophy. Blogs were initially developed as online diaries, and most college students still associate blogs with their own inward monologues. The blog medium softens students’ resistance to using the philosophical “I” in their writing, since they are accustomed to bloggers expressing their own views and taking personal responsibility for them. Blogs bridge the personal “I” of a diary and the philosophical “I” of an argument offered in public debate. Once these public debates are posted online, the ease of using “I” — and meaning it — makes students more confident that they are capable of having their own views.

The effectiveness of blogs for philosophical debate increases when each student has his or her own blog. It is better to give each student a blog than to have all students participate in a single blog: not only do students write more, but they argue more creatively. When students post to a blog that is in competition with other students’ blogs, they become attentive to which blogs attract and generate the most interesting and heated debates in the course. They scan the various blogs posted by other students and keep returning to the blogs that have the best debates. When commenting on others’ blogs, students not only aim to make their points in those debates but seek to entice readers back to their own blog. Students spend more time and thought on their individual blogs in order to keep them popular, and they also take care when commenting on others’ blogs because they want reciprocal visits to their own.

When students each have a blog for posting their positions on philosophical issues, they not only develop a sense of personal responsibility and confidence in their work, but they also unlock their creativity. Some blog software allows them to select the graphic design and format of their blog; many blog programs allow them to include photos, images, video clips and audio files to personalize their blogs.

Creativity in blogs is not limited to graphics. Students learn to write hypertext and even techno-text[1] papers in their blogs. In the philosophy course on Cyberfeminism that I taught last spring, students posted all of their writing in their blogs. In addition to papers, they wrote summaries of reading assignments and posted commentaries on debate issues raised in class. As they gained familiarity with blogging, they began to experiment with links, images, video, and sound as digital enhancements for their posted written work. Over the course of the term, students gained new inspiration from visiting others’ blogs on a regular basis—the experiments attempted by one or two began to spur the others on to try something new. A few students created complex, multimedia forms of techno-text as the final projects in their blogs; as the term ended, students not only visited one another’s blogs but celebrated these virtuoso multimedia creations. Who would have thought that philosophical writing could have images and video? Hegel’s old metaphor for philosophy—the owl of Minerva flying at dusk—was inadequate for the kind of philosophical writing posted by the most creative students: philosophical writing supported by rainbow colors and complex imagery.

Philosophical creativity involves raising the most thought-provoking questions and defending one’s own answers to those questions. Blogging encourages creativity in philosophical debate, especially when each student has his or her own blog, because it allows for fairly spontaneous expression of ideas and invites students to journey out of their own blogs into the blogworld established by another. In order to debate with one another, students in my Cyberfeminism course posted their positions on an issue on their own blogs and then visited one another’s blogs to find others’ positions. They posted their responses either on their own blogs or on other students’ blogs. The more technically adventurous included links to one another’s blogs in their own blog’s discussion of an issue.

Several course management programs have a discussion medium that is similar to a blog, but most of these programs require students to participate in the same space (e.g., Blackboard’s Discussion Board). The professor sets up the questions for discussion and debate, and then asks students to log in and comment. Course chat rooms are also a common online venue, but they lack the individual character and control that separate student blogs offer. The advantages of grading individual blogs outweigh the ease of grading discussions gathered in one blog or one chat room: each student learns to write for a public beyond the professor, and students can more easily compare their online work to that of others. Grades for individual blogs also make more sense to students than do grades on what they have contributed to a common blog or chat room.

In sum, the advantages of individual student blogs for philosophical writing are personal responsibility, confidence in one’s own view, debate excitement, and creativity. The blog medium allows for dialogue and debate, which are essential to philosophical thinking, and the digital enhancements possible in blogs allow for new directions in philosophical expression.

[1] N. Katherine Hayles’ concept of techno-text is that of a digitally enhanced text that reflects back upon its own electronic medium. (Writing Machines, MIT Press, 2002)

Open Access to Scholarship: An Interview with Ray English

by Michael Roy, Middlebury College

 

What is the open access movement?
Open access is online access to scholarly research that’s free of any charge to libraries or to end-users, and also free of most copyright and licensing restrictions. In other words, it’s scholarly research that is openly available on the Internet. Open access primarily relates to the scholarly research journal literature–works that have no royalties and that authors freely give away for publication without any expectation of monetary reward.

The open access movement is international in scope, and includes faculty and other researchers, librarians, IT specialists, and publishers. There has been especially strong interest from faculty in scientific fields, but open access applies to all disciplines. The movement has gained great impetus in recent years through proclamations on open access, endorsements from major research funding agencies, the advent of new major open access publishers, and through the growth of author self-archiving and author control of copyright.

Are there different forms of open access?
Open access journals and author self-archiving are the two fundamental strategies of the open access movement. Open access journals make their full content available on the Internet without access restrictions. They cover publication costs through various business models, but what they have in common is that they generate revenue and other support prior to the process of publication. Open access journals are generally peer-reviewed and they are, by definition, published online in digital form, though in some instances they may also produce print versions. Author self-archiving involves an author making his or her work openly accessible on the Internet. That could be on a personal website, but a preferable way is in a digital repository maintained by a university or in a subject repository for a discipline. I should point out that author self-archiving is fully in harmony both with copyright and with the peer review process. It involves the author retaining the right to make an article openly accessible. Authors clearly have that right for their preprints (the version that is first submitted to a journal) – but they also can retain that right for post-prints (the version that has undergone peer review and editing).

Do journals generally allow authors to archive their work in that way?
A very large percentage of both commercial and non-profit journals do allow authors to make post-prints of their works openly accessible in institutional or disciplinary archives. There tend to be more restrictions on the final published versions (usually the final pdf file), but many publishers allow that as well. An interesting site that keeps track of that is SHERPA in the United Kingdom.

Why is open access important for higher education?
Open access is one strategy – and actually the most successful strategy so far – for addressing dysfunctions in the system of scholarly communication. That system is in serious trouble. High rates of price increase for scholarly journals (particularly in scientific fields), stagnant library budgets, journal cancellations, declining library monograph acquisitions, university presses in serious economic trouble, and increasing corporate control of journal publishing by a small number of international conglomerates that have grown in size through repeated mergers and acquisitions – those are all symptoms of the problem. Scholars have lost control of a system that was meant to serve their needs; more importantly, they are also losing access to research. Open access has extraordinary potential for overcoming the fundamental problem of access to scholarship. It is a means of reasserting control by the academy over the scholarship that it produces and of making that scholarship openly available to everyone – at anytime and from virtually any place on the globe.

Why does open access matter to liberal arts colleges in particular?
It is especially important for liberal arts colleges because of the access issue. Liberal arts college libraries have smaller budgets than those of research universities. While even the major research libraries cannot afford all of the journals they need, the lack of access is an even bigger problem in the liberal arts college realm. Faculty at many liberal arts colleges are expected to be active researchers, and independent study is a hallmark of a liberal arts undergraduate education. So the lack of access to journal literature can be even more problematic in the liberal arts college framework than it is for research universities.

Are there other benefits to open access?
There are many benefits, but the main one that I would point out relates to the growing body of research that demonstrates how open access increases research impact. A number of studies have shown that articles that are made openly accessible have a research impact that is several times larger than that of articles that are not openly accessible. Authors will get larger readership and more citations to their work if they make it openly accessible.

And what about disadvantages?
Well, one of the main objections to open access journals relates to the fact that most of them are new and don’t have the prestige factor of older established journals. So, younger faculty who are working for tenure may not want to publish in open access journals, particularly if they can publish in traditional subscription journals that are high in prestige and impact. That’s not as much of a concern for tenured faculty, though, and some open access journals are becoming especially successful and prestigious. Titles published by the Public Library of Science are a great example of that. Prestige and tenure considerations don’t come into play for self-archiving. All authors can exert control over copyright and can make their work openly accessible in a repository, and that will definitely benefit both themselves and the research process generally.

What about the business viability of open access journals?
As I mentioned, there are a variety of business models that support open access publishing. They include institutional sponsorship, charging authors’ fees, and generating revenue from advertising or related services. Business models will differ depending upon the discipline and the particular circumstances of a journal. In the sciences, where there is a tradition of grant support, charging authors’ fees is very feasible. Both the Public Library of Science (the most prominent nonprofit open access publisher) and BioMed Central (the most prominent commercial open access publisher) are great examples of that. In humanities fields, by contrast, there is very little grant support for research, but publishing is also less costly, so open access there is likely to be fostered primarily through institutional sponsorship. Open access publishing is inherently less expensive than traditional subscription or print publishing: there are virtually no distribution costs and no costs related to maintaining subscriptions, licensing, or IP access control. There are also a number of open source publishing software systems that support the creation of new open access journals. I’m amazed by how many new peer-reviewed open access journals are appearing all the time. One way to get a sense of that is to go from time to time to the Directory of Open Access Journals. As of right now there are almost 2,000 titles listed; just six months ago there were about 1,450.

Are there over 500 new titles in the last six months, or are there 1,000 new titles, and 500 titles that went out of business? Should faculty who don’t have tenure worry about publishing in journals that might no longer exist when they come up for tenure?
I’m not aware of any conclusive data on the failure rate for open access journals (or new subscription journals, for that matter). A new study that will be published in January indicates that about 9% of the titles listed in the Directory of Open Access Journals have published no articles since 2003. Those titles are still available online, so it’s hard to say whether the journals have actually ceased. In addition, a small percentage of titles in the directory (about 3%) were inaccessible during the study. The reasons for those titles being offline are not clear; some may have failed, but some may just be temporarily inaccessible. A significant percentage of open access journals are from well-established publishers, and some individual titles have been in existence for a decade or longer. At the same time, a large majority of open access titles are from smaller, more independent contexts: they are produced by non-profit organizations, academic departments, or leaders in a field. Since they are relatively new, their viability isn’t proven yet. So it could be advantageous for untenured faculty to publish in some open access journals, but that may not be the case for a lot of open access titles.

What’s the hottest current issue related to open access?
I think it’s the issue of taxpayer-funded research. Both in this country and abroad there is increasing interest in making publicly funded scientific research openly accessible. We saw the beginnings of that with the National Institutes of Health policy instituted last year, and I think we will soon see a broad national debate about the advisability of this for all U.S. government agencies. The United Kingdom is moving toward a comprehensive policy of mandating open access to all government-funded research.

What is your role in the open access movement?
I have been a member of the steering committee of SPARC (the Scholarly Publishing and Academic Resources Coalition) since its inception. SPARC, which is a coalition of academic and research libraries, has been a prominent advocate for open access. I have also played a leading role in the scholarly communications program of the Association of College & Research Libraries. I chaired a task force that recommended the ACRL scholarly communications initiative and I have been chair of the ACRL Scholarly Communications Committee since it was established. Being involved with both SPARC and ACRL has put me in the middle of a number of these issues for the past several years.

How does open access fit into your role as library director at Oberlin?
We have been doing ongoing work at the campus level to build faculty awareness of scholarly communications issues and also to support open access in concrete ways. We have taken out institutional memberships to major open access journals and we’ve encouraged faculty to publish in open access journals in instances where that made sense for them. I have also been involved as a steering committee member with the creation of a statewide institutional repository that OhioLINK is developing. When that repository system is implemented we will be working very actively with our faculty on the question of author control of copyright and self-archiving.

What are some concrete things that faculty, librarians, and other stakeholders can do to help?
Faculty have great power in the system of scholarly communication (as editors, reviewers, and authors), so they are in the best position to bring about change. They can assert control over their copyright, archive their research openly, and publish in open access journals, among other things. The role of librarians and IT staff necessarily needs to be more educational in nature. They can become informed about these issues and then work with faculty and other researchers to bring about fundamental change. There is a good summary of a lot of these issues, along with concrete suggestions for what faculty, librarians, and others can do, in the ACRL Scholarly Communications Tool Kit.

The Create Change website is another great resource.

Other than Academic Commons, what is your personal favorite open access publication?
My favorite one, for obvious professional reasons, is D-Lib Magazine. It publishes a variety of content – articles, news, commentary, conference reports – related to the development of digital libraries. They’ve had a number of important pieces on open access and scholarly communications issues.

Faculty as Authors of Online Courses: Support and Mentoring

by Deborah Cotler and Gail Matthews-DeNatale, Simmons College

Our Present Context: How Did We Get Here?

Only a few years ago, if you had polled Simmons College administrators, faculty, students, and even technology staff members, the consensus would have been that “online” learning is not relevant to the mission of our institution. A “small university” with a liberal arts undergraduate program and four graduate schools, Simmons’ culture is “high touch” and personalized. To the uninitiated, distance learning seemed antithetical to our institutional mission and philosophy of learning.

Like those at thousands of other institutions of higher education, our views have changed as we have become increasingly sophisticated in our understanding of the tremendous potential of online learning. Today we offer hybrid courses, three fully-online certificate programs, and an online degree program in Physical Therapy. The School of Library Science is a member of WISE, a national network of schools providing online courses in information science. A number of other fully-online and hybrid programs are in development, including courses within the College of Arts and Sciences. Not only do pioneering faculty teach online at Simmons, but those in the so-called “second wave” are also developing hybrid and fully-online courses.

Our current challenge is to ensure the development of online learning that engages learners in the open-ended, inquiry-based learning that we believe is at the heart of a liberal arts education. We are finding that excellent professors whose face-to-face teaching is grounded in a liberal arts approach to learning may sometimes encounter difficulties when they take their teaching into the digital realm.

Our experience also suggests that the distinction between “pioneer” and “second wave” faculty is spurious. These labels distract from the insights and unique talents that a particular faculty member can contribute to a project. People don’t fit neatly into categories – they aren’t exclusively pioneers or second wave. Some faculty who are “second wave” in relationship to technology can be pedagogical “pioneers.” To realize the promise of online learning, we believe that academic technologists must learn how to collaborate with good teachers – even when technology isn’t a professor’s strong suit. Conversely, faculty members need help in learning how to work in partnership with academic technologists.

Good professors excel at engaging groups of students face-to-face, but few are prepared to develop courses online. In addition, their pedagogy is often implicit – developed and fine-tuned over the years through trial and error. Paul Hagner writes:

It is a basic fact that many of the best teachers possess natural communication and information management abilities that, for many of them, are simply assumed rather than the product of intensive self-examination.  Since one requirement for transformation is coming to grips with how the new technologies can enhance learning objectives, a problem results in that many successful teachers have never engaged in this form of articulation and self-examination.[1]

Faculty members and academic administrators who are new to e-learning are likely to overlook or even eschew logistical details that technologically-adventurous professors easily think through, grapple with, and resolve. Likewise, tech-savvy faculty may be undeterred by technical glitches, but have tremendous difficulty conceptualizing online offerings that are pedagogically progressive and grounded in inquiry.

Given this context, it is vitally important for those of us who are involved in academic technology to help faculty and administrators develop understandings and capabilities they may not realize they need.[2] And we may also need to step back and question our own pedagogical assumptions about the role that technology can and should play in teaching and learning at liberal arts institutions.

Just as a good teacher knows how to tailor a course to suit a particular group of learners, academic technologists need to develop a framework of support customized to meet the complex and variable needs of mainstream faculty, a support framework that is also congruent with the culture of the institution. In the same way that an ethnographer takes time to become steeped in the culture of a given community, we need to listen, observe, and thoughtfully assess faculty members’ perspectives and needs.[3]

To deepen our understanding of the range of their perspectives and needs, we interviewed several of our faculty collaborators, including:

Mary Jane Treacy, who directs the Honors Program in the College of Arts and Sciences at Simmons College. In fall 2004 we worked with Mary Jane to help her develop her first hybrid course for graduating seniors. As part of a year-long fellowship, we are currently collaborating with her to integrate ePortfolio work across all years of the Honors Program and curriculum.

Vicki Bacon, who chairs the Counselor Education program at Bridgewater State College and is an adjunct faculty member at Simmons. She developed and teaches a fully-online course in Sports Psychology. Of the three faculty members we interviewed, Vicki had the greatest difficulty making the transition to teaching online. Our work with her is featured in a case study later in this article. We are grateful to Vicki for allowing us to write up the problems she encountered as a case study through which others can learn.

Robert (Bob) Goldman, who is a Mathematics Professor in the College of Arts and Sciences at Simmons College. He has developed two online courses, the most recent of which is “Webstat,” a fully-online statistics course.

What Are The Concerns of Mainstream Faculty?

When asked about their preliminary concerns in developing an online course, our interviewees gave similar answers. Bob and Mary Jane were apprehensive about loss of control and quality in their teaching. They also expressed fear of failure (see “Preliminary Concerns” video).

Vicki wasn’t initially concerned. Because her ability as a classroom teacher is her “greatest strength,” it didn’t occur to her that she might have difficulty teaching online. Like Bob, she doubted the medium – whether a course like hers could succeed online. But she didn’t anticipate that distance learning would set in motion a process that required her to rethink how she teaches her subject.

Online Authoring: What’s Different?

Online course development challenges faculty to become explicit about their teaching because e-courses force them to “put it in writing” (or into multimedia). Yet few first-time online professors – and even fewer academic administrators – recognize the course development process as an act of multimedia authorship.

According to Doug Brent, good courses are “like a story in an oral society … created and recreated each year in the complex guided interaction that occurs around [a] constellation of texts.”[4] When courses are offered over the Web, the posting of a session is a distinct act of authorship that precedes student and faculty interaction with the material. The “course” reads like a musical score to be followed (and, hopefully, improvised upon) by course participants and facilitators. Each “class” is an enactment, or performance, of this score, varying from semester to semester according to learners’ needs. The course score must be carefully composed in advance with attention to:

  • tone (desired approach and interpersonal dynamics);
  • part (expectations for how students will interact with the material and with each other);
  • timing (a realistic assessment of how long each task will take); and
  • flow (how each component connects, furthers goals, and contributes to the learning experience as a whole).

As faculty members become immersed for the first time in the writing-intensive process of course development, they struggle to understand the genre. What constitutes a “session” or “lesson”? Lacking sufficient orientation, they tend to misapply familiar formats: cryptic lesson plan notes, PowerPoint slides that lack the speaker’s narrative, or lengthy academic articles. Faculty need guidance in developing a mental template for online learning that suits their personality, discipline, and pedagogical philosophy.

The collaborative dimension of online course development also requires faculty to become accustomed to a different pace and working style. With the exception of team-taught courses, most faculty members develop lesson plans on their own, using an idiosyncratic process that involves little or no interaction with others.

But for mainstream faculty who do not do their own technical implementation, online course development inevitably involves the give and take of working with a team of instructional designers and technologists. Ideally, team members are full collaborators with the faculty member. Instead of viewing others on the team as technicians who are solely responsible for “putting the course online,” the faculty member needs to learn how to partner with people who bring their own professional perspectives, skills, and abilities. The work of educational technologists may be a heretofore invisible dimension of the process for the faculty member. For example, instructional designers, who are expert in web-based course design, implementation, and assessment, may suggest approaches that feel counter-intuitive to those who have never taught online. In addition, the technical implementation of course materials takes time, requiring faculty to adhere to deadlines that fall well in advance of those needed for a face-to-face course. According to Bob Goldman:

I’ve gotten used to working with the team that is preparing the course. I think that’s worked out well. I now know that I have to give them a lot of lead time. I know what they can do, and what they can’t do. And I’m now able to work within that framework much better than I was before.

Online Course Authorship Requires Faculty to Develop a New Skill Set

Assuming that online courses are a new genre of writing, what’s entailed in this type of authorship? In addition to asking our three interviewees about their preliminary concerns, we also asked them to tell us what they think first-time authors of online courses need to know (see “What First Timers Need to Know” video).

In reflecting on our interview data and on our own experiences working with faculty, we believe that faculty need support in developing the following understandings and capabilities:

Understand How to Author a Coherent, Integrated Learning Experience: Most faculty members are unaware of the explanations they provide “in the moment” when they teach face-to-face. Their first stab at translating sessions for online delivery reads like a set of lesson notes. For many, this is a necessary first step – putting the broad strokes in writing. When asked to flesh out the session, the second draft will often read like cookbook directions, with some clarifying details and the desired sequence of activity (“First, do this. Then, read that. Finally, do this.”). But for the course to be a gratifying learning experience, sessions need a narrative dimension, the textual equivalent of verbal orientation and context setting. Sessions also need to be revised and polished in a manner usually reserved for print publications.

Understand What Needs To Be Composed in Advance and What Can Be Improvised: In a face-to-face setting, the teacher goes to class with a repertoire of strategies, discussion questions, and other resources jotted down in her lesson notes (or in her head). If students do not connect with one approach, she can improvise. In developing an online course, first timers have difficulty distinguishing between materials that need to be incorporated into the course text and things that can be communicated in impromptu announcements and discussion posts.

Understand the Emotional Needs of Online Learners: In the face-to-face classroom, good teachers know how to use subtle gestures and tone of voice to set an emotional tone that is conducive to learning. In preparing a course for delivery online, faculty are often inattentive to issues of tone. They need to learn how to use words, color, and images to communicate that their course welcomes intellectual risk-taking, inquiry, and deep thought.

Understand How To Keep Students Engaged and Oriented: Perhaps the most difficult challenge for faculty is to develop online sessions that are both explicit and engaging. Well-crafted sessions address the metacognitive dimension of learning. For example, callout boxes can be used to help learners see how discrete activities connect with larger learning goals.

Faculty members who are new to teaching online often focus on the limitations of the medium – overlooking types of learning that can only take place “at a distance.” For example, instead of doing all coursework online, students can get up from their computers to do activities around their homes and communities in geographically diverse settings. They can then report back. Within a relatively short time frame, class members can benefit from information or stories that peers have gathered from across the country or even the world. Groups can compare, contrast, analyze, debate, and synthesize their experiences into a multi-dimensional understanding of the topic.

Understand How The Course Looks and Feels From The Students’ Perspective: In the face-to-face setting, there are numerous cues about how a session is going – students’ body language and questions indicate when the learning is off course. But in an online course, serious problems can go unnoticed and compromise student learning. For this reason, we ask first-time course developers to solicit feedback through frequent formative assessment surveys. While the problems with a given session are still fresh in students’ minds, we use the following three questions at the end of each learning module:

  • How many hours did you spend working on this module?
  • What are your suggestions for improving this module? Please also fill us in on any problems you encountered with the technology, directions, or organization of materials.
  • Considering the objectives for this module, what do you think is the most important thing you learned? What questions remain?

The three-question format helps us disentangle technical and pedagogical glitches. Some things can be fixed in the moment, and students’ engagement intensifies when they realize that their input results in on-the-fly course revisions. Other issues are duly noted and “fixed” in the next “edition” of the course.

This skill set serves as the framework we use in consultation with faculty. But what does it look like in action? The following case study serves as an example.

Case Study

In 2003, Simmons launched a fully-online certificate program in Nutrition. Sports Psychology, taught by Professor Vicki Bacon, is one course in the program.

Well-regarded by her students and by others in her field, Vicki prides herself on her ability to “walk into a classroom, quickly size up the dynamic and mold the classroom experience accordingly.” Her courses are pedagogically progressive and take a liberal arts approach to health science learning. She makes extensive use of novels (A River Runs Through It), films (“Fearless”), community-based interviews, and case studies. Course discussions are shaped by open-ended questions that have no clear answer – queries thoughtfully designed to engage students in inquiry, reflection, and critical thinking.

Vicki’s class was first taught live on Simmons’ campus and then piloted online. Modifications were made in response to formative assessment, and the course was taught a second time online in the spring of 2005.

Challenges: The Sports Psychology course faced a number of barriers to success in its online debut. This was the department’s first foray into distance learning. Other departments had already taken the plunge into web-based distance learning, but in the absence of an institutional mechanism for intentional information-sharing, communication among faculty and departmental administrators about distance learning took place on an ad hoc basis.

Other challenges involved gaps in support at the institutional level. Academic Technology was in the process of hiring two full-time instructional designers to work with faculty, but at the time that Vicki was authoring a first draft of her course there was insufficient support in place. In retrospect, all involved acknowledged the need for more training, modeling, and guidance prior to the course development phase.

In addition, both the department and Vicki assumed that the project entailed “putting the course online.”  In reality, as Vicki noted during her interview, online course development involves rethinking fundamental aspects of oneself as a teacher and how to best engage students in learning.

Finally, because she had never taken or facilitated an online course before, it was difficult for Vicki to know what was required of her. Perhaps her biggest challenge was learning how to teach in a context in which she was unable to “read” the expressions and reactions of her students. While her skill at reading a room served her quite well in the classroom environment, it hindered her ability to author course materials that anticipated the needs of virtual students.

As mentioned previously, online course development constitutes a new genre of writing for most academics – both the process and the product differ from their previous experiences authoring books, scholarly articles, book reviews, or even email messages and PowerPoint presentations. The text Vicki produced for the pilot version was skeletal. The outline was explicit, but the narrative that helps students connect the dots was noticeably absent. This is not unusual for a first-time online course author. All three faculty members interviewed for this article mentioned that translating “lecture notes” into a coherent online learning experience for students was one of their biggest hurdles.

Predictably, the course got off to a bumpy start. Course modules pointed students to articles, case studies, and lecture notes, but failed to set the context for learning. Participation lagged – students submitted the required work, but their learning and level of engagement stagnated. Vicki expressed frustration that the students were failing to “take it to the next level.” She was concerned that these students’ discussions, reflections, and questions were not indicative of the type of learning she usually observes in her classes – conceptual understanding and insight did not seem to build from one module to the next.

Weekly formative assessment, gathered through WebCT surveys, confirmed what was already evident: students were not engaged, they didn’t come away from the modules having grasped the key concepts, and they were often confused about what they should be doing.

Intervention and Revisions: Fortunately, as these challenges unfolded, Simmons College was increasing its infrastructure for faculty support. As the newly hired instructional designers, we made it one of our first tasks to provide Vicki with the guidance and support she needed to succeed. In addition to face-to-face consultation and coaching, we also introduced her to the literature about best practices in online teaching.

The Evolution of an Activity: The following example presents the evolution of one assignment, illustrating how we worked with Vicki to turn it into a successful experience of learning through inquiry.

The genogram assignment required students to use Inspiration software to construct a diagram of their own family’s roles and dynamics. The purpose of this assignment was to help students examine their family history and reflect on potential “hot buttons”  that might impede their ability to work with a client.

Pilot Version: Directions for the assignment, in the first iteration of the course, read as follows:

You should complete construction on your family genogram this week. In the discussion forum, first post about your experience developing your own genogram. Given your experience, what do you think is the genogram’s value for client assessment? Then, review your classmates’ posts and post at least one reply to another thread.

Implementation of the assignment, along with the formative assessment, quickly revealed that students were struggling. Because there was no on-site demonstration, it took students longer to learn how to use the software. Because students weren’t explicitly told to attach their genogram files to their posts, they couldn’t understand details in peers’ comments on the experience and had no basis for comparative discussion. Because this was the first week of the class and community norms were still in flux, they felt awkward sharing personal details about family dynamics. Finally, because the assignment guidelines and discussion prompt were vague, the discussion fell flat.

The following are typical student comments from formative assessment surveys conducted during the pilot:

“Things are too scattered around.”  “I was confused with this module.”  “I tried to develop a conversation … and until the last day received little to no feedback.”

As an “on the fly” change in response to formative assessment, Vicki decided to extend the discussion into a second week – this time encouraging students to post their genograms. But at best this was damage control – before the course was offered again, Vicki worked with Deborah Cotler to revise and reformat the entire course, including the genogram assignment.

Online Course Revised: After analyzing students’ formative feedback, Deborah and Vicki realized that the goals for the assignment were unclear – both for the students and for Vicki. For example, the stated goal was for students to identify prior life experiences that might affect their ability to work with clients on certain issues. But the assignment’s discussion prompt also asked students to consider the value of using client genograms as a tool for assessment.

Deborah asked Vicki to describe how she would teach the assignment in the context of a face-to-face class. Vicki said that she would probably begin the discussion by focusing on what students learned by doing their own genograms and then ask follow-up questions to extend the conversation to cover the value of genograms in a sports psychology context. But in the online context, absent facilitation in the moment, presenting both discussion topics at once resulted in confusion about the assignment.

Deborah worked with Vicki to hone the assignment to make the rationale, process for implementation, and expectations explicit. They also moved the genogram assignment to the third week of the class, allowing time for community-building before asking students to disclose personal family information. Comments made during the second round of formative assessment indicate dramatic improvement:

“I learned to look at the possible conflicts I can have with patients because of their beliefs and lifestyles. I did realize this before, but this module made me focus and think about the possibility of this happening in my clinical practice.”

“Another great application of our learning to real life. It’s great to apply this knowledge to a real person and see how it actually fits in real time. My confidence about applying this to my patients outside of this class is growing.”

At the end of the semester, course evaluation comments were equally gratifying:

“Dr. Bacon was the best facilitator in my entire Simmons College online experience. She was extremely insightful and provided food for thought in several of the modules. It was encouraging that she responded to all the modules. This gave us a feedback as to that we are on the right track.”

Support for Developers of Online Learning: What’s Helpful?

To get a better understanding of what we were already doing “right,” we asked during faculty interviews which aspects of our support had been most helpful (see “What Helped?” video). Based on this feedback and our own observations, we offer the following suggestions:

Establish optimal conditions for dialogue. Before you begin working in depth with a faculty member, point them to the literature that informs your approach to online pedagogy. We find that when faculty members come to the table with a foundational understanding of the principles that guide your approach, the dialogue starts at a much more productive level.

Articulate goals for student understanding and skill development. By identifying learning goals at the outset of the project, and frequently reassessing them, you will ensure that course materials and activities support the desired learning.

Clarify how students will learn. Brainstorm ideas for what students will do or experience to further their understanding of course concepts. Identify, in advance, the artifacts of learning (discussion posts, work samples, chat logs, etc.) that will provide the professor with insight into students’ learning needs and progress toward goals. Help faculty stay cognizant of the fact that, in the online environment, you can never be too explicit in writing up assignment directions – but that doesn’t mean that assignments need to take an objectivistic approach to learning. The assignment tasks need to be crystal clear, but the process of enacting those tasks – projects, research, discussion, reflection, etc. – will ideally engage students in constructivist meaning-making.

Work with faculty as writers. The most critical turning point for many faculty members is the moment they recognize this effort as an act of authorship. Suggest a process for authorship and help them develop a consistent format for session modules. Model a sequence for authorship that begins with analysis of students’ ideas. For example, instead of beginning with “what I want to say,” begin with “what are common student misconceptions, and where do students tend to struggle?” Then develop the course with these patterns of need in mind. Help them reflect on the desired class culture, or sense of community, and what needs to be included in the course to achieve that dynamic.

Work with faculty as revisers. Just as an author would never publish an academic paper without multiple rounds of revision, a course author must be prepared to revise the course based on feedback from others. Offer to be a reviewer. Encourage the faculty member to solicit peers as additional reviewers.

Final Words

Collaborating with course authors can be an immensely satisfying experience. When the pieces fall into place and an online course runs well, the result is intensely generative. Faculty are discovering that, rather than increasing the distance between them and their students, web-enhanced learning engenders the type of personalized learning that is at the heart of Simmons’ mission. According to Mary Jane Treacy,

I have never learned as much about a group of students in all my years at Simmons College. I am just amazed by what I know about them – and also amazed by how they’re coming together, getting close, but also bumping elbows, and how they’re getting closer to me. It feels very, very good.  It’s the right thing to do.

At Simmons, we have had the pleasure of enjoying many such positive partnerships. It is our hope that the suggestions and experiences we have detailed will assist you in your own consultative work with liberal arts faculty.

[1] Paul Hagner, “Faculty Engagement and Support in the New Learning Environment,” Educause Review (September/October 2000), 31.

[2] Though beyond the scope of this article, a set of suggested guiding questions we developed for administrators and faculty involved in developing online programs is available at http://my.simmons.edu/services/technology/ptrc/pdf/educause04_handouts.pdf.

[3] Video clips from interviews we conducted in preparation for this article are available online at http://my.simmons.edu/services/technology/ptrc/resources/articles.shtml.

[4] Doug Brent, “Teaching as performance in the electronic classroom,” First Monday 10, no. 4 (2005): 60, http://firstmonday.org/issues/issue10_4.

Taking Culture Seriously: Educating and Inspiring the Technological Imagination

Posted December 12th, 2005 by Anne Balsamo, University of Southern California

Introduction: On the Relationship of Technology and Culture

Ignorance costs.

Cultural ignorance — of language, of history, and of geo-political contexts — costs real money.

Microsoft learned this lesson the hard way. A map of India included in the Windows 95 OS represented a small territory in a different shade of green from the rest of the country. The territory is, in fact, strongly disputed between the Kashmiri people and the Indian government; but Microsoft designers inadvertently settled the dispute in favor of one side. Assigning the territory (roughly 8 pixels in size on the digital map) a different shade of green signified that the territory was definitely not part of India. The product was immediately banned in India, and Microsoft had no choice but to recall 200,000 copies. With the release of another version of its famous operating system, Microsoft again learned the cost of cultural ignorance. A Spanish-language version of Windows XP OS marketed to Latin American consumers presented users with three options to identify gender: “non-specified,” “male,” or “bitch.” In a different part of the world, with yet a different product, Microsoft again was forced to recall several thousand units. In this case the recall became necessary when the Saudi Arabian government took offense at the use of a Koran chant as a soundtrack element in a Microsoft video game. The reported estimate of lost revenue from these blunders was in the millions of dollars.[1]

These examples illustrate the very real ways in which cultural ignorance costs money and good will in the big business of technological innovation. In this case, several seemingly insignificant details incorporated into state-of-the-art digital applications not only resulted in the recall of several widely distributed products and damage to a global brand, but also demonstrated a grand failure of multicultural intelligence within the ranks of a multinational corporation.

Although it is tempting to deploy these examples as a contribution to the popular pastime of Microsoft bashing, that response is neither creative nor particularly insightful. Rather, I use the examples of the costliness of a multinational corporation’s cultural blunders to assert that the process of technological innovation must take culture seriously. Moreover, I argue that the process of technological innovation is not solely about the design and development of new products or services, but rather is the very process that creates the cultures that we inhabit around the globe.

Technology is not an epiphenomenon of contemporary culture, but rather is deeply intertwined with the conditions of human existence across the globe. Although we are now more than a century past the dawn of the industrial age, the global distribution of the benefits of industrialism, i.e., basic health and subsistence-level resources, remains disturbingly uneven. In considering the significant loss of life due to recent hurricanes in the southern U.S., it is clear that the demarcation between rich and poor does not map simply onto the division between the global North and South. The tragedy revealed a wide-scale ignorance of the reality of the technological situation of people living in those regions. Evacuation orders were not only late in coming; they also addressed only those who were already technologically endowed with the means to flee to safer ground, i.e., the automobile, or those who had access to other technological resources, such as planes, trains, or buses. When lives are at stake, which is often the case with the deployment of large-scale or new technologies, it is ethically imperative that the technological imagination explicitly consider cultural, social, and human consequences. This imagination must be trained to imagine the unimaginable—that is, to actively imagine unintended consequences.

When developing new technologies, culture needs to be taken into consideration at an even more basic level: as the foundation upon which the technological imagination is formed in the first place. I define the technological imagination as a character of mind and creative practice of those who use, analyze, design, and develop technologies.[2] It is a quality of mind that grasps the doubled nature of technology: as determining and determined, as both autonomous of and subservient to human goals. This imagination embraces the possibility of multiple and contradictory effects. This is the quality of mind that enables people to think with technology, to transform what is known into what is possible, and to evaluate the consequences of such creation from multiple perspectives.

The Interdisciplinary Education of the Technological Imagination

Every discipline within the contemporary university has been transformed by the development of new technologies, whether technology now becomes an “object” of study, as in the humanities and legal studies; a tool of knowledge production, as in the social and medical sciences; or a domain of new disciplinary knowledge, as in the engineering sciences, cinema, and communication studies. This means that every discipline within the university has something important to contribute to the development of new technologies.

Universities need to actively educate and inspire researchers, teachers and students to develop a robust technological imagination. This is an educated “quality of mind” that is by nature thoroughly interdisciplinary. To understand technology deeply one needs to apprehend it from multiple perspectives: the historical, the social, the cultural, as well as the technical, instrumental and the material. We must develop interdisciplinary research and educational programs that enact and teach skills of creative synthesis of the important insights from a range of disciplines in the service of producing incisive critique of what has already been done. From this critique emerges the understanding of what is to be done. In this formulation, the traditional role of criticism is expanded. No longer an end in itself, criticism of what has already been done is a step in the process of determining what needs to be done differently in the future. Our educational programs need to teach skills of critical thinking that lead to creative proposals for doing things differently. Then we need to teach students methods for doing things differently with technology: how technologies are built, how they are implemented, how they are reproduced and how they affect cultural arrangements. This is the foundation of innovative research and new knowledge production. This is the work of the university-educated technological imagination.


Figure 1: How the university contributes to significant cultural change through the development of new technologies

Educational programs that seek to develop a robust technological imagination must include training in 1) the history of technology, 2) critical frameworks for assessing technology and identifying effects, 3) creative and methodological use of technological tools, 4) pedagogical activities and exercises that create new technological applications, devices, and services, 5) architectural and virtual spaces for social exchange and creative production, and 6) international studies and policy analysis that provide appropriate cultural and institutional contexts of assessment of effects. This is the necessary multidisciplinary foundation for the development of new technologies.

Moreover, there is a category of technology—what might be labeled technologies of literacy—that serves as the stage for the elaboration, reproduction, performance, and dissemination of culture across the globe. Technologies of literacy include the development of pedagogical methods for educating literate citizens who not only understand the technologies already available, but who will be equipped with the intellectual foundation and habits of mind to respond to and use the new technologies that will become commonplace in the future. This is a crucial dimension of the education of life-long learners. Thus, these educational programs must experiment with and develop innovative pedagogies that engage multiple intelligences: the social, cultural, and emotional, as well as the cognitive and the technical. Furthermore, these pedagogies must utilize the full range of new technologies that enable multiple modes of expression in the production of educational materials and educational output: visual, textual, aural, corporeal, and spatial. In this way these programs both draw on new technological literacies and engage faculty and students in the creation of the literacies of the future.

In a research context, the manifestation of this imagination comes through the collaboration of faculty and researchers from different disciplines working together on projects of social and cultural significance to create human-centric technologies. The output of their research may take several forms: innovative technological devices, applications, research monographs, presentations, demonstrations, performances, and installations. The guiding strategy for all these research projects is that they “take culture seriously.” Culture serves as both the context for the formulation of the research problem in the first place, and as the domain within which significant technological developments will unfold. In this way, this kind of technology-based research understands its ethical dimensions and acknowledges its ethical responsibilities.

To do this right, we need to ground these interdisciplinary efforts in new ways of thinking about technology. We need a new educational philosophy that can guide our efforts to create “original synners”—students who can synthesize information from multiple perspectives.[3] We need to develop new institutional structures for research and new pedagogies that support the development of the technological imagination and inspire its practical application. We need new analytical frameworks that enable us to imagine the multiple consequences of the deployment of new technologies. I also argue that we need to specify the ways in which all of us within the university are accountable for the future of technological development. Designers and engineers need to address their cultural responsibilities. Humanists and social scientists must contribute creative direction as well as critical analyses. In an effort to suggest a starting point for new multidisciplinary collaborative applied technology-based research projects that take culture seriously, I offer the following three broad questions:

What are the most pressing cultural issues within the US and across the world?
All technologies rearrange culture. We know that new technologies are especially useful in facilitating interactions among people from different cultures. How is the project of cultural reproduction served by new technologies? How will current as well as traditional cultural memories be preserved over time? How should we choose what to forget? What role does narrative play in the technological reproduction of culture? How is narrative itself a technology of culture? What new narrative devices/applications need to be developed to aid the reproduction of culture? The use of new digital devices for entertainment and pleasure yields contradictory effects. While some people in the developed world enjoy an expanded range of mobility, enabled by the development of mobile communication devices, others become more sedentary and confined within a limited orbit. Through the use of global telecommunication networks, people can expand their global awareness through virtual visits. What are the cultural possibilities and consequences of virtual mobility? What is the future of embodied play and entertainment? What implications does this have for the design of playgrounds, digitally-augmented performance spaces, and the development of creative toys? What are the implications of virtual tourism for the reproduction of privilege and mobility? What are the cultural possibilities of technologically-augmented reality?

What are the literacies of the 21st century?
Literacy is a technological phenomenon. The development of new technologies of communication and expression not only influences but demands the development of new literacies. These literacies do not compete with traditional print-based literacies, but rather build on and complement them. Current undergraduate students will become the next generation of scholars and researchers who will go on to develop new technologies of literacy, new genres and devices of cultural expression, and new forms of scholarship and research. How will we prepare them for this important cultural work? What technologies can be developed to teach basic literacy? What new kinds of reading devices will be useful in the future? How will our educational materials need to change to address the many kinds of literacy that will be required of future generations: reading, writing, digital, technological, multimedia? What will the textbook of the future look like? What are the possibilities of multi-player distributed gaming for the development of educational experiences?

What will scholarship look like in 10-15 years? 
Interdisciplinary collaborations and research provoke the need to develop new forms of scholarship, publications and other modes of cultural outreach. These new forms in turn offer an opportunity to experiment with modes of expression made possible by the development of new digital technologies. In the process, new forms of knowledge production emerge. New forms of scholarship will require the development of new authoring and publishing tools. We already know that authoring and designing are merging; what kinds of digital authoring environments are needed to support scholarship across the curriculum? Collaborative scholarship is a global phenomenon: how can social networking applications be used for scholarly and educational purposes? These social networking applications facilitate communication among scholars and lay people, thus offering a stage for the forging of radically new collaborations for the production of knowledge. Traversing the binary distinction between “scholar” and “amateur” promises to transform the educational scene within the university, effectively opening up the university to the world in unprecedented ways. How can the communication of scholarship and new research be enhanced through the development of multilingual digital applications, widely distributed digital archives, and new collaboration platforms?  What are the stages for knowledge transfer from the university to the broader public, which now includes so-called “amateurs” who are also actively engaged in new knowledge construction (through the development of folksonomies, for example)?

A trained technological imagination is the critical foundation required by the next generation of technologically and culturally literate scholars, scientists, engineers, humanists, teachers, artists, policy makers, leaders, and global citizens. Creating research programs and new curricula that explicitly address the education of the technological imagination is how the university will contribute to significant cultural change.

Instead of a Bridge, How about a Collaboratory?

In 1959, when C.P. Snow first described the gulf between the sciences and the humanities as a “two culture” problem, he implored educators to find ways to bridge the divide.[4] He took pains not to blame one side or the other for the failure to communicate because he believed that neither “the scientists” nor the “literary intellectuals” had an adequate framework for addressing significant world problems. In the intervening half-century since the publication of Snow’s manifesto there have been several attempts to bridge the “two culture” divide. While some of these attempts resulted in spectacular failures (“The Science Wars” of the early- to mid-1990s), others represent modest but on-going interventions (the Society for Literature, Science and the Arts).[5] Science and Technology Studies (STS) programs are noteworthy academic programs that train students to investigate the cultural and social implications of science and technology. Few if any of these programs or institutional experiments have successfully brought humanists, social scientists, scientists, and engineers together—as peers—to collaborate on the production of new applied research that results in the creation of new technologies. Future attempts to bridge the two cultures will have limited success as long as these groups of scholars continue to see themselves as standing on opposite sides of the divide, or if the groups continue to regard each other as hierarchically advantaged or disadvantaged. I believe that the time is right to take up Snow’s challenge once again, not to work on building bridges per se, but rather to create a new place for the practice of multidisciplinary, collaborative technology-based research.

In 1989, a professor at the University of Virginia coined the term collaboratory to describe a new institutional structure for collaborative research. As of Fall 2005, there are dozens of collaboratories around the world, most of which are virtual spaces that utilize digital network technologies to support the collaboration among researchers at distant physical locations. Many of these collaboratories are actually collaborations among laboratories located around the world, where the individual laboratories are (presumably) still organized in the typical fashion around a single PI’s research or a single topic.

To date the collaboratories that involve humanities scholars focus almost exclusively on humanities computing research, where the projects involve the development and use of a high-end digital infrastructure for digitizing, archiving and searching specialized collections of historic materials, most typically books, manuscripts, and images. While these efforts and others such as the various “digital library” projects are absolutely necessary and valuable, they represent only one vector of research that unites the humanistic with the technological.

In 2002, a group of humanities program directors formed a virtual collaboratory called HASTAC: the Humanities, Arts, Science and Technology Advanced Collaboratory, designed to promote the development of humane technologies and technological humanism.[6] The programs participating in HASTAC have each attempted to create some sort of institutional space for collaborative research involving humanists and technologists. The efforts include humanities computing programs as well as interdisciplinary humanities institutes that have a particular focus on science and technology.

Inspired by HASTAC discussions and meetings, I assert that there is a critical need to create physical collaboratories that bring humanists, artists, media producers, and technologists together to build human-centric technologies. This requires a physical space where researchers from multiple disciplines work together as peers to design, prototype, and actually fabricate new technologies. By combining the critical methods of the humanities and social sciences with innovative engineering/design methods such as rapid prototyping and user-centered design, these collaborators will create innovative methodologies. Thus, the research output includes not simply new technology-based projects and demonstrations, but also insights into the nature of interdisciplinary collaboration and the creation of new methodologies for collaboration. Instead of a single PI, the business of the collaboratory would be coordinated by a representative group of researchers whose interests span the disciplinary spectrum: humanities, social and cognitive sciences, arts, engineering, and sciences. As participants in this collaboratory, researchers from various disciplines each bring something important to the collaborations:

Special role of the humanist: Contributes expertise in the assessment and critique of the ethical, social, and practical affordances of new technologies; provides expertise on the process of meaning-making which is central to the development of successful new technologies; provides appropriate historical contextualization.

Special role of the social and cognitive scientist: Contributes expertise in the assessment of social impact and in the analysis of institutional, policy, and global effects of the development and deployment of new technologies; addresses the cognitive impact of new technologies; provides methods for analyzing social uses.

Special role of the technologist: Contributes expertise in the innovation of new devices and applications; provides analytical skills in the assessment of problem formation and solution design; demonstrates methods of design, creation, and prototyping; recommends specific tools, processes, and materials.

Special role of the scientist: Contributes expertise in the development of new theoretical possibilities; provides methodologies for assessing and evaluating implementation efforts, and for formulating possible (theoretical) outcomes; develops experiments with new materials; contributes understanding about environmental impacts and waste management.

Special role of the artist: Contributes expertise in the performance, expression, and demonstration of technological insights; provides skills in different modes of engagement: the tactile, the visual, the kinesthetic, and the aural.

The goal is to create space for the constitution of a research community that collaborates on technology-based projects that take culture seriously. While it is tempting to offer a list of suggested projects, doing so would undermine one of the critical components of the collaborative effort. While any participant can suggest a project, the project must be, in effect, adopted by the community. That is to say, there needs to be consensus that a project is important to pursue. This, of course, is the basis of all good research; but it is rare that humanists, artists, and social scientists have a voice in this kind of evaluation of technology-based research projects. It is rarer still that they have peer status as researchers who will design, build, and fabricate new technologies. This is one of the important innovations of such a collaboratory. The output of these research projects might include typical research monographs, but also public demonstrations, new pedagogical technologies, and new technologies of literacy. All the collaborators will serve as important “technology-translators” who can help make the meaning of new technologies more accessible to a wider public, both within and outside the academy.

The social engineering of this endeavor is a crucial element of its success. The price of admission to this collaboratory is an individual’s commitment to embrace collaborative work. A key requirement of the research participants is that they work against the facile division of labor that would have the humanists doing the “critique,” the technologists doing the building, and the artists offering art direction. While there is a special role to be played by each participant, all must be willing – indeed, eager – to learn new skills, new analytical frameworks, new methods, and new practices. A personal commitment to life-long learning is the foundation for these collaborations. Each participant must be willing to uphold the ethical foundation of multidisciplinary work: intellectual flexibility, intellectual generosity, intellectual confidence, and intellectual humility. Only by doing so will the collaborations result in the kind of work where the sum is greater than the parts, and where the technological imagination can be freely exercised and employed to create futures that are desirable for all people around the world, not just for those who are already privileged and technologically empowered.

Excerpted from Chapter 1, “The Technological Imagination Revisited,” of Designing Culture: A Work of the Technological Imagination, by Anne Balsamo (Duke University Press, forthcoming).

Footnotes:
[1] Jo Best, “How eight pixels cost Microsoft millions,” c|net News.com, http://news.com.com/How+eight+pixels+cost+Microsoft+millions/2100-1014_3-5316664.html.

[2] The resonance with C. Wright Mills’ notion of the “Sociological Imagination” is intentional here. C. Wright Mills, The Sociological Imagination (London: Oxford UP, 1959). See also Michel Benamou, “Notes on the Technological Imagination,” in Teresa De Lauretis, Andreas Huyssen, and Kathleen Woodward, eds., The Technological Imagination: Theories and Fictions (Madison, WI: Coda Press, 1980), 65-75.

[3] This is an explicit reference to Pat Cadigan’s novel Synners (New York: HarperCollins, 1991). For a more complete discussion of the education of original synners, see “Engineering Cultural Studies: The Postdisciplinary Adventures of Mindplayers, Fools, and Others,” in Science + Culture: Doing Cultural Studies of Science, Technology and Medicine, eds. Sharon Traweek and Roddey Reid (New York: Routledge, 2000), 259-274.

[4] C.P. Snow, The Two Cultures: and a Second Look (New York: Cambridge University Press, 1963).

[5]  http://slsa.press.jhu.edu

[6]  http://www.hastac.org

Technology as Epistemology

Posted December 12th, 2005 by Peter Schilling, Amherst College

Early in the 20th century, Gertrude Stein wrote that America was the oldest country because it was the first to arrive at the new century. Today’s students have formed their habits of mind by interacting with information that is digital and networked. They are, in a way, older than their teachers, whose relationships with information are governed by earlier generations of technology. There is more: not only do our students possess skills and experiences that previous generations do not, but the very neurological structures and pathways they have developed as part of their learning are based on the technologies they use to create, store, and disseminate information. Importantly, these pathways, and the categories, taxonomies, and other tools they use for thinking, are different from those used by their teachers.

[A UserFriendly comic strip appeared here; http://www.userfriendly.org indicates use in this manner is not an infringement of the creator’s intellectual property.]

To say that “new technology is changing the way we think” is as obvious as it is ambiguous. While it may be popular, and accurate, to complain that Microsoft Word’s grammar checker has a greater influence on American English than any teacher, curriculum, or book, I would like to consider the relationship between technology and thinking explicitly in the context of education, where the mission is to help students learn to think.

Let us start with the role that patterns and categories play in learning and knowing. Although the patterns and categories we use are never perfect ways of creating meaning, they influence the way we think, remember, and anticipate information. For instance, in biology we divide the world into domain, kingdom, phylum, class, order, family, genus, and species – the final category being a division based on the ability to reproduce sexually. For this reason, we have the families of Canidae and Felidae, dogs and cats. If, in our world of cloning and other forms of assisted reproduction, we instead divided the world primarily by means of locomotion, dogs and cats would both be in one group, the digitigrade. (I suspect that, no matter how we categorized them, the digitigrade with the longer nose and floppy ears would still chase the digitigrade that purrs and flicks its tail.) In addition, the particular way we learn information, as well as when in our lives we learn it, creates specific neural pathways (or patterns) in our brains. Once these patterns and pathways become too familiar or set, however, we become less adept at seeing information that does not fit the pattern. At times we may even start adding phantom data to fill in gaps. It is very important to keep this in mind.

All of our cognitive tools help us perceive our world and sort the flood of information that continually flows across our senses. We regularly filter and winnow this information in order to focus, group, and extract meaning. If our brain and senses did not do this, we would be overwhelmed by our inability to differentiate foreground from background.

In the photo of the two dogs on the log, we can differentiate the dogs from the woods that surround them. We have a sense of the field of vision in the photo, and perceive that one of the dogs is standing closer to the viewer than the other. We know that the trees are wood and the dogs are not. We also know that this is a photo on a computer screen and that it is unlikely that either dog will start chasing a squirrel.

Neurologist and author Oliver Sacks described the cognitive and neurological development of a man, blind since childhood, who regains his sight in his 50s. The once blind man, Virgil, cannot do all of the things with the dog photo we described above. Sacks shows that, for Virgil, information does not follow the same neural pathways that it does for other sighted adults. However, once Virgil can feel a scene with his hands, such as the contents of a room or a person’s face, he can then describe the information that he sees. So, while his eyes function properly, his brain has developed strategies and pathways for processing information that do not accommodate visual data.[1]

Time and experience train our senses to interpret information. They also lead to the development of a facility (or opportunity, from an illusionist’s point of view) to fill in information not available to our senses. Optical illusions are perhaps the most widely-known demonstration of this kind of learned behavior. Our mind fills in or adds information so that we can perceive depth, relationships, and other data not actually present in an image or scene.

The mind also fills in context, drawing, for instance, on our familiarity with the tools used to create and disseminate information. So, while patterns and categories are necessary for us to sort through information and find meaning, once we have created them they can be hard to put aside. In such cases, we cannot see familiar information apart from the categories and meanings we have associated with it.

Much has been said and written about the importance of categories and patterns for thinking. The National Research Council has reported on “research demonstrating that when a series of events are presented in a random sequence, people reorder them . . . the mind creates categories for processing information . . . the mind imposes structure on the information available from experience.”[2]

It is problematic when we lose sight of the constructs we bring to our interactions with the data around us, but it is hard not to. What Nietzsche said about metaphors holds equally true for the patterns we use to formulate meaning.

What, then, is truth? A mobile army of metaphors, metonyms, and anthropomorphisms—in short, a sum of human relations which have been enhanced, transposed, and embellished poetically and rhetorically, and which after long use seem firm, canonical, and obligatory to a people: truths are illusions about which one has forgotten that this is what they are; metaphors which are worn out and without sensuous power; coins which have lost their pictures and now matter only as metal, no longer as coins.[3]

The patterns and categories we use can constrict our ability to understand new things. For instance, Salman Rushdie points out in Midnight’s Children, a novel about Indian culture, that any people whose word for “yesterday” is the same as their word for “tomorrow” cannot be said to have a firm grip on time,[4] yet academics studying Rushdie’s novel are tempted to develop a timeline of the events of the story. Similarly, the U.S. publisher of Gabriel García Márquez’s One Hundred Years of Solitude has added a family tree to the English-language edition of the novel,[5] perhaps missing the point that in a book where twenty-one characters share the same name, individual identity is not really the key to understanding.

Similarly, we tend to use known patterns to help us learn or manage new information. Context and prior knowledge affect the ways in which we establish meaning: had one come across this image of the saffron gates in New York’s Central Park anytime before February 2005, one would likely have assumed that Photoshop had been used to create it. After February 2005, that would no longer be the reaction. The geese in the foreground are now as likely to be the work of Photoshop as the gates themselves.

For centuries, humans have used various technologies to help manage data, whether Incan knots or Egyptian hieroglyphs. The introduction of new technologies, therefore, is an important part of the context in which we set meaning for new information. For this reason, although we have had stories about three-headed dogs in our culture from Cerberus to Fluffy, today most viewers of this photo of a three-headed dog will (hopefully) immediately consider it a product of image-editing software.

Education has the contradictory tasks of teaching us to work within patterns but also to think beyond them. If we are not careful, disciplinary thinking can slip into rote formalism, a mere act of classifying data with established taxonomies. For instance, students exposed for years to narrative will likely incorporate narrative patterns into the way they anticipate information. Consider, for instance, this Hyundai commercial: try stopping the video every few seconds and narrating the unfolding scene yourself. Although there is no dialogue, you will probably find that you can tell a fairly detailed story on your own.

The same phenomenon of filling in information gaps occurs when we try to proofread our own writing (by which I mean to plead forgiveness for any errors in this text. . .).

These claims can be overstated, however. We may recall, for instance, reports in the popular press about research at Cambridge University purporting to show that we can recognize words when all letters other than the first and last are jumbled. After the press coverage, however, others easily refuted the research, showing, among other things, that it was not done at Cambridge, that it does not work for all languages, that it does not work when all the letters are capitals, and that it does not work when letters are simply removed.

That said, the way we learn, when we learn, and the technologies we use to learn all influence what we know as well as the neural pathways we use when accessing our knowledge. Researchers such as Wayne Reeves have emphasized the differences in the ways that experts and novices in a given area or topic solve problems and react to information.[6]

As part of a well-known 1965 study on thought and choice in chess, de Groot noticed that, when a chess master, a proficient chess player, and a novice were shown a mid-game chessboard for five seconds with all pieces in place, the master could recall the positions of sixteen pieces, the proficient player eight, and the novice four. When each was given a second five-second look at the same board, they doubled the number of pieces and locations they could recall. However, when the same subjects were shown a board with all the pieces randomly placed, each could recall pieces and positions only at the level of the novice.[7]

Analogous studies have been done with mathematicians, physicists, and historians, though with the emphasis on the ways in which experts and novices approach information differently. In short, experts can chunk information in ways novices cannot, and they can access and apply appropriate overarching principles, laws, and methods to new data, which, again, novices cannot.
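One way to make the idea of chunking concrete is to think of expert memory as compression against a library of familiar patterns. The toy model below (entirely invented; real chess cognition is far richer) holds memory to a fixed number of slots and shows why structured positions favor the expert while random ones do not:

```python
# Toy model: memory holds a fixed number of "slots". A novice spends one slot
# per piece; an expert can spend one slot on any configuration she recognizes.
# All patterns and positions here are invented for illustration.

MEMORY_SLOTS = 4

KNOWN_PATTERNS = {  # an "expert library" of familiar mid-game structures
    ("Ke1", "Rh1"): "castled-kingside shell",
    ("Pe4", "Pd4"): "classical center",
    ("Nf3", "Ng5"): "knight sortie",
}

def recall(position, patterns):
    """Greedily cover the position with known chunks, then single pieces."""
    remembered, slots = [], MEMORY_SLOTS
    remaining = set(position)
    for chunk in patterns:
        if slots and set(chunk) <= remaining:
            remembered += list(chunk)      # one slot buys the whole chunk
            remaining -= set(chunk)
            slots -= 1
    remembered += list(remaining)[:slots]  # leftover slots: one piece each
    return remembered

structured = ["Ke1", "Rh1", "Pe4", "Pd4", "Nf3", "Ng5", "Bc4", "Qd2"]
random_board = ["Ka5", "Rb7", "Pg2", "Ph3", "Nc6", "Bd1", "Qf5", "Re3"]

print(len(recall(structured, KNOWN_PATTERNS)))   # expert: 7 pieces from 4 slots
print(len(recall(random_board, KNOWN_PATTERNS))) # no chunks apply: only 4 pieces
print(len(recall(structured, {})))               # novice (empty library): 4 pieces
```

On a random board, the expert's pattern library buys nothing, and her recall collapses to the novice's, which is just what de Groot observed.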

Research conducted by Eleanor Maguire of University College London has shown that London taxi drivers have an enlarged posterior hippocampus, a region believed to be associated with “spatial navigation” and to serve as a “memory bank” for the spatial representation of the complex maze of London’s streets. There is a positive correlation between the number of years on the job and the size of the posterior hippocampal region.[8] Additional research conducted by Lewis R. Baxter and colleagues at UCLA in 2001 demonstrated that psychotherapy (talk therapy) changed the brains of subjects in much the same way as psychotropic medication.[9]

In 2003, research at the University of Rochester demonstrated that action video games, such as single-player shooters, train the brain to better process certain types of visual information. Students who played video games for as little as two weeks showed a greater facility for seeing and processing multiple stimuli in their peripheral vision.[10]

As reported in Nature in 2004, a neural pattern has also been associated with language learners. According to Andrea Mechelli, a neuroscientist at University College London, “[t]he grey matter in this region increases in bilinguals relative to monolinguals — this is particularly true in early bilinguals who learned a second language early in life. . . . The degree is correlated with the proficiency achieved.”[11] Learning another language after 35 years of age also alters the brain, but the change is not as pronounced as in early learners. Mechelli said their research “reinforces the idea that it is better to learn early rather than late because the brain is more capable of adjusting or accommodating new languages by changing structurally. This ability of the brain decreases with time.”[12]

But what happens when the content of one’s expertise, developed over years of study and research using one generation of technology, gets separated from the tools now used to generate and disseminate information within that content area? The following QTVR versions of a chessboard may prove disorienting for those who, while masters of chess, are novices to QTVR.

Chess Example 1:
Chess Example 2:

Not only do today’s novices use technologies unavailable at the time their teachers were becoming masters, but the quantity and types of information students need to assess have also expanded exponentially. Part of this shift in learning brought about by today’s digital, networked information results from the fact that we now often work, share, and search at the level of data as opposed to the level of conclusions, narratives, catalogs, or indices. That is, students are no longer limited to browsing a card catalogue to find just those books that their college library had the resources to purchase, that were described with Library of Congress subject terms as addressing a particular topic, and that a publishing house had selected for publication by an author who had created a narrative by sorting and synthesizing years’ worth of research into a comprehensible whole. They can use search and collaboration tools to get at the primary source data as well as a wider variety of studies of the data. By so doing, they can wade through, and remove, four levels of filters between themselves and the information.

What it means to master a field of study has changed. Rather than developing an encyclopedic knowledge of the literature on a single topic, today’s students need to know how to find, evaluate, and contextualize information in numerous formats on increasingly interdisciplinary topics; they also need to know how to locate and use the underlying data, as well as the technology to sort and present it. To teach the history of the English language today, for instance, an instructor would most likely want to train students to use popular Geographic Information Systems (GIS): to create data layers of audio files demonstrating the pronunciation of Old English and Old Norse town names, point data for the towns’ locations, and data on the slope and aspect of northwestern Britain, informed by some knowledge of the military technology of pre-Norman England. Reading a book or listening to a lecture on the topic is no longer sufficient. An educated person today knows how to access and use the appropriate tools and the appropriate data, and understands the abilities and limitations of each. It is likely that the ways in which they know these things, as well as the ways in which they go about finding, assimilating, and representing information, utilize specific areas of their brains. Photoshop and other such tools change the way we process visual data.
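As a rough illustration of what such a layer might contain, the sketch below builds a single GeoJSON-style point feature in Python. The town, coordinates, audio file name, and terrain attributes are all invented for the example; a real GIS layer would hold many such features:

```python
# An invented example of a GIS point layer for pronunciation data:
# each feature ties a location to an audio file and terrain attributes.
import json

features = [
    {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [-0.61, 54.48]},  # lon, lat (approximate)
        "properties": {
            "town": "Whitby",                 # a Yorkshire town with an Old Norse name
            "audio": "whitby_old_norse.mp3",  # hypothetical audio file
            "slope_degrees": 4.2,             # invented terrain attribute
            "aspect": "southeast",            # invented terrain attribute
        },
    },
]

layer = {"type": "FeatureCollection", "features": features}
print(json.dumps(layer, indent=2))
```

The point of the exercise is less the syntax than the habit of mind: the student assembles and interrogates layered primary data rather than receiving a finished narrative about it.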

Epistemology, and epistemological inquiry, has a long history, arcing from superstition toward what Gurvitch called the “social frameworks of knowledge.”[13] Technology has always been an essential component of how we think, of our thinking about our thinking, and of what we teach. When the technology changes, as it is changing now, its role becomes all the more evident. For the new generation of thinkers, knowledge includes del.icio.us and other forms of immediate, readily available folksonomies. Colleges continue to push writing as the skill students must have to be articulate thinkers. Yet they risk stagnating in an epistemological eddy if they do not also recognize digital video production, database programming, and even the underlying functionality of MediaWiki as necessary for developing the cognitive abilities to create and share knowledge.
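To gesture at the kind of “underlying functionality” in question: at its core, a folksonomy like del.icio.us is a many-to-many mapping between user-chosen tags and resources, with shared categories emerging from the aggregate rather than from a fixed taxonomy. A minimal sketch in Python, with invented bookmarks:

```python
# A minimal folksonomy model: free-form tags attached to URLs,
# indexed so that resources can be retrieved by emergent category.
from collections import defaultdict

bookmarks = [  # (url, tags) pairs, invented for illustration
    ("http://example.edu/syllabus", ["education", "writing"]),
    ("http://example.org/gis-tutorial", ["education", "gis", "maps"]),
    ("http://example.com/video-editing", ["video", "production"]),
]

tag_index = defaultdict(set)
for url, tags in bookmarks:
    for tag in tags:
        tag_index[tag].add(url)

# Retrieval by user-made category rather than a librarian-assigned heading:
print(sorted(tag_index["education"]))
# ['http://example.edu/syllabus', 'http://example.org/gis-tutorial']
```

Unlike the fixed biological taxonomy discussed earlier, the categories here are created after the fact by the users themselves, which is precisely what makes them a different epistemological instrument.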

As educators, we can discuss the ways in which learning changes the brain. Following Nietzsche, we can also reason that it is hard to change our patterns and categories of thought. Nevertheless, we must perceive our own technology-dependent constructs in order to integrate the valuable information and skills we have developed over a lifetime with the new tools now used to create and share knowledge.

NOTES

  1. Oliver Sacks, “To See and Not See,” An Anthropologist on Mars (New York: Vintage Books, 1995), 108-152.
  2. National Research Council, Committee on Learning Research and Educational Practice et al., How People Learn: Brain, Mind, Experience, and School: Expanded Edition (Washington, DC: National Academies Press, 2000), http://www.nap.edu/books/0309070368/html/124.html – /125.html.
  3. Friedrich Nietzsche, “On Truth and Lie in an Extra-Moral Sense,” The Portable Nietzsche, trans. Walter Kaufmann (New York: Penguin Books, 1982), 46-47.
  4. Salman Rushdie, Midnight’s Children (New York: Avon Books, 1980), 123.
  5. Gabriel García Márquez, One Hundred Years of Solitude, trans. Gregory Rabassa (New York: Harper and Row Publishers, 1970).
  6. See Wayne Reeves, Learner-Centered Design: A Cognitive View of Managing Complexity in Product, Information, and Environmental Design (Sage Publications, Inc., 1999).
  7. See Adriaan de Groot, Thought and Choice in Chess (The Hague: Mouton De Gruyter, 1965).
  8. Eleanor A. Maguire et al., “Navigation-related structural change in the hippocampi of taxi drivers,” Proceedings of the National Academy of Sciences 97, no. 8 (April 11, 2000): 4398-4403, http://www.pnas.org/cgi/content/full/97/8/4398.
  9. Arthur L. Brody, MD; Sanjaya Saxena, MD; Paula Stoessel, PhD; Laurie A. Gillies, PhD; Lynn A. Fairbanks, PhD; Shervin Alborzian, BS; Michael E. Phelps, PhD; Sung-Cheng Huang, PhD; Hsiao-Ming Wu, PhD; Matthew L. Ho, BS; Mai K. Ho; Scott C. Au, BS; Karron Maidment, RN; Lewis R. Baxter, Jr, MD, “Regional Brain Metabolic Changes in Patients With Major Depression Treated With Either Paroxetine or Interpersonal Therapy,” Archives of General Psychiatry 58, no. 7 (2001): 631-640, http://archpsyc.ama-assn.org/cgi/content/abstract/58/7/631.
  10. “Altered perception: The science of video gaming,” Currents (University of Rochester, 2003), http://www.rochester.edu/pr/Currents/V31/V31SI/story04.html.
  11. Andrea Mechelli et al., “Neurolinguistics: Structural plasticity in the bilingual brain,” Nature 431 (14 October 2004): 757. Abstract at http://www.nature.com/nature/journal/v431/n7010/abs/431757a.html.
  12. Ibid.
  13. See Georges Gurvitch, The Social Frameworks of Knowledge, trans. Margaret A. Thompson and Kenneth A. Thompson, with an introductory essay by Kenneth A. Thompson (New York: Harper & Row, 1971).