Mixxer: Skype-enabled Language Exchange Site

by Rachel Smith, NMC: The New Media Consortium


Mixxer allows individual students or entire classes to participate in a language exchange. Once a profile is created, the user can search for a language partner; for example, a native English speaker learning German can find a native German speaker learning English. The site asks all users to install Skype in order to communicate. (Skype is a free voice-over-IP program that is very reliable, has excellent sound quality, and runs on Mac OS X, Windows, or Linux; see www.skype.com.) The site also takes advantage of the API released by Skype, which allows users of the site to see which potential language partners are currently online and to contact any of them directly by clicking on their Skype name.
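
As a rough illustration of the click-to-call idea, the sketch below builds an HTML link using the skype: URI scheme that the Skype client registers when installed. Mixxer's own integration relies on the Skype API, so treat this as a generic, hypothetical example rather than the site's actual code; the user name is invented.

    # A minimal sketch of click-to-call via the skype: URI scheme (illustrative
    # only -- this is not Mixxer's actual implementation).
    from html import escape

    def skype_call_link(skype_name, label=None):
        """Return an HTML anchor that asks the locally installed Skype client
        to place a call; replacing '?call' with '?chat' opens a text chat."""
        uri = "skype:%s?call" % escape(skype_name)
        return '<a href="%s">%s</a>' % (uri, escape(label or skype_name))

    print(skype_call_link("example.partner", "Call your language partner"))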

Mixxer is not the only language exchange site on the web. By making use of Skype and its API, however, it makes speaking with a language partner as easy as a single click of the mouse for a student or teacher.

Questions or ideas are welcomed. Contact the site’s creator, Todd Bryant, at bryantt@dickinson.edu.

Aphrodisias in Late Antiquity Online Edition 2004

by Dr Gabriel Bodard, King's College London

Aphrodisias in Late Antiquity 2004

Revised second edition (online)

Aphrodisias in Late Antiquity was published in 1989 in the Society for the Promotion of Roman Studies Monographs series, by Charlotte Roueché with contributions by Joyce Reynolds. The second edition, expanded and revised, was published online in 2004 as:

Charlotte Roueché, Aphrodisias in Late Antiquity: The Late Roman and Byzantine Inscriptions, revised second edition, 2004 (http://insaph.kcl.ac.uk/ala2004), ISBN 1897747179 (abbreviated to ‘ala2004’)

The editions and commentary are by Charlotte Roueché except for text 1, by Joyce Reynolds. The electronic editorial conventions were developed by Tom Elliott (and the EpiDoc Collaborative), and the website and the supporting materials are the work of Gabriel Bodard, Paul Spence, and colleagues in the Centre for Computing in the Humanities, King’s College London.

The publication consists of:

  • A full catalogue of the inscriptions, illustrated far more richly than would be possible in a conventional volume, and indexed by significant terms, lexical words, locations, dates, and bibliographical concordance;
  • Commentary and historical narrative, epigraphic introductions and prosopographical appendices, fully cross-referenced and hyper-linked across the site;
  • Reference materials including bibliography, links, clickable plans of the site, and reproductions of epigraphic notebooks;
  • A free text search engine, in case what you are looking for is not in the very full indices.
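
Since the texts follow the EpiDoc (TEI XML) conventions noted above, they lend themselves to programmatic reuse. The sketch below assumes a standard EpiDoc file with the usual TEI namespace and div types; it is a generic illustration, not code from the ala2004 project.

    # A hedged sketch of reading an EpiDoc (TEI XML) inscription file; the element
    # names are the standard EpiDoc ones, assumed rather than taken from ala2004.
    import xml.etree.ElementTree as ET

    TEI = {"tei": "http://www.tei-c.org/ns/1.0"}

    def edition_and_translation(path):
        """Return the plain text of the edition and translation divs, if present."""
        root = ET.parse(path).getroot()
        def text_of(div_type):
            div = root.find(".//tei:div[@type='%s']" % div_type, TEI)
            return "".join(div.itertext()).strip() if div is not None else ""
        return text_of("edition"), text_of("translation")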

Digital Gaming Teaching and Research at Michigan State

by Rachel Smith, NMC: The New Media Consortium


In Fall 2005, the Department of Telecommunication, Information Studies and Media at Michigan State University launched the Game Design and Development Specialization. The specialization brings together undergraduate students from digital media arts and technology (within the Department of Telecommunication, Information Studies and Media), Computer Science, and Studio Art. Combining these perspectives and talents, students explore the history, social impacts, technology, design fundamentals, and art of team-based digital game production.

The specialization culminates in a Collaborative Design capstone course in which students work in teams with a client through the full design cycle: specification, design, prototyping, implementation, testing, and documentation. Throughout the specialization, students are expected to develop a portfolio of game design and development work and to explore internship opportunities. The core undergraduate Game Design and Development curriculum is enhanced by additional classes in human-computer interaction and user-centered design, interactive media design, and digital storytelling. In addition to these courses, students have the opportunity to participate in FuturePlay, an international conference on the future of game design, game technology, and game research sponsored and hosted by Michigan State University.

Of digital gaming as an academic field, MSU states: “Video games have grown to become an important medium in our society. Like film, radio, television, and the Web before it, games have become worthy of academic study, analysis and research. In academia today it is the hot research focus across many diverse disciplines, including education, computer science, communication, psychology, and economics, just to name a few.”

An example of the importance of digital games in social research and other fields is MSU’s Digital Media Arts and Technology project “Girls as Game Designers,” funded by the National Science Foundation, which investigates how girls and boys approach games and how games affect them. One of the projects that has grown out of the Girls as Game Designers research is the “Alien Games” project, which “will be… integrated, out of this world fun interactive science learning and play about extraterrestrials and astrobiology designed to appeal to high school and middle school girls to interest them in astrobiology, space science, and game designers . . . .”

ArtXplore

by Rachel Smith, NMC: The New Media Consortium

Professor Susan Tennant of the Informatics Research Institute at Indiana University-Purdue University Indianapolis (IUPUI) has collaborated with the Indianapolis Museum of Art, the university’s Herron School of Art and Design, and Purdue’s School of Science at IUPUI to develop ArtXplore, a multimedia program running on a hand-held PDA. The interface highlights information on 16 objects in 12 galleries at the Indianapolis Museum of Art and provides the information wirelessly to the museum visitor. Additionally, museum patrons are able to review their experience and provide comments to the curator directly from the PDA.

ArtXplore provides audio and visual content to the museum patron, including graphics, animation, video, and panoramas. The device also provides data to the museum, such as how long visitors look at the art objects, which objects are most popular, traffic and time-of-day patterns, and other information such as gender differences in viewing art.
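
As a rough illustration of how such usage data might be derived, the sketch below aggregates dwell time per object from simple view logs; the event fields and object names are hypothetical, not ArtXplore's actual schema.

    # Hypothetical sketch of aggregating PDA viewing logs (not ArtXplore's schema).
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class ViewEvent:
        object_id: str   # one of the 16 featured objects
        start: float     # timestamps in seconds
        end: float

    def dwell_time_by_object(events):
        """Total seconds spent on each object, sorted as a rough popularity ranking."""
        totals = defaultdict(float)
        for e in events:
            totals[e.object_id] += e.end - e.start
        return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

    print(dwell_time_by_object([ViewEvent("gallery3-sculpture", 0, 95),
                                ViewEvent("gallery7-painting", 100, 160)]))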

The content was provided by the Indianapolis Museum of Art and the Herron School of Art and Design. Students from the Herron School, the School of Science, and the School of Informatics were also involved in content development and program design.

The Physical Universe

by Rachel Smith, NMC: The New Media Consortium

Created to accompany the eponymous textbook (The Physical Universe, by Konrad B. Krauskopf and Arthur Beiser; McGraw-Hill), this extensive site includes animations and figures for each chapter, along with study questions and exercises.  The site stands on its own with introductory text for each topic that sets the stage for exploration within subject areas such as the scientific method, matter and energy, the atom, the Periodic Law, and the solar system, among others.

Some animations and figures include audio descriptions (text transcripts are also provided). Although the site is linked closely to the book, it could be used as a resource to learn more about a particular topic even without the book. Goals for learning and follow-up questions help guide the learner, and key concepts are outlined with important terms linked to the glossary.

The Digital Classicist

by Dr Gabriel Bodard, King's College London

The Digital Classicist is a web-based hub for scholars and students interested in the application of Humanities Computing to research on the ancient world. The main purpose of the site is to offer guidelines and suggestions on major technical issues. The site also features news about events, publications (print and electronic), and other developments in the field.

The main website contains an annotated list of classical projects that utilize computing technology, and links to freely available tools and resources of use to scholars engaging in such projects. The core of the project is the Wiki FAQ: an interactive platform for the building of a Frequently Asked Questions list, with answers and other suggestions offered by members of the community, and collectively authored work-in-progress guidelines and reports.

The site’s creators seek to encourage the growth of a community of practice, which is open to everyone interested in the topic, regardless of skill or experience in technical matters, and language of contribution. As a general principle, key sections of the website or summaries of discussions will, where possible, be translated into the major languages of European scholarship: e.g. English, French, German, and Italian.

The Digital Classicist is hosted by the Centre for Computing in the Humanities at King’s College London.

Harvard@Home

by Gina Siesing, Tufts University


Harvard@Home offers dozens of rich multimedia programs, each of which allows in-depth exploration of a particular intellectual or artistic arena. Programs are freely viewable by the public and are designed to be of general interest to those with curiosity about a variety of fields. Based on faculty and expert lectures and symposia, the programs feature highly edited streamed video segments, supplemented by additional program materials, including textual descriptions, glossaries, timelines, maps, and QuickTime VR (QTVR).

All programs are open to the world. Topics run the gamut of disciplinary and interdisciplinary interests; each program offers from 45 minutes to many hours of video content. Harvard@Home publishes approximately one new program per month. Visitors may join the program announcement list to hear about new releases: athome@fas.harvard.edu.

UO Channel

by Rachel Smith, NMC: The New Media Consortium


The UO Channel at the University of Oregon is a gateway to video programs that reflect the quality, creativity, and diversity of academic and cultural life at the university. Featured programs include lectures, interviews, performances, symposia, documentary productions, and more. In addition to video/streaming media on demand, the UO Channel also provides access to campus radio stations.

Visitors will find such varied offerings as a recent (November 2005) conversation with film director David Lynch and a not-so-recent (circa 1934) recruitment video for the university. Invited speakers include authors, activists, and scholars, among others.

The UO Channel is a collaborative service of the UO Libraries, Public and Media Relations, and the Computing Center.

Writely

by Bryan Alexander, Center for Educational Technology, NITLE

The slow rise of wikis as popular authoring tools produced a series of commercial ventures, such as SocialText. Recently a new group of Web 2.0-oriented, wiki-like services has appeared. Writely is a good example of these. It offers an easy-to-access, clean-looking, collaborative writing space, much like JotSpot Live and Writeboard. Users can quickly set up a web page focused on a single document.

Unlike most wiki implementations, and contrary to the old wiki ethos, Writely excludes all potential editors save those invited by the creator via email. Editing offers advanced wiki features, including multiple versions and rollback. Unlike most wikis, contributors are identifiable. Menus are unobtrusive, dropping down via AJAX.
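
The invitation-only model just described can be pictured with a small sketch; the class below is a hypothetical illustration of that behavior, not Writely's actual code or API.

    # Hypothetical sketch of an invitation-only editing model (illustrative only).
    class SharedDocument:
        def __init__(self, creator):
            self.creator = creator
            self.editors = {creator}           # the creator is the only editor at first

        def invite(self, inviter, invitee):
            if inviter != self.creator:        # only the creator extends invitations
                raise PermissionError("only the document creator may invite editors")
            self.editors.add(invitee)          # in practice, triggered by an email invitation

        def can_edit(self, user):
            return user in self.editors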

From an information services support perspective, Writely has several advantages. As a hosted site, it looks relatively noncommercial. For backup and migration purposes, Writely exports content in several formats, including HTML and Word. In terms of training, the creation and editing window is a clear, simple WYSIWYG editor, easier to get into and more recognizable for non-specialists than many wiki implementations. Writely can also ingest uploaded documents in common formats, such as Word.

The disadvantages are fairly evident. Like most Web 2.0 products, it is in beta. As with any externally hosted service, academic users are at the mercy of someone else's business decisions. For Mac users, Safari has issues and may not be supported. Overall, however, the ease of use and solid export options make Writely worth the experiment.

TK3: A Tool to (Re)Compose

by Virginia Kuhn, University of Southern California

For a copy of this review in TK3, please click here. If you need to download the TK3 reader, please click here. The download is quick and easy.

Ever since reading George Landow’s Hypertext 2.0 in the mid-1990s, I have been leery of application-specific theory, at least when it is not acknowledged as such. Hypertext 2.0 includes copious references to StorySpace, a software program that allows one to create hypertext and to view and manipulate nodes in various spatial arrangements. These references are not accompanied by full disclosure of Landow’s role in the program’s development; it was developed by academics, many of whom work with Landow. Though it is not clear whether he is a developer himself, there is undoubtedly a reciprocal relationship between him and Eastgate Systems, Inc., the group that created and distributes the program. As I scanned the index entry for StorySpace, noticing that it is longer than that for the World Wide Web, I felt slightly duped at having shelled out $200+ for this rather limited program, particularly when the Eastgate site was hawking Landow’s book. Now, however, I find myself in a curiously similar position. That is to say, much of the scholarly work I have done lately centers on a software program with which I have been teaching for several semesters and in which I created my dissertation. So while my reasons for this partiality ought to be evident if I have done my job in this review, I still feel the need to state my bias from the outset. Fair warning then: I have a rather solid (though non-monetary) investment in this platform. And perhaps it is time for academics to be intimately involved in software development, as I believe Landow was with StorySpace, as I am with TK3; if we educators do not help to shape the tools we use, they will be shaped for us and will, no doubt, be imposed upon us by corporate entities such as Microsoft.

When I first encountered TK3, I was teaching at a large urban university and was looking for a way to have students engage with emergent digital technologies; this required them to not only consume or analyze multimodal “texts” but to create them as well. Bob Stein, the founder of NightKitchen, TK3’s developer, was the keynote speaker at the 2003 Computers & Writing Conference held at Purdue; I was there facilitating a pre-conference workshop on technology training for new teachers. Another of the pre-conference activities was a collaborative e-book, Digital Publishing F5 | Refreshed, created by Bob Stein, David Blakesley and thirty other intellectuals—both publishers and academics—interested in the scholarship, pedagogy and publishing potentials of emergent technologies. Indeed, since the New London Group’s 1996 manifesto calling for educators to attend to “multiliteracy,” writing scholars have paid increasing attention to interrogating reading and writing with new media.

I was keenly interested in TK3’s pedagogical possibilities since it promised to allow all students, regardless of their technological prowess, to create digital texts. TK3 seemed an alternative to web page construction: I could not count on students knowing HTML—indeed, there is no baseline for students’ technological proficiency—and I did not want (nor did I have the time or expertise) to teach programming, so I was stymied. I had begun requiring students to compose visual compositions, but I had to allow a range of vehicles for completing their projects, and this was problematic for many reasons. Although there are seemingly numerous programs that allow multimedia authoring, none of them was adequate for my needs, as I shall explain.

I tried TK3 in a course I taught called Writing in Cyberspace, and soon began requiring it in classes ranging from Media Literacy to Multicultural America. This program, meant to allow people with few computer skills to create electronic “books” that include verbal text, still images, sound and video clips, as well as web links and internal jumps, is by far the best of its type, particularly in terms of making meaning in several registers (aural, filmic, imagistic, textual). Its next version, Sophie, will improve upon its precursor and answer the pedagogical needs of those committed to teaching their students a wide range of literate practices; many of the changes came from feedback from those who have used the program. With TK3, one can easily become a new media producer and exhibit what the New London Group sees as “competent control” of new media texts. Indeed, the ability to contribute to the discourse of new media helps to foster the sort of advanced literacy appropriate to university-level studies; without this sort of transformative use, there is very little liberatory potential for literacy.

When I have mandated a particular program in a class, my students and I perform a careful review of the background of its developers. First, I want students to be aware of the programming choices behind the hardware and software used in class; secondly, I want students to confront the content and design choices necessary to create such “texts” themselves. As such, the provenance of the program—the maker of the platform and also the use for which it was made and has been subsequently deployed—is indeed germane.

Throughout Critical Theory of Technology, Andrew Feenberg argues that technology is ultimately ambivalent and can be used for disparate social and political ends. Even so, an inherent ideology guides programming choices—the surveillance features of many office-networked programs are a good example, since there is no technical reason for such features to exist. Thus, we must view interaction with technology not as an exchange between a person and a machine but rather as a dialogue between a human user and a human programmer.[1] No curricular materials, be they textbooks, software or hardware, are politically innocent or ideologically neutral; thus, it is important to teach students not only to critically “read” such “texts” but also to use them in a way that demonstrates self-consciousness as well as an awareness of both the possibilities and the limitations of constructing meaning in a given environment. I firmly believe that we, as academics, cannot afford simply to use platforms created by corporations, or produced for some other purpose, in university-level education. We have to step up to the plate and answer for the ideological implications inherent in the vehicles we require students to use, as well as remain attuned to the specifics of the projects we assign to be carried out in these vehicles.

Figure 1. Screen shot showing the “find” feature, which lists not only the search term but also the words immediately before and after. The list is also hot, so one click brings the reader to its reference.

Although I mostly use TK3’s authoring function in class (I have students create TK3 books as their course projects), the reader functions are at least as compelling and offer many teaching strategies. The reader functions take full advantage of the encyclopedic nature of the computer. There is a rich “find” function that allows one to search for words or phrases. The results are then listed with a few words that appear on either side of the searched word in order to give a better sense of the context in which each instance occurs. Further, the resulting list is “hot” so that one click takes you to the place at which the word occurs. The reader can highlight text, add sticky notes or produce a notebook in which she can copy excerpts that import replete with working citations. All of these markings are catalogued in a “marks” history; again this list is hot.

For a particularly good example of teacherly comments and directions for their access by students, see Cheryl Panelli’s web page that describes the ways in which her students ought to download and read her comments on their work. As a pedagogical tool, TK3 allows the teacher to approach the student text with the sort of seriousness with which she would approach a published text. As teachers, we are conditioned to engage with published writing via a highlighter, while we approach student writing with a red pen. TK3’s reader functions help to dismantle this dichotomy.

Reader functions aside, the ability to “write” new media is what I find most compelling about TK3. As the editors of Literacy: A Critical Sourcebook point out, computers call attention to “the relation between technology and literacy in an especially insistent fashion” and yet this relationship has always been a concern, from the technology of writing itself, to other vehicles of textual and pictographic delivery and dissemination.[2] In Chaucer’s day, they note, the act of reading was becoming more widespread but the act of writing was one left to experts. Further, as several essays in this anthology note, the liberatory potential of literacy cannot actualize without the ability to write; for instance, speaking of cultural critic bell hooks’s account of her education at the feet of her black female teachers, Lisa Delpit notes that hooks learned that she was “capable of not only participating in the mainstream, but of redirecting its currents.”[3]

This is why scholars such as Gregory Ulmer call for teachers to move students from literacy to “electracy,” a neologism he adopts to describe the types of abilities necessary to be conversant in digital technologies. Seeing digital technologies as inherently image-based, Ulmer teaches students to use both poststructuralist theory and memory work (psychoanalytic theory) to investigate the ways in which cultural structures shape their identities and desires. Students represent that work via web pages produced in response to a series of exercises that help make them “egents” in an electrate world. I subscribe to Ulmer’s methodology and yet I cannot, as I said, count on my students to be able to code web pages. Even if students were to use a variety of web composing programs such as FrontPage or Netscape’s Composer, there would be less opportunity to interrogate individual programs and, without a lot of training, the pages would be very rudimentary. TK3 simply gives the author far more control over the interplay between word and image as well as sound and video.
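
The keyword-in-context behavior of the “find” function described above can be approximated in a few lines; the sketch below is purely illustrative and has nothing to do with TK3’s internals.

    # An illustrative keyword-in-context search, loosely mimicking the "find"
    # results described above (a few words of context on either side, plus the
    # hit's position so that a click could jump straight to it).
    def find_in_context(text, term, window=4):
        words = text.split()
        hits = []
        for i, word in enumerate(words):
            if term.lower() in word.lower():
                before = " ".join(words[max(0, i - window):i])
                after = " ".join(words[i + 1:i + 1 + window])
                hits.append((i, "%s [%s] %s" % (before, word, after)))
        return hits

    sample = "the act of reading was becoming more widespread but the act of writing was left to experts"
    for position, context in find_in_context(sample, "writing"):
        print(position, context)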

There are several aspects of the program worth mentioning and they are as follows.

Book Metaphor: Because TK3 retains the book metaphor (while not being a slave to it), it does not sacrifice print literacy for bells and whistles. Unlike PowerPoint, for instance, there is no limit to the amount of text that fits on a screen. As a result, an author is not encouraged to reduce complex ideas to the bullet points so commonly found on PowerPoint slides. In addition, images may either be added into the “stream” of the text, anchored between words and moving with them, or added independently with the text wrapping around them. This latter feature adds visual elegance to the page even as it allows for more complexity in the relationship between text and image. One can teach print and visual literacy in a contrastive way, using word and image to inform each other and to highlight the potentials and limitations of each as semiotic systems. The text-friendliness of TK3 is crucial for those of us who do not wish to sacrifice print literacy even as we endeavor to enhance it with other forms of expression afforded by digital technologies. Edward Tufte argues that PowerPoint’s limited text capability causes the oversimplification of complex ideas and cites the breakdown of information among NASA engineers and business managers that ultimately led to the loss of the space shuttle Columbia.[4] While I believe Tufte overstates the case, it is important to remember that PowerPoint and Keynote are presentation programs, meant to be accompanied by a human presenter who supplies more detailed information. The Columbia incident is more an example of using the wrong tool, or at least of relying on PowerPoint alone to convey critical information. This is, indeed, rocket science. The problem occurs when such programs are used to produce stand-alone texts, such as those associated with college-level assignments. By contrast, TK3 may be used as a presentation aid but can also include complex ancillary documents.

Figure 2. Stuart Pate’s student project. He did not make use of the more linear book metaphor, and yet its availability in TK3 is important in that it does not abandon print literacy in favor of image and sound.

Media Resources: This feature is the key to the program’s brilliance. TK3 can act as a holding tank. One can import multiple types of files, including JPEGs, movie files, text files, sound clips, and web links. From there the author simply drags the assets into her book and manipulates them as she wishes. The media resources feature can be filled with a wide range of resources, allowing one to “write” with numerous modes of expression in a very deliberate way. Thus, the program fosters self-conscious decisions about both form and content. Further, since TK3 uses its own reader, the resulting text does not depend on various web browsers that might change the look of the e-book. Though the Internet can be used as a method of dissemination, the books retain the integrity of the author’s creation.

Complex Annotation: A stunning feature of the program is the ability to annotate. One can create both simple and complex annotations. A simple annotation is one in which a pop-up box contains only one resource: a word or an image. It may be attached to a word or an image in the main body of the book by means of an “annobeam.” This annotation, with its transparent annobeam, allows one to convey further information without sending the reader to a different page, from which she may not be able to return, even as it visually shows its origin without covering the rest of the page or disrupting the main body. The complex annotation takes this capacity to a higher level. A complex annotation is defined as one that includes more than one media element. Similar to a mini-book, it can house film and sound clips, text and images, and also links to other annotations or to outside web sources. Thus, one can create a complex annotation inside a complex annotation ad infinitum, enacting the sort of depth that characterizes the pinnacle of hypertext as Jerome McGann describes it in “The Rationale of Hypertext.”[5]
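
The nesting described here, annotations within annotations to arbitrary depth, maps naturally onto a recursive data structure. The sketch below is a hypothetical illustration of that idea, not TK3’s internal format; the class and field names are invented.

    # Hypothetical sketch of nested annotations (illustrative, not TK3's format).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class MediaElement:
        kind: str     # "text", "image", "sound", "video", or "weblink"
        source: str

    @dataclass
    class Annotation:
        anchor: str                                      # the word or image it is attached to
        elements: List[MediaElement] = field(default_factory=list)
        children: List["Annotation"] = field(default_factory=list)  # annotations within annotations

        def is_complex(self):
            # the review defines a complex annotation as one with more than one media element
            return len(self.elements) > 1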

Though the term “depth” is bandied about frequently in the discourse of hypertextual structures, McGann offers the smartest explication of what it actually might mean and how such capacity can exceed the work of the bound book. This capacity for depth of analysis is instantiated in TK3’s annotation features.

Using the example of the scholarly or critical edition, a staple of literary studies, McGann describes the inherent shortcoming of using a book to analyze a book due to the one-to-one ratio:

 Brilliantly conceived, these works are nonetheless infamously difficult to read and use. Their problems arise because they deploy a book form to study another book form. This symmetry between the tool and its subject forces the scholar to invent analytic mechanisms that must be displayed and engaged at the primary reading level — e.g., apparatus structures, descriptive bibliographies, calculi of variants, shorthand reference forms, and so forth. The critical edition’s apparatus, for example, exists only because no single book or manageable set of books can incorporate for analysis all of the relevant documents. In standard critical editions, the primary materials come before the reader in abbreviated and coded forms.

The problems grow more acute when readers want or need something beyond the semantic content of the primary textual materials — when one wants to hear the performance of a song or ballad, see a play, or look at the physical features of texts. Facsimile editions answer to some of these requirements, but once again the book form proves a stumbling block in many cases. Because the facsimile edition stands in a one-to-one relation to its original, it has minimal analytic power — in sharp contrast to the critical edition. Facsimile editions are most useful not as analytic engines, but as tools for increasing access to rare works.[6]

Programmatic Support: There are a number of premises under which I approach digital technologies and they are as follows:

  1. Literacy includes both reading and writing (decoding and encoding).
  2. Digital technologies change what it means to read and to write.
  3. We are on the verge of a new semiotic system that is rooted in print and image but exceeds them.
  4. Literacy cannot be unhooked from its materiality.
  5. Technology is never politically innocent, nor ideologically neutral.

Given these guiding beliefs, TK3’s developer is a strong asset to the program. With two large grants, one from the Mellon Foundation and one from the MacArthur Foundation, the Institute for the Future of the Book was formed to create the next generation of tools for digital scholarship. The Institute has launched many projects in addition to overseeing the development of the next generation of TK3, which will be called Sophie. Due out within months, Sophie will incorporate feedback and suggestions from the teachers, students, writers, and artists who have used TK3. It will be open source and networked, both of which will encourage user enhancement and wider use. The people who staff the Institute for the Future of the Book are smart, supportive, and responsive to the needs of the educators they serve; they are involved in numerous projects at the forefront of digital technologies, including digital textbooks, academic blogs, conversations with authors of prominent new books, and the archiving of art and photography. Their blog, if:book, is widely read and cited in many e-zines and on other blogs. Their work is collaborative and not driven by revenue and other corporate concerns; as such, they are great intellectual partners. They are a strong feature of TK3, a program that allows writers to both theorize and enact the types of literacies necessary for life in the 21st-century, wired world.

Figure 3. The opening screen of the Institute for the Future of the Book site; the Institute will release TK3’s next version, Sophie.

Notes

1. Andrew Feenberg, Critical Theory of Technology (Oxford: Oxford UP, 1992), 183.

2. Ellen Cushman et al., Literacy: A Critical Sourcebook (Boston: Bedford/St. Martin’s, 2001), 4.

3. Lisa Delpit, “The Politics of Teaching Literate Discourse,” in Literacy: A Critical Sourcebook (Boston: Bedford/St. Martin’s, 2001), 552.

4. Edward Tufte, “PowerPoint Does Rocket Science: Assessing the Quality and Credibility of Technical Reports” (Home Page, Selected Writings, 6 September 2005), http://www.edwardtufte.com/bboard/q-and-a-fetchmsg?msg_id=0001yB&topic_id=1. Tufte concludes that technical reports are superior to PowerPoint for conveying complex information, but this seems obvious since PowerPoint is presentation software and does not pretend to be a reporting format.

5. Jerome McGann, “The Rationale of Hypertext” http://www.iath.virginia.edu/public/jjm2f/rationale.html, 1995, screen 2.

6. Ibid.

Bibliography
Blakesley, David, et al. Digital Publishing F5 | Refreshed. West Lafayette: Parlor Press, 2003. http://www.parlorpress.com/digital.html

Landow, George. Hypertext 2.0: The Convergence of Contemporary Critical Theory and Technology. Baltimore: Johns Hopkins UP, 1992.

New London Group. Multiliteracies: Literacy Learning and the Design of Social Futures. New York: Routledge, 1999.

Ulmer, Gregory. Internet Invention: From Literacy to Electracy. New York: Longman, 2002.
