The Education Arcade

by Rachel Smith, NMC: The New Media Consortium


The Massachusetts Institute of Technology and the University of Wisconsin-Madison have joined forces to catalyze new, creative teaching and learning innovations around the next generation of commercially available educational electronic games. The Education Arcade, a two-year-old research and educational initiative established by leading scholars of computer and video games and of education at both universities, plans to focus its efforts by partnering with educational publishers, media companies, and game developers to produce new educational electronic games and make them available to a larger audience of students and their teachers and parents.

The Education Arcade’s mission has been to demonstrate the social, cultural, and educational potential of video games by initiating new game development projects, coordinating interdisciplinary research efforts, and informing public conversations about the broader and sometimes unexpected uses of this emerging art form in education. Having sponsored several annual conferences with the Entertainment Software Association at its E3 Expo in Los Angeles, and with a series of landmark research projects in the field now complete, the Education Arcade looks ahead to help drive new innovations with commercial partners.

Researchers at MIT previously explored key issues in the use of a wide variety of media in teaching and learning through the Games-to-Teach Project, a Microsoft-funded initiative with MIT Comparative Media Studies that ran from 2001 to 2003. The project resulted in a suite of conceptual frameworks designed to support learning across math, science, engineering, and humanities curricula. Working with top game designers from industry and with faculty across MIT’s five schools, researchers produced 15 game concepts with supporting pedagogy that showed how advanced math, science, and humanities content could be uniquely blended with state-of-the-art game play.

Several challenges have severely limited the broader development and availability of educational games in the market, including the collapse of the CD-ROM software market, the failure of educational media in retail spaces, strict state adoption requirements, expensive production costs, and limited collaboration across the variety of disciplines needed to create compelling and educationally viable interactive media. By working with leading textbook publishers, media companies, and game developers, the Education Arcade aims to help overcome these formidable challenges by focusing on an initial set of strategically targeted, educationally proven, and expertly produced online computer games that will be distributed to desktop computers and mobile devices.

By serving as the glue between university-based research and commercial product development, the Education Arcade is uniquely poised to make a profound impact on the production and use of games in the classroom and beyond. Education Arcade contributions to game production include (1) creative contextual development, (2) pedagogical and learning framework development, (3) curricular and teacher support, and (4) assessment and student evaluation studies.

Online Learning Highlight Videos

by Rachel Smith, NMC: The New Media Consortium


Two short videos prepared by the Online Learning department at the Rochester Institute of Technology (RIT) showcase the center’s activities. The first video, “Online Learning Showcase” (6 minutes long), is an overview of several projects at RIT. Topics include Pachyderm, the Student Response System, RIT’s course management systems approach, remote tutoring with Breeze Meeting, and blended learning courses. The second video, “A Conversation on Blended Learning” (5 minutes long), highlights the advantages of using technology to facilitate teamwork and social networking among deaf and hearing students—in this case, activities that were previously possible only with the assistance of an interpreter. The video tells the story of how technology not only allows these interactions to happen, but enables a deaf student to take on a leadership role in the class—without an interpreter.

Both videos may be viewed at http://online.rit.edu/faculty/

The Online Learning website is located at http://online.rit.edu

Mavericks: An Incorrigible History of Alberta

by Rachel Smith, NMC: The New Media Consortium


The Learning Commons at the University of Calgary has worked with the Glenbow Museum to create Mavericks: An Incorrigible History of Alberta, an extensive, interactive website that introduces the legendary tales and colorful personalities who shaped Alberta’s history and gave rise to its maverick nature.

The Learning Commons was founded in 1997 to develop and promote quality, innovative approaches to teaching and learning in higher education. The organization provides support for the academic community through professional development programs, curriculum and project support services, and multimedia and technology development.

Mavericks: An Incorrigible History of Alberta was created to connect to the grades four to seven social studies curriculum. The ideas for student activities follow a project-based and inquiry-based learning approach and are multidisciplinary, combining social studies with language arts, mathematics, science, art, and health.

The site presents nationally significant stories of the important people and events that have shaped the identity of Alberta. Over 545 primary source images, audio, and video clips of historical materials have been digitized to increase access to Glenbow’s collections and information resources on the history of Alberta. These historical primary source materials are first-hand, original, authentic accounts of the past. They are the actual records or evidence of history and allow students to become actively engaged by positioning people, places, ideas, and events within their historical context. There is also a section on historical maps and an interactive timeline.

The Mavericks online exhibition, part of Alberta’s centennial celebration, was created by the Learning Commons using the Pachyderm 2.0 software tool, a program that allows users to design educational, interactive multimedia presentations. The software links screens together, resizes images to appropriate dimensions, packages audio and video files, and incorporates Flash technology to create vibrant and informative presentations. The Learning Commons is a member of the Pachyderm 2.0 Project consortium, a collaboration of software developers, university library specialists, and museum experts who are developing and testing the program. D’Arcy Norman and Shawn Tse led the Mavericks initiative on behalf of the Learning Commons. King Chung Huang also contributed to the project. Flash programming was provided by Arts ISIT at the University of British Columbia.

The Daedalus Project

by Rachel Smith, NMC: The New Media Consortium


Research into educational gaming is leading to a deeper understanding of topics like gaming and engagement theory, the effect of using games in practice, and the structure of cooperation in gameplay. In the 2006 Horizon Report, the New Media Consortium and the Educause Learning Initiative describe the movement this way: “The past year has seen a subtle shift in the way educational gaming is perceived in higher education. A number of interesting examples have shown anecdotally that games can be very effective tools for learning. As a result, there is an increasing interest among scholars in researching the subject, not only to quantify the actual effect of using games to teach, but also to define the essence of gaming itself in order to better apply its principles to education. Educational gaming is no longer a fringe activity pursued only by extreme technophiles—it is emerging as a discipline unto itself, multifaceted and rich.”

One example of this research is the Daedalus Project, an ongoing study of Massively Multiplayer Online Role-Playing Game (MMORPG) players. MMORPGs, or MMOs, are a video game genre that allows thousands of people to interact, compete, and collaborate in an online virtual environment. Over the past six years, more than 40,000 MMORPG players have participated in the project by completing surveys about their playing style, habits, and preferences. Various topics have been examined, from gender-related motivation factors to the effect of running an in-game guild on one’s real-life experiences. The results of the research are available as reports sorted by topic.

Looking at Learning, Looking Together

by Joseph Ugoretz, Macaulay Honors College–CUNY


What does it mean for colleagues in very different disciplines, in a community college, to work together on a Scholarship of Teaching and Learning (SoTL) project? What happens when they join together to take seriously their students’ learning, their own learning as individual professors, and their collaborative learning? And what happens when they undertake an electronic publication of that work, a digital gallery?

With the support of the Visible Knowledge Project and the Center for New Designs in Learning and Scholarship at Georgetown University, two faculty from the Borough of Manhattan Community College (CUNY) have developed a website that documents student learning as well as their collaborative scholarship of teaching and learning, using the web as a medium to publish the process as well as the conclusions of their research into student-created digital storytelling projects.

Cyberinfrastructure = Hardware + Software + Bandwidth + People

by Michael Roy, Middlebury College

 

A report on the NERCOMP SIG workshop “Let No Good Deed Go Unpunished: Setting Up Centralized Computational Research Support,” 10/25/06

Introduction
Back to the Future of Research Computing

As Clifford Lynch pointed out at a recent CNI taskforce meeting, the roots of academic computing are in research. The formation of computing centers on our campuses was originally driven by faculty and students who needed access to computer systems in order to tackle research questions. It was only years later that the idea of computers being useful in teaching came into play. And once that idea took hold, it seemed that we forgot about the research origins of academic computing.

Lynch argues that the pendulum is swinging back again, as campuses nationwide report an increased interest in having libraries and computer centers provide meaningful, sustainable and programmatic support for the research enterprise across a wide range of disciplines.

At the NERCOMP meeting “Let No Good Deed Go Unpunished,” Leo Hill, Leslie Hitch, and Glenn Pierce from Northeastern University gave a presentation about how they planned for and implemented a university computer cluster that serves the research agendas of a wide array of Northeastern’s faculty.

The talks provided good information about the technology planning, the politics and the policy questions that arose, and placed the entire project within an economic model that is useful for analyzing a broad range of academic initiatives taking place on our campuses.

Key Questions:

  1. To what extent should support for research computing be centralized?
  2. If one runs a centralized research computing facility, how does one secure funding for it?
  3. What are some technology strategies for keeping these costs to a minimum?
  4. How can one justify the establishment of a centralized research facility in language that makes sense to academic administrators?
  5. How can this impulse be explained in terms of current trends in computation in particular and research in general?
  6. How do you allocate resources to map to institutional priorities?

Part One
On the Ground: Technical Considerations

Speaker: Leo Hill, Academic and Research Technology Consultant, Northeastern University

Slides available at https://myfiles.neu.edu/l.hill/deed/

How do you support research and high performance computing?
As a way into explaining why Northeastern took on the project of building a centralized computer cluster, Hill began his talk by making the claim that faculty are not experts in many of the technologies required to provide a robust cluster computing environment (operating systems, patches, security, networking). He also shared his impression that the National Science Foundation and other funding agencies increasingly look for centralized support as part of the overhead that they pay to universities.

In addition, a major benefit to a centralized facility is that a university can enjoy volume discounts for hardware and software, as well as for the considerable costs associated with creating a space to house a large cluster. These costs primarily revolve around power and air conditioning.

How did the process of designing this space work?
A Research Computing steering committee was created. The group’s job was to understand the needs of the greater community. They conducted interviews about present and future research projects of the Northeastern faculty, as a way to understand what sort of software and computational horsepower they would need. In analyzing the results of these interviews, they asked: Are there consistent terms? How can we resolve conflicting concepts? How do we translate these various desires into a viable service?

Their solution was to build a cluster that had the following features:

  1. Job management (queues)
  2. Ability to interactively run jobs (required for some users)
  3. Ability to support large files
  4. Ability to efficiently support large data sets (in excess of 4 GB)

As is true of all centrally-managed computational facilities, they had to factor in (and make trade-offs between) processing power and very large file storage. The list of software that the cluster would be supporting (see slides) was large but did not seem to exceed what most schools support on a range of devices on their network.
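
As a purely illustrative aside (not drawn from Northeastern’s actual deployment, which relied on a vendor-supplied platform and its batch scheduler), the short Python sketch below shows the basic idea behind the “job management (queues)” requirement: work is submitted to a shared queue, and a fixed pool of compute nodes drains it, so no single user can monopolize the cluster.

    # Toy illustration of batch job management on a shared cluster.
    # Not Northeastern's scheduler; just the queuing idea in miniature.
    import queue
    import threading
    import time

    # Shared queue of submitted jobs; each entry is (job id, simulated runtime).
    job_queue = queue.Queue()

    def compute_node(name):
        """Pull jobs off the shared queue until no work remains."""
        while True:
            try:
                job_id, runtime = job_queue.get_nowait()
            except queue.Empty:
                return
            time.sleep(runtime)  # stand-in for the real computation
            print(f"{name} finished {job_id}")
            job_queue.task_done()

    # Three hypothetical researchers each submit two jobs...
    for user in ("astro", "bioinf", "econ"):
        for i in range(2):
            job_queue.put((f"{user}-job{i}", 0.1))

    # ...and a small pool of "nodes" works through the backlog in order.
    nodes = [threading.Thread(target=compute_node, args=(f"node{n}",)) for n in range(2)]
    for node in nodes:
        node.start()
    job_queue.join()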

Once they had the hardware and software requirements in place, the team chose to develop an RFP (request for proposal) in order to collect bids from multiple vendors. Hill used a web-based service offered by HP (http://linuxhpc.org) for both developing and distributing the RFP. As cluster computing has matured into a commodity that one can buy, vendors have begun to provide data on the impact of their systems on air conditioning and power, allowing a better understanding of the overall set-up costs of a data center.

One of the more alarming aspects of the requirements of this project was that it all had to be accomplished with no new staff. This drove the team to look for a vendor-supported turnkey solution (they ended up choosing Dell with ROCKS as the platform). With no new staff, there has been some impact on existing services. The helpdesk now needs to be able to respond to new types of questions. System administration is accomplished by two existing staff who collectively dedicate roughly four hours per week to this service. They also needed to develop service level agreements around node downtime. How quickly should they respond if a single node goes down? What if the head end of the system is no longer functioning? Implicit in making software available is the support for that software, which has meant that they have also reinstated a dormant training program to explain how to work in this environment, and to provide some support for particular applications.

While the cluster is presently offered as a free service, the work on developing the cluster has triggered interest in and the development of other services at Northeastern. This includes selling rackspace in the datacenter, advanced programming support, and increased (and welcome) consultation on grant writing and equipment specifications.

Part Two
Campus Politics and Process
Speaker: Leslie Hitch, Director of Academic Technology, Northeastern University

Slides available at https://myfiles.neu.edu/l.hill/deed/

While Hill’s presentation provided useful insights into the actual process by which the particular hardware and software were selected, installed, and managed, Hitch’s talk focused on the institutional framework in which the project was carried out. Northeastern’s issues should be quite familiar to anyone working in higher ed today. The University’s academic plan calls for an increase in the quantity and quality of faculty research, and the project responds directly to that goal. It also calls for increased undergraduate involvement in research, which can be linked to this project as well. Advocates also linked the project to a possible boost in NEU’s rankings in US News & World Report, suggesting that one ignores research computing only at one’s peril.

While the project was driven partially by actual faculty demand, it also anticipated growth in need in the social sciences and humanities, which do not have the traditional funding streams that the scientists enjoy. (For more information, see the draft report on Cyberinfrastructure for the Humanities and Social Sciences, recently published by the American Council of Learned Societies.)

In order to design the system, Hitch’s team set out to find what is common among various scientists and social scientists—a perfectly fine question, and one that those wanting to document the complex working relationships among their various faculties would be well-advised to consider. The act of asking people about what they do with technology, and what they would like to do with technology, almost always reveals useful insights into the nature and structure of their disciplines.

While the list of differences (software, memory requirements, GUI vs. command line, support requirements) in this case was framed as a means of specifying a particular system, the differences can also be understood in terms of what is called “scholarly infrastructure,” based on Dan Atkins’s recent work for the NSF in describing “cyberinfrastructure.” The slide below—from Indiana University’s recent Educause presentation—suggests a useful way of visualizing what particular disciplines have in common, and how they differ.

Source: “Centralize Research Computing to Drive Innovation, Really,” a presentation by Thomas J. Hacker and Bradley C. Wheeler, Indiana University.

Of course, with increased bandwidth among our schools, the act of centralization need not necessarily stay within the campus. Couldn’t our faculty share infrastructure by discipline in multi-institutional facilities staffed by domain experts who can help with the domain-specific applications? What of the various national supercomputer centers? Why should we build campus-specific clusters if the NSF and others will provide for us through national centers?

One answer to this question lies in governance. For such centers to be sustainable, there needs to be a funding model in place, and a fair and agreed-upon system for allocating computational cycles and provisioning support. (Hitch provides the charge to their user group in her slides.)

Northeastern’s funding model, not yet fully articulated, is to be determined by its users. Northeastern has also decided to allow the users of the system to develop their own policy about the allocation of computational cycles. Since there is no new FTE attached to this project, they do not have to worry about how to allocate the provision of support!

One funding model under discussion links awareness of IT to sponsored research. How can IT be used to bring in more money for research? Is providing this service something that should be part of overhead? If so, how do you go about securing a portion of overhead to dedicate to this sort of facility?

If one believes that the future of academic research lies in the increased use of such facilities, the question of staffing these facilities becomes critical. Is it enough to fund centralized facilities just to avoid the costs of lots of little clusters and to promote outside funding, allowing faculty to raise more money? One needs to more fully understand the support needs of such a transformed enterprise. In the discussion, hard questions arose about who would be providing this sort of support. Who pays for these people? To whom do they report? Even harder, where do they come from? How do you find people who can do this kind of work with/for the faculty? Does shifting the research associate from the department to the central IT universe reduce the amount of freedom, control, and experimental play? How can one build into the structure of these new types of support positions the ability to raise funds, to do research, to stay engaged in the field?

Part Three
Academic Research Process and IT Services
Speaker: Glenn Pierce, Director, IS Strategy and Research, College of Criminal Justice, Northeastern University

The next session moved from the particulars of Northeastern’s technical and political environment to a broader reflection on the implications of centralized research computing support for the academic enterprise. Pierce began by using the history of other enterprises (most notably, banking) to suggest that there are profound changes underway that could (for many disciplines) completely transform their way of conducting research, and eventually affect what happens in the classroom.

Using language more familiar to a business school than to the usual IT conference, Pierce described the research process as a value/supply chain heavily dependent on IT investments and support. In this model, any break in the chain disrupts the process and slows the rate at which a faculty member can produce research, while new efficiencies (faster hardware, better software, training of faculty, hands-on IT support) speed it up.

 

Source: Weill, Peter and Marianne Broadbent. Leveraging the New Infrastructure: How Market Leaders Capitalize on IT. Boston: Harvard Business School Press, 1998.

In a slide reminiscent of the scholarly cyberinfrastructure slide Hitch used, one is able to see the core question of the day: where is the cut-off for central services? Fast-changing local applications? Shared standard IT applications? Shared IT services? For Pierce, central IT should aim to go as high up the pyramid as possible.

While Pierce acknowledges that it is a real challenge to imagine a world in which centralized IT has intimate knowledge about domain-specific applications, he also challenges colleges and universities to re-think what is meant by productivity, and to ask not what it costs to provide central IT support for research computing, but instead to ask what it costs NOT to provide it. He argues that faculty doing their own IT represents a loss in productivity and a lost opportunity, and that traditional measures of academic productivity (like traditional measures of productivity in other industries) do not capture the fact that entire industries can be changed, created, or eliminated altogether through the revolution afforded by the powers of computing.

One concrete example Pierce offers is Softricity, an application (like Citrix) that allows one to run applications locally, taking advantage of local computing resources, without installing the applications directly on the local machine. This fundamental change in how software can be distributed would require major changes both organizationally and financially. Pierce argues that the predominant model, in which all budgets across an institution rise and fall at the same rate, gets in the way of fundamental change. In the case of Softricity, meeting an increased demand for applications and data requires more money up front, and yet such arguments rarely succeed in an academic culture that approaches change incrementally. It is therefore difficult, if not impossible, to fundamentally re-tool to take advantage of the power and increased productivity enabled by centralized IT services.

If one accepts the argument that investing in central IT makes good business sense, and one is looking for other parts of the academic enterprise where one can point to increased productivity, Pierce suggests that the same productivity gains enjoyed by centrally-supported research initiatives can be (hypothetically) found in student education outcomes. This tantalizing claim, not backed up by examples, certainly seems worthy of further investigation.

So what keeps us from all changing overnight from our distributed model back to something that looks, to many, an awful lot like the old mainframe centralized model? Pierce identifies four major barriers to extending centrally-supported IT for research:

  1. The existing perception of IT service (many researchers simply do not believe central IT is up to the task)
  2. Current funding models that
    1. are balkanized
    2. measure costs rather than productivity
    3. make it difficult to measure or even see cost of lost opportunities
  3. Current planning models that suffer from the same problems as our funding models
  4. Anxiety over the loss of local control

Using the scholarly infrastructure model, Pierce made the point that the further one moves away from technical issues of hardware, operating systems and networking, and into the domain of discipline-specific software, the more involved faculty need to be in the planning process. He also makes the point that the sort of financial re-organization required to support this shift toward a centralized model requires a genuine partnership between the IT leadership and academic leadership. All of this is possible only if the campus really and truly believes that IT-supported research can fundamentally change for the better how we conduct research and eventually how we educate our students.

Conclusions
Possible Futures & Implications

What follows is a list of possible changes in the daily operations on campuses that embrace the idea of investing in the support of IT-supported research, and a few ideas for collaboration between campuses (or business opportunities):

  1. Change the way you distribute software to allow more ubiquitous access to software, using technologies such as Softricity or Citrix.
  2. More aggressively fund centralized resources such as clusters.
  3. Hire discipline-aware IT support staff who can work with faculty on research problems.

As our campuses become increasingly connected by high-speed networks, one can ask questions such as:

  1. Can we negotiate licenses with vendors that would allow us to consortially provide access to software?
  2. Can we create local clusters that multiple campuses can fund and support?
  3. Can discipline-specific support be organized consortially to allow, for example, an economist at School A who needs help with SAS to get that help from a SAS expert at School B?

What do cluster and research computing have to do with liberal arts education?
One can imagine protests about shifting institutional resources into IT-supported research computing. For some this will be seen as an unwelcome return to the early days of campus computing, when a disproportionate share of the support went to a few faculty from the handful of fields that had discovered how to use computers to facilitate their research. As in the first generation of campus computing, however, this trend may be a harbinger of demands that will arise across campus and across disciplines. If one takes seriously the propositions put forth in the recent American Council of Learned Societies report on cyberinfrastructure for the humanities and social sciences, this re-alignment of resources in support of changing requirements for scientific and quantitative research is very likely one of the re-alignments that will be required to support teaching, research, and scholarly communications in ALL disciplines.

Further Readings

Educause Resource Library on Cyberinfrastructure
http://www.educause.edu/Cyberinfrastructure/645?Parent_ID=803

“The new model for supporting research at Purdue University,” ECAR Publication (requires subscription)
http://www.educause.edu/LibraryDetailPage/666?ID=ECS0507

Beyond Productivity, National Academy of Sciences
William J. Mitchell, Alan S. Inouye, and Marjory S. Blumenthal, Editors, Committee on Information Technology and Creativity, National Research Council, 2003.
http://books.nap.edu/html/beyond_productivity/

Speaker Contact Information

Leo Hill, Academic and Research Technology Consultant, Northeastern University l.hill@neu.edu

Leslie Hitch, Ed.D. Director of Academic Technology, Northeastern University l.hitch@neu.edu

Glenn Pierce, Ph.D., Director, IS Strategy & Research, College of Criminal Justice, Northeastern University g.pierce@neu.edu

Review of “Connecting Technology & Liberal Education: Theories and Case Studies,” a NERCOMP event (4/5/06)

by Shel Sax, Middlebury College

 

On April 5, at the University of Massachusetts, Amherst, NERCOMP offered a SIG event on “Connecting Technology and Liberal Education: Theories and Case Studies.” Examining the description of the event on the NERCOMP web site (http://www.nercomp.org) made two things immediately apparent: this was a workshop looking at a very broad topic, and all of the presenters came from an academic background rather than a technological one.

The flow of the day went from the most general, with Jo Ellen Parker beginning the proceedings with a discussion of the various theories of liberal education and their impact and influence on institutional technology decisions, to specific case studies offered by faculty from Emerson, Hamilton, Mt. Holyoke and Hampshire Colleges.

Session 1: What’s So “Liberal” About Higher Ed?

Speaker: Jo Ellen Parker, Executive Director, National Institute for Technology and Liberal Education (NITLE)

Jo Ellen Parker’s essay on the same topic can be found on the Academic Commons website at:
http://www.academiccommons.org/commons/essay/parker-whats-so-liberal-about-higher-ed

In her talk, Jo Ellen laid out a framework for thinking about the relationship between liberal education values and issues relating to instructional technology. She noted that:

  • Resistance to technology is not always mere obstruction; it can be a defense of important educational commitments against the perceived threat of technology.
  • The discussion of the role of liberal education and technology is often tangled up in conflicting ideas as to what liberal education really is.
  • Regardless of your background, being able to frame discussions of instructional technology initiatives within the language of liberal education can make you a more effective and articulate spokesperson.

Jo Ellen presented four models or theories of liberal education, noting that some are competing and some complementary. In reality, institutions reflect combinations of these theories rather than any one exclusively.

The first theory holds that liberal education is a matter of content-based curriculum: studies liberated from the pressure of immediate application and pursued without regard to immediate practical benefit. This thread has been and remains dominant in most small, elite liberal arts colleges. In this view, the curriculum consists of pure rather than applied disciplines; applied studies such as accounting, musical performance, or community service (for credit) are not part of a liberal arts curriculum. She noted one example in which language acquisition is not given credit because it is considered simply the acquisition of a tool needed to study literature in the foreign language.

The second theory of liberal education comes from a pedagogical perspective and focuses on the development of intellectual skills over the mastery of content. The defining characteristics of this model are practices, not disciplines: group studies, student presentations, active learning, collaboration, and paper writing rather than test taking. This view is supported by research in the psychology of learning and in pedagogy. It is possible for an applied discipline to be taught “liberally.” Nursing, for example, can be taught either liberally or illiberally: if nurses are taught to solve problems and work collaboratively, they are being taught liberally; if they are required to memorize large bodies of information and assigned content, they are taught illiberally.

To some, liberal education is about the education of citizens. This approach values the development of literacy, numeracy, scientific and statistical proficiency, history, etc. The curriculum should target what is required to produce good citizens. It tends to value ethics and socially responsible behavior and emphasizes developing the whole person. In this model, faculty will view student life as an educational opportunity, and will value service learning and community service requirements. It encourages closer relationships between students, faculty and staff. There is greater concern about extending access, welcoming more low income students and encouraging the sharing of campus resources with the greater community. This civic focus of liberal education is often based in a religious history.

The final model of liberal education is less philosophical and more economic. It associates liberal education with institutions of a specific type; in a sense, it associates the degree itself with liberal education—whatever these colleges do is liberal education. This view tends to emphasize the physical characteristics of an institution: small size, private funding, a residential campus. These characteristics supposedly foster the goals of liberal education, so that any institution that does not share them cannot deliver a liberal education. People who favor this model view the economic viability of these institutions as critical to the well-being of liberal education.

These competing theories of liberal education tend to “muddy the waters” when it comes to thinking about liberal education and instructional technology. The curricular-centric view of liberal education regards technology as an extension of the library. The acquisition of new online scholarly resources, data sets, art objects, and the like is highly desirable and should be a priority within the technology budget. Advocates of this approach may not see value in spending technology resources on communications technology or course management systems, for example. Jo Ellen said that these folks have no understanding of Wikipedia! Those holding the curricular-centric perspective fret about the difficulty of locating quality material online and worry that students will be unable to distinguish quality materials from second-rate ones. This often leads to a demand for “literacy programs.” Technology is not valued for its potential to change the nature of teaching and learning but rather for the access it provides to primary materials.

In comparison, the second theory sees technology as a change agent that enables faculty and students to do more and different things together. The focus is on a student-centered view of IT. Here, one finds more emphasis on course/learning management systems, communications tools, group study tools, and new media formats. In colleges where this view has currency, IT resources will be spent on multimedia centers, collaborative classroom spaces, and developing faculty technical skills, giving these higher priority than acquiring resources. Critics of this approach are often concerned about the role of faculty and how technology may change it for the worse. Faculty pursuing this type of student-based learning can be intimidated by the technological fluency of both students and IT staff. They are concerned about the cost in time of acquiring IT skills at the expense of other scholarly and teaching activities. Faculty, confronted with what it means to become a learner again, may resist moving in this direction. Librarians often feel threatened in this environment, as there is more uncertainty about exactly what their role should be.

The “citizenship” model of liberal education tries to extend resources. This can include tutoring high school students in the local community, using GIS to help local planners, taking on oral history projects with local primary schools and libraries, electronic portfolio projects and so forth. This view highly values those technologies that support both on- and off-campus communication and making course materials available beyond the institution. Technology in this context is evaluated by its contribution to community.

The “physical” model of liberal education sees IT as a way to overcome some of the limitations of the small size of these institutions and enables the smaller institutions to become competitive with large ones. It hopes to synthesize the virtues of small and the advantages of big. Technology may be seen as a potential cost saver and thus contribute to the economic viability of these smaller entities.

The discussion of liberal education and technology is often intertwined in the discussion of liberal education itself. Decisions about the allocation of technology resources can be most effective when the IT spokesperson has a good understanding of the different competing visions of the liberal arts institution and is able to articulate how various technologies impact these sometimes competing institutional views.

Session 2: Emerging Literacies and the Liberal Arts

Speakers:
David Bogen, Executive Director, Institute for Liberal Arts and Interdisciplinary Studies, Emerson College
Eric Gordon, Assistant Professor, Visual and Media Arts, Emerson College
James Sheldon, Associate Professor, New Media, Emerson College

While Jo Ellen Parker presented four models of liberal arts education and demonstrated how differing models can lead to different technological priorities to support the curriculum, the Emerson team re-framed the discussion to focus on the technologies themselves and on how they have changed the ways in which we interact with the world around us. More specifically, David Bogen noted that the process of designing curriculum is an essential foundation of the work of educators. Technology forces us to study not only the changes in content but also changes in the “mode of delivery.”

An important part of the Emerson team’s argument is that new technologies are never without cultural ramifications. They impose constraints in some ways and open new possibilities in others. As such, one must look beyond the purpose and value of technology in the liberal arts per se and study the inter-relatedness of curriculum, technology, and pedagogy. While Jo Ellen Parker would argue that clarity about an institution’s vision of itself will help to clarify the technological decisions made to complement that vision, the Emerson group would argue that the technology itself can change and is changing the essence of the liberal arts, and as such should be placed on the “front burner” of such discussions.

David Bogen used the concept of “emerging literacies” to refer to the combination of literacies that are now evolving. Aware of the ambiguity of the term, he described it as a placeholder for that combination of literacies that will ultimately transform teaching and learning. In this context, the deliberateness, traditions, and methodical rate of transformation in higher education are not necessarily bad things, as they allow for careful study of the agents that can transform education and the identification of models that may well be inappropriate or counter-productive.

There is a plurality of literacies. Using Wikipedia as a source, David found over 69 different literacies. This is testimony to the elusiveness and ambiguity of the concept at this stage of technological and social transformation. These emerging literacies are not to be confused with the literacy initiatives referred to by Jo Ellen in her description of the content-oriented liberal arts institution. Rather, they include new ways of knowing: information, cultural, visual, media, multi-modal, and scientific literacies are all attempts at describing some social/technological change that demands new skills and expertise.

In closing his part of the presentation, Bogen described three approaches to emerging literacies:

  • “Politics of loss” seeks to document the negative impacts of contemporary technology on the traditional liberal arts institution.
  • “Politics of scaled solutions” represents the force within education that embraces a technological utilitarianism, trying to deliver the greatest good to the greatest number.
  • “Politics of transformation” focuses not on what has been lost, but rather on the creation of a new medium of expression (the integration of visual, multimedia and digital communication) worthy of study in its own right.

David clearly favored the third approach, arguing that it offers the opportunity to study “a whole new semiotics of expression.”

David’s opening remarks were followed by James Sheldon, Associate Professor of New Media at Emerson. James’s presentation featured a “Digital Culture” online first year program developed at Emerson that was team-taught with David Bogen and included the production of electronic portfolios.

James observed that students have changed since 1996. Then, students were proficient in oral and written expression, but their ability to use a computer, navigate the web, and incorporate technology into their work was very limited. In comparison, today’s students are comfortable authoring web pages, manipulating images in editing programs like Photoshop, communicating online, and so on. Every student knows how to use Instant Messenger.

In this class everything was done electronically. Every student needed to produce an electronic portfolio and become proficient in making visual documents. While the students knew how to create material digitally, James noted, they had no idea of visual history. They did not understand how we have arrived at our current state and what the development of technology has allowed.

James then gave a slide presentation. The first slide showed two images: an early photograph and a painting clearly influenced by the photograph. The pair of images showed how technology changed the way in which an image could be produced and how the technology subsequently influenced artists. Another slide provided an example of how the slow shutter speed in a photograph influenced a landscape painting. James offered an example of early motion studies (a famous animation of a trotting horse) with the idea that to create motion, one first had to stop motion. He then described the influence of real color photography in the late 1930s and 1940s and compared images from that period to contemporary images that could only be produced with today’s technology, using as an example the image of a bullet traveling through an apple. All of his examples reinforced his contention that not only is contemporary art influenced by new media, but new media can remediate older media, absorbing it and minimizing discontinuities. That is, new media remains dependent on older media in both acknowledged and unacknowledged ways.

After the slide presentation, James talked about a Davis Foundation grant that Emerson had received to develop online learning communities. A key component of this project was the development of electronic portfolios. These portfolios raised a host of interesting questions: How does the instructor assess students’ multimedia work? How does the student see the path of his or her development? What does it mean to be working in digital media? With the electronic portfolio, all of a student’s work is in one place, which facilitates faculty evaluation, helps students learn from one another, and encourages students to think of themselves as artists.

The third presenter from Emerson was Eric Gordon, Assistant Professor of Visual and Media Arts. He demonstrated MediaBASE, a tool that he and colleagues at USC’s Institute for Multimedia Literacy developed for creating and working with media objects. The real conceptual innovation is that this is a tool for teaching and learning about media rather than teaching and learning with media.

MediaBASE is a platform for the development of media compositions that enables users to transform, manipulate and arrange media objects according to the intent of the creator without changing the state of the original object. MediaBASE was described as a social software package for use both within archives and classrooms.

The object of MediaBASE is to enable students developing electronic portfolios to include a variety of manipulated images while maintaining the integrity of the original images, to metatag all objects, and to search contextually from within the authoring environment. It is an attempt to give creators of multimedia a level of functionality comparable to what is currently available to authors of text works.
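
A rough sketch of the non-destructive idea behind such a tool appears below. It is purely illustrative Python, not MediaBASE’s actual design or API (which the session did not detail): the archival original is never modified, and a composition only records the sequence of transformations to apply when the piece is rendered.

    # Illustrative sketch of non-destructive media composition.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class MediaObject:
        """The archival original: immutable, carrying its descriptive tags."""
        uri: str
        tags: tuple

    @dataclass
    class Composition:
        """A student's piece: a reference to the original plus a recipe of edits."""
        source: MediaObject
        transforms: list = field(default_factory=list)

        def crop(self, box):
            self.transforms.append({"op": "crop", "box": box})
            return self

        def caption(self, text):
            self.transforms.append({"op": "caption", "text": text})
            return self

    # Hypothetical archival image and a portfolio figure derived from it.
    original = MediaObject(uri="archive://photos/item-0117.jpg", tags=("portrait", "1905"))
    figure = Composition(original).crop((0, 0, 640, 480)).caption("Cropped for the portfolio essay")

    # The original object is untouched; only the list of edits is stored.
    print(original)
    print(figure.transforms)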

Overall, the Emerson presentation offered a thoughtful assessment of the current state of the liberal arts curriculum in light of sweeping technological change, an argument for contextualizing current developments within a historical understanding of the relationship between changing tools (technology) and creativity, and an exploration of a tool developed to further the articulation of concepts needed to encompass these changes within the educational lexicon.

Session 3: A Different Mission, A Different Method: Assessment of Liberal Arts Education

Speaker: Dan Chambliss, Eugene M. Tobin Distinguished Professor, Department of Sociology, Hamilton College

Liberal arts colleges do not like assessment, period! Faculty dislike assessment more than administrators do, but by and large, in the liberal arts environment, assessment is seen as outside interference. Some opponents think that assessment is essentially a business exercise, its rationale underpinned by a political antipathy to left-wing intellectuals. Further, many examples of assessment are not intellectually rigorous, and faculty see them as a “lightweight” activity.

There is no obvious reason why liberal arts colleges should like assessment. They are doing pretty well already and there is no correlation between doing assessment and being a good college. Swarthmore, for example, does assessment because it has to, but Swarthmore does very well without it. It has a huge endowment, and people are willing to pay the tuition costs to send their kids there. Swarthmore students clearly value their school and their education, donating and bequeathing cash and assets as alumni. Dan noted that very few businesses can claim that level of customer loyalty. So, it is not clear that Swarthmore needs assessment; its survivability without assessment is excellent.

Some of the hostility to assessment in the liberal arts relates to its origins in business/efficiency models that do not transfer well to colleges. The usual assessment drill includes:

  • state clear mission, goals, objectives
  • state in advance what students should get out of a course
  • provide clear links between the goals and the means (every course, program, etc. needs to explicitly state goals and actions to achieve them)
  • motivation is never considered a variable

The entire model has a “throughput” mentality – students are fed in at one end of the process and come out the other end with the requisite skills, thus fulfilling the mission, goals and objectives. While this works well for certain fields, it does not make a lot of sense for liberal education.

Seven years ago, Dan reported, Hamilton was funded by the Mellon Foundation to study the assessment of student learning outcomes at a liberal arts college. A panel study begun in the fall of 2001 drew a random sample of 100 students from the class of 2001 and has tracked them ever since. Each student is interviewed every year, and grades are tracked.

The findings, discussed below, provide some interesting insights into the uniqueness of a liberal education. Alumni in this survey responded that the specific knowledge they acquired as undergraduates was virtually irrelevant. This is not job training. Unlike the U.S. military, the liberal arts institution does not know where all of its graduates are headed. We like the fact that our graduates will do all sorts of amazing things, with a huge variety of positive outcomes. We are looking for long-term results and lifelong learning. Other goals are uncertain to the point of being unknowable. While college presidents may talk about creating “citizens” or great thinkers, the goals are not mutually agreed upon. Nor is it clear that what faculty do every day has any impact on whatever goals may exist.

As a result, Dan argued, we must take a different approach to assessment for liberal education. We want minimal interference: faculty and students should not be expected to change what they are doing to accommodate the assessment. The assessment should represent methodologically sound social science and be multi-modal, since it is not clear exactly what we are looking for. The assessment must be useful: we should learn something that will help people with their work. Finally, an appropriate assessment should be true to the mission of the liberal education institution; it should be open to possibilities and serendipity.

The Study:

1. Writing samples were collected from the last year of high school through the senior year of college. Some 1,100 student papers were selected, and a team of outside evaluators assessed the writing to see whether students’ writing actually improved.

2. The outside evaluators were able to rank order the writing of students from high school through the senior year, although they could not differentiate between junior and senior level writing (the sample excluded senior theses).

3. The study also revealed that first- and second-year advising was not as good as junior and senior advising. Freshman advising is not the same job as senior advising: it is more about course selection and is shaped more by the academic calendar than by the relationship between the student and the advisor. It is the interaction between a student’s initial interests and the offering and scheduling of courses that is the most critical relationship in first-year advising.

4. Friendships and friendship networks are crucial. The people a student meets in the first semester have a big influence on the student’s academic development.

5. All life is local—if students have two or three good friends and one or two good professors, they are in good shape. Students can like almost nobody else on campus as long as they have a few good friends and professors. Most faculty can, as Dan described, “be awful,” as long as students take courses from the “good” teachers during the freshman year. A very few professors have a huge impact on a large number of students. At Hamilton, he said, 12 professors out of 180 would do the trick!

6. Happiness is a legitimate outcome of college—students after graduation tend to cherish their undergraduate experience, although in the assessment realm this is not considered at all. Nonetheless, students and their families are very aware of the value of the undergraduate experience and the importance of feeling good about life. These are important ingredients of success. Confidence, optimism, and a sense of well-being are very good things with which to leave college.

Some lessons from assessment:

  • Because externalities abound, it is a mistake to look at only a single department. Assessment should not be department-based.
  • The student’s point of view is crucial:
    • They have no idea as to what has gone on in the past, so “innovative” curriculum does not register.
    • Small classes are great for the people who are in them, but not so wonderful for the people who cannot get into them, and take larger ones instead. Classes may be small because they have unpopular topics, pre-requisites, etc.
  • For a residential liberal arts college, what is sold is the “uniqueness” of the experience, the advantage of not being in a mass market.

In closing, Dan emphasized that small colleges are not businesses in the usual sense of a business; as a result, traditional assessment methods must be modified to fit the needs of the liberal arts institution. When done appropriately, assessment can provide useful insights and information to further the objectives of the liberal education.

 

Session 4: The Liberal Arts College and Technology: Who Captures Whom?

Speakers: Bryan Alexander, Director for Emerging Technologies, National Institute for Technology and Liberal Education (NITLE)
Donald Cotter, Associate Professor of Chemistry, Mt. Holyoke College
James Wald, Associate Professor of History, Hampshire College

Bryan Alexander’s study of emerging technologies focuses on gaming, mobile devices, and Web 2.0 (which he described as a mixture of micro-content and social software). There are similarities between the characteristics of liberal education and those of emerging technologies: both are becoming increasingly trans-disciplinary, and both require critical thinking.

In a notion similar to David Bogen’s, Bryan argued that while we can speak about teaching with technology, we can develop a better understanding of the process if we study the technology itself using the traditional methodologies of the liberal arts. Thus, it is possible to apply the intellectual heritage of the liberal arts to technology. Bryan cited Robinson Crusoe as an example of someone who has no need for a liberal education but rather for specific technical skills; such informatic needs are often highly individualistic.

The liberal arts approach to technology itself tends to be a collectivist one. Echoing one of Jo Ellen Parker’s themes, Bryan noted that if liberal education has a civic engagement thread, then technology is an integral part of the process. He also noted a new wrinkle: students are now both producers and consumers of technology.

Donald Cotter then described having his students engage in the XML tagging of source documents as part of teaching the history of science. Donald said that one of his motivations is to incorporate the work of professional historians of science into his course, both to emphasize how technology makes us behave as cultural beings and to provide a context within which to study chemistry and the history of chemistry. In the same way that James Sheldon addressed the issue of students being adept with technology without understanding how we arrived at our present state, Donald Cotter felt a similar need to ground his chemistry students in the history of chemistry.

Fortuitously for Donald, Mt. Holyoke has played a substantial role in the development of science, and particularly in the development of women scientists. Science has been taught there to and by women for over 100 years, and the College’s faculty has included a number of pre-eminent women chemists. Mt. Holyoke therefore has rich resources in terms of primary materials related to the history of science. Adding XML tags to these original documents enables the students to illuminate and annotate this content.

Refining his course over time, Donald now asks his students to develop a project based on what they find in the College’s archives. They cannot begin the project until they know something about the history of chemistry. The students learn the historical context in which to understand science, combined with technical sessions (led by the College’s archivists) to teach them the technical aspects of XML tagging. The technical sessions are conducted during a weekly lab period.
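
As a purely hypothetical illustration (the course’s actual tag set and documents are not described here), the sketch below shows the kind of markup a student might layer onto an archival source and how simply it can then be processed; the element names and the letter itself are invented for the example.

    # Illustrative only: a made-up archival letter with student-added tags,
    # parsed with Python's standard library. Neither the tag names nor the
    # document reflect the actual materials used in Cotter's course.
    import xml.etree.ElementTree as ET

    sample = """
    <document source="college archives (placeholder)">
      <title>Letter on the chemistry curriculum</title>
      <date when="1902-04-17">April 17, 1902</date>
      <body>
        <p><person role="chemist">[name of faculty chemist]</person> proposed revisions to the
        <term>qualitative analysis</term> laboratory.
        <annotation by="student">An early example of research-style teaching.</annotation></p>
      </body>
    </document>
    """

    root = ET.fromstring(sample)
    # Pull out the people, technical terms, and student annotations that were tagged.
    for elem in root.iter():
        if elem.tag in ("person", "term", "annotation"):
            print(f"{elem.tag}: {elem.text.strip()}")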

Donald has several objectives in mind for this project. He wants the students to develop a taste for doing primary source work and to appreciate that this is a legitimate intellectual activity. He noted that students struggle somewhat with the nebulousness of the activity (which they have never done before). He finds that advising them to tell the same story twice, first in standard text and then in the markup of the source document, helps to clarify the objectives of the activity. In closing, he noted that while the outcome of this initiative remains uncertain, he is pleased with the process.

The third participant in this panel was James Wald of Hampshire College. Not only is James an associate professor of history at Hampshire, he is also the Director of the Center for the Book. In his opening remarks, he noted that he is a historian, not a technologist or a scientist. His research interests include the history of the book and the evolution of the book vis-à-vis technology. As he became more proficient with technology, expanding from email and word processing to the development of course websites, he became increasingly interested in technology as it related to his research. Continuing a common thread articulated by David Bogen and Bryan Alexander, he, too, argued that one should possess not merely the technological skills but also an understanding of the underlying concepts of communication and the history of communication.

Noting that writing is a technology (even though we take it for granted and do not think of it as such), James designed his course to increase his students’ understanding of the evolution of writing and technology. He invited a book artist to the class who taught the students how to create a book by hand, using techniques from the 15th and 16th centuries. Using their own writing, they produced a handmade book. At the same time, in a creative juxtaposition of technologies, they were required to create a web page that discussed and documented the creation of the book, covering topics such as authorship and binding. This exercise engaged the students in considering the different aspects and issues of presenting text on paper and on a computer screen. James Wald is hopeful that students taking this course now have a better understanding of how writing, printing and computers are not competing and different phenomena but rather the same act of expression using different tools.

In closing, Wald remarked that the role of the librarian in the Middle Ages (and until fairly recently, one might add) was primarily to preserve information. Now there is too much information and the focus has shifted to applying search strategies to find relevant information. He argued that the technology revolution helps one to understand the revolutions that came before it, such as the print revolution. There is benefit to studying what was written and predicted about the development of the printed book and its impact on society to see what was accurate and inaccurate. This can provide useful contextual background with which to assess what the current technological revolution will mean to us.

Session 5: Introducing The Academic Commons

Speaker: Mike Roy, Director of Academic Computing & Digital Library Projects,
Wesleyan University

The Academic Commons (http://www.academiccommons.org) is a recently-launched web publication and community that brings together faculty, technologists, librarians, and other stakeholders in the academic enterprise to foster collaboration, and to critically examine the complex relationship of new technology and liberal arts education. This session provided a brief introduction to the Academic Commons, and highlighted ways in which NERCOMP members can both benefit from and contribute to this initiative.

Conclusion:

This was one of the best NERCOMP workshops that I have attended. My interest in the interaction of technology and pedagogy was well met by presentations combining strategic thinking about what constitutes and shapes a liberal arts education and examples of technology being used in the classroom in a traditionally “liberal” manner.

Bryan Alexander stated the need to study technology in an academic manner and the case studies presented by faculty reflected this approach. Both NITLE presenters effectively set the stage for the presentations that followed.

Dan Chambliss from Hamilton provided very useful insights based on data from a Mellon-funded survey and set the findings within the context of why liberal arts institutions tend to be dismissive of traditional assessment techniques coming from the business sector. Of particular interest was the finding that, in retrospect, alumni considered discipline-specific learning to be relatively unimportant compared to the entire undergraduate experience. This finding seemed particularly relevant when compared to Jo Ellen Parker’s contention that the primary model of a liberal arts institution is “content-based.”

The Emerson presentation was very thoughtful and provided excellent examples of faculty grappling with these issues within the context of “teaching media.” Complementing the morning presentation by the Emerson faculty, the Mt. Holyoke and Hampshire faculty reinforced the need for a contextual understanding of technology and how students may be involved in projects that combine the acquisition of new technical skills with a greater understanding of the evolution of such tools and their societal impact.

All in all, it was a very useful event, with high quality presentations and a strong intellectual bent. I suspect that SIGs such as this, emphasizing pedagogical and broader institutional considerations, will become increasingly important and valuable in the future.

Review of “Digital Images Workshop” A NERCOMP event (4/24/06)

by Valerie Gillispie, Wesleyan University

Schedule and biographies of the speakers

This event brought together faculty, information technology specialists, librarians, and others who work with images to discuss the impact of digital images on the liberal arts curriculum. A number of questions were addressed throughout the day: How do faculty work with digital images versus analog images? What skills do students need to successfully interpret images? How can those responsible for digital image management assist faculty and students in their work with images?

The conference was inspired by David Green’s recent survey and interviews with 35 institutions about their use of digital images. Green’s presentation about his work was the first session of the day.

Session I: “The Use of Digital Images in Teaching Today”
Speaker: David Green, Principal of Knowledge Culture

David Green’s Handout

David Green was brought in as a consultant by the Center for Educational Technology to study the use of digital images. This project was supported by the National Institute for Technology and Liberal Education and Wesleyan University. Green explained that his first step was to conduct a literature review of the field to see what had already been determined. Penn State had conducted its own survey, Berkeley had conducted a research project, and RLG had looked at how visual resources in databases had been discovered and used. The studies indicated significant problems related to personal collections and their management, including a lack of metadata associated with personal collections. They also found that faculty often had trouble finding and successfully using technology related to digital images. In addition, there was confusion over copyright issues. Green suggested Christine Borgman’s paper “Personal Digital Libraries: Creating Individual Spaces for Innovation” as a useful summary of these issues.

Following the literature review, Green and his primary contacts at each school encouraged faculty to complete an online survey about the use of digital images in teaching. The 404 responses from faculty, each of whom had taught at least one course with digital images, offered new insight into the ways images were used. Most faculty used images from their own collections, with a smaller number using publicly-accessible online databases, and a smaller number still drawing images from licensed databases. Some faculty had complaints about the difficulty or time needed to set up digital images for their teaching, but they appreciated the volume, creativity, and ease of change allowed by digital images. Additionally, faculty felt that students liked the accessibility and convenience of digital images.

Some faculty did find analog images to be superior in quality and reliability. However, in response to a question about what the advantages of analog images are, 35% of respondents either offered no answer, or wrote that there were no advantages.

In using digital images, faculty liked being able to create their own sequences, to create their own images, and to allow students to review the presentations. Capabilities like altering images and zooming in on them were rated to be less important.

Has teaching changed with digital images? Three-quarters of the survey respondents thought it had. Changes mentioned included greater efficiency, more variety, and new skills required of and used by students. Perhaps surprisingly, 60% of faculty were satisfied with their current display system, of which PowerPoint was most popular. One feature mentioned as “desirable” was the ease of bringing word and image together, which is relatively simple in PowerPoint. Some respondents mentioned their irritation at confusing, elaborate options in some display systems. Simplicity is key, and it was simplicity that was mentioned most often when faculty were asked what tools they would like in their presentation software, along with better integration of multiple media.

Where do faculty get support for their use of digital images? From a wide variety of sources, it seems. The majority of faculty said assistance in digitizing, finding, and managing images was important, but many did not get support or were dissatisfied with the support they did get. Learning how to use new technology is time-consuming, and faculty feel overwhelmed by the time commitment and lack of institutional support for using digital images.

Following the online survey, Green visited each school and conducted a total of 326 in-person interviews with faculty, staff, information technology specialists, visual resource specialists, and others. Here are a few of the major conclusions drawn from the interviews:

  • Licensed databases must be interoperable.
  • Students need to be trained to “read” digital images.
  • Faculty need to be trained to use digital images in their teaching.
  • A strong digital infrastructure must be in place to support personal collections and presentations.


The full report on Green’s findings will be posted soon on Academic Commons. Green will be presenting his findings at Wesleyan University and other interested participating schools.

Session II: “Digital Image Resource Development”
Speakers: John Saylor, Director of Collection Development for the National Science Digital Library
Susan Chun, General Manager for Collections Information Planning, Metropolitan Museum of Art
Patrick McGlamery, Director, Library Information Technology Services, University of Connecticut Libraries

John Saylor spoke about how the National Science Digital Library (NSDL) is using Open Archives Initiative (OAI) metadata harvesting to gather metadata for a wide range of images that are then centralized through the NSDL website. This endeavor is primarily funded by the National Science Foundation (NSF) Division of Undergraduate Education, and advised by a Core Integration Team made up of UCAR, Cornell, and Columbia.

The NSF also has provided grants in the NSDL program in three different areas: pathways, services, and targeted research. These grants have helped create over 100 unique collections of resources that can be accessed through the NSDL. The most important grant area related to digital image collections is the Pathways grant, which gives the grantee responsibility to select and make available resources for particular subject areas. These $5 million, multi-year grants are intended to help the grantees eventually become self-sufficient in their mission.

One of the major advantages of the NSDL is that it offers a single, peer-reviewed, and appropriate place to find many resources about a given topic. Researchers can benefit by including their images and research in the pathways and connecting with those working in similar areas. The gathering of these many collections has been made possible through the OAI Protocol for Metadata Harvesting; more information can be found at http://www.openarchives.org. A union catalog is being created through this initiative.
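
In rough outline (this is not the NSDL’s own harvesting code, and the repository URL below is a placeholder), OAI-PMH harvesting amounts to issuing HTTP requests with a “verb” parameter and parsing the XML that comes back, as in this Python sketch:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder endpoint; substitute a real OAI-PMH repository base URL.
BASE_URL = "https://repository.example.edu/oai"

# OAI-PMH requests are plain HTTP GETs with a "verb" parameter.
url = f"{BASE_URL}?verb=ListRecords&metadataPrefix=oai_dc"

with urllib.request.urlopen(url) as response:
    tree = ET.fromstring(response.read())

ns = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

# Each <record> carries a header (identifier, datestamp) and a metadata block.
for record in tree.findall(".//oai:record", ns):
    identifier = record.findtext("oai:header/oai:identifier", namespaces=ns)
    title = record.findtext(".//dc:title", namespaces=ns)
    print(identifier, "-", title)
```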

Next, Susan Chun spoke on the topic, “Getting It Right: How well can image suppliers determine and meet the image requirements of college and university users?”

The Metropolitan Museum of Art does not have a strategic plan for assisting college and university users with digital images. However, in re-evaluating their cataloging practices and assessing their demand for high resolution images, the Museum began to think about practical changes that would benefit both users and the Museum.

The Met wanted to have a better system of digital asset management than unsecured CDs; they use digital images to both create a better inventory and meet demand for image requests more quickly. By choosing to focus on the most frequently requested or canonical images, the Met streamlined its practices and created digital versions of its analog images. These canonical images may be meeting the needs of the academic community, but the museum is not sure exactly what these needs are.

The process of digitizing analog images (usually transparencies) is very slow, because of the high resolution of each image, as well as the cropping, spotting, and color balancing that is important for the museum’s inventory needs.

The Met has also tested social tagging—folksonomies—to generate keywords used to describe individual works. Thirty volunteers added their own terms to the objects, and upon review, 88% of the terms had not been previously found in museum descriptions. 77% of the terms were judged to be valid by staff. The implication of these terms is that they describe the work from a non-specialist viewpoint, and may provide access to people who use non-specialist vocabularies. Open source tools for collecting community terms can be found at http://www.steve.museum.

The Met is also grappling with how to provide access to images. The museum has charged licensing fees since 1871, and although they sometimes waive or reduce fees, the practice is inconsistent. It is also time-consuming to have individual users approach the museum each time they need something. To facilitate making these images available free of charge, the Met has teamed up with ARTStor to distribute high resolution images. Using a scholars’ license, it grants permission for approved uses without requiring individual approval.

The third speaker was Patrick McGlamery, whose talk was entitled “Maps, GIS, and Spatial Data: How are maps, aerial photography and geospatial imagery affecting scholarly research?”

McGlamery spoke about maps as not only pictures, but data. One expert has described maps as no longer static, but instead a “dynamic, structured dataset that can be accessed and queried through a Geographic Information System.” Because maps are mathematical, they work well in the digital environment. Multiple maps can be overlaid because there is spatial information indicated by x/y points. Even aerial photographs can be used in conjunction with maps. GIS software makes working with this spatial data possible.

The Map and Geographic Information Center at the University of Connecticut does not have a lot of historical maps in its collection, but it has created a digital collection of maps held at other institutions. The general policy has been to scan maps to a resolution where information is transferable, i.e. where all text or drawings are recognizable. As maps have become more detailed over time, this level of resolution has changed.

Historical maps can be used in conjunction with GIS data and aerial photography to learn about changes over time. Faculty use maps and enhance them in their teaching. Landscapers and ecological engineers also use these maps to see how historical information aligns with modern maps. Information is displayed in ARCview, a powerful viewer which state institutions have a license to use.

Using historical aerial photography can be complicated, since there is not much metadata attached to photographs. The University of Connecticut uses ARCview to capture metadata about a photograph’s geographic area. A user can then type in an address and discover which aerial photographs cover that particular area.
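
The underlying idea can be illustrated with a small sketch (this is not the actual ArcView or UConn implementation; the coordinates and filenames are invented, and the address is assumed to have already been geocoded to a point): once each photograph carries bounding-box metadata, finding the photographs that cover a given location is a simple containment test.

```python
# Hypothetical catalog of aerial photographs with bounding-box metadata
# (west, south, east, north), in longitude/latitude. Values are invented.
photos = [
    {"file": "hartford_1934_sheet12.tif", "bbox": (-72.72, 41.74, -72.64, 41.80)},
    {"file": "storrs_1951_sheet03.tif",   "bbox": (-72.28, 41.78, -72.22, 41.83)},
]

def covers(bbox, lon, lat):
    """Return True if the point falls inside the photo's bounding box."""
    west, south, east, north = bbox
    return west <= lon <= east and south <= lat <= north

# A user-supplied location (here, roughly the UConn Storrs campus).
lon, lat = -72.25, 41.81

matches = [p["file"] for p in photos if covers(p["bbox"], lon, lat)]
print(matches)   # -> ['storrs_1951_sheet03.tif']
```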

Users seem to work with large images directly on the UConn server rather than downloading them. Some users are also using statistical data to overlay the maps and express other types of information. Maps as images—and data—are dynamic, and can be processed in multiple ways.

Session III: Creating and Managing Digital Institutional Image Collections
Speakers: Mary Litch, Instructional Technology Specialist, Yale University
Elisa Lanzi, Director, Imaging Center, Smith College Department of Art

The first speaker, Mary Litch, spoke about her work in the Instructional Technology Group at Yale University in a talk called “Supporting Faculty in Developing and Deploying Personal Digital Image Collections (PDICs).” The Instructional Technology Group assists faculty in the arts, sciences, and medical school in using instructional technology. This is separate from the library and institutional digital image collections, and primarily works with the personal collections belonging to faculty.

These personal collections range from a few hundred images to over 20,000. The sources of such personal collections are multiple, and the approach to controlling the data varies as well. For those faculty who specialize in art history, the institutional collections are important sources, but for other faculty, personal photography, personal scanning, and images from the web are the major sources.

Yale provides institutional support for PDICs through its digital asset management software, Portfolio, and through special grants and provisions made to help faculty get bulk scanning of their images and slides. Storage is also provided free of charge. Portfolio allows faculty to associate images and data, and can be housed on a server or locally. It supports a wide number of file types and automatically indexes text. Importantly, it allows bulk import and export for ease of use. It also has a virtual light table, which helps faculty used to working with slides feel comfortable.

A major reason faculty develop PDICs is that the institutional collections are not adequate for non-art history scholars. Faculty also like the portability, custom cataloging, image quality, and search interface of their PDICs. The Portfolio system allows them to add data immediately upon scanning or photographing items.

The drawbacks to PDICs are that they are labor intensive and difficult to support. The development of personal collections may draw some energy away from the development of institutional collections. There are also problems in blending quirky personal cataloging preferences into the cataloging of institutional collections, when a faculty member wants to share their collections. There can also be tricky legal issues related to use and reproduction in institutional collections.

The second speaker, Elisa Lanzi, Director of the Smith College Imaging Center, gave a talk entitled “Gather Ye Images: Negotiating Multiple Collections for Teaching.”

Smith College was an early innovator in teaching with digital images. The current challenges faced at Smith are similar to some of those at Yale, such as the use of multiple sources for teaching images. Smith has also noted that faculty want or need a holistic “image package” strategy, whose elements include classroom presentation, student study, management, repository/storage, collaboration/sharing, and interoperability. Other factors include the need for multimedia, discussion forums, and the convergence of analog and digital collections in this transition period.

Smith offers several ways of assisting faculty with image teaching, including their Imaging Center & Visual Communication Resource Center, the “Find Images” page on the art library’s web site, and the Teaching & Learning Support web site, created by the Educational Technology Services department.

Through their use of the Luna Insight presentation tool, Smith offers a virtual “new images shelf” and also has acquired some complimentary shared collections from other Luna users. There also is an image vendor online wish list that faculty use so that the imaging staff can negotiate what can be ordered through various budgets. The Imaging Center is partnering with the Library’s collection development team to license the larger image library subscriptions.

Personal collections are created by faculty using Luna but also through independent systems. Some personal collections are then shared with the institutional collection, but Lanzi notes that this raises quality issues. For example, how can image quality and metadata standards be implemented in personal collections? The content of these personal and institutional collections needs to be accessible and portable. There is a need for strategic planning, but also a realization that there are certain unknowns in these practices.

The recently-released Horizon Report points to trends and provides examples of faculty/student needs for integrated media in teaching and learning. At Smith, students have become more involved in collection building and creating presentations. The popularity of “social tagging” in tools like “Flickr” will have an impact on digital image cataloging. However, differences in metadata can impede access and call accuracy into question. Who is the expert, and who should provide the metadata? Faculty note that students need to go beyond looking and become more visually literate to successfully use images in creating arguments. There also are issues around intellectual property that may discourage faculty from sharing their materials. Overall, however, the experience with digital images at Smith has been a positive one, enhancing both teaching and learning.

Session IV: Critical Literacies
Speakers: Christopher Watts, Director, Newell Center for Arts Technology, St. Lawrence University
Flip Phillips, Associate Professor, Psychology & Neuroscience, Skidmore College
John Knecht, Professor of Art & Art History and Film & Media Studies, Colgate University

Christopher Watts spoke on the topic “Critical Literacies: thinking strategically.” He noted that literacy means being able to read and write, and in terms of visual literacy, students both produce and participate in media. It is a mistake to think that students only receive media. In participating in digital media, students are engaging in a rhetorical or communicative act, and need to be sensitive to the audience. Watts mentioned Wayne Booth’s book The Rhetoric of Rhetoric as informative on this subject.

At St. Lawrence University, students use digital media to both “know” the world and create knowledge. In this way, digital images are used to demonstrate what has been learned, and they are also used to learn, period. Two different groups at St. Lawrence discuss these issues: the Critical Literacies Group and the Rhetoric and Communication Group. They have somewhat overlapping memberships.

The Critical Literacies Group is comprised of directors of campus programs and is presided over by the Dean of Academic Affairs. This “top-down” group focuses on operational aspects of developing literacy. It is currently working to expand the role of the Writing Center to better address speaking, images, research, and technology. The Rhetoric and Communication Group is a “bottom-up” group made up of faculty. The focus is on pedagogical innovation. They are currently developing an aims and objectives statement for faculty related to critical literacies, as well as providing training and support for faculty. The formation of and communication between these two groups has resulted in much better overall communication between the various units of academic affairs.

A couple of shortcomings have been discovered. One is a reluctance to address the role of ethics in relation to literacy. For example, analyzing rhetoric requires consideration of what has been selected and presented, and what has been marginalized. A second shortcoming is the need for more participation from the sciences. Both groups are working from their own perspectives to address these shortcomings.

Through the work of these two groups, the university has been able to move strategically to begin addressing the complex issues of critical literacies in liberal education.

The second speaker, Flip Phillips, gave a talk entitled “Visual Story Telling, Grammar, Cognitive Aesthetics and ‘Design.'”

Phillips has had experience working with digital images in both art and science environments. Drawing on his experience working as an animator at Pixar, he has brought the concept of story boards to the students’ work in his lab at Skidmore College. Through the story boards, they describe their experiments in pictures.

In animation, story boards have several phases. The preliminary boards give a basic outline in four images. Sequence boards are a series of pictures which make sense along with a human explanation. The goody board contains leftover images not used in the sequence board.

In the science environment, students use a sequence board to prepare for presentations about research. Using a white board or sketching out the images, students create a dynamic space to move around the different boards. The final story reel often makes use of digital images of the original drawings. Using few words, the student is required to make a “pitch” to his or her professor using the story boards and explaining the proposal. Using images communicates information quickly, and helps explain science in a non-textual way. The storyboard approach can be used for non-visual scientific research as a conceptual technique for organizing information.

Seeing items helps with pattern analysis, so the visual aspect of this approach is important. There is nothing available digitally that is quite the same as being able to move around analog images physically, but it can come close. Using tools like i-view media, Keynote, and Aperture, students can work digitally on their story boards the way they can with analog images. The benefit of this storyboard approach is that it helps them focus on their ideas, prepare presentations, and design posters. It also helps them develop their argument and write papers, using the images to drive the structure.

Flip Phillips’ website can be found at http://www.skidmore.edu/~flip/

The third speaker was John Knecht, who spoke about “The Threat of Media Illiteracy.”

Media images surround us in all aspects of our lives. Knecht asked, “How do we know what we know? How do we receive information?” This question extends to our role as citizens and our political understanding. Using a series of visual images, Knecht discussed some of the issues facing us as citizens and scholars.

The issue of media literacy is interdisciplinary, and our media resources—TV, computer, and newspaper—are as legitimate an area of study as any other. We make decisions based on the images we see—but what is the context or content of these images? Students need to know how to take in media images, as they negotiate social relationships and meanings through these images.

At Colgate University, a film and media studies minor was established three years ago. Knecht would like to develop a media analysis class required of all students to create media literacy. This literacy is key to understanding relationships of power.

Knecht showed a 19th-century photograph called “Fading Away” by H. P. Robinson. He shows this image to his classes and asks students to describe what is happening. In reality, however, the photograph was assembled from five different negatives, so there is no real event captured in the image.

With modern images, there is a belief in the objectivity of mechanical images. Knecht used an example of a grainy cell phone photograph from the London subway bombing of 2005. It has the pretense of authenticity because you can see the mechanical components. The same effect might be observed in the digital reports from war correspondents.

Knecht used a variety of other images to describe the way that semiotics—signs—can be “read” in photographs. Signifiers are components of images, and what is signified is what is culturally determined. Using an advertisement of Dan Rather, Knecht pointed out that his casual seated position with his feet up has a culturally determined meaning. The signifier is his feet on the desk; the signified is the impression of a relaxed and honest person. Putting the signifier and the signified together creates a complete sign, one that we use to make judgments.

Related to signs are ideologies, which originate in systems of power in all cultures. It is easier to recognize ideologies in cultures other than our own.

Many systems of analysis can be used to deconstruct images and understand their content and contexts. It is important that students be able to take apart what they are seeing, hearing and reading, and question the source. The interpretation of the media world needs to be part of the national education plan to develop media literacy in students.

Conclusion

This one-day conference offered a way for those of us who work and teach with digital images to share our insights and challenges. It seems clear that digital images are becoming a standard component of curricula, and the ability to interpret and critically analyze these images is becoming a required skill for students and faculty.

A major challenge is finding technology that can meet the requirements of faculty and students. The ideal system is at once sophisticated and dynamic but also intuitive and familiar. Features such as locally determined metadata for individual collections are also desired, but pose problems when multiple individual collections are combined. Institutions are trying to provide support for both institutional and personal collections, but according to David Green’s survey, many faculty members are dissatisfied with the support they get for acquiring and cataloging images. This may be related to the difficulty of providing support for such a rapidly expanding pedagogical tool.

Overall, the conference provided a wealth of ideas about how visual resources can and are being used. There were no clear-cut answers for how to handle the technological or educational issues related to digital images, but many approaches to be considered. This meeting was the beginning of a dialogue about an exciting and evolving educational tool.

Review of “Emerging Trends for Teaching and Learning” A NERCOMP event (10/27/05)

by Gail Matthews-DeNatale, Simmons College


In the field of educational technology, there have always been “emerging trends.” But as I listened to presenters at the “Emerging Trends for Teaching and Learning” gathering last October, I came away with the perception that, at this juncture, the range of possibilities on the horizon is particularly rich. There was a heightened sense of excitement, creativity, and possibility in the room that I continued to feel for days after the event.

Given the range of presentations and the many examples that were provided, it is difficult to write a summary that does the day justice. Instead of a blow-by-blow recap of each session, I’ve decided to highlight some of the main ideas discussed and provide a list of links to technologies that were referenced during each presentation.

Session I: Introduction and Overview (Bryan Alexander)

Session I Links

– General Sources
Bryan’s Website
infocult.typepad.com

– Storytelling
Center for Digital Storytelling
StoryCorps

– IP/Sharing (New Approaches)
Creative Commons
Academic Commons

– Gaming and Alternate Reality
Halo2, Machinima, Bad Wolf
BA’s commentary

– Social Bookmarking
Flickr, Del.icio.us, Furl
BA’s commentary

– Web 2.0
Flock, BA’s commentary

– Other Sites Mentioned
NASA Worldwind
Google
(Scholar, Map, Desktop, etc.)

Highlights

During his 30-minute introduction, Bryan touched on a range of themes, including: vernacular storytelling, strategies for sharing and aggregating content, and the social dimension of emerging technology.

Vernacular Storytelling: As multimedia tools become more affordable and user-friendly, students from all disciplines can become producers as well as consumers of new media. In addition to written papers, students now have a range of options for communicating what they have learned. Digital storytelling also helps students make connections between school-based learning and other experiences outside the classroom. This narrative trend is exemplified by the work of two entities: the Center for Digital Storytelling and StoryCorps.

Sharing and Aggregating Content: As digital video becomes easier to produce, it is important for schools to help students and faculty understand the ethical implications of intellectual property and copyright that are associated with their digital creations. As more people use the web as a space for multimedia publication, many are deciding that they want to share their work in a format that can be used by others. As a result, new resources are becoming available that are specifically designed to broker the sharing of intellectual and creative products, such as the Creative Commons and the Academic Commons.

The Social Dimension of Emerging Technology: As individuals produce and accumulate links to reams of digital resources, there is increasing need for better ways to organize, search, connect, share, and aggregate information meaningfully. While customization is not necessarily a new idea, the newest crop of tools adds a social dimension to the process of adapting technology to suit personal preferences. “Social bookmarking,” a relatively new term, now has 49 entries in Wikipedia. Many are also exploring the role that social networking can play in filtering and sorting information as well as developing communities of learners. Tools that exemplify this trend include: the photograph browser Flickr; Del.icio.us, a social bookmarking tool that allows users to add metadata to links and cross-link with other like-minded bookmarkers; and Flock, a resource that allows “micro content” to be drawn from a range of sources into one browser.

 

Session II: Videogames and Learning (Joel Forman)

Session II Links

– General Sources
Educause: Games & Gaming

-Game Links
World of Warcraft (WOW)
Pocket Kingdom
Lineage
Ultima
Anshechung.com
Secondlife.com
Havok
Eve
  Neverwinter Nights
Open croquet project

Highlights

When Joel Forman looks at online gaming spaces such as MMOGs (Massively Multiplayer Online Games), he sees spaces in which players are learning all the time. As he reviews recent developments in the field, three themes emerge: distributed group intelligence, blurred boundaries between the virtual and the physical worlds of gamers, and emerging tension between corporate and gamer perspectives on the game worlds that are being created.

Games as Intelligent Swarms: Similar to the intelligent, decentralized swarming of migrating birds, MMOGs foster the development of group intelligence, an extended cognitive system that can be likened to a global brain. For example, consider WOW, which has more than 1 million subscribers, or Eve, a gaming world that is home to 60,000 people. Up to 17,000 players have interacted within Eve simultaneously. The average player of Pocket Kingdom, a game played through mobile phones, spends 7.3 hours per month with the game. These hours are, in effect, leisure time spent learning things like collaboration, strategic thinking, planning, problem-solving, etc.

Blurring the Boundaries: As games become increasingly realistic, illusion and reality are becoming indistinguishable. For example, Havok allows players to construct spaces that adhere to the real world’s properties of physics.

Some online games are even developing their own economies, in which players make a “real” living creating virtual assets and selling them to other players. For example, one site (anshechung.com) is dedicated to the development and sale of virtual real estate. Another gamer draws on her advanced programming capabilities to make virtual samurai swords that can be sold to other players. Down the road, will there also be real-world consequences for people who are caught “stealing” or “defacing” online assets? At least one instance in Japan resulted in the vandal being held legally accountable.

Who Owns the Game?: Corporations that have gotten into the business of online gaming have sometimes found themselves at odds with the players they originally wished to court. Multiplayer online games involve players in the co-creation of increasingly rich and complex worlds. Given the considerable investment that players make in developing these spaces, it is not surprising that they feel a genuine sense of ownership. What is the relationship between virtual property rights and real property rights? The license agreements for some online games state that property created during the game belongs to the corporation that created the game. Those who have tried to enforce these agreements have experienced revolt.

Final Questions: Online games hold promise as tools for learning because they engage players in deep and active participation. Some of the most intriguing questions are still open for consideration. How can these gaming worlds be adapted for learning purposes? What can we learn from online games about factors that contribute to learner engagement? As early text-based games give way to virtual worlds that are image-based and visually rich, how will this affect the preferences and learning styles of students who are gamers? Finally, are we willing to allocate sufficient funding for research and development so that in the future we can offer our students “Massively Multiplayer Online Learning Environments”?

 

Session III: Mobile Learning (Bryan Alexander)

Session III Links

– Examples
Mobile Bristol
NYC2123
Trans-Siberian Radio
(see also the report)
Art Mobs
Uncle Roy All Around You
34 North 118 West

– For Further Reading
Smart Mobs
NITLE.org

Highlights

Culture and Pedagogy: When it comes to mobile technology, the United States is arguably out of step with the rest of the world, and this discrepancy has an adverse effect on our use of mobile technology for teaching and learning.

As a semantic case-in-point, Alexander noted that we (the U.S.) are the only ones who use the term “cell phone” instead of “mobile phone.” If you want to study the innovative things that are going on with mobile technology, you inevitably have to look outside the United States. For example, Britain has poetry contests in SMS (Short Message Service, a technology that allows text messages up to 160 characters in length to be sent over the phone).

Mobile devices appear to be pulling us in opposite directions: cell phones expand our abilities to connect, while iPods are used to renegotiate privacy in public spaces. Another complicating factor is that, at least in the U.S., cultural norms for mobile devices are still a work in progress, as witnessed by “dear cell phone user” cards that can be distributed to nearby people who are talking on their phones too loudly. In Japan, people use SMS to communicate during mass transit and in other settings where a verbal conversation would be annoying to those nearby.

Some of the conventions for the use of mobile technology will turn out to be passing fads, such as “flash mobs” (using text messages to coordinate the behavior of groups). This induced human swarming was seemingly ubiquitous, then suddenly faded once the novelty wore off. Other uses will become integrated into our everyday lives, but it may be too early to tell which uses will persist.

Surveillance and Memory: The size and portability of mobile technology raises concerns as well as possibilities. Because cell phones now come with built-in cameras, we are becoming a culture of surveillance. To protect patron privacy, phones are often banned at gyms and pools. Yet these same phones make it possible for everyone to be a documentarian. On a moment’s notice, the average person can create a visual, aural, or written record of a child’s first steps, a front row concert view, the Pope’s funeral, a subway disaster, suspected police brutality, or even an ill-fated September 11 flight.

Microcontent: Small devices challenge us to package content in smaller and shorter segments. This trend has been dubbed “microcontent.” For example, NYC2123 is a graphic novel produced with mobile devices in mind. These creations may be grassroots, intended to oppose, or provide alternatives to, the messages of mass media. For example, Art Mobs asks the question, “Should museums and galleries have exclusive control over making audio tours of their exhibits?” Their most recent project involves creating alternative audio tours for the Museum of Modern Art, tours that counter those produced by the museum. These ideas could be adapted for pedagogical purposes: increasing students’ critical engagement, fostering media literacy, enhancing dialogue/participation, etc.

Combining Technologies: Increasingly, mobile devices are used in conjunction with other technologies (for example, web + mobile, portable gaming devices + mobile). For example, in the game “Uncle Roy All Around You” online players search for Uncle Roy alongside on-the-street players with mobile devices. This “augmented reality” makes it possible to add digital data to physical places. In the same way that information is “tagged” within web pages, physical places can be tagged and correlated with online content.

Mobile technology has great potential for use in research, ethnography, and field-based learning experiences such as semesters abroad. It can also increase student and faculty opportunities for connectedness. For example, it could be used to broadcast an invitation for others on campus to join a pickup game of volleyball. Yet very real challenges need to be addressed for this technology to achieve its full potential in educational settings, including: technical support, market instability, device content limitations (small screen size), digital divide and accessibility issues, privacy and intellectual property concerns, and faculty resistance.

Session IV: iPods and Podcasting (Bryan Alexander, Alex Chapin, Shel Sax)

Session IV Links

Alex Chapin’s iPod blog
Podcast of Chapin’s SIG talk
Berkeley Groks Science Radio
The Internet Archive
Feedburner
Odeo
Podcasting Demo Server
(includes tutorials)
IT Conversations (podcasts
on Information Technology)

Highlights

Ease of Use: A variety of resources are now available that make it relatively easy to set up podcasts and RSS feeds. For example, Feedburner walks you through the process, as does the Podcasting Demo Server.

Aggregating Content: In addition to tools like iTunes, sites like Odeo provide directories of podcasts. Odeo Studio also can be used to produce recordings over the web — the studio serves as a browser plugin.

Finding What You Need, Knowing Where You’ve Been: One of the challenges of audio is accessing the exact segment that you want to listen to. For this reason, digital audio has enlivened metadata. Metadata, when used in conjunction with other technologies such as XML, provides a level of granularity that makes it possible for listeners to jump to a specific portion of a podcast. It also makes it possible to browse and search the increasingly large collections of audio available on the web.

With iPods/iTunes you can also keep track of your listening history — you can know when you last accessed a file, where you left off, and you can even rate a file. These capabilities will make it possible for students to create “smart” audio study lists, rate files by difficulty and sort to perform a self-assessment, etc.
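
As a rough illustration of how such metadata can be read programmatically (the feed URL below is a placeholder, and feedparser is a third-party Python library, not something mentioned by the presenters), this sketch lists a podcast feed’s episodes and their audio enclosures:

```python
import feedparser  # third-party: pip install feedparser

# Placeholder feed URL; any podcast RSS feed would work here.
FEED_URL = "https://podcasts.example.edu/course/feed.xml"

feed = feedparser.parse(FEED_URL)
print("Feed:", feed.feed.get("title", "(untitled)"))

for entry in feed.entries:
    # Each entry's enclosures list points at the actual audio files.
    for enclosure in entry.get("enclosures", []):
        print(entry.get("title", "(untitled)"), "->", enclosure.get("href"))
```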

Middlebury’s iPod Case Study: Shel Sax described a recent iPod initiative at Middlebury as a “strategic failure.” The project was intended to demonstrate innovative use of iPods for language learning. Over the summer of 2005, iPods loaded with language lab files were made available to students; they were distributed from the library circulation desk and could be checked out just like a book. The team anticipated that students would use the iPods in many creative ways. In fact, iPod use was minimal (approximately 1.5 hours per week per student) and focused more on convenience than on innovation (the iPods were used primarily as a “glorified discman”). Fortunately, assessment was a built-in component of the project, so they have a good idea of factors that affected the project’s outcome. The following issues were identified as problems they plan to address in future projects:

  • Content Ownership: It was unclear if the rights they had to language lab content extended to the use of these files for mobile devices.
  • Technical Problems: They encountered problems with physical handling, iPods freezing up, and with peripherals such as microphones.
  • Insufficient Time for Testing: This project was an unanticipated opportunity, a windfall, and the short development timeline did not allow for adequate testing and technical problem-solving.
  • Insufficient Training and Documentation: Again, the short timeline did not allow for the development of documentation that would have helped users solve routine problems. In addition, language students enrolled in the summer intensive program take a pledge to only communicate in the language they are studying — and it was difficult to fit in sufficient training before the pledge took effect.
  • Ease of Access: Students had to go to the library to check out the iPods — as Shel noted, “don’t underestimate the factor of convenience.”

Overall, many of the problems could be directly attributed to the locus of energy for the project. As Shel said, “it was a technology-driven project.” For the fall, they changed their approach. They offered 1-3 training sessions for students taking courses that involve iPods. They also solicited proposals from faculty, focusing their work on faculty who wanted to use the iPods and who had innovative ideas for how this technology could be used in their classes. This second round of projects is much more innovative and pedagogically substantive:

  • Biology: developing a podcast web site of bird songs
  • Museum Studies: developing a podcast audio tour
  • Teacher Education: developing audio portfolios

Session V: Fast, Cheap, and Out of Control — Social Software in the Academy (Brian Lamb)

Session V Links

Brian’s Presentation Link
Brian’s Blog, Abject Learning
Clay Shirky’s Writings About   the Internet
Weblogs @ UBC
Denise’s Blog
Peru 2006
Michelle Chua
UBC’s Blogfolio Guide
MovableType
Drupal
Edublogs (provides free blogs
for education professionals)
Textologies
NetNewsWire (RSS Reader)
Bloglines
AggRSSive

Highlights

Brian had enough content in mind for as many as four presentations, so he began his session with a “group hum” exercise to assess our areas of interest. He introduced several ideas for directions he could take the talk, then asked us to hum after each idea if we were interested in that particular direction. We “decided” to have him provide an overview of social software.

Social Software Defined: According to Lamb, social software is:

  • free (or cheap)
  • easy to use (a form of mass amateurization)
  • serves the needs of small groups and individuals, but also allows for new forms of interaction and aggregated presentation that can be remarkably rich (small pieces, loosely joined)
  • being introduced into educational practice and is gaining popularity rapidly (to varying degrees)

In the words of Clay Shirky, social software is “stuff that gets spammed.” Merriam-Webster, which named “blog” its 2004 Word of the Year, defines a blog as

A website that contains an online personal journal with reflections, comments, and often hyperlinks provided by the writer

Lamb’s original charge at UBC was to advocate the development and use of learning objects (LOs). He quickly realized that LOs are a “dog that doesn’t hunt” — because LOs comprise a “singular world,” the adoption rate is minimal.

In contrast, Lamb set up a weblog service for the University and, within a relatively short time, UBC was hosting 700 weblogs for 1500 people. For example, Denise Hubert uses her blog to coordinate the work of writing TAs, post assignments, and provide writing tips. Political Science faculty member Maxwell Cameron developed a class blog to document and discuss the 2006 Peruvian election. Michelle Chua used MovableType for ePortfolio development (blogfolio), then aggregated RSS feeds from individual students to create a network of blogfolios on one site. However, Lamb stressed that you can expect administrative and technical friction when you consider offering weblogs, because they are perceived as increasing risk and decreasing control.

Despite resounding popularity and success, some concerns were voiced — but the benefits outweighed the risks and each concern had a work-around. For example, some were anxious about “forcing” students to write publicly, but it was pointed out that students can assume a pseudonym. Others were worried about spamming, but that problem can be addressed by setting comments to “moderated.” While it’s understandable that administrators would be anxious about inappropriate posts, Lamb noted that, with over 4,500 pages of writing in UBC’s weblogs, they haven’t been made aware of a single objectionable post.

For those who are worried about information overload, the solution may be to change how you think about online learning. Instead of viewing blogs and other forms of online learning as “texts” (collections of objects), think of them as flow (something that you follow and/or dip into). Social software is changing how we write and read — it’s a new kind of narrative, a living text, that’s developing over the web. For more information about digital writing, see UBC’s Textologies site.

Lamb offered the following parting words of advice: invest in an RSS reader or aggregator such as NetNewsWire (for Macs), Bloglines, or AggRSSive. There is a low signal-to-noise ratio in blogs, yet with NetNewsWire he is able to scan 200 sites a day and glean the half-dozen nuggets of useful information.
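
In that spirit, the sketch below shows what an aggregator does at its simplest: it pulls entries from several feeds and merges them into one reverse-chronological reading list. The feed URLs are placeholders and feedparser is a third-party Python library; this is an illustration, not the actual workflow Lamb described.

```python
import time
import feedparser  # third-party: pip install feedparser

# Placeholder weblog feeds; in practice these might be student blogfolios.
FEEDS = [
    "https://blogs.example.edu/student-a/rss.xml",
    "https://blogs.example.edu/student-b/rss.xml",
]

items = []
for url in FEEDS:
    feed = feedparser.parse(url)
    for entry in feed.entries:
        # published_parsed is a time.struct_time when the feed supplies a date.
        stamp = entry.get("published_parsed") or time.gmtime(0)
        items.append((stamp, entry.get("title", "(untitled)"), entry.get("link", "")))

# Newest posts first, across all of the feeds.
for stamp, title, link in sorted(items, reverse=True)[:20]:
    print(time.strftime("%Y-%m-%d", stamp), title, link)
```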

Session VI: Scientific Visualization Software (Dave Guertin)

Session VI Links

Science Visualization Lab
Sample Visualizations
Maya
Lightwave
3ds Max

Why use visualizations?

  • To help students form questions in their minds, to encourage them to create their own questions (as opposed to knowing the answers to questions they never asked).
  • To help students make connections.

Many difficult subjects don’t lend themselves to traditional representation. In addition, many disciplines involve multiple levels of understanding. For example, in chemistry there is the observable level (which can be attained during a reaction in a lab experiment), the molecular level (which can be attained through a visualization), and the symbolic level (which can be attained through the formulation of an equation). Examples of visualizations include: illustrations, models, video, 2D animations (Flash, Java), and 3D animations (Maya, Lightwave, 3ds Max).

At Middlebury, over the past four summers they have taught students how to use visualization software, then paired them with faculty members to develop visualizations for specific courses. Students in this program usually are fine arts, math, or computer science majors. Animations are developed over the summer, with faculty and students working in close collaboration. Students work long hours (often well into the night) solving challenging representational problems. To date, seven students have developed about two dozen 30-60 second animations for faculty in five departments. As opposed to stand-alone learning objects, these animations are learning assets designed for use in conjunction with teacher explanations.

Cost / Value: The program is not inexpensive. Each student requires a high-end work station ($3,000-$6,000), software (Lightwave = $125/seat, Maya = $375/seat), and their hourly wages total $1,000-$2,000 per animation. Students usually need about a month of time using the software before they are prepared to create a high-quality animation. However, despite the costs, informal program assessment indicates that faculty are happy, the animations are useful, and students have benefited from the experience.

About the Event, Sponsor, and the Presenters

“Emerging Trends for Teaching and Learning” was a day-long SIG event sponsored by NERCOMP, the Northeast Regional Computing Program. The SIG took place in Bolton, Massachusetts on October 27th. Presenters included:

  • Bryan Alexander (SIG Organizer)
    Director for Research, National Institute for Technology and Liberal Education (NITLE)
  • Shel Sax
    Director, Educational Technology Services, Middlebury College (SIG Organizer)
  • Alex Chapin
    Educational Technologist, Middlebury College
  • Joel Forman
    Associate Professor, English Department, George Mason University
  • Dave Guertin
    Educational Technology Specialist, Middlebury College
  • Brian Lamb
    Learning Objects Coordinator, University of British Columbia

For Further Reading About Emerging Technologies in Higher Education

Educause

In particular, the Learning Technologies Initiative page, the Emerging Practices and Learning Technologies page, the 7 Things You Should Know About page, and “Tomorrowland: When New Technologies Get Newer,” an article in the November/December 2006 edition of the Educause Review.

New Media Consortium

Emerging Technologies Initiative, including the 2006 annual Horizon Report.

Using Student Podcasts in Literature Classes

by Liz Evans, Swarthmore College

Details

Instructor Name: Peter Schmidt
Course Title: U.S. Fiction, 1945 – Present
Institution: Swarthmore College

What is the overall aim of the course?:
The course is a survey of important novels published by U.S. authors since World War II. Shared themes include war, peace, complex personal and family histories, U.S. state power, border-crossings, and the use of fiction to narrate crises in individual and national identities. Students learn to vary their interpretive techniques so as to appreciate tragedy vs. comedy, satire, and farce. This is one of a number of survey courses offered by Swarthmore’s English Department designed to introduce students to a wide range of authors, historical contexts, and interpretive techniques.
Course design and scope of the project:

Forty-six students enrolled for thirteen weeks in Spring 2006. We met twice weekly for an hour and fifteen minutes per class, with a mix of lecture and large- and small-group discussion. Aside from reading the novels, students also had to do secondary assignments, including making and listening to podcasts and reading and evaluating discussion summaries.

This podcast project tied in very well to a literature course because, in addition to teaching students about particular works of fiction, it models a key lesson: when students quote and expand on each other’s words, they learn that thinking about cultural works is a collaborative process that happens in dialogue, not only in isolation. Cultural objects (including novels) are not static; they circulate, they are events. We may receive them privately, as when we read or work on a computer, but the process is not complete until we take the next step, which is to re-connect with others. We get ideas about interpretation from others, improve them (we hope) on our own, then place these ideas back into the cultural stream.

Incorporation of Technology:

Each podcast assignment consisted of a “podcast pair” (two podcasts); students made a five-minute reading of a passage from a novel, coupled with a five-minute discussion of that passage: why the student chose it, what details were most important, what themes and issues the passage raised, and how the passage related to the rest of the novel. These podcasts were posted on a server and all students in the class were required to listen to selected podcasts on what they were reading before coming to class discussions.

The students received two sets of instructions for making podcasts. One, written by the professor, stressed what kind of content was expected. The other, written by Liz Evans of Swarthmore’s Information Technology Services in collaboration with the professor, gave step-by-step technical instructions for recording, posting, and subscribing to podcasts.

In order to prepare their MP3 recordings, students were given instructions for basic installation and use of recording software (Audacity or GarageBand) on Windows or Macintosh computers. In most cases, students used their own computers and devices, although additional equipment and assistance was available through Information Technology Services. After recording, students were provided guidance on posting their MP3 files on a weblog page hosted on the college’s OS X web server. This page, in turn, created the URL for subscription to the podcast in iTunes or other players. Students were required to subscribe and listen to the recordings each week on their computers or portable media players.
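
For readers curious about what sits underneath that subscription URL, here is a minimal sketch of a podcast feed. This is not Swarthmore’s actual setup; the URLs, file details, and dates are invented. An RSS 2.0 feed whose items carry MP3 enclosures is all that iTunes or another podcast client needs in order to subscribe and fetch the recordings.

```python
# Minimal sketch of a podcast feed; the URLs and file details are invented.
FEED_PATH = "feed.xml"

rss = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>U.S. Fiction Student Podcasts (illustrative)</title>
    <link>https://example.edu/courses/usfiction/</link>
    <description>Student reading-and-commentary podcast pairs.</description>
    <item>
      <title>Week 5: passage reading and discussion</title>
      <enclosure url="https://example.edu/courses/usfiction/week5.mp3"
                 length="4800000" type="audio/mpeg"/>
      <pubDate>Mon, 06 Mar 2006 12:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>
"""

with open(FEED_PATH, "w", encoding="utf-8") as f:
    f.write(rss)
```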

A key to the project’s success with a relatively large class was placing the technology in the hands of the students themselves. Though the comfort level of individual students varied, providing good documentation from the outset helped most students handle the recording work independently, and only a few technical problems with audio quality or file formats arose.

Lessons Learned:

Podcasts are a superb new technology that can be used in any situation where instructors want students to read and perform written material and then discuss it. Beyond literature or theater classes, they can also work well in foreign language courses to help students improve their speaking and listening skills. Requiring students to post the material before class meant that the performances, passages, and student materials could be one (not the only) focus of the in-class discussions, which greatly enriched the quality of the conversation. Students found that the readings brought the passages and the novels to life—and that when they heard passages aloud, they noticed many more things than when they just read an assignment before class. In addition, students could respond to the interpretations that the podcasts offered—adding their own collaborative insights, arguing with the interpretation, and so on. With literature, this new technology encourages close reading, thoughtful interpretation, and student involvement. Also, students love performing works of literature (even excerpts) aloud—it greatly adds to the fun of the class. Students took the assignments very seriously and in general did very high quality work with them.

Student-made podcasts could work well for many other kinds of courses (from history to foreign languages to any of the social sciences) where a premium is placed on texts and careful interpretation.

The one thing I would do differently next time is cut back on the number of podcasts required for each class. For some assignments there were 3–4 podcast pairs, given the popularity of an author and the large number of students in the class. Since the reading assignments were long, most students did not have time to complete both the reading assignments and all the podcast assignments. I made a mid-course correction and had the students listen to just one pair of podcasts of their own choosing before most classes, and then we discussed these in class; this worked well. Professors incorporating podcast assignments into their syllabi need to be sure that they are well integrated and well balanced with the other assignments.

Aside from giving the students clear instructions about the goals and methods of making podcasts for the course, I recommend that some class time be devoted to discussing the podcasts when they are assigned. Otherwise, the students who make the podcasts for that day won’t get enough feedback from their fellow students, and there will be too big a break between the outside-class and in-class work. Most often I found that beginning with a discussion of the podcasts was a superb way to open a larger discussion of the themes, ideas, and interpretive issues of the assigned material for that day. Podcasts also complemented the lectures I gave; I often found myself referring to assigned podcasts as part of my lecture on the novelist we were studying.

References, links:

Course Podcast Webpage:

http://acad.swarthmore.edu/weblog/e52b/

Podcast Creation Guide for Students:

http://acad.swarthmore.edu/podcast/

Measured Results:

The student podcasts did not replace traditional writing assignments, such as exams and papers; they were a very successful supplement to them. I gave students written feedback and grades on their podcasts, evaluating both their dramatic readings and the subsequent interpretation they gave of the material.

We discussed in class what the students thought of the podcast assignments, from how clear the instructions were to how they evaluated the results. Many students also shared their opinions after class, via email, and in the written course evaluations. The vast majority (40+) of my 46 students loved the assignments and put lots of effort and thought into them. They understood right away the learning possibilities in this new medium. They also much preferred making their own podcasts on material relevant to the course over listening to long podcast lectures by the professor.

Additional measured results: Students were required to evaluate podcast content as part of some of their writing assignments, especially the in-class exams. (I had told the students that the podcast analysis would be required on the exam, so they had to listen to some as part of their exam preparation. The exam was open-book and open-notes for this reason.) In this way, I could judge how well the students were paying attention to the podcasts and using them to supplement their own ideas. Here are two examples:

The first exam essay excerpt below is on Barbara Kingsolver’s novel The Poisonwood Bible. The student transcribed a quotation she liked from a podcast she had listened to ahead of time, then embedded it within her own discussion:

“In his podcast, Dan discusses how [the character] Nathan, ‘by never forgiving himself[,] … effectively prevents God from forgiving him as well, as Christianity requires both penance and acceptance of one’s own flaws, a concept that seems entirely alien to Nathan’s perception of the world…’ Because Nathan retains the characteristic of being ‘more sure of himself than I’d thought it possible for a young man to be,’ as [the character] Orleanna describes him, he can never be forgiven…”

The quotation from the student named Dan shows, first of all, the high level that some of the podcasts achieved in discussing their chosen passages. This excerpt also hints at how the student taking the exam then proceeded to develop Dan’s idea that the two major adult characters seek forgiveness but create only a cycle of self-punishment. She made the idea her own, adding her own nuances, examples, and new directions. But clearly the “germ” for her idea was inspired by the podcast.

Another example, from a student writing on Thomas Pynchon’s Vineland, shows how podcast content can be “quoted” just as any text-based source would be:

“Similar to the blurring of identity found with the Thanatoid characters, there also exists in Vineland a blur between the present and the past. Throughout the novel, the reader is constantly confronted with flashbacks, many times caught unaware where the flashback ends and the present story begins. As Micah states in his podcast, the blurring of the present and past is done to such an extent so that ‘present and past are inseparable.’ The blurring is heightened by Pynchon’s use of media, notably TV and film….” The student then went on to discuss examples.

Podcasts are a great new way to communicate. In some ways, though, educators have been slow to explore the possibilities for back-and-forth interaction that the web allows, interchanges that can occur outside the classroom as well as within it. But anything we can do to heighten the intensity and intelligence that students bring to the classroom conversation is a good thing. Podcasts present fascinating new possibilities for doing just that.
