Cyberinfrastructure and the Sciences at Liberal Arts Colleges

by Francis Starr, Wesleyan University

Introduction
The technical nature of scientific research led to the establishment of early computing infrastructure, and today the sciences are still pushing the envelope with new developments in cyberinfrastructure. Education in the sciences poses different challenges, as faculty must develop new curricula that incorporate cyberinfrastructure resources and educate students in their use. To be integral to both science research and education, cyberinfrastructure at liberal arts institutions needs to provide a combination of computing and human resources. Computing resources are a necessary first element, but without the organizational infrastructure to support and educate faculty and students alike, computing facilities will have only a limited impact. A complete local cyberinfrastructure picture, even at a small college, is quite large and includes resources like email, library databases and on-line information sources, to name just a few. Rather than trying to cover such a broad range, this article will focus on the specific hardware and human resources that are key to a successful cyberinfrastructure in the sciences at liberal arts institutions. I will also touch on how groups of institutions might pool resources, since the demands posed by the complete set of hardware and technical staff may be larger than a single institution can manage alone. I should point out that many of these features are applicable to both large and small universities, but I will emphasize those elements that are of particular relevance to liberal arts institutions. Most of this discussion is based on experiences at Wesleyan University over the past several years, as well as plans for the future of our current facilities.

A brief history of computing infrastructure
Computing needs in the sciences have changed dramatically over the years. When computers first became an integral element of scientific research, the hardware needed was physically very large and very expensive. This was the “mainframe” computer and, because of the cost and size, these machines were generally maintained as a central resource. Additionally, since this was a relatively new and technically demanding resource, it was used primarily for research rather than education activities.

The desktop PC revolution started with the IBM AT in 1984 and led to the presence of a computer on nearly every desk by the mid-1990s. The ubiquity of desktop computing brought tremendous change to both the infrastructure and the uses of computational resources. The affordability and relative power of new desktops made mainframe-style computing largely obsolete. A computer on every desktop turned users into amateur computer administrators. The wide availability of PCs also meant that students grew up with computers and felt comfortable using them as part of their education. As a result, college courses on programming and scientific computing, as well as general use of computers in the classroom, became far more common.

Eventually, commodity computer hardware became so cheap that scientists could afford to buy many computers to expand their research. Better yet, they found ways to link computers together to form inexpensive supercomputers, called clusters or “Beowulf” clusters, built from cheap, off-the-shelf components. Quickly, the size of these do-it-yourself clusters grew very large, and companies naturally saw an opportunity to manufacture and sell them ready-made. People no longer needed detailed technical knowledge of how to assemble these large facilities; they could simply buy them.

This widespread availability of cluster resources has brought cyberinfrastructure needs full circle. The increasing size, cooling needs, and complexity of maintaining a large computing cluster have meant that faculty now look to information technology (IT) services to house and maintain cluster facilities. Maintaining a single large cluster for university-wide usage is more cost effective than maintaining several smaller clusters and reduces administrative overhead. Ironically, we seem to have returned to something resembling the mainframe model. At the same time, the more recently developed desktop support remains critical. As technology continues to progress, we will doubtless shift paradigms again, but the central cluster appears likely to remain the dominant approach for at least the next five years.

Hardware resources
The cluster is the central piece of hardware–but what makes up the cluster? How large a cluster is needed? Before we can address the question of size, we should outline the key elements. This becomes somewhat technical, so some readers may wish to skip the next five paragraphs.

First, there is the raw computing power of the processors to consider. This part of the story has become more confusing with the recent advent of multiple core processors. In short, a single processor may have 2, 4 or, soon, 8 processing cores, each of which is effectively an independent processor. This does not necessarily mean it can do a task faster, but it can perform multiple tasks simultaneously. Today, I think of the core as the fundamental unit to count, since a single processor may have several cores, and a single “node” (physically, one computer) may have several processors. For example, at Wesleyan, we recently installed a 36-node cluster, each node having 2 processors and each processor having 4 cores. So while a 36-node cluster may not sound like much, it has packed into it 288 computing cores.
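For readers who want to see the counting spelled out, the following minimal sketch (Python, used purely as a calculator) multiplies out the node/processor/core hierarchy for the Wesleyan example just described; the numbers come directly from that example.

```python
# Core-counting arithmetic for the cluster described above:
# 36 nodes, each with 2 processors, each processor with 4 cores.
nodes = 36
processors_per_node = 2
cores_per_processor = 4

total_cores = nodes * processors_per_node * cores_per_processor
print(f"{nodes} nodes -> {total_cores} computing cores")  # 36 nodes -> 288 computing cores
```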

This high density of computing cores has several advantages: it decreases the footprint of the cluster; decreases cooling needs; and decreases the number of required connections. For the moment, let’s focus on connectivity. The speed of connections between computers is glacial in comparison to the speed of the processors. For example, a 2-GHz processor does one operation every 0.5 nanoseconds. To get an idea of how short a time this is, consider that light travels just about 6 inches in this time. The typical latency–the time lost to initiate a transmission–of a wired ethernet connection is in the range of 0.1-1 milliseconds, which corresponds to hundreds of thousands, or even millions, of processor clock cycles. Hence, if a processor is forced to wait for information coming over a network, it may spend a tremendous number of cycles twiddling its thumbs, just due to latency. Add the time for the message to transmit, and the problem becomes even worse. Multiple cores may help limit the number of nodes, and therefore reduce the number of connections, but the connectivity problem is still unavoidable. So what to do?
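To make the mismatch concrete, here is a back-of-the-envelope sketch (in Python, used only as a calculator) that converts the quoted latency range into idle processor cycles; it assumes nothing beyond the 2-GHz clock and the 0.1-1 millisecond latency figures given above.

```python
# Convert network latency into "wasted" processor clock cycles.
clock_rate_hz = 2e9                  # 2 GHz processor
cycle_time_s = 1.0 / clock_rate_hz   # 0.5 nanoseconds per clock cycle

for latency_ms in (0.1, 1.0):        # typical wired ethernet latency range
    latency_s = latency_ms / 1000.0
    idle_cycles = latency_s / cycle_time_s
    print(f"{latency_ms} ms latency -> about {idle_cycles:,.0f} idle cycles")

# 0.1 ms latency -> about 200,000 idle cycles
# 1.0 ms latency -> about 2,000,000 idle cycles
```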

The answer depends on the intended usage of the cluster. In many cases, users want to run many independent, single-process (serial) tasks. In this case, communication between the various pieces is relatively unimportant, since the vast majority of the activity is independent. Ordinary gigabit ethernet should suffice in this situation and is quite cheap. If the usage is expected to include parallel applications, where many cores work together to solve a single problem faster, it may be necessary to consider more expensive solutions. However, given that it is easy to purchase nodes containing 8 cores in a single box, these expensive and often proprietary solutions are only needed for rather large parallel applications, of which there are relatively few.

All this processing power is useless, however, without a place to store the information. Storage is most commonly provided by hard disks that are bundled together in some fashion but, for the sake of simplicity, appear to the end user as a single large disk. These bundles of disks can easily reach capacities of tens to hundreds of terabytes, a terabyte being 1000 gigabytes. The ability to store such large amounts of information is particularly important with the emergence in the last decade of informatics technologies, which rely on data-mining of very large data sets.

The last, and sometimes greatest, challenge is housing and cooling the cluster. Even with the high density of computing cores, these machines can be large and require substantial cooling. A dedicated machine room with supplemental air conditioning is needed, typically maintained by an IT services organization. Fortunately, most IT organizations already have such a facility, and with the decreasing size of administrative university servers, it is likely that space can be found without major building modifications. However, do not be surprised if additional power or extra cooling capacity is needed. The involvement of the IT organization is critical to the success of the infrastructure. Accordingly, it is important that IT services and technically inclined faculty cultivate a good working relationship in order to communicate effectively about research and education needs.

OK, but how big?
Given these general physical specifications for the key piece of hardware, the question remains: how big a cluster? Obviously the answer depends on the institution, but I estimate 3 or 4 processing cores for each science faculty member. An alternate and perhaps more accurate way to estimate is to consider how many faculty members are already heavy computational users who support their own facilities. I would budget about 50 cores for each such faculty member, though it is wise to estimate local usage more carefully. Part of the beauty of a shared facility is that computing time that would otherwise sit idle on an individual faculty member’s equipment can be used by the rest of the community, reducing the total cluster size needed to meet peak demand.
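To see how these rules of thumb translate into a target size, the short sketch below works through both estimates. The faculty head counts in it are hypothetical placeholders, not Wesleyan figures; in practice one would reconcile the two numbers against measured local usage.

```python
# Rough cluster sizing from the two rules of thumb in the text.
# The head counts below are hypothetical; substitute your own institution's numbers.
science_faculty = 60   # hypothetical: total science faculty
heavy_users = 8        # hypothetical: faculty already running their own facilities

cores_by_faculty_count = science_faculty * 3.5   # 3-4 cores per science faculty member
cores_by_heavy_users = heavy_users * 50          # ~50 cores per heavy computational user

print(f"Estimate from total faculty: ~{cores_by_faculty_count:.0f} cores")
print(f"Estimate from heavy users:   ~{cores_by_heavy_users:.0f} cores")
```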

Software needs tend to be specialized according to the intended uses, but it is important to budget funds for various software needs, such as compilers and special purpose applications. The Linux operating system is commonly used on these clusters and helps to keep down software costs since it is an open source system. For many scientific computing users, Linux is also the preferred environment regardless of cost.

The cluster itself is of limited use without the human resources–that is, the technical staff–to back it up. At a minimum, a dedicated systems administrator is needed to ensure the smooth operation of the facility. Ideally, the administrator can also serve as a technical contact for researchers to assist in the optimal use of the cluster facility. However, to make the facility widely accessible and reap the full benefit for the larger university community, a more substantial technical support staff is needed.

The human element: resource accessibility
The presence of a substantial cluster is an excellent first step, but without additional outreach, the facility is unlikely to benefit anyone other than the expert users who were previously using their own local resources. Outreach is key and can take a number of forms.

First, faculty who are expert in the use of these computer facilities need to spearhead courses that introduce students to the use and benefits of a large cluster. This will help build a pool of competent users who can spread their knowledge beyond the scope of the course. This effort requires little extra initiative and is common at both liberal arts and larger universities.

Second, it is particularly important in a liberal arts environment to develop and sustain a broad effort to help non-expert faculty take advantage of this resource for both research and educational purposes. Otherwise, the use of these computers will likely remain limited to the existing expert faculty and the students whom they train.

Outreach across the sciences can also take the form of a cross-disciplinary organization. At Wesleyan, we established a Scientific Computing and Informatics Center, with the goal of both facilitating the use of high-performance computing and supporting course initiatives that use computational resources. The center is directed by a dedicated coordinator, who is not burdened with the technical duties of the systems administrator, and is assisted by trained student tutors.

The first goal of the center, facilitating cluster use, is primarily research-oriented. That is, the center serves as a resource where faculty and students can seek assistance or advice on a range of issues–from simple tasks like accessing the resources to harder problems like optimizing or debugging complex codes. In addition, the center offers regular tutorials on the more common issues, making broader contact across the institution.

The second goal–educational outreach–is particularly important for liberal arts institutions. Educational outreach deals with all aspects of computational activities in the curriculum, not just cluster-based activities. For example, if a faculty member wishes to make use of computational software, the center staff will offer training to the students in the course, thereby leaving class time to focus on content. The center staff will also be available for follow-up assistance as the need arises. This eliminates the problem of trying to squeeze training on computational tools into existing courses.

But efforts should not stop at this level. While we are still in the early stages of our experiment at Wesleyan, I believe that such a support organization will not have a significant impact if it simply exists as a passive resource. The center must actively seek out resistant faculty and demonstrate through both group discussions and one-on-one interactions how computational resources can enhance their teaching activities.

To maintain the long-term vitality of this kind of center, it is important to maintain a group of trained and motivated student tutors. To do this, we have chosen to offer students summer fellowships to work on computationally demanding research projects with faculty. Some of these students then serve as tutors during the academic year. Combined with this summer program are regular lecture and tutorial activities. These tutorials may also be expanded to reach beyond the bounds of the university to other institutions as workshop activities.

Cross-institutional collaboration
Sometimes, all of these goals can be met by a single institution. But even if this is possible, there are still benefits to looking outside the institution. And for smaller institutions, pooling resources may be the only way to develop an effective cyberinfrastructure.

While high-speed networks now make it technically possible to establish inter-institutional efforts across the country, it is important to gather a critical mass of core users who can easily interact with each other. In my own experience, this happens more easily when the users are relatively nearby, say no more than 100 miles apart. Proximity means that institutions can share not only hardware resources over the network, but also technical support staff. Of course, day-to-day activity is limited to interaction within an institution or virtual communication between institutions, but frequent and regular person-to-person interaction can be established at modest distances.

Balancing individual institutional priorities in such a collaboration is obviously a delicate process, but I envision that the institution with the most developed IT services can house and maintain the primary shared hardware resource, thereby reducing the administrative needs across several institutions. Adequate access to facilities can be guaranteed by taking advantage of the fact that most states maintain high-speed networks dedicated for educational usage. In addition, there are many connections between these state networks, such as the New England Regional Network. Personal interactions can be facilitated by regular user group meetings where users can share their questions and concerns with an audience that extends beyond their institution. In addition, new electronic sharing tools, such as wikis and blogs, can help foster more direct virtual communications.

Summary
To have a successful cyberinfrastructure in the sciences, it is essential to develop both hardware and human resources. Personal support and outreach to faculty and students is crucial if the benefits of the infrastructure are to serve a wider clientele. For liberal arts institutions, the presence of state-of-the-art infrastructure helps them to compete with larger institutions, both in terms of research and in attracting students interested in technology. At the same time, emphasizing outreach is of special importance to achieve the educational goals that make liberal arts institutions attractive to students.

Acknowledgments
I wish to thank Ganesan Ravishanker (Associate Vice President for Information Technology at Wesleyan University) and David Green for their assistance preparing this article.

You.Niversity? A Review of Reconstruction’s Special Issue: “Theories/Practices of Blogging”

by Kevin Wiliarty, Wesleyan University


At a recent workshop on academic applications of Web 2.0 technologies, NITLE’s Bryan Alexander acknowledged that one of the challenges for the converted is to help their peers get past the often playfully silly names associated with the tools in question: “Blogs? Wikis? Are you serious?” The point is not insignificant. I remember my own initial reactions to these terms, and I read the wariness in the faces of the faculty I now advise on matters of academic computing. Scholars often correctly intuit that they are not the target demographic for Web 2.0. Negative press only reinforces that visceral inclination. Most academics presumably know the Wikipedia better for its vulnerabilities and pitfalls than for its real or potential strengths, and blogging is perceived, even by some of its scholarly practitioners, as extraprofessional, if not outright “neurotic or masturbatory” (Benton 2006).

Intended end-user or not, higher education stands to benefit greatly from technologies that significantly enrich our information infrastructure. We are only beginning to appreciate how Web 2.0 can facilitate or enhance familiar scholarly and pedagogical endeavors, not least of all by helping us to manage the information glut for which Web 2.0 is, itself, partly responsible. We sense, if vaguely, that these new tools will change some of our ways of working, perhaps dramatically. For good or ill, it has already begun, and even adherents of the cause generally recognize the need for attention to a number of issues: peer review, professionalism, promotion, and intellectual property, to mention only a few.

Amid the shifting technological sands, the recent special issue (vol. 6, no. 4, 2006) of the freely available, online, peer-reviewed, academic quarterly Reconstruction offers a welcome antidote to the speculation and scuttlebutt. Titled “Theories/Practices of Blogging,” the issue tackles a wide range of topics using disparate methodologies. Academic blogging features prominently (see Michael Benton’s introductory “Thoughts on Blogging by a Poorly Masked Academic,” Craig Saper’s “Blogademia,” Lilia Efimova’s “Two papers, me in between,” and to a lesser extent Tama Leaver’s “Blogging Everyday Life“). In closely allied projects, the multi-authored “Webfestschrift for Wealth Bondage/The Happy Tutor” celebrates blogging as a literary enterprise, while Erica Johnson’s “Democracy Defended: Polibloggers and the Political Press in America” examines blogging’s relationship to still another form of professional writing: journalism.

Further contributions to the issue address questions of blogging and identity in international contexts (see Carmel L. Vaisman’s “Design and Play: Weblog Genres of Adolescent Girls in Israel,” David Sasaki’s “Identity and Credibility in the Global Blogosphere,” and Lauren Elkin’s “Blogging and (Expatriate) Identity“). The expatriate blogger Esther Herman’s beautifully written “My Life in the Panopticon: Blogging From Iran” serves as a highly personalized foil for the more analytical pieces.

True to the spirit of blogging, perhaps, the contributions are diverse and international. They include theoretical and empirical analyses alongside a number of ‘primary’ sources, i.e. bloggers’ own reflections on blogging. This one-two punch provides ammunition for the advocate and manna for the believer. As befits a publication whose subtitle reads “Studies in Contemporary Culture,” all of the offerings address the intersections of technology and culture. The theoretical papers cover thinkers from Habermas (see Anna Notaro on “The Lo(n)g Revolution: The Blogosphere as an Alternative Public Sphere?“) to de Certeau (see Leaver) to Ong, Lakoff, and Goffman (see danah boyd’s “A Blogger’s Blog: Exploring the Definition of a Medium,” a piece whose theoretical insights I find particularly nuanced and enlightening).

A theme that informs most of the pieces is a distinction between blogs as a form of technology and blogging as a form of cultural activity. Not surprisingly, given the focus of the publication, the emphasis is generally on the latter. A few of the authors point out that blogs need not be focused on individual expression (political, scholarly, or otherwise), but still the emphasis of the issue is on highly personal, completely accessible blogs. The topic that the editors put to the blogosphere for comment was “Why I blog.” The choice of the singular pronoun is telling.

From the standpoint of academic technology, however, I cannot help but suspect that some of the most effective usage of blogs is restricted, practical, and collaborative rather than public, expressive, and individual. Researchers collaborating from different institutions, for example, might well find a shared blog with built-in archiving and navigation a more convenient way to document their progress than, say, flurries of emails. There are plenty of reasons to put work on a server other than wanting to share it indiscriminately with the world. More practical uses of blogs do not get much attention, though, not even in diverse collections like the volume under review. Practical blogging is less controversial, and from a methodological perspective, it is also harder to research what is being done with private or restricted blogs, or even how many there are, or who has them.

For now, at any rate, the stereotype of blogging as individual public expression will probably continue to dominate in the public perception. The volume under review gives us a number of reasons to take professional and scholarly interest in individual public blogging, but those of us working to promote the less obvious uses of blogging technology still have a long way to go before our colleagues feel comfortable entrusting their ‘serious’ content to what is still widely perceived as a ‘frivolous’ medium.

Review of “Digital Images Workshop,” a NERCOMP event (4/24/06)

by Valerie Gillispie, Wesleyan University

Schedule and biographies of the speakers

This event brought together faculty, information technology specialists, librarians, and others who work with images to discuss the impact of digital images on the liberal arts curriculum. A number of questions were addressed throughout the day: How do faculty work with digital images versus analog images? What skills do students need to successfully interpret images? How can those responsible for digital image management assist faculty and students in their work with images?

The conference was inspired by David Green’s recent survey and interviews with 35 institutions about their use of digital images. Green’s presentation about his work was the first session of the day.

Session I: “The Use of Digital Images in Teaching Today”
Speaker: David Green, Principal of Knowledge Culture

David Green’s Handout

David Green was brought in as a consultant by the Center for Educational Technology to study the use of digital images. This project was supported by the National Institute for Technology and Liberal Education and Wesleyan University. Green explained that his first step was to conduct a literature review of the field to see what had already been determined. Penn State had conducted its own survey, Berkeley had conducted a research project, and RLG had looked at how visual resources in databases had been discovered and used. The studies indicated significant problems related to personal collections and their management, including a lack of metadata associated with personal collections. They also found that faculty often had trouble finding and successfully using technology related to digital images. In addition, there was confusion over copyright issues. Green suggested Christine Borgman’s paper “Personal Digital Libraries: Creating Individual Spaces for Innovation” as a useful summary of these issues.

Following the literature review, Green and his primary contacts at each school encouraged faculty to complete an online survey about the use of digital images in teaching. The 404 responses from faculty, each of whom had taught at least one course with digital images, offered new insight into the ways images were used. Most faculty used images from their own collections, with a smaller number using publicly-accessible online databases, and a smaller number still drawing images from licensed databases. Some faculty had complaints about the difficulty or time needed to set up digital images for their teaching, but they appreciated the volume, creativity, and ease of change allowed by digital images. Additionally, faculty felt that students liked the accessibility and convenience of digital images.

Some faculty did find analog images to be superior in quality and reliability. However, in response to a question about what the advantages of analog images are, 35% of respondents either offered no answer, or wrote that there were no advantages.

In using digital images, faculty liked being able to create their own sequences, to create their own images, and to allow students to review the presentations. Capabilities like altering images and zooming in on them were rated to be less important.

Has teaching changed with digital images? Three-quarters of the survey respondents thought it had. Changes mentioned included greater efficiency, more variety, and new skills required of and used by students. Perhaps surprisingly, 60% of faculty were satisfied with their current display system, of which PowerPoint was the most popular. One feature mentioned as “desirable” was the ease of bringing word and image together, which is relatively simple in PowerPoint. Some respondents mentioned their irritation at confusing, elaborate options in some display systems. Simplicity is key, and it was simplicity that faculty mentioned most often when asked what tools they would like in their presentation software, along with better integration of multiple media.

Where do faculty get support for their use of digital images? From a wide variety of sources, it seems. The majority of faculty said assistance in digitizing, finding, and managing images was important, but many did not get support or were dissatisfied with the support they did get. Learning how to use new technology is time-consuming, and faculty feel overwhelmed by the time commitment and lack of institutional support for using digital images.

Following the online survey, Green visited each school and conducted a total of 326 in-person interviews with faculty, staff, information technology specialists, visual resource specialists, and others. Here are a few of the major conclusions drawn from the interviews:

  • Licensed databases must be interoperable.
  • Students need to be trained to “read” digital images.
  • Faculty need to be trained to use digital images in their teaching.
  • A strong digital infrastructure must be in place to support personal collections and presentations.


The full report on Green’s findings will be posted soon on Academic Commons. Green will be presenting his findings at Wesleyan University and other interested participating schools.

Session II: “Digital Image Resource Development”
Speakers: John Saylor, Director of Collection Development for the National Science Digital Library
Susan Chun, General Manager for Collections Information Planning, Metropolitan Museum of Art
Patrick McGlamery, Director, Library Information Technology Services, University of Connecticut Libraries

John Saylor spoke about how the National Science Digital Library (NSDL) is using Open Archives Initiative (OAI) metadata harvesting to gather metadata for a wide range of images that are then centralized through the NSDL website. This endeavor is primarily funded by the National Science Foundation (NSF) Division of Undergraduate Education, and advised by a Core Integration Team made up of UCAR, Cornell, and Columbia.

The NSF also has provided grants in the NSDL program in three different areas: pathways, services, and targeted research. These grants have helped create over 100 unique collections of resources that can be accessed through the NSDL. The most important grant area related to digital image collections is the Pathways grant, which gives the grantee responsibility to select and make available resources for particular subject areas. These $5 million, multi-year grants are intended to help the grantees eventually become self-sufficient in their mission.

One of the major advantages of the NSDL is that it offers a single, peer-reviewed place to find many resources about a given topic. Researchers can benefit by including their images and research in the pathways and connecting with those working in similar areas. The gathering of these many collections has been made possible through the OAI Protocol for Metadata Harvesting. More information can be found at http://www.openarchives.org. A union catalog is being created through this initiative.
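For readers curious what “metadata harvesting” looks like in practice, the sketch below constructs a minimal OAI-PMH request in Python. The verb and metadataPrefix parameters come from the OAI-PMH specification; the repository base URL is a hypothetical placeholder, not the NSDL’s actual endpoint.

```python
# Minimal illustration of what an OAI-PMH harvesting request looks like.
# The base URL below is a hypothetical placeholder for a repository endpoint;
# "verb" and "metadataPrefix" are standard OAI-PMH query parameters.
from urllib.parse import urlencode

base_url = "https://repository.example.edu/oai"   # hypothetical OAI-PMH endpoint

# Ask the repository for a batch of records in Dublin Core format.
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
request_url = f"{base_url}?{urlencode(params)}"
print(request_url)
# -> https://repository.example.edu/oai?verb=ListRecords&metadataPrefix=oai_dc

# A harvester issues this HTTP GET, parses the XML response, and follows the
# resumptionToken it contains to page through the collection; aggregating the
# harvested metadata from many repositories yields a union catalog like the
# one described above.
```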

Next, Susan Chun spoke on the topic, “Getting It Right: How well can image suppliers determine and meet the image requirements of college and university users?”

The Metropolitan Museum of Art does not have a strategic plan for assisting college and university users with digital images. However, in re-evaluating their cataloging practices and assessing their demand for high resolution images, the Museum began to think about practical changes that would benefit both users and the Museum.

The Met wanted a better system of digital asset management than unsecured CDs; it uses digital images both to maintain a better inventory and to fill image requests more quickly. By choosing to focus on the most frequently requested or canonical images, the Met streamlined its practices and created digital versions of its analog images. These canonical images may be meeting the needs of the academic community, but the museum is not sure exactly what those needs are.

The process of digitizing analog images (usually transparencies) is very slow, because of the high resolution of each image, as well as the cropping, spotting, and color balancing that is important for the museum’s inventory needs.

The Met has also tested social tagging—folksonomies—to generate keywords used to describe individual works. Thirty volunteers added their own terms to the objects, and upon review, 88% of the terms had not previously appeared in museum descriptions. 77% of the terms were judged by staff to be valid. The value of these terms is that they describe the work from a non-specialist viewpoint and may provide access for people who use non-specialist vocabularies. Open source tools for collecting community terms can be found at http://www.steve.museum.

The Met is also grappling with how to provide access to images. The museum has charged licensing fees since 1871, and although they sometimes waive or reduce fees, the practice is inconsistent. It is also time-consuming to have individual users approach the museum each time they need something. To facilitate making these images available free of charge, the Met has teamed up with ARTStor to distribute high resolution images. Using a scholars’ license, it grants permission for approved uses without requiring individual approval.

The third speaker was Patrick McGlamery, whose talk was entitled “Maps, GIS, and Spatial Data: How are maps, aerial photography and geospatial imagery affecting scholarly research?”

McGlamery spoke about maps not simply as pictures, but as data. One expert has described maps as no longer static, but instead a “dynamic, structured dataset that can be accessed and queried through a Geographic Information System.” Because maps are mathematical, they work well in the digital environment. Multiple maps can be overlaid because spatial information is encoded as x/y points. Even aerial photographs can be used in conjunction with maps. GIS software makes this use of spatial data possible.

The Map and Geographic Information Center at the University of Connecticut does not have a lot of historical maps in its collection, but it has created a digital collection of maps at other institutions. The general policy has been to scan maps to a resolution where information is transferable, i.e. where all text or drawings are recognizable. As maps have gotten more detailed over time, this level of resolution has changed.

Historical maps can be used in conjunction with GIS data and aerial photography to learn about changes over time. Faculty use maps and enhance them in their teaching. Landscapers and ecological engineers also use these maps to see how historical information aligns with modern maps. Information is displayed in ArcView, a powerful viewer that state institutions are licensed to use.

Using historical aerial photography can be complicated, since there is not much metadata attached to photographs. The University of Connecticut uses ArcView to capture metadata about a photograph’s geographic area. A user can then type in an address and discover which aerial photographs cover that particular area.

Users seem to work with large images directly on the UConn server rather than downloading them. Some users also overlay statistical data on the maps to express other types of information. Maps as images—and data—are dynamic, and can be processed in multiple ways.

Session III: Creating and Managing Digital Institutional Image Collections
Speakers: Mary Litch, Instructional Technology Specialist, Yale University
Elisa Lanzi, Director, Imaging Center, Smith College Department of Art

The first speaker, Mary Litch, spoke about her work in the Instructional Technology Group at Yale University in a talk called “Supporting Faculty in Developing and Deploying Personal Digital Image Collections (PDICs).” The Instructional Technology Group assists faculty in the arts, sciences, and medical school in using instructional technology. This is separate from the library and institutional digital image collections, and primarily works with the personal collections belonging to faculty.

These personal collections range from a few hundred images to over 20,000. The sources of such personal collections are multiple, and the approach to controlling the data varies as well. For those faculty who specialize in art history, the institutional collections are important sources, but for other faculty, personal photography, personal scanning, and images from the web are the major sources.

Yale provides institutional support for PDICs through its digital asset management software, Portfolio, and through special grants and provisions made to help faculty get bulk scanning of their images and slides. Storage is also provided free of charge. Portfolio allows faculty to associate images and data, and can be housed on a server or locally. It supports a wide number of file types and automatically indexes text. Importantly, it allows bulk import and export for ease of use. It also has a virtual light table, which helps faculty used to working with slides feel comfortable.

A major reason faculty develop PDICs is that the institutional collections are not adequate for non-art history scholars. Faculty also like the portability, custom cataloging, image quality, and search interface of their PDICs. The Portfolio system allows them to add data immediately upon scanning or photographing items.

The drawbacks to PDICs are that they are labor intensive and difficult to support. The development of personal collections may draw some energy away from the development of institutional collections. There are also problems in blending quirky personal cataloging preferences into the cataloging of institutional collections, when a faculty member wants to share their collections. There can also be tricky legal issues related to use and reproduction in institutional collections.

The second speaker, Elisa Lanzi, Director of the Smith College Imaging Center, gave a talk entitled “Gather Ye Images: Negotiating Multiple Collections for Teaching.”

Smith College was an early innovator in teaching with digital images. The current challenges faced at Smith are similar to some of those at Yale, such as the use of multiple sources for teaching images. Smith has also noted that faculty want or need a holistic “image package” strategy, whose elements include classroom presentation, student study, management, repository/storage, collaboration/sharing, and interoperability. Other factors include the need for multimedia, discussion forums, and the convergence of analog and digital collections in this transition period.

Smith offers several ways of assisting faculty with image teaching, including their Imaging Center & Visual Communication Resource Center, the “Find Images” page on the art library’s web site, and the Teaching & Learning Support web site, created by the Educational Technology Services department.

Through their use of the Luna Insight presentation tool, Smith offers a virtual “new images shelf” and also has acquired some complimentary shared collections from other Luna users. There also is an image vendor online wish list that faculty use so that the imaging staff can negotiate what can be ordered through various budgets. The Imaging Center is partnering with the Library’s collection development team to license the larger image library subscriptions.

Personal collections are created by faculty using Luna but also through independent systems. Some personal collections are then shared with the institutional collection, but Lanzi notes that this raises quality issues. For example, how can image quality and metadata standards be implemented in personal collections? The content of these personal and institutional collections needs to be accessible and portable. There is a need for strategic planning, but also a realization that there are certain unknowns in these practices.

The recently-released Horizon Report points to trends and provides examples of faculty/student needs for integrated media in teaching and learning. At Smith, students have become more involved in collection building and creating presentations. The popularity of “social tagging” in tools like “Flickr” will have an impact on digital image cataloging. However, differences in metadata can impede access and veracity. Who is the expert, and who should provide the metadata? Faculty note that students need to go beyond looking and become more visually literate to successfully use images in creating arguments. There also are issues around intellectual property that may discourage faculty from sharing their materials. Overall, however, the experience with digital images at Smith has been a positive one, enhancing both teaching and learning.

Session IV: Critical Literacies
Speakers: Christopher Watts, Director, Newell Center for Arts Technology, St. Lawrence University
Flip Phillips, Associate Professor, Psychology & Neuroscience, Skidmore College
John Knecht, Professor of Art & Art History and Film & Media Studies, Colgate University

Christopher Watts spoke on the topic “Critical Literacies: thinking strategically.” He noted that literacy means being able to read and write, and in terms of visual literacy, students both produce and participate in media. It is a mistake to think that students only receive media. In participating in digital media, students are engaging in a rhetorical or communicative act, and need to be sensitive to the audience. Watts mentioned Wayne Booth’s book The Rhetoric of Rhetoric as informative on this subject.

At St. Lawrence University, students use digital media to both “know” the world and create knowledge. In this way, digital images are used to demonstrate what has been learned, and they are also used to learn, period. Two different groups at St. Lawrence discuss these issues: the Critical Literacies Group and the Rhetoric and Communication Group. They have somewhat overlapping memberships.

The Critical Literacies Group is comprised of directors of campus programs and is presided over by the Dean of Academic Affairs. This “top-down” group focuses on operational aspects of developing literacy. It is currently working to expand the role of the Writing Center to better address speaking, images, research, and technology. The Rhetoric and Communication Group is a “bottom-up” group made up of faculty. The focus is on pedagogical innovation. They are currently developing an aims and objectives statement for faculty related to critical literacies, as well as providing training and support for faculty. The formation of and communication between these two groups has resulted in much better overall communication between the various units of academic affairs.

A couple of shortcomings have been discovered. One is a reluctance to address the role of ethics in relation to literacy. For example, analyzing rhetoric requires consideration of what has been selected and presented, and what has been marginalized. A second shortcoming is the need for more participation from the sciences. Both groups are working from their own perspectives to address these shortcomings.

Through the work of these two groups, the university has been able to move strategically to begin addressing the complex issues of critical literacies in liberal education.

The second speaker, Flip Phillips, gave a talk entitled “Visual Story Telling, Grammar, Cognitive Aesthetics and ‘Design.'”

Phillips has had experience working with digital images in both art and science environments. Drawing on his experience as an animator at Pixar, he has brought the concept of story boards to the students’ work in his lab at Skidmore College. Through the story boards, they describe their experiments in pictures.

In animation, story boards have several phases. The preliminary boards give a basic outline in four images. Sequence boards are a series of pictures which make sense along with a human explanation. The goody board contains leftover images not used in the sequence board.

In the science environment, students use a sequence board to prepare for presentations about research. Using a white board or sketching out the images, students create a dynamic space to move around the different boards. The final story reel often makes use of digital images of the original drawings. Using few words, the student is required to make a “pitch” to his or her professor using the story boards and explaining the proposal. Using images communicates information quickly, and helps explain science in a non-textual way. The storyboard approach can be used for non-visual scientific research as a conceptual technique for organizing information.

Seeing items helps with pattern analysis, so the visual aspect of this approach is important. There is nothing available digitally that is quite the same as being able to move analog images around physically, but it can come close. Using tools like iView Media, Keynote, and Aperture, students can work digitally on their story boards the way they can with analog images. The benefit of this storyboard approach is that it helps them focus on their ideas, prepare presentations, and design posters. It also helps them develop their arguments and write papers, using the images to drive the structure.

Flip Phillips’ website can be found at http://www.skidmore.edu/~flip/

The third speaker was John Knecht, who spoke about “The Threat of Media Illiteracy.”

Media images surround us in all aspects of our lives. Knecht asked, “How do we know what we know? How do we receive information?” This question extends to our role as citizens and our political understanding. Using a series of visual images, Knecht discussed some of the issues facing us as citizens and scholars.

The issue of media literacy is interdisciplinary, and our media resources—TV, computer, and newspaper—are as legitimate an area of study as any other. We make decisions based on the images we see—but what is the context or content of these images? Students need to know how to take in media images, as they negotiate social relationships and meanings through these images.

At Colgate University, a film and media studies minor was established three years ago. Knecht would like to develop a media analysis class required of all students to create media literacy. This literacy is key to understanding relationships of power.

Knecht showed a 19th-century photograph called “Fading Away” by H.P. Robinson. He shows this image to his classes and asks students to describe what is happening. In reality, however, the photograph is made up from five different negatives, so there is no real event captured in the image.

With modern images, there is a belief in the objectivity of mechanical images. Knecht used an example of a grainy cell phone photograph from the London subway bombing of 2005. It has the pretense of authenticity because you can see the mechanical components. The same effect might be observed in the digital reports from war correspondents.

Knecht used a variety of other images to describe the way that semiotics—signs—can be “read” in photographs. Signifiers are components of images, and what is signified is what is culturally determined. Using an advertisement of Dan Rather, Knecht pointed out that his casual seated position with his feet up has a culturally determined meaning. The signifier is his feet on the desk; the signified is the impression of a relaxed and honest person. Putting the signifier and the signified together creates a complete sign, one that we use to make judgments.

Related to signs are ideologies, which originate in systems of power in all cultures. It is easier to recognize ideologies in cultures other than our own.

Many systems of analysis can be used to deconstruct images and understand their content and contexts. It is important that students be able to take apart what they are seeing, hearing and reading, and question the source. The interpretation of the media world needs to be part of the national education plan to develop media literacy in students.

Conclusion

This one-day conference offered a way for those of us who work and teach with digital images to share our insights and challenges. It seems clear that digital images are becoming a standard component of curricula, and the ability to interpret and critically analyze these images is becoming a required skill for students and faculty.

A major challenge is finding technology that can meet the requirements of faculty and students. The ideal system is at once sophisticated and dynamic but also intuitive and familiar. Features such as locally determined metadata for individual collections are also desired, but pose problems when multiple individual collections are combined. Institutions are trying to provide support for both institutional and personal collections, but according to David Green’s survey, many faculty members are dissatisfied with the support they get for acquiring and cataloging images. This may be related to the difficulty of providing support for such a rapidly expanding pedagogical tool.

Overall, the conference provided a wealth of ideas about how visual resources can and are being used. There were no clear-cut answers for how to handle the technological or educational issues related to digital images, but many approaches to be considered. This meeting was the beginning of a dialogue about an exciting and evolving educational tool.

Learning Outcomes Related to the Use of Personal Response Systems in Large Science Courses

by Jolee West, Wesleyan University


The use of Personal Response Systems, or polling technology, has been receiving wider attention within academia and also in the popular press. While neither the technology nor the pedagogical goals are new, general knowledge and implementation of course-related polling appear to have recently reached a critical threshold. Between 2004 and 2005, the implementations by “early adopters”[1] began to seriously influence the “early majority,” resulting in wider visibility of the technology. This trend is illustrated by the increasing number of references to “clickers” and “personal response systems” on the EDUCAUSE website from 2004 until the present, as well as a recent spate of newspaper and e-zine articles.[2]

Many institutions, including community colleges, liberal arts colleges, and large research universities, have now adopted Personal Response Systems (i.e., polling technology) in their larger lecture courses across the curriculum. For example, MIT, the University of Massachusetts-Amherst, Harvard, Yale, Brown, the University of Virginia, Vanderbilt, and Duke have all implemented personal response systems for larger physics and/or biology lecture courses. A number of implementations took place under the auspices of grant programs, such as the Pew Center for Academic Transformation and the Davis Educational Foundation’s Creating Active Learning Through Technology, which focus on the economics of teaching large lecture courses and the transformation of these typically passive learning-style courses into active learning experiences for students.

But as is often the case in the adoption of new instructional technologies, arguments for adoption rarely rest on published analyses demonstrating improvements in learning outcomes. Commonly, such assessments simply have not been performed. Nevertheless, in researching the technology for my own institution, I searched hard for learning outcome studies. I found that data abound on student satisfaction with personal response systems–on whether the systems made class more interesting, improved attendance, and the like.[3] But reports of learning outcomes are few and far between. What follows is a discussion of four references I found that report learning outcome analyses related to the use of interactive engagement pedagogical methods in large science courses. Only in the last two cases are personal response systems specifically mentioned. But as we will see, the technology is not really the star of this show; not surprisingly, it is the pedagogy that takes center stage.

A controlled study by Ebert-May et al. shows that student confidence in their knowledge of course materials is significantly increased in courses taught using interactive engagement methods over those taught by traditional lecture: “Results from the experimental lectures at NAU suggest that students who experienced the active-learning lecture format had significantly higher self-efficacy and process skills than students in the traditional course. A comparison of mean scores from the self-efficacy instrument indicated that student confidence in doing science, in analyzing data, and in explaining biology to other students was higher in the experimental lectures (N = 283, DF = 3, 274, P < 0.05).”[4]

A large study by Hake of 63 introductory physics courses taught with traditional methods versus interactive engagement (IE) methods, examined student learning outcomes using a commonly applied pre- and post-test design based on the Halloun-Hestenes Mechanics Diagnostic test and Force Concept Inventory. The study, which included 6,542 students, concluded that “A plot of average course scores on the Hestenes/Wells problem-solving Mechanics Baseline test versus those on the conceptual Force Concept Inventory show a strong positive correlation with coefficient r = + 0.91. Comparison of IE and traditional courses implies that IE methods enhance problem-solving ability. The conceptual and problem-solving test results strongly suggest that the use of IE strategies can increase mechanics-course effectiveness well beyond that obtained with traditional methods [original emphasis].”[5]

The Pew Center for Academic Transformation has been interested in examining transformation of courses from passive to active learning experiences by using controlled experiments. One of its beneficiaries, the University of Massachusetts-Amherst, conducted a two year study of courses redesigned for use of a Personal Response System. The Office of Academic Planning and Assessment at University of Massachusetts concluded that in these courses “[attendance] in the redesigned sections was consistently high, and students performed consistently better on the new problem-centered exams compared to the old exams based on recall of facts.”[6]

Lastly, a recent study by Kennedy and Cutts examined actual response data per student over the course of a single semester. In-class questions were of two types: one asked students to self-assess their study habits, and the other focused on course content. These data were analyzed against end-of-semester and end-of-year exam performance using cluster analysis and MANOVA. Their investigation showed that students who participated more frequently in the use of the personal response system, and who were frequently correct in their responses, performed better on formal assessments. Students who responded infrequently, even when they did so correctly, performed poorly on formal assessments, suggesting that the level of involvement during class is positively correlated with better learning outcomes.[7]

To sum up, what my search found was that where data exist, they support not just the use of personal response systems but, more specifically, the pedagogy associated with the use of these systems. These studies suggest that better learning outcomes are really the result of changes in pedagogical focus, from passive to active learning, and not of the specific technology or technique used. This is an important caveat for interested faculty–the technology is not a magic bullet. Without a focused, well-planned transformation of the large lecture format and its pedagogical goals, the technology provides no advantage. If the manner in which the technology is implemented in class is neither meaningful nor interesting to the students, participation lapses. Ultimately, what these studies demonstrate is that student participation is key to positive learning outcomes.

Notes:

  1. See E. M. Rogers, Diffusion of Innovations (New York: Collier Macmillan, 1983).
  2. C. Dreifus, “Physics Laureate Hopes to Help Students Over the Science Blahs,” New York Times (Nov. 1, 2005), http://www.nytimes.com/2005/11/01/science/01conv.html?ex=1132376400&en=c13349a4a1f8cf78&ei=5070&oref=login; Alorie Gilbert, “New for Back-to-school: ‘Clickers,'” CNET’s News.com (2005), http://news.com.com/New+for+back-to-school+clickers/2100-1041_3-5819171.html?tag=html.alert; Natalie P. McNeal, “Latest Campus Clicks a Learning Experience,” The Miami Herald (Oct 17, 2005), http://www.miami.com/mld/miamiherald/news/12920758.htm.
  3. Steven R. Hall, Ian Waitz, Doris R. Brodeur, Diane H. Soderholm, and Reem Nasr, “Adoption of Active Learning in a Lecture-based Engineering Class,” IEEE Conference (Boston, MA, 2005), http://fie.engrng.pitt.edu/fie2002/papers/1367.pdf; S. W. Draper and M. I. Brown, “Increasing Interactivity in Lectures Using an Electronic Voting System,” Journal of Computer Assisted Learning 20 (2004): 81-94, http://www.blackwell-synergy.com/links/doi/10.1111/j.1365-2729.2004.00074.x/full/; Ernst Wit, “Who Wants to be… The Use of a Personal Response System in Statistics Teaching,” MSOR Connections 3(2) (2003), http://ltsn.mathstore.ac.uk/newsletter/may2003/pdf/whowants.pdf.
  4. Diane Ebert-May, Carol Brewer, and Sylvester Allred, “Innovation in Large Lectures–Teaching for Active Learning,” BioScience 47 (1997): 601-607, 604.

  5. Richard R. Hake, “Interactive-engagement Versus Traditional Methods: A Six-thousand-student Survey of Mechanics Test Data for Introductory Physics Courses,” American Journal of Physics 66 (1998): 64-74, http://www.physics.indiana.edu/~sdi/ajpv3i.pdf, 18.

  6. Office of Academic Planning and Assessment, University of Massachusetts, Amherst, Faculty Focus on Assessment 3(2) (Spring 2003), http://www.umass.edu/oapa/oapa/publications/faculty_focus/faculty_focus_spring2003.pdf, 2.

  7. G. E. Kennedy and Q. I. Cutts, “The Association Between Students’ Use of an Electronic Voting System and their Learning Outcomes,” Journal of Computer Assisted Learning 21(4) (2005): 260-268, http://www.blackwell-synergy.com/doi/pdf/10.1111/j.1365-2729.2005.00133.x.
