Profiles of Key Cyberinfrastructure Organizations

by David Green, Knowledge Culture

We present here a collection of short profiles, specially written for Academic Commons, of key service organizations and networks that are poised to assist and lead others working to bring a rich cyberinfrastructure into play. Some are older humanities organizations for which cyberinfrastructure is a totally new environment; others have been created specifically around the provision of digital resources and support.

We invite your comments and your suggestions for other organizations and networks that you see as key players in providing CI support.

Alliance of Digital Humanities Organizations (ADHO)

American Council of Learned Societies (ACLS)

ARTstor

Council on Library and Information Resources (CLIR)

Cyberinfrastructure Partnership (CIP) & Cyberinfrastructure Technology Watch

Humanities, Arts, Science, and Technology Advanced Collaboratory (HASTAC)

CenterNet

Institute of Museum and Library Services (IMLS)

Ithaka

The Andrew W. Mellon Foundation

National Endowment for the Humanities (NEH)

NITLE

Open Content Alliance

Software Environment for the Advancement of Scholarly Research (SEASR)

The Bates College Imaging Center: A Model for Interdisciplinarity and Collaboration

by Matthew J. Coté, Bates College

The Bates College Imaging and Computing Center (known on campus simply as the Imaging Center) is a new interdisciplinary facility designed to support Bates’s vision of a liberal arts education, as codified by its newly adopted General Education Program. This program reflects the increasingly porous and mutable nature of disciplinary boundaries and emphasizes the effectiveness of teaching writing as a means of improving students’ ability to think, reason and communicate. The Imaging Center strives to further expand the reach of this program by promoting visual thinking and communication–serving as a catalyst for interdisciplinary and transdisciplinary work. In many ways the Center embodies most of the ideas underpinning Bates’s new General Education Program and is a model on this campus for the kind of transformative work cyberinfrastructure will enable.

Floorplan image courtesy of the Bates College Imaging and Computing Center.

The Imaging Center’s physical space, its imaging resources and its place within the college’s cyberinfrastructure are all designed to foster interactions between scholars from disparate fields and to further the Center’s goal of promoting visual literacy. Traditional campus structures–whether organizational or architectural–are efficient from the administrative perspective, but often have the unintended consequence of reifying disciplinary boundaries. For example, the spatial grouping of faculty by academic discipline provides few opportunities for faculty from different fields to interact with each other, either purposefully or by happenstance, while doing their work. Such campus structures have significant pedagogical ramifications as well. They encourage students to pigeon-hole ideas and ways of thinking according to academic field rather than inspiring them to find connections between fields of inquiry.
These consequences, of course, are antithetical to the goals of academic programs intended to foster interdisciplinary thinking. To counter these effects, the Bates Imaging Center provides a visually-inviting space available to all members of the campus community. Its array of equipment and instrumentation, and its extensive computer networking, make it the campus hub for collaborative and interdisciplinary projects, especially those that are computationally intensive, apply visualization techniques, or include graphical or image-based components.

Imaging Center Public Gallery (photo courtesy of the Bates College Imaging and Computing Center)

The Imaging Center’s central public gallery provides comfortable seating, readily accessible kiosk computers and wireless networking to encourage faculty and students to use the space for both planned and spontaneous meetings of small groups. To make more public the scholarly activities taking place within the Center, a contiguous array of three large flat-panel LCD monitors displays looped sequences of images created by faculty and students who are using the Center’s resources to support their work. Image sequences include, for example, micrographs obtained using the Center’s microscopes, digital photographs taken by students working in the fine arts, maps generated using GIS mapping software, and animated multidimensional graphs of political data. The sequences are designed to exemplify effective visual communication and to juxtapose work by faculty and students drawn from widely varied disciplines throughout the campus. The display publicizes the scholarly activities taking place within the Center, and by encouraging viewers to think more deeply about the images, cultivates more sophisticated approaches to the images they encounter or create in their own work.

The Center’s gallery is abutted on one side by an imaging lab and on the other by a computer room. The imaging lab contains a digital photography studio and a suite of optical microscope rooms with a shared sample preparation room. Driven by the goals of improving the accessibility of work that is conventionally done in isolation, and of making the Center’s resources available to as broad an audience as possible, the microscope rooms are each electronically linked with the computer room. This allows images obtained with the microscopes to be displayed for large groups in real time, complete with two-way audio communication between the microscope operator and the audience.

Imaging Lab (photo courtesy of the Bates College Imaging and Computing Center)


Computer Room (photo courtesy of the Bates College Imaging and Computing Center)

The Imaging Center’s resources are further leveraged by a one-gigabit-per-second network that connects the Imaging Center to the campus’s Language Resource Center and the Digital Media Center (the latter supports audio and video work). In this way each center can be physically located for the convenience of its most frequent users, yet large data files and other electronic resources can be readily shared between centers. Local storage of large data sets and images is provided by a two-terabyte storage array.
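
To give a rough sense of what this bandwidth makes practical, the short sketch below (in Python) estimates the time needed to move a large image dataset between centers; the 50-gigabyte dataset size and the assumption of roughly 70% effective throughput are illustrative figures, not measurements from the Center.

    # Back-of-envelope estimate: time to move a large image dataset between
    # centers over a one-gigabit-per-second link. The dataset size and the
    # 70% effective-throughput figure are assumptions for illustration.

    link_gbps = 1.0        # nominal link speed, gigabits per second
    efficiency = 0.7       # assumed fraction of the nominal speed achieved in practice
    dataset_gb = 50        # hypothetical image dataset size, gigabytes

    effective_gbps = link_gbps * efficiency
    seconds = (dataset_gb * 8) / effective_gbps   # gigabytes -> gigabits, then divide by rate
    print(f"~{seconds / 60:.1f} minutes to move {dataset_gb} GB")   # ~9.5 minutes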

As the Imaging Center moves forward, its participation in the Internet2 consortium will provide wide bandwidth access to large databases such as those relied upon by users of GIS mapping software and bioinformatics researchers. It will also make it possible for scientists working on the Bates campus to operate specialized instrumentation located at large research institutions and to do so in real time. These capabilities will bring to a small liberal arts college in Maine the unfettered access to databases, equipment and distributed expertise that was formerly available only to those working in large research facilities.

As is true with cyberinfrastructure generally, it’s the Imaging Center’s people who make it work. Two full-time staff members–one with expertise in database management, computer hardware and software development and GIS mapping, and the other a microscopist and photographer with technical training in optics and imaging technologies–bring a wealth of experience to the Imaging Center. They support the Center’s users by training them to use unfamiliar tools and techniques. Some workshops and group training sessions are used for this purpose, but the widely varying schedules and backgrounds of the Center’s users render scheduled, “one size fits all” training sessions insufficient. To complement these offerings, the staff is developing electronic training materials that use embedded hyperlinks to provide the background that some readers might be missing. These documents have the advantages of being readily customized and updated, allowing readers to focus their attention on those aspects of a topic that are particularly pertinent or unfamiliar. Because the documents are available to anyone with internet access, they can be used whenever and wherever the need arises.

As workers in an ever-expanding range of fields seek to express or explore ideas through expert use of images, and to find and convey meaning in large multidimensional data sets through increased visualization capability, there will be a concomitant demand for improved visual literacy. As a result, acquiring the ability to communicate and think visually will be seen as an integral part of a complete education. This realization has motivated the development of a new type of center whose impact is dramatically enhanced by recent advances in computer power and connectivity. With the Imaging Center providing a practical working model of interdisciplinarity and numerous examples of the power of visualization, Bates is well placed to take advantage of the new directions afforded by a well-deployed cyberinfrastructure.

Managed Cyber Services as a Cyberinfrastructure Strategy for Smaller Institutions of Higher Education

by Todd Kelley, NITLE

Technology and Relationships
The Director of the National Science Foundation, Arden Bement, recently stated that “At the heart of the cyberinfrastructure vision is the development of virtual communities that support peer-to-peer collaboration and networks of research and education.”[1] Just as Bement emphasizes networked relationships as an essential component of cyberinfrastructure, I would like to address how small to mid-sized institutions might meet some of the critical challenges of this vision. I propose that in order to realize the cyberinfrastructure vision, colleges and universities reconsider how they approach technology and technology management, which have become just as important as constructing and maintaining the physical facilities on campus. Providing Internet access, for example, should be seen as a key infrastructure asset that needs to be managed. A robust connection to the Internet is necessary for a successful local cyberinfrastructure; however, it is by no means sufficient. The new cyberinfrastructure should include cyber services that enhance existing organizational relationships and make new ones possible–on a national and global basis. However, for some institutions, deploying and sustaining sophisticated organization-wide tools and infrastructure are complex and risky activities. Smaller institutions often simply cannot implement, sustain and support these initiatives on their own.

Cyber Services
While college and university libraries were pioneers in using the Internet to provide access to scholarly resources, rarely have they used it to access enterprise technology tools. Instead, most campuses have tried to meet these needs by combining their own hardware infrastructure with (mostly) proprietary software systems that are licensed for the campus, such as Blackboard, ContentDM and Banner. This approach to learning management, repository and administrative services may have made sense at a time when the Internet was still in its early stages. It may still make sense for large institutions that have a degree of scale and deep human resources, where the organizational benefits of locating all technology services on campus outweigh the costs.

However, smaller, teaching-centered colleges and universities need an attractive alternative to locating all hardware, software, and the attendant technical support on campus–one that spares them the onus of locating and selecting application service providers and negotiating licenses and support agreements. They also need to avoid becoming trapped by contractual relationships with new vendors or Faustian bargains with technology giants such as Google or Microsoft. One option for these institutions is to obtain managed services from organizations such as NITLE, which provide a broad array of professional development and managed information technology services for small and mid-sized institutions. By using such managed services, institutions report that they lower their technology risk and increase the value proposition for technology innovations.

Lowering Technological Risk Encourages Innovation
Typically, there is a high degree of risk of failure for smaller colleges and universities when they deploy a new technology system. This is because the technical resources and organizational processes required are simply not part of the primary focus of these organizations. The risk might be mitigated by devoting significant technological resources and organizational focus to altering the infrastructure in the hope that the institutional culture and processes will adjust to it, but this does not appear to be a wise approach.

When smaller colleges and universities need organizational technology they often:
1) work to identify the most appropriate vendor and negotiate to obtain the technology they need;
2) focus on how the technology works and on how the technical support for it will be provided; and
3) create organizational processes and procedures that attempt to connect the technical work to the perceived need and the promised benefits.

The focus in this process is often the technical or procedural aspects of a project when the institution would be far better served if the emphasis were on the substantive innovations, relationships, and other benefits that technology can provide. Relationships that are about technical issues per se are off-focus, distracting, and ultimately unproductive, relative to organizational mission.

The continuing development of more sophisticated and complex technologies and the increased dependence on them by these institutions will only increase the potential risk of failure for those that do not make a significant commitment to hiring technology specialists. Increased risk thwarts any interest in using technology to innovate, so technology becomes much less interesting and viable as a route to organizational strength and sustainability. The challenge for smaller colleges, then, is to have dependable, secure and innovative cyber services while reducing the risks and resources traditionally associated with creating new technology systems on campus.

Managed Cyber Services
What do managed cyber services look like and how do they work? NITLE, for example, aggregates the cyber services needs of smaller colleges and universities and provides managed services via the Internet so that each individual institution does not have to replicate the hardware, software and technical support on campus for each enterprise application that is needed. NITLE does the legwork, finding reliable and cost-effective hosting solutions and negotiating agreements with application service providers for services and support. Open source applications are used wherever viable. Individual campuses do not have to become involved with these processes, as the goal is to provide an easy on-ramp without legal or contractual agreements with participating campuses. There are also opportunities to test services and experiment with them before participants commit to beginning a new service. In addition, NITLE provides professional development opportunities for campus constituents to learn about the functionality and features of the software in the context of campus needs. Moreover, it encourages campus representatives to participate in communities of practice that it supports.

NITLE currently offers four managed cyber services. The criteria used for selecting cyber services include: participants’ needs; the technology benefits; the development path for the technology (including reliability, scalability, and security); and the expectation and understanding that when adopted by peer institutions, the technology will support the learning communities on campus and peer-to-peer collaboration among campuses.

Advantages of Open Source
Colleges are advised to consider open source software (OSS) whenever possible, because OSS offers distinct advantages. The first is cost savings: there are no annual licensing fees, and many OSS applications require less hardware overhead, helping to contain hardware costs. Second is the support that OSS can provide: a common infrastructure, readily accessible to all, can enable institutions to collaborate more effectively and to focus together on the substantive activities that technology supports.

As a case in point, NITLE provides a repository service using the open source DSpace repository software. The twenty-five colleges and universities that participate in the repository program share their experience and expertise about how the software helps them meet their individual and common goals. Their stated goals include:
1) creating a centralized information repository for information scattered in various difficult-to-find locations;
2) moving archival material into digital formats and making it available in one easy-to-access location;
3) bringing more outside attention to the work of students and scholars and thus to the campus;
4) providing the service as a catalyst to help faculty and students begin to learn about and use new forms of publishing and scholarly communication, including intellectual property, open access and publishing rights;
5) preserving digital information.

According to the representative of one participating organization,

“the open-source approach is definitely helpful in terms of cost. Having [a dependable vendor] administer the hardware and software has been wonderful, since we can concentrate on the applications and not worry about the technical end….Having colleagues from similar schools work on this project has been beneficial, since we can play off of their strengths and weaknesses. They have also given us some good ideas for projects.”*

The representative of another participating organization added,

“The open source nature of the software is important to us because we know that we are not locked into a closed proprietary system whose existence depends upon the livelihood of a software company. Furthermore, we wouldn’t have gotten started with d-Space on our own because of the infrastructure we’d have to provide to get it going. We don’t have the staff with the skills needed to handle the care and feeding of the server or to customize the software to our needs through application development. Having that part out of the way has given us the opportunity to focus on creating the institutional repository rather than being mired in technical detail of running the software.”*

Open technologies are more than a path to cost savings. They are a critical condition for innovation, access, and interoperability. Many colleges already use OSS for critical operations, including email (Sendmail), web serving (Apache), and operating systems (Linux), which suggests a growing acceptance and adoption of OSS. The use of OSS can leverage economies of scale, support network effects, and dramatically increase the speed of innovation.

There is, however, still resistance to making consideration of OSS the de facto approach to meeting organizational software needs. There are several reasons for this opposition, including the view of OSS as hacking, the historical lack of accessible technical support, and a paucity of documentation that has complicated the learning curve. Many have long recognized the potential of OSS but were reluctant to pursue it because of the increased need for specialized technical support on campus: for every OSS system, an institution would need to find and hire a technical specialist to support it. This approach certainly is not scalable, and smaller institutions were right not to take it.

Multipoint Interactive Videoconferencing (MIV)

Another example of a cyber service that institutions should consider is Multipoint Interactive Videoconferencing (MIV). MIV systems enable participants to communicate visually and aurally in real time through the use of portable high-resolution (and inexpensive) cameras and microphones attached to their computers. Participants can see and hear each other, not only on a one-to-one basis, but one-to-many as well. MIV is not a completely new technology; however, its enhanced level of functional maturity, the reduction in costs to provide it, and the need for such systems have made MIV a technology that is on the verge of widespread adoption and use in a variety of settings.

In the winter and spring of 2007, a dozen participating colleges agreed to evaluate the use of MIV on their campuses and provide NITLE with feedback on the application and their perceptions of its utility. During this evaluation period, participating institutions discovered many needs for this technology, in both on-campus and off-campus communications. Uses included guest lectures, meetings with faculty working remotely, and connections with students studying abroad. Since this assessment, NITLE has used MIV for:

1) facilitated conversations led by one or two practitioners among a group of practitioners in an area of common interest, such as incubating academic technology projects or the application of learning theory to the work of academic technologists;
2) presentations by individuals who are using technologies of interest in their classrooms or other campus work to groups of others interested in whether and how they might do something similar, such as historians using GIS;
3) presentations by experts on topics of interest to others in their professional field, such as the academic value of maps;
4) technology training for the participants and users of the cyber services that NITLE offers.

The experience of MIV service participants suggests that the adoption of MIV may be most successful when placed in the context of next steps, developing relationships, individual experience and expertise, and common goals and objectives. This premise suggests learning and collaborative environments that include MIV as part of a range of learning and communications options. The pilot study documented the many positive benefits that participants have experienced. However, these benefits are a fraction of what can be realized when many more institutions participate, both because of network effects and because participants may use the MIV service to collaborate with organizations outside of the opportunities organized by NITLE.

The “Open” Movement
The promise of information technology cannot be met when only large, powerful, and for-profit IT organizations are in control. Open access, open courseware and open source initiatives point toward a world where there is a level playing field for individual learning and organizational innovation by not-for-profit institutions. Where just a few years ago it was difficult to name more than a few organizations that provided technical support for open source applications, the number of such service providers is now growing. Identifying these providers, selecting the best ones and negotiating agreements–these are the important challenges for managers of cyber services. Providers report that it is often financially unfeasible for them to market to and negotiate with individual institutions for providing cyber services. Creating a reliable and scalable approach to cyber services that works for colleges and providers alike would seem to be an important advance for smaller institutions, both individually and as an important segment of higher education.

The open movement is not about software tools alone, as Arden Bement noted in his comments about the importance of virtual communities. Success depends upon achieving a balance among essential human, organizational and technological components. The potential benefits of the open movement will accrue to colleges and universities that collaborate through using a common set of tools, actively participate in peer information networks and make a priority of mission-focused knowledge and skills. Many institutions know that the value of peer-to-peer communities will increase in proportion to their investment in all three of these components. The question may ultimately center on how to support these activities in a systematic and sustainable fashion. This is where small and mid-sized institutions may want to innovate in their approach to technology management.

Collaborative Relationships Foster Organizational Strength and Learning
Technology that supports widespread virtual collaboration among smaller colleges and universities, such as the repository and MIV services described above, demonstrates the potential power of cyber services to enhance organizational innovation, learning and productivity. These peer communities of practice allow campuses to: 1) exchange information about usage, technical issues and support; 2) learn from one another; and 3) synchronize their efforts to use technology to promote shared goals and processes. Having campuses work together and share knowledge as they engage with enterprise systems is a crucial part of the equation. The community of smaller colleges and universities needs a robust organization for that collaboration to happen. Organizations such as NITLE can help fill this need, while also providing opportunities for community participation and encouraging institutions to play lead roles in needs identification, service development, and training and education. As one participant has stated, participation in a managed cyber service is “an opportunity for a group of us to make a leap forward and learn from each other along the way. In addition, [our participating college] saw it as an opportunity to overcome our geographic isolation…I think we have the potential to achieve something tremendous that we will all be proud of.”*

Summary
Technology seems to be much more compelling to smaller colleges and universities–and more cost-effective as well–when it provides substantive benefits while the procedural and instrumental aspects of technology innovation are kept under control. This is not to say that technical expertise at smaller institutions is not necessary or that all cyberinfrastructures should be moved off campus. These extreme changes would be neither productive nor prudent. By working collectively, smaller colleges can use managed services to more effectively apply advanced technologies. Bringing institutions with common needs together in a shared organizational network and aggregating many of their common technology needs through cyber services seems to be a powerful idea. Participating campuses can then provide the scope and scale of programs and services that larger institutions provide while retaining their intimacy and sense of community, and also controlling costs. At the same time, a strong foundation is created both technologically and organizationally for the type of cross-institutional endeavors and learning communities that can help smaller institutions promote scholarship that is vital and attractive to students and faculty alike. When common goals are met in cost effective ways, mission is strengthened for all.

[1] “Shaping the Cyberinfrastructure Revolution: Designing Cyberinfrastructure for Collaboration and Innovation,” First Monday, volume 12, number 6 (June 2007), http://firstmonday.org/issues/issue12_6/bement/index.html. Accessed September 26, 2007.

* Responses to a survey administered by the author to a subset of NITLE participating organizations during July of 2007.

Cyberinfrastructure and the Sciences at Liberal Arts Colleges

by Francis Starr, Wesleyan University

Introduction
The technical nature of scientific research led to the establishment of early computing infrastructure, and today the sciences are still pushing the envelope with new developments in cyberinfrastructure. Education in the sciences poses different challenges, as faculty must develop new curricula that incorporate and educate students about the use of cyberinfrastructure resources. To be integral to both science research and education, cyberinfrastructure at liberal arts institutions needs to provide a combination of computing and human resources. Computing resources are a necessary first element, but without the organizational infrastructure to support and educate faculty and students alike, computing facilities will have only a limited impact. A complete local cyberinfrastructure picture, even at a small college, is quite large and includes resources like email, library databases and on-line information sources, to name just a few. Rather than trying to cover such a broad range, this article will focus on the specific hardware and human resources that are key to a successful cyberinfrastructure in the sciences at liberal arts institutions. I will also touch on how groups of institutions might pool resources, since the demands posed by the complete set of hardware and technical staff may be larger than a single institution alone can manage. I should point out that many of these features are applicable to both large and small universities, but I will emphasize those elements that are of particular relevance to liberal arts institutions. Most of this discussion is based on experiences at Wesleyan University over the past several years, as well as plans for the future of our current facilities.

A brief history of computing infrastructure
Computing needs in the sciences have changed dramatically over the years. When computers first became an integral element of scientific research, the hardware needed was physically very large and very expensive. This was the “mainframe” computer and, because of the cost and size, these machines were generally maintained as a central resource. Additionally, since this was a relatively new and technically demanding resource, it was used primarily for research rather than education activities.

The desktop PC revolution started with the IBM AT in 1984 and led to the presence of a computer on nearly every desk by the mid-1990s. The ubiquity of desktop computing initiated tremendous change in both the infrastructure and the uses of computational resources. The affordability and relative power of new desktops made mainframe-style computing largely obsolete. A computer on every desktop turned users into amateur computer administrators. The wide availability of PCs also meant that students grew up with computers and felt comfortable using them as part of their education. As a result, college courses on programming and scientific computing, as well as general use of computers in the classroom, became far more common.

Eventually, commodity computer hardware became so cheap that scientists could afford to buy many computers to expand their research. Better yet, they found ways to link computers together to form inexpensive supercomputers, called clusters or “Beowulf” clusters, built from cheap, off-the-shelf components. Quickly, the size of these do-it-yourself clusters grew very large, and companies naturally saw an opportunity to manufacture and sell them ready-made. People no longer needed detailed technical knowledge of how to assemble these large facilities; they could simply buy them.

This widespread availability of cluster resources has brought cyberinfrastructure needs full circle. The increasing size, cooling needs, and complexity of maintaining a large computing cluster have meant that faculty now look to information technology (IT) services to house and maintain cluster facilities. Maintaining a single large cluster for university-wide usage is more cost effective than maintaining several smaller clusters and reduces administrative overhead. Ironically, we seem to have returned to something resembling the mainframe model. At the same time, the more recently developed desktop support remains critical. As technology continues to progress, we will doubtless shift paradigms again, but the central cluster would appear to be the dominant approach for at least the next five years.

Hardware resources
The cluster is the central piece of hardware–but what makes up the cluster? How large a cluster is needed? Before we can address the question of size, we should outline the key elements. This becomes somewhat technical, so some readers may wish to skip the next five paragraphs.

First, there is the raw computing power of the processors to consider. This part of the story has become more confusing with the recent advent of multiple core processors. In short, a single processor may have 2, 4 or, soon, 8 processing cores, each of which is effectively an independent processor. This does not necessarily mean it can do a task faster, but it can perform multiple tasks simultaneously. Today, I think of the core as the fundamental unit to count, since a single processor may have several cores, and a single “node” (physically, one computer) may have several processors. For example, at Wesleyan, we recently installed a 36-node cluster, each node having 2 processors and each processor having 4 cores. So while a 36-node cluster may not sound like much, it has packed into it 288 computing cores.
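
Since the counting itself is the point, a minimal Python sketch of the Wesleyan example may help; it uses only the node, processor and core counts given above.

    # Count cores, not nodes: the cluster configuration described above.
    nodes = 36                 # physical computers ("nodes") in the cluster
    processors_per_node = 2
    cores_per_processor = 4

    total_cores = nodes * processors_per_node * cores_per_processor
    print(total_cores)         # 288 computing cores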

This high density of computing cores has several advantages: it decreases the footprint of the cluster; decreases cooling needs; and decreases the number of required connections. For the moment, let’s focus on connectivity. The speed of connections between computers is glacial in comparison to the speed of the processors. For example, a 2-GHz processor does one operation every 0.5 nanoseconds. To get an idea of how small an amount of time this is, consider that light travels just about 6 inches in this time. The typical latency–the time lost to initiate a transmission–of a wired ethernet connection is in the range of 0.1-1 milliseconds, or hundreds of thousands of clock cycles of the processor. Hence, if a processor is forced to wait for information coming over a network, it may spend a tremendous number of cycles twiddling its thumbs, just due to latency. Add the time for the message to transmit, and the problem becomes even worse. Multiple cores may help limit the number of nodes, and therefore reduce the number of connections, but the connectivity problem is still unavoidable. So what to do?
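
As a worked version of this arithmetic, the following Python sketch converts the 2-GHz clock rate and the 0.1-1 millisecond ethernet latency quoted above into a per-cycle light-travel distance and a count of idle clock cycles; the speed of light is the only number added.

    # Why network latency dwarfs the processor clock.
    clock_hz = 2e9
    clock_period_s = 1 / clock_hz                  # 0.5 nanoseconds per cycle

    speed_of_light_m_s = 3e8
    print(speed_of_light_m_s * clock_period_s)     # ~0.15 m (about 6 inches) per cycle

    for latency_s in (100e-6, 1e-3):               # 0.1 ms and 1 ms ethernet latency
        idle_cycles = latency_s / clock_period_s
        print(f"{latency_s * 1e3:.1f} ms latency ~ {idle_cycles:,.0f} idle clock cycles")
    # 0.1 ms latency ~ 200,000 idle clock cycles
    # 1.0 ms latency ~ 2,000,000 idle clock cycles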

The answer depends on the intended usage of the cluster. In many cases, users want to run many independent, single-process (serial) tasks. In this case, communication between the various pieces is relatively unimportant, since the vast majority of the activity is independent. Ordinary gigabit ethernet should suffice in this situation and is quite cheap. If the usage is expected to include parallel applications, where many cores work together to solve a single problem faster, it may be necessary to consider more expensive solutions. However, given that it is easy to purchase nodes containing 8 cores in a single box, these expensive and often proprietary solutions are only needed for rather large parallel applications, of which there are relatively few.

All this processing power is useless, however, without a place to store the information. Storage is most commonly provided by hard disks that are bundled together in some form; for the sake of simplicity, they appear to the end user as a single large disk. These bundles of disks can easily achieve storage sizes of tens to hundreds of terabytes, a terabyte being 1000 gigabytes. The ability to store such large amounts of information is particularly important with the emergence in the last decade of informatics technologies, which rely on data-mining of very large data sets.
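
As a purely hypothetical illustration of such bundling, the sketch below computes the usable capacity of one common arrangement (RAID-5, which sacrifices the equivalent of one disk to parity); the scheme, drive size and drive count are assumptions, not a description of any particular facility.

    # Hypothetical "bundle" of disks presented to users as a single volume.
    disk_tb = 2        # capacity of each drive, terabytes (assumed)
    n_disks = 16       # number of drives in the bundle (assumed)

    usable_tb = (n_disks - 1) * disk_tb   # RAID-5: one disk's worth of space goes to parity
    print(usable_tb)                      # 30 TB appearing to the end user as one large disk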

The last, and sometimes the greatest, challenge is housing and cooling the cluster. Even with the high density of computing cores, these machines can be large and require substantial cooling. A dedicated machine room with supplemental air conditioning is needed, typically maintained by an IT services organization. Fortunately, most IT organizations already have such a facility, and with the decreasing size of administrative university servers, it is likely that space can be found without major building modifications. However, do not be surprised if additional power or cooling capacity is needed. The involvement of the IT organization is critical to the success of the infrastructure. Accordingly, it is important that IT services and technically-inclined faculty cultivate a good working relationship in order to communicate effectively about research and education needs.

OK, but how big?
Given these general physical specifications for the key piece of hardware, the question remains, how big a cluster? Obviously the answer depends on the institution, but I estimate 3 or 4 processing cores for each science faculty member. An alternate and perhaps more accurate way to estimate is to consider how many faculty members are already heavy computational users and already support their own facilities. I would budget about 50 cores for each such faculty member, though it is wise to more carefully estimate local usage. Part of the beauty of a shared facility is that unused computing time that might be lost on an individual faculty member’s facility can be shared by the community, reducing the total size of the cluster necessary to fulfill peak needs.
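
Applied to a hypothetical campus, these rules of thumb can be written down in a few lines; the faculty counts below are placeholders for illustration, not Wesleyan figures.

    # Cluster sizing from the two rules of thumb above (hypothetical inputs).
    science_faculty = 60        # assumed number of science faculty members
    heavy_users = 5             # assumed faculty already running their own facilities

    baseline = science_faculty * 4          # 3-4 cores per science faculty member (upper end)
    heavy_user_based = heavy_users * 50     # ~50 cores per existing heavy computational user

    # One simple way to combine the estimates is to take the larger of the two.
    print(max(baseline, heavy_user_based))  # ~250 cores for this hypothetical campus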

Software needs tend to be specialized according to the intended uses, but it is important to budget funds for various software needs, such as compilers and special purpose applications. The Linux operating system is commonly used on these clusters and helps to keep down software costs since it is an open source system. For many scientific computing users, Linux is also the preferred environment regardless of cost.

The cluster itself is of limited use without the human resources–that is, the technical staff–to back it up. At a minimum, a dedicated systems administrator is needed to ensure the smooth operation of the facility. Ideally, the administrator can also serve as a technical contact for researchers to assist in the optimal use of the cluster facility. However, to make the facility widely accessible and reap the full benefit for the larger university community, a more substantial technical support staff is needed.

The human element: resource accessibility
The presence of a substantial cluster is an excellent first step, but without additional outreach, the facility is unlikely to benefit anyone other than the expert users who were previously using their own local resources. Outreach is key and can take a number of forms.

First, faculty who are expert in the use of these computer facilities need to spearhead courses that introduce students to the use and benefits of a large cluster. This will help build a pool of competent users who can spread their knowledge beyond the scope of the course. This effort requires little extra initiative and is common at both liberal arts and larger universities.

Second, it is particularly important in a liberal arts environment to develop and sustain a broad effort to help non-expert faculty take advantage of this resource for both research and educational purposes. Otherwise, the use of these computers will likely remain limited to the existing expert faculty and the students whom they train.

Outreach across the sciences can also take the form of a cross-disciplinary organization. At Wesleyan, we established a Scientific Computing and Informatics Center, with the goal of both facilitating the use of high-performance computing and supporting course initiatives that use computational resources. The center is directed by a dedicated coordinator, who is not burdened with the technical duties of the systems administrator, and is assisted by trained student tutors.

The first goal of the center, facilitating cluster use, is primarily research-oriented. That is, the center serves as a resource where faculty and students can seek assistance or advice on a range of issues–from simple tasks like accessing the resources to complex problems like optimization or debugging complex codes. In addition, the center offers regular tutorials on the more common issues, making broader contact across the institution.

The second goal–educational outreach–is particularly important for liberal arts institutions. Educational outreach deals with all aspects of computational activities in the curriculum, not just cluster-based activities. For example, if a faculty member wishes to make use of computational software, the center staff will offer training to the students in the course, thereby leaving class time to focus on content. The center staff will also be available for follow-up assistance as the need arises. This eliminates the problem of trying to squeeze training for computational resources into existing courses.

But efforts should not stop at this level. While we are still in the early stages of our experiment at Wesleyan, I believe that such a support organization will not have a significant impact if it simply exists as a passive resource. The center must actively seek out resistant faculty and demonstrate through both group discussions and one-on-one interactions how computational resources can enhance their teaching activities.

To maintain the long-term vitality of this kind of center, it is important to retain a group of trained and motivated student tutors. To do this, we have chosen to offer students summer fellowships to work on computationally demanding research projects with faculty. Some of these students then serve as tutors during the academic year. Combined with this summer program are regular lecture and tutorial activities. These tutorials may also be expanded to reach beyond the bounds of the university to other institutions as workshop activities.

Cross-institutional collaboration
Sometimes, all of these goals can be met by a single institution. But even if this is possible, there are still benefits to looking outside the institution. And for smaller institutions, pooling resources may be the only way to develop an effective cyberinfrastructure.

While high-speed networks now make it technically possible to establish inter-institutional efforts across the country, it is important to be able to gather together a critical mass of core users who can easily interact with each other. In my own experience, this happens more easily when the users are relatively nearby, say no more than 100 miles apart. Proximity means that institutions can share not only hardware resources over the network, but also technical support staff. Of course, day-to-day activity is limited to interaction within an institution or virtual communications between institutions, but frequent and regular person-to-person interaction can be established at modest distances.

Balancing individual institutional priorities in such a collaboration is obviously a delicate process, but I envision that the institution with the most developed IT services can house and maintain the primary shared hardware resource, thereby reducing the administrative needs across several institutions. Adequate access to facilities can be guaranteed by taking advantage of the fact that most states maintain high-speed networks dedicated for educational usage. In addition, there are many connections between these state networks, such as the New England Regional Network. Personal interactions can be facilitated by regular user group meetings where users can share their questions and concerns with an audience that extends beyond their institution. In addition, new electronic sharing tools, such as wikis and blogs, can help foster more direct virtual communications.

Summary
To have a successful cyberinfrastructure in the sciences, it is essential to develop both hardware and human resources. Personal support and outreach to faculty and students is crucial if the benefits of the infrastructure are to serve a wider clientele. For liberal arts institutions, the presence of state-of-the-art infrastructure helps them to compete with larger institutions, both in terms of research and in attracting students interested in technology. At the same time, emphasizing outreach is of special importance to achieve the educational goals that make liberal arts institutions attractive to students.

Acknowledgments
I wish to thank Ganesan Ravishanker (Associate Vice President for Information Technology at Wesleyan University) and David Green for their assistance preparing this article.

College Museums in a Networked Era–Two Propositions

by John Weber, Skidmore College

To begin, let’s take it as a given that the “cyberinfrastructure” we are writing about in this edition of Academic Commons is both paradigmatically in place, and yet in some respects technologically immature. The internet and the intertwined web of related technologies that support wired and wireless communication and data storage have already altered our ways of dealing with all manner of textual and audiovisual experience, data, modes of communication, and information searching and retrieval. Higher education is responding, but at a glacial pace, particularly in examining new notions of publishing beyond those which have existed since the printed page. Technologies such as streaming and wireless video remain crude, but digital projectors that handle still image data and video are advancing rapidly, and the gap between still and video cameras continues to close. Soon I suspect there will simply be cameras that shoot in whatever mode one chooses (rather than “camcorders” and “digital cameras”), available in a variety of consumer and professional versions and price points. Already, high definition projectors and HD video are a reality, but they have yet to permeate the market. They will soon, with a jump in image quality that will astonish viewers used to current recording and projection quality.

For museums, network and CPU speed, as well as screen and projection resolution, are key aspects of these technologies. Only recently have digital images caught up with analog film in resolution and potential color accuracy (which, lest we forget, was never a given with film, either). The digitization of museum collections and their placement on higher education or public networks is undoubtedly a meaningful teaching asset, but the impact of this shift is, I suspect, largely a matter of ease and logistics, wherein the information provided replicates existing resources without fundamentally changing the knowledge gained from them. In other words, slide collections and good research libraries already provided much of the museum collection information now present on the internet. Yes, we should all be documenting our collections and making it easier for faculty and students to use those images, but no, that activity alone will not change our world in and of itself. Combined with an aggressive program to foster collection use by faculty and students, it can accomplish a great deal for a college museum, but we can and should aim even higher.

With that preface in place, let’s consider the museum, the internet, and the college curriculum as structuring conditions governing the nature of human experience that can occur within their boundaries. Museums are traditionally and fundamentally concerned with unique objects and notions of first-hand experience tied intrinsically to one specific place. There is only one Mona Lisa, and one Louvre where you can see it. There is only one Guggenheim Bilbao, and to see it you must go to Spain. In contrast, the internet is fundamentally about the replication and distribution of whatever it touches or contains, made available all the time, everywhere. And the internet continues to extend its reach, now arriving in phones, cafés, hotel rooms, airports, and no doubt soon on plane flights: ubiquitous computing, 24/7. The two conditions could not seem to be more distinct, disparate, and opposed.

Now let’s consider the nature of the college curriculum, briefly, as a structuring condition for experience and learning. Like the internet, it relies fundamentally on the reproducibility and distributability of the knowledge it seeks to offer each new generation of students. Courses are offered more than once. Books are read again and again. Disciplines must be taught in a way that adequately reproduces accepted standards and thereby transfers credits, reputations, and ultimately knowledge from grade to grade, classroom to classroom, and institution to institution, across time. The notion of a unique, one-time course is at best a luxury, at worst a foolish expenditure of time and effort–for faculty, if not for students. Shortly after becoming director of the Frances Young Tang Teaching Museum and Art Gallery at Skidmore College, for example, I overheard a tenured, senior faculty member remark over lunch that no one could compel him to create a course he would be able to teach just once. To him, the notion was absurd and counterproductive. His point was obvious: since it always takes more than one attempt to get a course developed and refined for a given student community and college culture, creating courses you can offer only once is simply not an intelligent way to teach, even if the demands of establishing a consistent curriculum would allow it, which of course they don’t.

As a new college museum director and recent emigrant from the world of the large, urban museum, I found it an instructive moment, and as someone who had periodically taught at the college level, I thought it made perfect sense. Yet at the same time, I was mildly taken aback: museums routinely create “courses” (i.e. exhibitions) that they “teach” (i.e. present) only once. The one-time special exhibition is, in fact, arguably our bread and butter. Even museums with world-class collections (e.g. New York’s Metropolitan Museum of Art or the Museum of Modern Art) rely on special, one-time exhibitions to drive attendance, increase membership, build revenue, and underwrite their economic survival. How, then, can college museums effectively link a program of changing exhibitions to the rhythm of a college curriculum?

How, in essence, can we “teach” the exhibition after it has left the gallery? How can we marry the one-time encounter with a set of unique objects to the cyclical, repeating demands of curriculum? These are central questions for college museums as they are asked increasingly to play a more significant role in the teaching efforts of the institutions that house and foster them. In fact, they may well be the central questions, since without answering them college museums are unlikely to achieve a new degree of relevance and support within their institutional context.

Paradoxically (in view of how I started this article), I’m going to argue that the best available answers are to be found in the creative use of new technologies and the internet. Networked multimedia technologies and the maturing cyberinfrastructure can’t fully reproduce the one-time experience offered by the museum space and the museum exhibition, but they can go much farther toward capturing its unique spatial, temporal, multimodal, three-dimensional impact than any publication method or recording device we have had before.

Now let’s examine the nature of exhibitions and museum installations themselves; I’m using art museums as my test case, but much or most of what I’m saying should apply to other kinds of institutions and subject matters. Museum exhibitions exist in space, and by that I mean three-dimensional space. They house and assemble discrete groups of objects, arranged by curators to create or emphasize meaning through juxtaposition, sequence, and context. Wall texts, lectures, publications, docent and audio tours have been the primary means of sharing with museum visitors the curatorial intentions driving exhibitions and the insights gleaned in the course of assembling them. Over the past decade and more, museums have experimented increasingly with interactive kiosks, websites, and more recently podcasts as ways to share insights, ideas, and background information relevant to the work on view. College museums have participated in this exploration, but only rarely led it.[1] I suspect this has to do in part with the relatively small size of education departments in college museums, combined with an orientation toward “scholarship” that finds its preferred outcome in printed matter, i.e. the scholarly catalogues valued by faculty curators, rather than in “visitor outreach” so conceived as to motivate and underwrite digital programming.

An additional factor slowing digital innovation in college museums may be the fact that the IT and academic computing staff on college campuses–which could in theory assist museums in the creation of digital learning programs–are generally beset by huge demands from across campus. Only rarely can they devote extended blocks of time and significant resources to their resident museums. Yet the presence of such theoretically available staff makes it difficult for college museum directors to argue for dedicated, in-museum staff devoted to digital matters. As a result, we attempt to piece together project teams from existing staff, work-study students and interns, a mix that seldom attains the degree of hands-on experience, longevity, or programming expertise needed to create truly new, exceptional programs. This Catch-22 is, I suspect, not a trivial issue.

In contrast, large urban museums such as the National Gallery, London, the Minneapolis Institute of Art, the San Francisco Museum of Modern Art, New York’s Museum of Modern Art, and recently Los Angeles Museum of Contemporary Art, among many others, have created groundbreaking interactive educational programming by hiring dedicated staff and devoting significant fiscal resources to their efforts. Generally, those institutions have relied on a balance of in-house staff and high-powered (but modestly compensated) outside programming and design firms.

Yet despite the relative lack of resources that college museums devote to their digital education efforts, the potential rewards of doing so are significant. In particular, I believe that college museums also have a vested interest in exploring an area of digital programming that remains largely untouched by their civic counterparts, namely, the creation of rich multimedia documentation and multilayered, interactive responses to exhibitions themselves, after they have opened. Such programming would focus not only on the basic content of the exhibition (i.e. the individual images and objects in it) but on the physical exhibition itself as a carefully considered form of content and utterance. Such programming would take full advantage of the completed exhibition as the arena for both documenting and interrogating the set of propositions, insights, and ideas expressed in its physical layout and checklist. It would survey curatorial, scholarly, and lay responses to the completed show, allowing insights gained in the final installation and post-facto contemplation of the exhibition to emerge over time. Finally, such an approach would offer real-time walk-throughs of the exhibition, as well as high-quality, 360-degree still images, providing future virtual visitors a strong, visceral sense of what it felt like to be in the galleries with the work, looking from one object to another, moving through space, and getting a sense of the way the curator used the building and its architecture.[2]

Although simple and straightforward, this practice has rarely been explored due to the mandates and pressures governing digital education programs in large museums. With a few laudable recent exceptions [3], major museums create interactive programs designed to provide visitors to upcoming exhibitions with background information on the basic content to be on view. They create their programs to be ready on opening day. Once the exhibition is open, the harried staff moves on to the next project for the next show. In short, such institutions ignore the exhibition as a finished product and focus on its raw content, a practice that makes sense given their audiences and economics. A quick survey of museum websites demonstrates that few museums are even in the practice of posting extensive images of their shows or galleries online, regardless of the extensive databases of collection images they may maintain.

For college museums that seek to create new ways to encourage faculty to teach their content and bring classes to their galleries, the potential benefits of creating experientially gripping and idea-rich responses to exhibitions should be obvious: digital technologies can allow us to teach an exhibition after it closes, and that would be a fundamentally new step for the museum world.

The second thing I’d like to discuss is the potential relevance of museum-based teaching and learning to generations of students (and soon faculty) for whom the structured but non-linear, highly visual as well as verbal, multimodal information world offered by the cyberinfrastructure is second nature. Highly textual, the World Wide Web in particular is also routinely and compulsively visual. It is a domain that is designed. Pictures are used as building blocks in enterprises created to argue, inform, archive, entice, sell, and distract. Rarely do we now encounter a text-only website; instead, text-image juxtapositions prevail, and websites now typically offer a mix of static graphics, sound, and animated graphics or video clips. Effective graphic design, or “information design” if you will, is essential. Students today grow up in this world and live there. Significantly, their cyberworld is a social world of self-projection and at times fantasy (i.e. blogs, Facebook pages, and social gaming) as well as a realm of entertainment and research.

As higher education considers the “digital native,” “net generation” students now entering the academy, the question of how to teach what is variously referred to as visual literacy, information literacy, twenty-first-century literacy, or expanded literacy comes increasingly to the forefront. I share the conviction that unless colleges and universities find a way to expand their text-based notions of literacy, analysis, and critique to include the domains of the visual and the moving image, we are not equipping our students adequately to enter either the future academic world or the workplace. Quite simply, the tools that empower and govern human expression have changed, and the academy needs to decide how it will respond.

As I have argued elsewhere[4], museums can potentially play an intriguing role in fostering forms of visual literacy and expanded literacy suited to the digital, networked era. Like the internet, the museum space is structured, yet non-linear. You move through museum galleries laterally from object to object in a largely self-determined path, much like motion from webpage to webpage. Both experiences are highly but not exclusively visual. Along with looking, museum visits generally encompass reading, listening, talking to friends and family members or museum personnel, and making decisions about how long to linger in any given place. Museum visits, like many web visits, are infused by random user choices made within spatial structures that are highly designed and planned by their builders.

Teaching within the museum space forces faculty and students alike to make different choices about how to structure time, how to do research, and, one hopes, about how to present their ideas, analysis, and conclusions. In pushing the visual dimension of experience and analysis to the forefront, museum exhibitions of all kinds force participants to use their eyes and link what is seen to what is said and written.
Notions of proof and argument evolve in new ways when first-hand, three-dimensional visual artifacts rather than texts are the subject of analysis. For example, a professor I know begins a class by bringing her students to the museum and showing them everyday ceramics and pottery from the American southwest. Without the benefit of library research, she asks them to deduce everything they can about the people who produced the artifacts from the visual evidence in front of them, unaided by others' insights. Allowing students to work with visual evidence similar to the material confronted by working archeologists, and forcing them to use only their eyes and brains, demands that they both look and think for themselves, expressing their own conclusions in their own words.

As another example of the intersection of visual and analytical learning in the museum environment, Molecules That Matter, a special exhibition on view this year at the Tang Museum, was originated by a longtime Skidmore organic chemistry professor, Ray Giguere. Investigating ten organic molecules that influenced twentieth-century history (aspirin, isooctane, penicillin, polyethylene, nylon, DNA, progestin, DDT, Prozac, and buckyball), the exhibition brings together a wide variety of artworks and objects of material culture with a set of huge, specially commissioned, scientifically accurate molecular models. Reaching into fields as diverse as women's studies (progestin is the molecule responsible for oral contraception), economics, psychology, engineering, medicine and nutrition, technology, environmental studies, and of course art and art history, it offers a wealth of ways, visual and otherwise, for faculty and students to engage its subject matter. Crucially, the show seeks to function as a starting point for wide-ranging investigations, research projects, and responses. Its many topics are far too broad to sum up here; instead, Molecules That Matter offers specific, highly stimulating, and revealing artifacts as visual bait to lure non-scientists and future scientists alike to reconsider how organic chemistry runs through their everyday lives in unnoticed ways.

Working on an extended website for the show with a group of students, Susi Kerr (the Tang's senior educator), Ray Giguere, I, and the rest of the exhibition team had to ask the students and ourselves again and again how we could not simply say but show the ideas we sought to convey. In both the museum and on the internet, words alone simply don't entice or suffice. Furthermore, in both domains, not all visual experiences are created equal–some pictures, objects, and images are more powerful and academically appropriate than others, and learning to distinguish between them is a key skill that students (and first-time faculty curators) need to learn. I have also found that museum writing (for intro texts, extended object labels, and even catalogue essays for non-specialist audiences) has more in common with writing for the web than does the traditional academic paper. Museum writing is inherently public, for one thing, and meant to be read by people who can walk away the minute they lose interest. That said, all three forms of writing (museum, web, and academic) need to be succinct, grammatically correct, pleasingly well-crafted, and intellectually sound.

To sum up, the two propositions outlined here argue for (1) the importance of networked digital technologies to the particular mission of the college museum, and (2) the potential importance of the college museum in teaching forms of visual literacy suited to the internet era in innovative and appropriate ways. I take it as a given that museums and the materials they hold and display are valuable to their particular subject domains and academic disciplines. That should be obvious and beyond dispute, and for that reason alone college museums deserve a place on their campuses. However, if we are to play an even more essential and intriguing role in higher education, museums of all varieties must explore how we can function as a core aspect of the overall teaching effort of our institutions, and how we can regularly address multiple disciplines in our exhibitions. At that moment, our intersection with the cyberinfrastructure and the largely unexploited teaching potential of digital technologies takes on a new significance.
NOTES

[1] One exception I can think of is American Visions, The Roy L. Neuberger Collection, an excellent, early interactive CD-ROM published by SUNY Purchase in 1994. Tellingly, the art historian who worked on it was Peter Samis, who soon became head of interactive educational technologies at SFMOMA and pioneered our efforts to develop SFMOMA’s award-winning interactive programs.

[2] See, for example, the brilliant use of QuickTime VR in Columbia University's Real?Virtual, Representing Architectural Time and Space, which stunningly documents Le Corbusier's chapel of Notre-Dame-du-Haut at Ronchamp.

[3] New York MoMA's recent Richard Serra retrospective was accompanied by an admirable video walk-through of the completed exhibition, narrated insightfully by the artist himself. In Los Angeles, the Museum of Contemporary Art created an extensive site that visually documents the WACK! exhibition on the history of feminist art, and brings to bear the voices of many artists and scholars who spoke at the museum while the show was on view. Audio of the artists and other speakers was complemented by images of them with their audiences, and by a listserv allowing others to comment. Together, these programs brought the exhibition itself to life, adding texture, voice, and personality rarely seen in the "big museum" world.

[4] See "Thinking Spatially: New Literacy, Museums, and the Academy," EDUCAUSE Review Online, January/February 2007, pp. 68-69.

Museums, Cataloging & Content Infrastructure: An Interview with Kenneth Hamma

by David Green, Knowledge Culture

Ken Hamma is a digital pioneer in the global museum community. A classics scholar, Hamma joined the Getty Trust in 1987 as Associate Curator of Antiquities for the Getty Museum. He has since had a number of roles there, including Assistant Director for Collections Information at the Getty Museum, Senior Advisor to the President for Information Policy and his current position, Executive Director for Digital Policy and Initiatives at the Getty Trust.

David Green: Ken, you are in a good position to describe the evolution of digital initiatives at the Getty Trust as you’ve moved through its structure. How have digital initiatives been defined at the Getty and how are they faring at the institutional level as a whole, as the stakes and benefits of full involvement appear to be getting higher?
Ken Hamma: "Being or becoming digital"–shorthand for the thousands of changes institutions like this go through as they adopt new information and communication technologies–has long been discussed at the Getty from the point of view of the technology. And it did once seem that applying technology was merely doing the same things with different tools when, in fact, we were starting to embark upon completely new opportunities. It also once seemed that the technology would be the most expensive part. Now we've learned it's not. It's content, development and maintenance, staff training, and change management that are the expensive bits.

About 1990 it seemed to me (without my realizing the impact it would have) that it was the Getty's mission that would and should largely drive investments in becoming digital, and that it would require someone from the program side of the house to take more than a passing interest in it. I know that sounds impossibly obvious, but it wasn't nearly so twenty years ago, when computers were seen by many as merely expensive typewriters and the potential of the network wasn't seen yet at all. Needless to say, the interim has been one long learning curve with risks taken, mistakes made, and both successes and failures along the way. Now, we've just got to the point at the Getty where–with a modicum of good will–we can think across all programs with some shared sense of value for the future. We now have a working document outlining the scope and some of the issues for digital policy development at the institution, covering things like the stewardship and dissemination of scholarship, digital preservation, and the funding of similar activities elsewhere. Within this scope, we'll be considering our priorities, the costs and risks involved, and some specific issues such as intellectual property and scholarship, partnerships, and what kind of leadership role there might be for the Getty.

Do you see the Getty, or some other entity, managing to lead a project that might pull museums together on some of these issues?
There's only a certain amount that can be done from inside one institution, and there are some fundamental changes that can't be made that way but probably need to be made. One of the big problems about technology is its cost. For so many institutions it's still just too expensive and too difficult. There's a very high entry barrier–software license and maintenance fees as well as technology staff, infrastructure development and professional services–in short, the full cost of owning technology. The result isn't just a management problem for museums but an opportunity cost. We're falling behind as a community by not fully participating in the online information environment.

There was a technology survey in 2004 of museums and libraries that pointed out that although small museums and public libraries had made dramatic progress since 2001, they still lagged behind their larger counterparts.[1] While almost two-thirds of museums reported having some technology funding in the previous year, 60% said current funding did not meet technology needs and 66% had insufficiently skilled staff to support all their technology activities. This problem is complicated by a gap between museums' community responsibilities and the interests of the commercial museum software providers–notably the vendors' complete lack of interest in creating solutions for contributing to aggregate image collections. There was a similar gap between library missions and OPAC (Online Public Access Catalog) software until OCLC grew to fill that gap in the 1980s.

Can you imagine any kind of a blue-sky solution to this?
Well, imagine a foundation, for example, that took it upon itself to develop and license collection-management and collection-cataloging software as open source applications for institutional and individual collectors. It might manage the software as an integrated suite of web applications along with centralized data storage and other required infrastructure at a single point for the whole museum community. This would allow centralized infrastructure and services costs to be distributed across a large number of participating institutions rather than being repeated, as is the case today, at every institution. Museums could have the benefits of good cataloging and collection management at a level greater than most currently enjoy and at a cost less than probably any individual currently supports.

Managing this as a nonprofit service model could create cataloging and collection management opportunities that are not just faster, better and cheaper, but also imbued with a broader vision for what collecting institutions can do, both individually and as a community in a digital environment. If we could do this by providing open source applications as well as web services, it would build value for the community rather than secure market advantage for a software vendor. A service model like this could also assume much of the burden of dealing with the highly variable to non-existent data contributions that have plagued previous attempts to aggregate art museum data. And I think it could do so largely by enabling better, more easily accessible cataloging tools that yield consistent metadata.[2] This problem of aggregating museum data has a relatively long history, and its persistence suggests that though current schemes are certainly more successful, what the community needs is a more systemic approach. One of the problems is that there just isn't a lot of good museum data out there to be aggregated. So as for aggregated repositories other than those that are hugely expensive and highly managed (like ARTstor), they're unlikely to happen anytime soon. There's not enough there there to aggregate with good results.

Cataloging seems to be the key to this future, as far as museums' resources are concerned. Would this scenario be a first step in producing some good common cataloging?
Well, yes. It's not enough to say to institutions, "You have to be standards-compliant, you have to use thesauri, you have to use standards, you have to do this and do that." There are a lot of institutions that aren't doing anything and aren't going to do things that are more expensive and time consuming. So it's not going to help to say that collection managers should be doing this. They're just not going to do it unless it's easier and cheaper, or unless there's an obvious payoff–and there isn't one of those in the short term.

So such a project, if it were ever undertaken, would be about providing infrastructure, about providing tools?
Yes, as well as thinking about how we maintain those tools and how we provide services. Because most cultural heritage institutions don’t have IT departments and probably never will, how can we think about sharing what’s usually thought of as internal infrastructure? I mean, choose a small museum with a staff of three; you can’t say ‘you can’t have a finance guy because you need IT,’ or ‘you can’t have a director because you need to do cataloging.’ That’s just not going to happen.

There’s a related model that you have been working on that provides a technical solution both to cataloging and to distribution. If I’m right, it’s not about creating a single aggregated resource but rather about enabling others to create a range of different sources of aggregated content, all using metadata harvesting.
Yes, it's still in its formative stages, but the essential idea is to put together a system that is lightweight, easily implemented by small institutions, doesn't require huge cataloging overhead and that supports resource discovery. A problem today is that if you wanted to ask for, say, an online list of all Italian paintings west of the Mississippi, that presupposes that all collections with an Italian painting are participating. But we're so far from that. It's the rich and well-funded that continue to be visible, and the others are largely invisible. So can we come up with a protocol and a data set that allow for easy resource discovery while keeping a low bar for cataloging and metadata production for unique works?

In this project, we've gone through a few rounds now, using the recently developed CDWA Lite as the data standard, mapping that to the Dublin Core in the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). Dublin Core, as we've all learned, is a bit too generic, so we've applied some domain knowledge to it and have additionally included URL references for images. We've collaborated with ARTstor and have done a harvesting round with them. Getty's paintings collection is in ARTstor not because we wrote it all to DVD and mailed it to New York, but because ARTstor harvested it from our servers. Just imagine we get to the point where all collections can be at least CDWA-Litely cataloged–say just nine fields for resource discovery. Then these can be made available through an exchange protocol like OAI-PMH and then interested parties such as an ARTstor (who might even host an OAI server so not every collecting institution has to do that) could harvest them. If we could get that far and we imagine that other aggregators like OCLC might aggregate the metadata even if they didn't want the images, it could be completely open. The network would support collection access sharing and harvesting that would be limited only by the extent of the network. Any institution (or private collector) could make works available to the network so any aggregator could collect it. A slide librarian at a college, with desktop harvesting tools, could search, discover and gather high-quality images and metadata for educational use by the teachers in that school. Or perhaps intermediate aggregators would do this with value-added services like organizing image sets for Art 101 at a cost that might suggest a different end-user model.
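
To make the mechanics concrete: on the aggregator's side, the harvesting described above comes down to issuing OAI-PMH requests and parsing the XML that comes back. The following is a minimal sketch, in Python, of a harvester for Dublin Core records; it is an illustration only, not the Getty's or ARTstor's actual implementation, and the endpoint URL is hypothetical. A production harvester would also handle resumption tokens for large result sets, a CDWA Lite metadata prefix, the image URL references, and error responses.

```python
# Minimal sketch of an OAI-PMH harvester for Dublin Core records.
# Illustration only: the base URL below is hypothetical, and a real
# aggregator would add paging (resumptionToken), richer metadata
# prefixes (e.g., CDWA Lite), and error handling.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest(base_url, metadata_prefix="oai_dc"):
    """Yield (identifier, title) pairs from one page of ListRecords."""
    query = urllib.parse.urlencode(
        {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    )
    with urllib.request.urlopen(f"{base_url}?{query}") as response:
        tree = ET.parse(response)
    for record in tree.iter(f"{OAI}record"):
        header = record.find(f"{OAI}header")
        identifier = header.findtext(f"{OAI}identifier")
        title = record.findtext(f".//{DC}title")
        # An image aggregator would also follow the URL references
        # to images carried in the record's metadata.
        yield identifier, title

if __name__ == "__main__":
    # Hypothetical repository base URL, for illustration only.
    for identifier, title in harvest("https://example.edu/oai"):
        print(identifier, title)
```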

How far away is this from happening?
The protocol exists and will likely very shortly be improved with the availability of OAI-ORE. The data set exists but is still under discussion. That will hopefully be concluded in the coming months. And the data standards exist, along with cross-collection guides on using them, like CCO (Cataloging Cultural Objects). The tools should not be too hard to create. The problem again is the institutional one, the usual one when we're talking about content. Most museums are not willing to enter into such an open environment because they will want to know who is harvesting their collection. It's the reaction that's usually summed up by "we're not sure we can let our images out." These are those expected nineteenth-century attitudes about protecting content, along with the late twentieth-century attitudes that have been foisted on the museum community about "the great digital potential"–generating revenue based on that content as long as they control it and don't make it accessible. How sad.

The recent NSF/JISC Cyberscholarship Report[3] discusses the importance of content as infrastructure, and how any cyberscholarship in a particular discipline is grounded until that part of cyberinfrastructure is in place. Museums are clearly far behind in creating any such content infrastructure out of their resources. What will it take to get museums to contribute more actively to such an image content infrastructure? Is there a museum organization that could coordinate this or will it take a larger coordinating structure? Will museums be able to do this together or will they need some outside stimulus?
If it isn't simply a matter of waiting for the next generation, I don't really know. It would really be helpful if there were, for example, a museum association in this country that had been thoughtfully bringing these issues to the attention of the museum community, but that hasn't been the case for the last twenty years. And museums are different from the library community with respect to content-as-cyberinfrastructure in that they're always dealing with unique works. This changes two things: first, one museum can't substitute a work in the content infrastructure for another one (the way in which a library can supply a book that another library cannot); and, secondly, for these unique works there's a much greater sense of them as property ("it's mine"). This, in a traditional mindset, raises the antenna for wanting to be a gatekeeper, not just to the work but even to information about it. You can see this in museums talking about revenue based on images of the works in their collections, or the need for museums to be watching over "the proper use" (whatever that is) of their images. Not that we don't need to be mindful of many things like appropriate use of works under copyright. But there is still the sense that there's got to be something (financial) gained from these objects that are "mine," whereas most of these collections are supported by public dollars and there must be some public responsibility to make them freely available.

You've talked elsewhere about the "gatekeeper" mentality among many museum professionals, perhaps especially curators. How do you imagine the forward trajectory of this? How will this gatekeeper mentality play out?
Yes, it's been very frustrating, but I think it is changing. Even over the past few years I think there's been significant change in how people think about their gatekeeper role. Today–different from ten years ago–I would say curators are less and less gatekeepers, and directors are being caught off-guard by curators proposing greater openness of the sort that will take advantage of network potential. The Victoria & Albert Museum, the Metropolitan Museum and others are now making images available royalty-free for academic publishing.[4] And along with these changes there is a change in the tenor of the discussion. We want to keep the conversation going as much as possible in hopes that we can move toward a world where objects, especially those in the public domain, can become more fluid in this environment. Much of the shift in attitudes toward intellectual property can be summed up as focusing more on maintaining appropriate attribution for work than on asserting "ownership"–on saying, "it's mine, you have to pay me for it." If we're honest we have to admit that there's really not a lot of money in the whole system around these kinds of resources. In fact, the real value of these items lies in their availability, their availability for various audiences but especially for continued scholarship and creativity.

That’s a good point. Not too long ago the Yale art historian Robert Nelson said in an interview here that whatever is available online is what will be used, what will create the new canon. He made the analogy to JSTOR. In teaching he notices that the articles he cites that are in JSTOR are the ones that get read; the others don’t.
Yes, that's absolutely true. And it will take one museum or one major collecting institution with the imagination to see that, and to see that, in addition to people coming into the gallery for a curated exhibition, this other experience of network availability and use has extraordinary value. And if there were two or three really big collections available–literally available as high-quality public domain images, not licensed in any way–one could imagine there would be significant change in attitudes pretty quickly.

You’ve described the open quality of the digital environment as threatening to many in institutions. Could you elaborate a little on that?
The extent to which the opportunities bundled here for realizing mission in non-profits are perceived as threats derives largely from confusing traditional practice with the purpose of the institution. The perception of threats, I think, clearly has been decreasing over the last few years as we become more comfortable with changes (perhaps this is due to generational shift, I don't know). It is decreasing also as we continue with wide-ranging discussions about those traditional practices, which were well suited to business two decades ago but act as inappropriately blunt instruments in the digital environment. These include, for example, the use of copyright to hold the public domain hostage in collecting institutions; notions of "appropriate" cataloging, especially for large-volume collections that are more suited to slower-paced physical access than to the fluidity of a digital environment; and assumptions that place-based mission continues alone or would be in some way diminished by generous and less mediated online access.

In your ACLS testimony back in 2004 on the challenges for creating and adopting cyberinfrastructure, you argued that the most important work ahead for us all is not the technology or data structures but the social element: the human and institutional infrastructure. Is this the weakest link in the chain?
I'm not sure that I would still describe institutions and people as the weakest link, but rather as the least developed relative to technology and the opportunities it brings. This too seems to have changed since the start of the work of the ACLS Commission. We can do plenty with the technology we now have on hand, but we've frequently lacked the vision or will to do so. One of the most startling examples of this became visible several years ago when the Getty Foundation (the Grant Program) was awarding grants under the Electronic Cataloging Initiative. Many Los Angeles institutions received planning and implementation grants with varied results. One of the most successful would have been predicted by no one other, I suppose, than the hard-working and ingenious staff employed there: namely, the Pacific Asia Museum. Greater-than-average success from an institution with, to all appearances, less capacity and fewer resources than other participants was not based on access to better software or on an IT manager who would only accept a platinum support package. It was based on the will and the imagination of the staff and the institution.

So would you cite that museum as one that is successfully redefining itself for a digital world?
Yes. You know, there are lots of museums that are doing really good work, but it’s going to take time and the results will show up eventually. If all the effort over the next ten years or so is informed by more open attitudes about making content more available–seeing content as cyberinfrastructure–then it will be all the better. It really is a question of attitude in institutions and a willingness to see opportunities. Almost never believe “we haven’t got the money to do it.” In scholarly communication there are millions of dollars going into print publications that, for example, have a print run of several hundred, for heaven’s sake. You just need to take money out of that system and put it into a much more efficient online publication or collection access system.

It’s about attitude and a willingness to invest effort. The Pacific Asia Museum is a good example. It doesn’t have the budget of the other large institutions in LA and yet it was among the most successful in taking advantage of this opportunity from the Getty’s electronic cataloging initiative. They were very clear about the fact that they wanted to create a digital surrogate of everything in their collection, do some decent cataloging and documentation and make it available. That just sounds so perfectly obvious. But that there are so many institutions that don’t seem to get something so basic, that don’t understand some aspect of that, is just completely astounding to me.
NOTES 

[1] Status of Technology and Digitization in the Nation's Museums and Libraries (Washington, DC: Institute of Museum and Library Services, 2006), http://www.imls.gov/publications/TechDig05/index.htm.

[2] Recent aggregating efforts include ARTstor and, before it, AMICO, both of which look back to the Getty's Museum Educational Site Licensing project and the earliest attempt to coordinate art museum data at the point of cataloging, in the Museum Prototype software from the Getty Art History Information Program.

[3] William Y. Arms and Ronald L. Larsen, The Future of Scholarly Communication: Building the Infrastructure for Cyberscholarship, report of a workshop held in Phoenix, Arizona, April 17-19, 2007, http://www.sis.pitt.edu/~repwkshop/NSF-JISC-report.pdf.

[4] See Martin Bailey, "V&A to scrap academic reproduction fees," The Art Newspaper 175 (Nov 30, 2006), http://www.theartnewspaper.com/article01.asp?id=525; The Metropolitan Museum, "Metropolitan Museum and ARTstor Announce Pioneering Initiative to Provide Digital Images to Scholars at No Charge," press release, March 12, 2007; and Sarah Blick, "A New Movement to Scrap Copyright Fees for Scholarly Reproduction of Images? Hooray for the V & A!," Peregrinations 2, no. 2 (2007), http://peregrinations.kenyon.edu/vol2-2/Discoveries/Blick.pdf.

Cyberinfrastructure: Leveraging Change at Our Institutions. An Interview with James J. O'Donnell

by David Green, Knowledge Culture

James O'Donnell, Provost of Georgetown University, is a distinguished classics scholar (most recently author of Augustine: A New Biography), who has contributed immensely to critical thinking about the application of new technologies to the academic realm. In 1990, while teaching at Bryn Mawr College, he co-founded the Bryn Mawr Classical Review, one of the earliest online scholarly journals, and while serving as Professor of Classical Studies at the University of Pennsylvania, he was appointed Penn's Vice Provost for Information Systems and Computing. In 2000 he chaired a National Academies committee reviewing information technology strategy at the Library of Congress, resulting in the influential report, LC21: A Digital Strategy for the Library of Congress. One of his most influential books, Avatars of the Word (Harvard, 1998), compares the impact of the digital revolution to other paradigmatic communication shifts throughout history.

David Green: We're looking here at the kinds of organizational design and local institutional evolution that will need to happen for liberal arts (and other higher-education) institutions to take advantage of a fully deployed international cyberinfrastructure. How might access to massive distributed databases and to huge computational and human resources shift the culture, practice and structure of these (often ancient) institutions? How will humanities departments be affected–willingly or unwillingly? Will they lead the way or will they need to be coaxed forward?
James O'Donnell: I think the issue you're asking about here boils down to the question, "What problem are we really trying to solve?" And I think I see the paradox. The NSF Cyberinfrastructure Report, addressed to the scientific community, could assume a relatively stable community of people whose needs are developing in relatively coherent ways. If wise heads get together and track the development of those needs and their solutions, you can imagine it would then just be an ordinary public policy question: what things do you need, how do you make selections, how do you prioritize, what do you do next? NSF has been in this business for several decades. But when you come to the humanities (and full credit to Dan Atkins, chair of the committee that issued the report, for saying "and let's not leave the other guys behind") and you ask "what do these people need?" you come around to the question (that I take it to be the question you are asking of us) "Are we sure these people know they need what they do need?"

In the humanities, it's more that for a long time a bunch of people have been able to see, with varying degrees of clarity, a future, but that hasn't translated to a science-like community of people who share that need, recognize it and are looking around for someone who will meet those needs–if not in one way then another. With the sciences, it's almost like a natural market. So it's easy enough for the CI group to say "This is what forward-looking, directionally-sensible humanists need." But then we look around the institution and say: "Hello, does anyone around here know they need this stuff? And if so, why aren't people doing more about this?" And we're all a little puzzled by the gap and trying to put an interpretation on it. Is this a group of people who are burying their heads in the sand and will be obsolete in three to five years? Or is it a group of people who are not seduced by fashion and gimmickry and are just sticking with their business, undistracted and doing darn good work? Or is it somewhere in between?

I'm curious about the differences between what we're told is coming, the next wave of radically magnified networking and computing power, and the first wave, when the Internet hit in the mid-90s. Before that you had a fairly small but robust set of networks that had been built for a relatively tiny number of scientists. Then with the Web and the government white papers, the Internet hit in a pretty big way. Some members of the humanities community realized there were tools and capabilities here that could change the way they do business and a tiny minority proceeded to work in this way. Now, how will things go this time around? Will it just be a repeat: a few innovators declaring rather insufficiently that this will radically alter the way we do business in the humanities and the vast majority skeptically watching and waiting–for who knows what? And within the institutions–will the big changes happening in the sciences "trickle down?" How much interaction is there between the two cultures?
Let me start by trying to characterize the two waves. First a story: When I was at Penn, I took over the networking in 1995 and one of the stories I got was about Joe Bordogna, Dean of the Engineering School, who in 1984 really believed in this networking thing and he wanted to get the campus backbone built and connected to the Internet. Nobody much agreed with him, there was no money for it and no one believed it would happen. He finally got them to agree to build it on the mortgage plan (a fifteen-year loan). Three years after I got there, the mortgage was paid off and we had something like a million dollars a year we could spend on something else (even though, while the cables and wiring were all there, all the electronic equipment attached to it was long gone by the time the mortgage was paid off). That was visionary and it was challenging. But it was clear, in retrospect, that that was what you had to do: you had to build network infrastructure and had to figure out how to make it work. I came into the IT business partly due to the crunch of the mid-90s. Without anyone planning it, this electrifying paradigm shift occurred. The physical form of it was Bill Gates releasing Windows 95 on August 28, 1995, three days before students returned to campus, all demanding it be loaded onto their machines while all the guys in IT support hadn't had time to figure out how it worked. So there was a real crunch time as we had to figure out how to get all these machines installed, all designed for the new network paradigm (Windows 95 had the new Internet Explorer browser bundled with it). So we were all suddenly moving into this new space. Nobody had planned for it and no one understood it. But what everyone did know was that you had to connect your machine to the network, and that's the paradigm that's remained fairly stable ever since. You have a basic machine–it's shrunk to a laptop now (essentially none of our students have "desktop" computers any more)–and you connect to the network, but nothing else has substantially changed. The under-the-hood browser environment is more complex than it used to be, but nobody's had to take lessons, the change has been seamless. So my concern is that today there's no high-concept transition. We've had to (a) build networks and (b) connect machines to networks. There's nothing so clear to face now as what we had fifteen to twenty-five years ago.

There's wireless and WiFi that's exploding, then there's the continuing miniaturization, and the iPhone. Is that all incremental change?
Yes, and it all feels incremental. The place where there is real change is invisible. It’s in the handset and all the things it can do now and, though the browser on my Blackberry is pretty crippled, I can get core critical information that way and when I’m really bored in a meeting I can read Caesar’s Gallic Wars in Latin on my handheld. It also gets me through an overnight trip now. I don’t lug a laptop around with me quite so much.

And then there's the additional bandwidth. It's also incremental, but it's pretty fast now.
You know, I must have dozed off for a while, because I never noticed the point at which basic web design started assuming broadband.

And assuming video.
Right. But for a long time, basic web design was geared to deliver the best experience over a 28K dialup connection. Now we’re past that. If we go back to the average humanities academic, he’s talking on his cell phone, doing email, and web-surfing every morning. When I read Arts & Letters Daily with my orange juice and I see a piece I like, I hit “Print” and it’s waiting for me at my office when I get there 30 minutes later.

It’s making things faster, but it’s not changing too much?
Yes, this is automation carried to a point where there is a change in kind as well as in degree. I’m reading more kinds of stuff; I’m a better informed person across a wider range of topics than I was. I am a different person because I do this. But it’s an incrementally different kind of person. Nothing substantial has changed.

Now, although I’d like to pursue the social networking route, I also want to ask if you think there are any pull factors at work on humanities faculty. What would entice faculty to really deeply engage with networking? It’s certainly not collaboration, in itself at least.
There’s the real question of whether most academic behavior is really driven by the content of our enquiry versus how much of it is the need to perform. “Published by Harvard University Press” is still a superior form of performance to any form of electronic publication that you can now imagine.

So the intensity of a social intellectual life that might be increased through collaborative engagement online is of no comparison to that kind of imprimatur?
For many that is correct. I mean, I may be writing better articles because I’m in touch with more people. (I just checked the number of emails in my Gmail account over the last 6 months and it’s a mind-boggling number, something like 1500, so compared to the total number of people I ever met, spoke to on the phone or wrote paper letters to back in the day, it’s half an order of magnitude.) But it’s not getting us to a tipping point where instead of doing x I’ll decide to do y. Instead, I’m just running faster in place. And that’s interesting.

So now I’m provosting, I believe in this future. I’ve written about it. I think we can get somewhere. I think it’s exciting. But has my own personal practice changed that much? Not that much.

Could one tipping point be when the majority of the resources you use are in digital form? I know that would vary dramatically across disciplines.
Well it makes it easier for a humanities scholar-provost to write books while provosting. It means I can carry an amazing library on the train and read through stuff I wouldn’t otherwise be able to get to.

Put another way: does the format of one’s resources affect the format of how one will eventually produce or publish one’s work?
Not to my knowledge. I’m still writing “chapters”–and that’s interesting to me. Even at my level, the obvious rewards are for writing in traditional formats rather than for doing something digital–even down to dollar rewards. I mean, if you’re a scholar wanting to break through to a new audience, you do that through a trade publisher in New York.

At Georgetown you work with science departments that are engaged in cyberinfrastructure projects, so you’re quite conversant with what they’re doing and how. And our big question, where we started tonight, is whether there’s any connection between this activity in the sciences at Georgetown and the humanities. Will the humanities and social sciences always be the poor neighbors who might get to see what the sciences are up to and, if they’re lucky, might occasionally benefit from trickle-down effects?

That's one extreme position–and it's an external and judgmental one. An internal extreme position is "Well, we're doing just fine, thanks!" And those two may be somewhat congruent. In between is a more hopeful and responsible position that says "Look, we are moving forward, developing things gradually." You saw the piece in today's Chronicle of Higher Education about Stephen Greenblatt's[1] new course he's teaching at Harvard? Almost the most important thing about that, by the way, was that they mentioned Stephen Greenblatt by name–because he's truly famous and writes famous books and if he does this kind of thing then it must be OK.

And so this is the kind of thing that we need, only much more of it? But how old is Ed Ayers' complaint that despite all of the really substantial and revolutionary work many have done in creating and using digital resources, as a community we were not moving forward, that real cyberscholarship is still-born?[2] He has pushed as hard as anyone and is as prominent as can be. For his pains, they've made him first a dean and now a president. But there's the tendency for people to sit back and say "Look at that Ed go, isn't he marvelous," and that's the puzzle. I'll come back to say that the core issue for me is still the one of defining the problem that we have to solve persuasively enough that we get enough people interested in solving it.

What's the role of librarians in this? They seem to be leading in pushing for the provision of digital resources.
Librarians are very good inside and outside agitators in this regard. A logical way to make progress happen is to substantially support them in what they do. I have to say at Georgetown every time we do something digital in the library, foot traffic in the building and circulation of physical material goes up. For example, we offer more transparent web access to the library catalog with more links on it, letting you order stuff from other libraries–and foot traffic goes way up. We can’t stop them coming in (and sure aren’t trying to!).

So the building will be around for a while?
Let me be provostial here and say not only will the building be around, but in five years we'll have to seriously renovate and consider building an extension. And this for many of our stakeholders (board members and donors) is at first glance counterintuitive: "I thought all that stuff was digital now." But students are going in more and more, and going in collaboratively–to see and talk to each other. I'm left figuring out how to budget for it.

You’re clearly deeply engaged by the present. Do you spend much time going the John Seely Brown route[3] and thinking through what the university of twenty years hence will look like?

Well, that's kind of my day job. We're about to kick off a formal curriculum review process at Georgetown that will take years to enact. My task is to have my colleagues challenge themselves to think about the abstract questions of what the goals might be for bringing people together in one place for four years and how we might get there. Can we even get to challenging ourselves about the four-year-ishness, the academic-term-ishness? That's going to be very hard. As long as that is so powerfully the model, so powerfully the business plan, and so universal the expectation, even breaking up student time so they can spend a month on a project is really, really hard. Now this has nothing to do in itself with digital, but there are things you can imagine with new empowering technologies that would be really, really cool if they could do that.

Will there be opportunities for serious consideration of totally discontinuous change?
Definitely. But we're just beginning, and we have to acknowledge that anytime you go anywhere near a faculty meeting, you get what I call the Giuliani Diagnosis of NYC traffic: gridlock is upon us, and the natural behavior of everyone around us is to go get a bigger SUV and lean on the horn some more. Now, that's not a good thing, and wisdom in such a situation is not about reinventing spaces for living together but consists of the first emergency-response level of striping certain intersections and changing the timing on the stoplights, because everything is so entangled and interwoven. That's why I say getting students to get a four-week period to sit together to collaborate and do something truly new and different together is extremely hard. Again, for reasons that have nothing to do with electronic technology but everything to do with institutional structures we have chosen with certain kinds of assumptions in place. (Giuliani left New York before they did more than the emergency response, of course.)

The university is a highly evolved form, so it’s hard to suddenly change direction, or grow a new limb.
Yes, so any academic looking at this has to have pessimistic days in which you say “survival will go to the institution that can start afresh.” I’m reading a report by a colleague on “Lifelong Learning in China” and my question for him will be, “Do you think this vision for lifelong learning in China, where they don’t have such a vast installed base as we have, will/could/should be as exclusively associated with the kind of four-year institutions of learning we have in this country, or will the model get created not in rich first-world institutions but in places where productivity and output matter, where people will invent forms that are genuinely creative and more productive and efficient than we have now?”

Will that kind of conversation enter the curriculum review?
I’m chairing it, so we’ll see. But I have no illusions about my ability, resource-challenged as the institution is, simply to grasp the helm and do hard-a-lee and steer off in a different direction. You have to get a whole load of folks shoveling coal in the engine room to get buy-in before you can do that.

I'd like to make an observation: Theodore Ziolkowski, who wrote the book German Romanticism and Its Institutions[4]–about how the zeal of the Age of Wordsworth and Goethe turned into bourgeois Victorianism–makes an important point about the university. Von Humboldt had a choice about the research institution that he had in mind. He didn't have to take over a university and animate it; it could have been any other kind of educational institution–an Institut, a Gymnasium, an Akademie–but he did, and there were costs in doing that. (You know the joke about why God created the Universe in only six days? No? Because He didn't have to make it compatible with the installed base.) Von Humboldt chose to make his university compatible with the installed base, and it was a good idea and it worked. But it carried with it the cost of associating the high-end research enterprise with all of that teaching of an increasingly mass audience. It also carried with it all the benefits of associating research with that kind of teaching.

Now this is an 'I wonder': I wonder if we're not at the tipping point where that cost-benefit ratio isn't working anymore. And where, therefore, new institutional forms will need to emerge–if the money were there to make new institutional forms emerge, or if an institutional form emerged with a business plan–and the University of Phoenix doesn't look like it.

Can you imagine any foundations venturing seriously in this direction? They generally seem quite constrained in their thinking.
Well, have you ever read Thorstein Veblen? They should make us memorize his The Higher Learning in America in provost school. These institutions are a lot about transmitting social and cultural capital and less about academic performance than we might think. There's a young scholar I know, Joseph Soares, who's passionate about demonstrating that the best predictor of performance in college by prospective students is not the SAT but class rank: people who have climbed to the top of whatever heap they're sitting in will go climb to the top of the next heap.[5] People with good test scores can get good test scores, but there's no telling what will happen when they get out into the world. But this is unfashionable, and it connects well with the fact that these institutions are bound up in the creation, preservation and transmission of cultural capital from one generation to the next. That's a piece of the function of this tiny but trend-setting group of institutions that transmit their trends out to a wider public in remarkable ways. And that function makes institutions full of creative, innovative, iconoclastic people into bastions of conservatism. Good thing for me I love navigating the tensions that result.

NOTES

[1] Jennifer Howard, "Harvard Humanities Students Discover the 17th Century Online," Chronicle of Higher Education 54, no. 9 (October 26, 2007): A1.

[2] Edward L. Ayers, "Doing Scholarship on the Web: 10 Years of Triumphs and a Disappointment," Chronicle of Higher Education 50, no. 21 (January 30, 2004): B24-B25.

[3] In for example, the chapter “Re-Education,” in John Seely Brown and Paul Duguid, The Social Life of Information (Harvard Business School Press, 2000).

[4] Theodore Ziolkowski, German Romanticism and Its Institutions (Princeton University Press, 1992).

[5] Joseph Soares, The Power of Privilege: Yale and America’s Elite Colleges (Stanford University Press, 2007).

The Future of Art History: Roundtable

by Jennifer Curran, Academic Commons

Introduction: David Green
Principal, Knowledge Culture

Three art historians were invited to think about how their discipline, and their teaching and research within that discipline, might evolve with access to a rich cyberinfrastructure.

Participants were encouraged to think through what might happen to their practice of art history if:
–they had easy access to high-quality, copyright-cleared material in all media;
–they could share research and teaching with whomever they wanted;
–they had unrestricted access to instructional technologists who could assist with technical problems, inspire with teaching ideas and suggest resources they might not otherwise have known about.

What would they do with this freedom and largesse? What kinds of new levels of research would be possible (either solo or in collaborative teams); what new kinds of questions might they be able to answer; how would they most want to distribute the results of their scholarship; who would the audience be; and would there be a new dynamic relationship with students in and out of the classroom?

Panelist 1: Guy Hedreen, Professor of Art History, Williams College
On The Next Generation of Digital Images Available to Art Historians

Panelist 2: Dana Leibsohn, Associate Professor of Art, Smith College
On the Technologies of Art History

Panelist 3: Amelia Carr, Associate Professor of Art History, Allegheny College
Overcoming the Practice of Visual Scarcity

Panelist Responses:
Guy Hedreen
Dana Leibsohn
Amelia Carr

 

Panelist 1: Guy Hedreen
Professor of Art History, Williams College

On The Next Generation of Digital Images Available to Art Historians

The extent to which "easy access to high-quality, copyright-cleared material in all media" might transform, as opposed to just speeding up, the work I do as an art historian depends on what is meant by "high quality." The subdiscipline of ancient Greek painted pottery committed itself long ago to the comprehensive photographic documentation of pots. The Corpus Vasorum Antiquorum (CVA) now numbers hundreds of volumes. The photographic documentation of Greek painted vases is also extensively digitized. The Beazley Archive lists over 100,000 Athenian painted vases in its electronic database, and provides illustrations of over half of them. The Beazley Archive also makes available electronic facsimiles of volumes of the CVA no longer in print. But the parameters defining the photographic documentation of Greek painted pottery have not fundamentally changed since the development of digital technology. They still reflect the constraints imposed by the cost of print publication (the need for relatively few photographs of each object, small in size, black and white rather than color). Like recently developed electronic facsimiles of primary literary sources and secondary scholarship in classical studies (TLG, JSTOR, L'Année philologique, Perseus, etc.), existing electronic visual resources make it possible to work more quickly and efficiently, and from locations other than research libraries. But they have not fundamentally altered the questions I ask about the artifacts, because they essentially replicate the paper libraries in which I have always worked.

The experience most generative of new questions for me–direct, unmediated observation of the artifacts themselves–is both prohibitively costly in terms of time and money and also not easily effected through digital technology. But there is reason to hope that digital technology could make available enough images to provide a facsimile of the experience. My colleagues' lectures are often illustrated with slides or digital images not available in printed publications or from museums: color images, shot in daylight, sometimes in direct sunlight, from angles not seen in published photography, or of details not usually illustrated. Thanks to the sensitivity, flexibility, and capaciousness of digital cameras as well as the speed and simplicity of digital storage and transmission, modern photographic technology makes it possible for scholars and students to create and share a much richer collection of reproductions than is currently available. A very rich virtual collection of reproductions might afford, to a limited degree, a level of visual access to the objects that would generate new questions. In the study of Greek painted pottery, style, composition, narration, inscriptions, ornament, color, and shape are often studied independently, as discrete areas of investigation, in part because of the limited nature of the reproductions and documentation currently available. Much more difficult to investigate, but arguably central to the study of painted pottery, is how all those things interrelate on these often very complex works of art. That question can only be addressed effectively through direct observation of the artifacts–or through the development of electronic facsimiles of them.

Panelist 2: Dana Leibsohn
Associate Professor of Art, Smith College

On the Technologies of Art History

Let’s start with a simple juxtaposition: cell phones and art history. A few months ago I was listening to a colleague discuss Japanese architecture when, ten minutes into the lecture, a student raised his hand. Could he use his cell phone to take pictures of the slides being projected? A little perplexed by this proposition, my colleague nevertheless assented. And so opened another DIY wrinkle in that perennial art historical dilemma: after a fleeting encounter, how does one recall certain images for later contemplation?

The techno-savvy may wince at art history students who (still) act upon such impulses. My artist friend, who specializes in artworks made with cell phone cameras, would probably express contempt. Yet what impresses me is the tactical panache–a student hijacking one technology to get the upper hand over another (and, from his perspective, more fleeting) one.

The fact that the projected slides (yes, slides!) were magenta and dirty bothered the student not at all. That these images could be posted on Flickr in their decrepit state within an hour? My colleague could not have cared less. What mattered to the folks in that room was the visual culture of Kyoto, not whether anyone had posted high-res, controlled-vocabulary-tagged images, with pristine copyright-papers online.

When I contemplate cyberinfrastructure (a word I have never actually heard uttered aloud), it is sociological rather than technological questions that seem most pressing. For instance, to follow the cell phone to its logical conclusion: how much conceptual space–not merely fiber bandwidth–can an infrastructure create for the unpredictable, the unexpected, the stuff that unfolds on-the-fly?

No art historian I know craves fewer images, of poorer quality, that are harder to find. Yet procuring (or merely locating) an image that will suffice, let alone surprise, still takes some cleverness. Many are easy: a Google search for a Rembrandt image brought me 300,000 to choose from. Matthew Barney is a little less so; folios from Mexican codices, almost impossible. Impromptu image-capture thus has an incredible ability to satisfy. Whether such photographs are born of creative inspiration or tactical desperation, the on-demand image is no trifle.

I would be the first to admit that certain art historical occasions demand highly polished imagery; we would be sorely mistaken, however, to underestimate the image which is “good enough.” Indeed, any disciplinary turn (or cyber-structuring) that privileges refined imagery, yet under-values the untamed, seems a dubious prospect. It can, and should, be resisted.

Even more fundamental is what all this cyberinfrastructure–be it comprised of bandwidth and metadata, smart classrooms or smart people–might be good for. With all due respect to Levi-Strauss, is it good to think with?

In Everything is Miscellaneous (Times Books, 2007) David Weinberger develops the idea of a “third order of order,” wherein digital media spawns new practices for organizing and categorizing data. Neither the media nor the tools create ideas; they do, however, facilitate innovative patterns of thinking, particularly the collaborative production of knowledge. Weinberger’s detractors are strident, but what if he is right? What if the most compelling ideas are increasingly produced through (if not also because of) digital technologies and cyber-structures?

If so, art history could be a player. We would, however, have to embrace DIY as more than a passing fad. Google is going to scan like crazy (will Google Museums follow Google Books?). Even so there will never be enough pixel serfs to capture all the pictures or sounds or words we now need or will create. Another generation of art historians–the ones for whom Web 2.0 could actually and productively mean something–will no doubt find such talk about digital self-sufficiency quaint. To shun this challenge now, however, is to accept the role of art historical cyber-consumer, awaiting others who can serve up materials (if not also ideas). Hardly a promising disciplinary trajectory.

This tension between consumption and production has another side, namely the seemingly intractable dichotomy between content and technology. Many in the humanities–including me–currently use bandwidth daily, but as little more than a mode of display. PowerPoint and websites have replaced slide projectors and photocopies. This is true of teaching, and also our research. We enlist (some might say consume) various technologies in order to produce written scholarship. In this world then, art history (i.e., “content”) occupies one epistemological realm, the technology to deliver it another.

Yet scholars in fields as diverse as anthropology, the history of science and feminist criticism (for instance, Marilyn Strathern, Bruno Latour, Lorraine Daston and Donna Haraway) have shown how highly porous the boundaries among technologies, bodies and social institutions can be. And colleagues in the sciences rely upon cyberinfrastructures to think seriously about the ontology of objects and their replication, to craft wholly new materials and kinds of simulacra.

Could art historians likewise experiment? We seem so tolerant of certain “verities” about art, what it can and cannot be. What might it mean–to us, today, to others, in the past–to think anew about when art and its history is (or is not) technology? Could such a re-thinking allow art historians to create something more interesting than books? Would it be a loss, or a gain if art historians and scientists could, truly, negotiate knowledge with each other? Such projects would not simply depend upon an openness to DIY, but also ambitious collaboration.

So, at the very minimum, a cyberinfrastructure is useless if it cannot revolutionize image access and metadata management–art history’s most anxiety-producing fetishes. At its best, a cyberinfrastructure can help us to think newly and deeply about vision and objects, and how the one gets entangled with and contests the other.

Regardless of how this all plays out, I am taking notes on whether students continue to pass art-images around like emoticons. Impressive, creative discoveries surface in unlikely settings. Whether any cyber-infrastructure can open itself, or us, to the pleasures of such serendipity remains to be seen.

Panelist 3: Amelia Carr
Associate Professor of Art History, Allegheny College

Overcoming the Practice of Visual Scarcity

We do indeed live in an era of visual saturation (and our students are probably the better for it), but our discipline functions on visual scarcity. Our techniques involve isolating individual images and scrutinizing them in detail and without distraction. Despite very legitimate and successful efforts to embrace high/low/popular culture and diversity of all sorts, art history still functions on a Canon of Masterpieces.

Many image owners share the philosophy of scarcity. The price of our commitment to a few iconic images is that we are at the mercy of the market in acquiring them. There are alternate paradigms. When art is considered “patrimony” (a store of wealth that belongs to a people or a nation), there have been greater government efforts to make images available to all. Or, art historians could demand placement fees from museums in exchange for the implicit advertising of works that are featured in our courses. But in our capitalist economy that thrives on Visual Scarcity, We the People, and our educational institutions, are just another market niche.

In addition, the field of Art History itself wants to control access to images, defined in its broadest sense: availability of the physical objects, availability of reproductions, control of metadata, and ownership of interpretations. Gaining access to images is part of our credentialing process, and limiting reproduction is merely one method of assuring that only a few can own art in an age of mechanical reproduction. We must also spend time and money to study and travel, pass a series of examinations, and know the right people. One of my colleagues who works in a foreign archive described the necessity of “paying your dues” and “earning your chops” through years of labor to uncover new and unknown material that would create her academic reputation. But she also praised the mentors found and friendships forged in those rooms that cannot exist in cyberspace. Total access to everything through a cyberinfrastructure would undermine the present system, and we would lose that community of real people, as well as the knowledge that comes with handling physical material.

I have to admit that the concept of Visual Scarcity pervades my own teaching. To counter everyday visual inundation, my classroom is a shrine to focused attention on a few sacred objects. It is dark and mystic, with my specially chosen “Greatest Hits” glowing at the front. The tone of the class can be one of exclusivity, claiming that students are being initiated into some higher level of arcane knowledge about images, which confers a sense of privilege and class status. The required image list and the structure of the accompanying metadata is under my exclusive and somewhat arbitrary control. So, what happens when teachers and students gain access to a nearly infinite image database? How does my function as high priestess/gatekeeper change? Am I less worthy of my salary?

The Canon will have to stretch even further, with practical consequences for the classroom. If I have time to discuss just one work by an artist, what do I possibly choose–and why? The ideologies of “Masterpiece” and “Great Artist” have been challenged repeatedly over the past decades, but limited availability of images still made it a little easier to maintain the Canon and allow a few high-profile collections to stay at the top of the hierarchy. The reality of image availability puts pressure on me as a teacher to explain my choices in the context of my pedagogical purposes. Platitudes and easy categorization fail when more examples and counter-examples are at hand.

I am not really worried about my status as a professional teacher, however. Having access to images and metadata should not be confused with Knowledge, even if that’s the way it might appear at the art history survey level. Perhaps some of my classes hinge on a pedagogical trick of producing an unknown or startling image that disrupts student assumptions. But, in the classroom as well as scholarly forums, I believe that I have more to bring to the discussion than simply the ownership of an image that others don’t have. A rich image database available to students should take some pressure off of the classroom lecture, because choices about what small number of images to present are less burdensome if students can easily explore others.

The fact of the matter is that, in my current position in a small liberal arts college with a part-time support staff made up primarily of student work-study assistants, I have spent an enormous amount of time in the past few years trying to locate and create images for a basic set of courses. As a teacher, I almost never feel that I have exactly what I want in terms of images. I’m struggling with an image-delivery and database technology that is “good enough” but not ideal. To have a rich cyberinfrastructure in place would give me that most precious gift: time. Time to focus on pedagogy and scholarship. Time to do what art history has done best: foster appreciation of individual art objects and promote interpretations of them that help us understand our past and enrich our present.

PANELIST RESPONSES:

Guy Hedreen Response:
The sort of reproductions that I envision advancing my work is essentially the cell-phone snapshot described by Dana Leibsohn–although I wish the student owned a better camera. Standing in the way of an electronic proliferation of such reproductions, produced by many different spectators with divergent points of view, however, are two important issues.

One is the control over the production, distribution, and reproduction of images exerted by museums and photo archives–what Amelia Carr calls the market. It is often necessary to work around that institutional framework in order to acquire the images one needs to teach and write. Carr’s description of the effort entailed in building an image library at a liberal arts college is familiar. With my first paychecks, I purchased a professional SLR camera (and a copy-stand, though the latter, useful as it was for working up new classes quickly, did not get around the limitations inherent in the corpus of published illustrations of Greek art). Who had the time or money to build an image library in a new field out of slides purchased or borrowed directly from museums, even with the support of a good college or university slide library? On every trip to Europe or a major American city, I have taken my SLR, set aside hours for the museum(s) or local monument(s), and shot digital images (until recently, slides) of anything important, interesting, or amusing. Of the greatest interest are images of objects, or aspects of them, that are unavailable through the market. Some of the images make their way into my lectures or publications, but all of them reside in slide boxes or memory cards, a microcosm of my particular visual interests. My art-historical utopia is rooted in the knowledge that hundreds of classical archaeologists and art historians similarly possess boxes of slides or digital-image cards of works of art, artifacts, and monuments, from the well-known to the very obscure. To have access to all those already extant images electronically is one way that digital technology might address the fundamental limitations on visual access to works of art imposed by time, space, money, and the market.

The second potential impediment to the proliferation of digital images produced by scholars, students, tourists, and the like is touched on by Leibsohn when she wonders “if art historians and scientists could, truly, negotiate knowledge with each other?” Like many scholars, I am accustomed to guarding my insights until they can be presented in a manner that preserves my identity as author of them–in professional papers or print publications. Am I willing to make my often hard-won photographic images of ancient Greek (and other) art available via the internet for other people to use without acknowledging my authorship? Arguably the most influential American review journal in classical studies is now a journal (Bryn Mawr Classical Review) that circulates exclusively via email or the internet. The success of the ejournal suggests that classical scholars are willing to move important aspects of their work to an all-electronic platform. But whether we are willing to move our image libraries from our individual slide boxes and laptop computers to an electronic archive such as the Stoa Consortium or ARTstor remains to be seen.

Dana Leibsohn Response:
A few weeks ago my mother asked me to introduce her to YouTube. The first clip she chose was the Vote Different/Hillary Clinton video. Her initial curiosity turned on the video’s creator and political bite, topics blogged earlier this spring (i.e., http://www.huffingtonpost.com/phil-de-vellis-aka-parkridge/i-made-the-vote-differen_b_43989.html), but this didn’t last. What seemed most odd to my mother: having to choose which of several versions of the video to watch. Why, she wanted to know, wasn’t one enough?

Why, indeed. As I write, at least 15 postings of the Vote Different video can be found on YouTube—some “original,” some remixed–as can the “source” Ridley Scott ad for Apple Mac. This may not seem the daily stuff of art history, yet I would suggest otherwise. This is not because Ridley Scott’s commercial work is so compelling or YouTube so remarkable, but because cyber-infrastructure(s) have changed what we, as interpreters of images, must do.

Amelia Carr and Guy Hedreen are certainly correct. Consistent access to good images, at the right resolution, in an appropriate medium, constitutes a central obsession of art history. So, too, do the implications of image-ownership and access. The grounds of debate, however, are shifting. While there is much yet to be said about the scarcity of certain kinds of technological resources, and the plethora of others, we can now take as given these facts: digital media will not soon supplant the need (or desire) to work with physical, material objects; the quantity and kind of imagery available via digital technologies will increase, although not only as we might most wish; sustained access to images and information will depend, evermore, upon both local and far-flung collaborative labor.

What, then, still needs debate and resolution–particularly if we wish to effect change because of, not merely in spite of, our labors? Here are two issues that come to my mind:

* What constitutes a viable image–one worthy of study or pedagogy? This touches upon Andrew Keen’s quality vs. garbage arguments and the copyright policies of museums and digital libraries, but “policing” the boundaries of an image and its connotative energies has become increasingly fraught. Not only are images more porous, they travel more quickly. We have seen them become viral. What are the implications of this for art history, for the development of cyberinfrastructures, for work with images more broadly?

* What is collaboration, what merely redundancy? Are all those Hillary remixes original works, or copies, or…? And what if YouTube takes them down? While I don’t often think of the Google-owned site as my collaborative partner, perhaps I should. As the debate on immaterial labor and the market unfolds (see, for instance the discussion at iDC), perhaps the most pressing question becomes not, “what do we, as art historians, want from a cyberinfrastructure?” or even “who needs so many Hillarys?” but rather, “what kind of image work is the work that matters most?”

Amelia Carr Response:
The differences between my colleagues’ approaches to digital material brought to mind a distinction made by Yvonne Spielmann in a recent Art Journal article between a “technology” and a “medium.”[1] A technology is a mechanism for performing a task whose purpose has already been defined. We can understand a digital image as a quicker way to make a photograph, which itself is meant to capture experience. A medium recognizes and revels in its own physical characteristics, asks its own questions, and moves beyond inherited understandings. I want the new cyberinfrastructure to serve in both capacities.

Guy Hedreen and I share the desire for a technology that will imitate as closely as possible the “direct, unmediated observation” of art objects. In my classes, especially, I want a presentation vehicle that effaces itself sufficiently to make me think that I’m in the same space as the art. However, when I read his statement, I found myself thinking that, in fact, I want my pictures to be much better than direct observation! Maybe he has connections that I lack, because I routinely find my actual museum or site experience somewhat less than ideal. There’s a glare on the plexiglass, or the book is only open to one page, or the sculpture is too high on the building, or it’s raining. Sad to say, I have rarely been able to handle the objects I study. I have come to expect pictures to show me microscopic details, aerial views, reconstructions, and x-ray images that I could never “see” with my own two (failing) eyes.

If a cell-phone image of art is “good enough,” then maybe a bad museum visit will also suffice to create a legitimate art-historical experience, whatever that may be. But for me, the ideal experience of art is something more than merely being in its direct vicinity. It is a highly mediated experience, enhanced and contextualized by technologies of language and imagery.

In that I desire and can sometimes get more information out of pictures than in situ observation, I’ve come to see how the “virtual” presence of an object is a new and different thing. To the extent that digital technologies invent their own rules of order and their own surprising insights, they become the creative mediums that Dana Leibsohn so craves. Like her, I yearn for something “more interesting than books.” But I actually think we’re starting to get those new non-books, in the digital databases and in other creative presentation of material. I wouldn’t want to insist that classicists are in the forefront here, but I’m very aware of how they are successfully creating digital resources that creatively reconstruct sites and material culture. Something like the Theban Mapping Project (to mention a site that I used recently) allows users to explore and interact with material in a way not possible in a book.

Online collaboration of far-flung scholars has also changed the shape of art-historical careers in ways much more profound than simply facilitating communication through email. I’m now working with a musicologist in Virginia on a book, and we’re experimenting with using wiki software as a way of discussing, writing, and editing our material online. At the last Ancient Studies New Technologies conference (2004), classicists and medievalists shared projects ranging from a collaborative online translation and annotation of the 10th-century Byzantine Suda and a database of the Egyptian Antiquities in Croatian Museums and Private Collections (you might be surprised!) to the Digital Gallery at Canada’s Royal Ontario Museum, proclaimed as the classroom of the future!

The websites for archeological digs allow material and preliminary findings to be shared immediately, answering the old criticisms about how long it takes field work to find any kind of publication outlet. Online exhibitions not only bring the holdings of the big museum onto every desktop, they allow the material from out-of-the-way smaller university and private art collections to enter into the conversation. Perhaps these projects can still be categorized into traditional categories of scholarly writing, editing texts, cataloguing objects, and pedagogy, and simply use a new technology of delivery. But these products of academic work are taking increasingly non-traditional forms, and are finding new audiences.

Are art historian scholars getting proper credit and job promotion for these new creations? For me, the thoughts of James O’Donnell in Avatars of the Word are still insightful here–scholars can incorporate these new art-historical mediums into academe if we want to, but we need to champion the new forms and create the peer review mechanisms to give them credibility. Rather than letting traditional publications provide the sole definition of how art-historical knowledge should be presented (and thereby letting editorial boards of university presses shoulder significant responsibility for tenure decisions, as O’Donnell puts it), more of us might be willing to review and critique the online exhibitions, databases, and websites. I am drawn to O’Donnell’s vision of a world of online “pre-publication” to which “the journal’s peer review and stamp of approval will come after the fact of distribution and will exist as a way of helping identify high-quality work and work of interest to specific audiences.”[2]

I’m in favor of broadening the definition of how technologies and mediums function in our profession. But we have to take critical responsibility for digital material and the new cyberinfrastructure. In our evaluations, we will need not only our scholarly sense of what is accurate and useful, but also a Web 2.0 savvy about good, bad, and ugly technologies, and an awareness of the different constituencies we serve.

1 Yvonne Spielmann, “Video: From Technology to Medium,” Art Journal 65, no. 3 (Fall 2006): 54-69.

2 James J. O’Donnell, Avatars of the Word: From Papyrus to Cyberspace (Cambridge: Harvard University Press, 1998), 60.

We welcome your comments to this Roundtable.

Cyberinfrastructure as Cognitive Scaffolding: The Role of Genre Creation in Knowledge Making

Information infrastructure is a network of cultural artifacts and practices.[1] A database is not merely a technical construct; it represents a set of values and it also shapes what we see and how we see it. Every time we name something and itemize its attributes, we make some things visible and others invisible. We sometimes think of infrastructure, like computer networks, as outside of culture. But pathways, whether made of stone, optical fiber or radio waves, are built because of cultural connections. How they are built reflects the traditions and values as well as the technical skills of their creators. Infrastructure in turn shapes culture. Making some information hard to obtain creates a need for an expert class. Counting or not counting something changes the way it can be used. Increasingly it is the digital infrastructure that shapes our access to information, and we are just beginning to understand how the pathways and containers and practices we build in cyberspace shape knowledge itself.

The advent of the computer has made possible an event that has happened only a few times in human history: the creation of a new medium of representation. The name “computer” fails to adequately convey the power of this medium, since a machine that executes procedures and processes vast quantities of symbolic representation is not merely a bigger calculator. It is a symbol processor, a transmitter of meaningful cultural codes. The advent of the machinery of computing is similar to that of the movie camera or the TV broadcast. The technical substrate is necessary but not sufficient for the process of meaning-making, which also depends on the related cultural process of inventing the medium. Cyberinfrastructure is an evolving creation. It is both technical and cultural, constrained and empowered by human skills and traditions, and possessing the same power to shape and expand the knowledge base that the print infrastructure has maintained for the past 500 years, and that the broadcast and moving image infrastructures have for the past 100 years.[2]

It is useful to think of a medium as having three characteristics: inscription, transmission, and symbolic representation. Inscription concerns the physical properties (the mark in the clay, the ink on the page, the current through the silicon); transmission concerns the logical codes (alphabet, ASCII, HTML). Representation is the trickiest part because it is a combination of logical codes like the alphabet and indeterminate cultural codes like words themselves. A logical code can be mechanically deciphered; it always means the same thing. But a cultural code is arbitrary, shifting, and context dependent. There is no reason why the particular sounds of the word “pottery” should refer to ceramics, or why “Pottery Barn Kids” should refer to a place without ceramics or a barn, and usually without kids as well. Representation relies on the negotiation of conventions of interpretation, on symbolic systems that acquire meaning through logical systems and through familiarity with customs. These conventions tell us how to interpret a store name or a street address or a URL. They set up the framework for receiving new information as variations on familiar patterns. When these conventions coalesce into complex, stable, widely recognizable units, we have genres.

Genre creation is how we use a new inscription and transmission medium to get smarter. For example, the printing press allowed us to put words on a page in a standardized manner and to distribute the words in multiple portable copies. This is the technical substrate. But the scientific treatise did not appear until two centuries later, because it required the invention of new representational conventions such as the learned essay, the scientific diagram, the specification of an experiment, and new social conventions such as the dispassionate tone of argumentation and presenting evidence based on careful observation.[3] The spread of reading also furthered habits of humanist introspection and fostered sustained, consistent storytelling, leading to the development of the genre conventions of the confessional autobiography and the psychologically detailed novel.

Just as the novel developed different expressive conventions from the prose narratives of earlier eras, scientific essays developed different expressive structures from the philosophical essays and practical craft diagrams that preceded them. In fact, all of our familiar representational genres from scholarly journal articles to TV sitcoms make sense to us because they draw upon centuries of evolving representational conventions, from footnotes to laugh tracks. Each of these conventions contributes to a familiar template that allows us to take in new information more efficiently because it comes to us as a variant on an intuitively recognized pattern. In terms of signal processing, genre conventions form the predictable part of the signal, allowing us to perceive the meaningful deviations as carrying the information. In cognitive terms we can think of representational genres as schemas–conceptual or perceptual frameworks that speed up our mental processing by allowing us to fit new stimuli into known patterns. In computational terms, we can think of genres as frames, recursive processing strategies for building up complexity by encapsulating pattern within pattern.

Genre creation links strategies of cognition with strategies of representation. For example, titled chapters are a media convention that makes it easier to follow the sustained argument of a book. It chunks a longer argument into memorable sections. The invention of the book involved standardizing the convention of naming chapters within the genre of persuasive or explanatory writing. Books increased our ability to focus our individual and shared attention, allowing us to sustain and follow an argument too long to state in oral form and to elaborate and examine an argument together over time and across distances. The proliferation of book-based discourse led to the growth of domains of systematic knowledge, represented by shelves of books in standardized arrangements that collocate works on the same topic. We can understand one another across time and place when we refer to a domain of investigation because we have the shelf full of books to refer to.[4]

The invention of a genre, therefore, is the elaboration of a cognitive scaffold for shared knowledge creation. When we build up conventions of representation like the labeling of scientific diagrams or the visual and auditory cues that indicate a flashback in a movie, we are extending the joint attentional scene that is the basis of all human culture: we are defining a more expressive symbolic system for synchronizing and sharing our thoughts.[5] The work of the designer in inventing and elaborating genre conventions allows us to focus our attention together; the invention of more coherent, expressive media genres goes hand in hand with the grasping and sharing of more complex ideas about the world.

Genre and Knowledge Creation in Digital Media
The digital medium is a capacious inscription technology with a wealth of formatting conventions and logical codes for reproducing many kinds of legacy documents, but its native genre conventions are still inadequate to allow us to focus our attention appropriately and to exploit the new procedural and participatory affordances of the medium.

An electronic spreadsheet is a good example of a legacy genre–the fixed paper spreadsheet–that has changed with translation into the digital medium, gaining processing power and manipulability so that it represents not just a single financial situation but a range of possibilities within a common set of constraints. In order to move the paper spreadsheet into electronic form, conventions had to be invented for writing formulas and titling columns and rows. Lack of competition has fossilized these conventions, which may be refined into greater ease of use if competing products appear. But the basic genre of the electronic spreadsheet is established, and it scaffolds our understanding of budgeting by allowing us to change entries in a single cell and see the results propagate as other cells change. An electronic spreadsheet is therefore a good example of a new genre with cognitive benefits. It does not merely make it easier for us to add up columns of numbers; it offers us a different conceptualization of a budget. But inventing the electronic equivalent of a paper spreadsheet was mostly a matter of implementing mathematical functions, which are purely logical codes. It is easier to figure out than the electronic extension of the scientific journal article, the documentary film, or the novel, which are far more culturally dependent genres.
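To make the propagation concrete, here is a minimal sketch in Python–an illustration of the genre’s core convention, not the internals of any actual spreadsheet product; the cell names and figures are invented:

```python
# A toy "sheet" illustrating the cognitive move described above: formula cells
# are recomputed from the cells they reference, so changing one entry
# propagates through the dependent results.

class Sheet:
    def __init__(self):
        self.values = {}     # cell name -> literal number
        self.formulas = {}   # cell name -> function of the sheet

    def set_value(self, cell, number):
        self.values[cell] = number

    def set_formula(self, cell, func):
        self.formulas[cell] = func

    def get(self, cell):
        # Formula cells are evaluated on demand, so edits propagate automatically.
        if cell in self.formulas:
            return self.formulas[cell](self)
        return self.values[cell]

budget = Sheet()
budget.set_value("rent", 1200)
budget.set_value("food", 400)
budget.set_formula("total", lambda s: s.get("rent") + s.get("food"))

print(budget.get("total"))      # 1600
budget.set_value("rent", 1500)  # change a single cell...
print(budget.get("total"))      # ...and the result propagates: 1900
```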

Other representational genres are changing more slowly by a process of experimentation. The World Wide Web is mostly an additive assembly of legacy media with few procedural simulations and explorable models, and many cluttered diagrams and PowerPoint slide shows. Much of the information we receive is in the form of scrolling lists, the oldest form of information organization in written culture. Too often these listings lack adequate filtering and ordering. At the same time the encyclopedic capacity of digital inscription raises our expectations, creating what I have called the “encyclopedic expectation” that everything we seek will be available on demand.[6]

Even at this early stage, however, the computer has already brought us a limited number of new symbolic genres. The most active genre design has been in the development of video games, which exploit the procedural and participatory power of the computer to create novel interaction patterns, new ways of acting upon digital entities and receiving feedback on the efficacy of one’s actions. Will Wright, the inventor of Sim City, the Sims, and other simulation games, has called computer games “prosthetics for the mind.” His simulation worlds are perhaps the most successful implementations of the affordances that Seymour Papert first pointed out in Mindstorms[7]: the ability of computers to create worlds in which we can ask “what if” questions, in which we can instantiate rule systems and invite exploratory learning. Sim City works as a resource allocation system in which we make decisions about zoning and power plants and watch a city grow according to the parameters we have chosen. Sim City is a toy but it uses some of the assumptions of professional urban planning simulations. Similar simulation systems are in use in scientific and social science contexts, and they are increasingly used to simulate emergent phenomena that could not be captured in any other way. These are specialized tools and they do not necessarily work across disciplines or related domains.

Tim Berners-Lee[8] is the foremost advocate of a more powerful procedural strategy for meaning-making. For Berners-Lee, much of human knowledge is awaiting translation into a logical code structure, a structure that is open to change but requires social compacts within and across disciplines to succeed. His vision of the semantic web would give shared resources on web pages the coherent form of databases and would allow multiple procedures to be applied to these standardized data. The semantic web is the most ambitious vision of the development of large data resources into new knowledge. But in his recent reappraisal of the idea, Berners-Lee laments the reluctance of knowledge communities to come together to establish interoperability in all but the most trivial exchanges and, more significantly, to do the difficult work of inventing common tagging vocabularies based on common knowledge representations (ontologies). To Berners-Lee, the hard sciences are a good example of domains in which there is a desire to establish common terminologies and knowledge representations. But even in the sciences there has been reluctance to devote resources to standardization and resistance to imposing conformity. It would seem even less likely that humanists could be persuaded to come up with common representations of concepts.
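The technical core of that vision can be suggested with a small sketch–plain Python rather than any actual RDF toolkit, with invented identifiers and a toy vocabulary–showing why a shared ontology matters: one procedure can query across collections only because they agree on predicate names:

```python
# Records from two different collections expressed as subject-predicate-object
# triples. The identifiers, predicates, and values are invented placeholders.
collection_a = [
    ("object:A1", "type", "red-figure cup"),
    ("object:A1", "creator", "Painter X"),
]
collection_b = [
    ("object:B7", "type", "red-figure cup"),
    ("object:B7", "creator", "Painter X"),
]

def subjects_with(triples, predicate, value):
    """Return every subject linked to the given value by the given predicate."""
    return [s for (s, p, o) in triples if p == predicate and o == value]

# The shared predicate names stand in for an agreed-upon ontology; without that
# agreement, the merged query below would silently miss one collection's records.
merged = collection_a + collection_b
print(subjects_with(merged, "creator", "Painter X"))
```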

In fact, the thrust of humanist involvement in the digital arena could be characterized as the antithesis of the semantic web, as a “rhizome of indeterminacy,” epitomized by the title of a widely celebrated artistic work, Talan Memmott’s “Lexia to Perplexia,” whose indeterminate structure is the subject of an admiring monograph by N. Katherine Hayles, one of the leading scholar-critics of electronic literature. Memmott’s text is purposely perplexing in its subverting of common conventions of meaning-making. This lexia, or screen of text, is from the section called “The Process of Attachment”:

The inconstancy of location is transparent to the I-terminal
as its focus is at the screen rather than the origin of the
image. It is the illusory object at the screen that is of interest
to the human enactor of the process — the ideo.satisfractile
nature of the FACE, an inverted face like the inside of a
mask, from the inside out to the screen is this same
<HEAD>[FACE]<BODY>, <BODY> FACE </BODY>
rendered now as sup|posed other.
Cyborganization and its Dys|Content(s)
Sign.mud.Fraud [9]

Hayles finds significance in the disorienting process by which Memmott’s hypertext dissolves meaning:

To the extent the user enters the imaginative world of this environment and is structured by her interactions with it, she also becomes a simulation, an informational pattern circulating through the global network that counts as the computational version of human community.[10]

For Hayles, the genre creates the reader; the reader fuses with the cyberinfrastructure. Her delight in the frustrations of Memmott’s witty instantiation of perplexity, in the elusiveness of meaning within his text, contrasts markedly with Berners-Lee’s pursuit of a more coherent, inclusive logical code. With the HTML link as the crucial enabling technology, there are in fact many divergent web genres currently in the process of formation, and multiple expressive communities engaged in inventing them.

Humanists are likely to resist attempts at creating a common ontology as a resurrection of the totalizing ideologies and culturally imperialistic hegemonies that the work of the late twentieth century exposed and repudiated. In fact, the early embrace of hypertext by literary scholars was based in part upon its power to subvert the organizational power of the book. However, there has also been a consistent strand of celebration of the possibility of hypertextual and hypermedia environments to bring together large bodies of information and to create multivocal information structures.[11] Indeed, the most promising area of overlap between the scientific pursuit of self-organizing data on the one hand and the humanist pursuit of procedurally generated ambiguity on the other is the mutual affirmation of gathering multiple points of view and multiple kinds of information in a common framework. For scientists, the process is seen as the accumulation of a common dataset. This is the next step in the process of shared witness to experimentation: the amassing of a common pool of information that is so well collected that its various discrete data points form meaningful patterns. Humanists are pursuing common archives as well, in the interest of preserving cultural heritage, such as the collection of all surviving papyrus texts or the recording of social history by the StoryCorps Project of the Library of Congress. In every discipline the encyclopedic nature of the digital medium is leading to a massive archiving effort, and the aggregation of these archives is motivating the creation of common means of access and common formats of contribution.

There is also a large community of practice growing up among lay users of electronic resources who are creating and navigating vast archives of media. Current strategies for sense-making of large data sources have had limited success, but they point to the kinds of strategies that, over time, hold promise for creating a richer shared representation. Search engines still return much unnecessary information and miss key information; folksonomies provide uneven tagging of large resources. But to the extent that Google and Flickr and del.icio.us are useful to us, it is because they leverage the efforts of many distributed annotators. Google owes its success to a key insight: the syntax of links is itself semantic. By using anchor text–the words that are used as clickable links to other pages–as a collectively created index to web content, the inventors of the most successful search engine of the early twenty-first century captured a more reliable representation of what is most important about the linked pages than other technologists captured by relying on full-text search of the pages themselves. Google can be thought of as an exemplar of the evolving genre of the search engine portal, and its presentation of listings with brief excerpts and its use of marginal advertising are important conventions of the genre.
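The anchor-text insight can be restated as a few lines of code. The sketch below is not Google’s algorithm–only an illustration, with invented links, of how the words many authors choose for their links become a collectively created index of the pages they point to:

```python
from collections import defaultdict

# (anchor text, target URL) pairs as they might be harvested from many pages;
# the examples are invented.
links = [
    ("greek painted pottery", "http://example.org/vase-archive"),
    ("athenian vases", "http://example.org/vase-archive"),
    ("vote different remix", "http://example.org/clip"),
]

index = defaultdict(set)
for anchor, url in links:
    for word in anchor.lower().split():
        index[word].add(url)   # each anchor word "describes" the target page

print(sorted(index["vases"]))  # pages that other people labeled with this word
```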

Although we can now collect more data than anyone could possibly hope to examine, that does not mean that the answer to understanding the data lies in teaching the computer new algorithmic tricks so it can do the reading for us. Too much of what we need to understand can only occur with the focused attention of a reliable human collaborator. What we need is computer-assisted research, not computer-generated meaning. We need structures that will allow us to share focus across large archives and to retrieve information in usable chunks and meaningful patterns. Just as the segmentation and organization of printed books allows us to master more information than we could through oral memory, better conventions of segmentation and organization in digital media could give us mastery over more information than we can focus on within the confines of linear media.

I would suggest that the best way to create knowledge that exploits the vast new resources now migrating into digital form or increasingly being “born digital,” may not be through “data-driven” automation, since much of human meaning will always be contextual. The meaning of any data set, no matter how vast or well-organized, cannot be logically inferred from the data alone because we have yet to find a way to encode the full experiential context of the data. The data itself may be treated as a logical code, but the kinds of information that have been captured, the selection of what to pay attention to and what to ignore, and the framing of questions to ask of the data as well as the inevitable omission of other questions all depend on values, assumptions, and the wider cultural context.

Numbers and other logical codes are always lagging behind our understanding of the world as expressed in the cultural code of language; and language is always lagging behind what we take in as experience. The computer alone can foreground unnoticed relationships among logical units. But it cannot replace the new conceptualizations that come from experience through language. Instead of mining for knowledge in digital archives, we should see the computer as a facilitator of a vast social process of meaning-making.

The new participatory web genres associated with Web 2.0, such as media sharing sites, online social networks, and contributory information resources, are steps in this direction. Wikipedia provides a useful example of a self-organizing information structure, dependent on coordinated distributed efforts rather than automated knowledge creation. Its limitation is that the coverage of topics and the quality of entries are uneven. But it owes its usefulness to the successful definition of the genre of a Wikipedia entry, including guidelines for tone and attitude as well as guidelines for structure. Most of all, Wikipedia is successful because it exploits pre-existing disciplinary taxonomies and media conventions. Other Web 2.0 genres such as media sharing sites are considerably less well organized than Wikipedia. Most rely on the voluntary collective elaboration of a tagging structure, often called a “folksonomy” to differentiate it from the top-down, authority-driven taxonomies of librarians and professional information architects. The best organized sites draw upon existing structures such as music genres, but without such preexisting structures folksonomy sites are full of arbitrary labels, inconsistencies, and redundancies.
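The unevenness of folksonomies is easy to demonstrate. The sketch below, with invented tags and an invented mapping, shows how free-form labels fragment across variants and how an agreed-upon vocabulary collapses them again:

```python
from collections import Counter

# Tags as different users might type them; the examples are invented.
raw_tags = ["kylix", "Kylix", "greek-vase", "greekvase", "greek vase", "pottery"]

counts = Counter(tag.lower() for tag in raw_tags)
print(counts)   # the same concept scattered across several labels

# A preexisting vocabulary (here a toy mapping) reconciles the variants.
canonical = {"greek-vase": "greek vase", "greekvase": "greek vase"}
merged = Counter(canonical.get(tag.lower(), tag.lower()) for tag in raw_tags)
print(merged)
```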

New genres of knowledge creation will arise from a combination of all three kinds of efforts exemplified by Berners-Lee, Memmott, and Web 2.0: the top-down logical standards-maker, the self-consciously artistic outlier, and the sloppy but motivated mass of users. They will also draw on the most sophisticated design practice and conversation currently taking place, which is coming out of the new genre and new discipline of Game Studies. One of the most important insights of Game Studies scholars is the procedural nature of the medium.[12] Computational environments are characterized by the execution of rule systems. Games have been such successful applications of digital media because they draw on a continuous, ancient tradition of rule-making. Games represent the world as a rule-based domain, where actions have predictable consequences. We have a collaborative technology for exploring rule systems in print, but are just beginning to develop a similar technology for exploring rule systems as executable environments.

One of the most promising examples of procedural genre creation is the Simile (Semantic Interoperability of Metadata and Information in unLike Environments) Project of the MIT Libraries which builds widgets based on Berners-Lee’s semantic web technologies. For example, they have created a timeline widget that takes in any kind of information and allows for presentation, browsing, and facet-driven sorting. Widgets like these should replace the browser and search engine interface to information, creating something similar to a library or a television channel as a standard format for containing and transmitting information. But the container would no longer be fixed and linear. It would be dynamic and procedural.
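The Simile widgets themselves are built in JavaScript on semantic web technologies; the Python sketch below is only an illustration of the underlying idea of facet-driven browsing–the same dated records filtered and re-sorted along whichever facet the viewer chooses. The records and facet names are invented:

```python
# Toy records standing in for any time-stamped, multi-attribute collection.
records = [
    {"date": 1787, "medium": "oil", "region": "France"},
    {"date": 1503, "medium": "oil", "region": "Italy"},
    {"date": 1888, "medium": "woodcut", "region": "Japan"},
]

def facet(items, **criteria):
    """Keep items matching every requested facet value, ordered on a timeline."""
    kept = [r for r in items
            if all(r[key] == value for key, value in criteria.items())]
    return sorted(kept, key=lambda r: r["date"])

print(facet(records, medium="oil"))                   # browse by one facet
print(facet(records, medium="oil", region="Italy"))   # narrow by another
```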

For a collectively created knowledge structure to work it will have to include ways of implementing collectively created rule sets as well as collectively created annotations. We need ways of creating simulations that interact with one another and also ways of sharing the task of annotating texts. Part of this process may be the creation of automated tools, but the tools can only implement shared understandings that arise among different communities. We cannot generate the understandings or the rules from the data alone, nor can we leave it to self-organizing open societies to create organization out of many discrete actions. We cannot impose ontologies upon large communities because they require too much investment in social organization and because they are antithetical in particular to the humanist frame of mind. And yet, we keep accumulating media and annotations and commentaries upon it. Are we destined to drown in our own knowledge creation, unable to know anything because we are paying attention to too much?

A modest, concrete beginning: Juxtapositions
So far I have argued that we need more complex representational patterns to take advantage of the potential of the computer for helping us to become smarter and to communicate with one another about more complex understandings of the world. Yet this effort has been stymied by the resistance to imposing common understandings on shared datasets, and the most widely advocated approach–the automated generation of meaning from large data sets–seems unlikely to be accepted by humanists. At the same time the humanist enjoyment of perplexity and the popular delight in posting and annotating is unlikely to produce the equivalent of the encyclopedia or the library shelf of collocated texts. If we assume that there will come a time when we will have more powerfully organized networked information structures that serve the multi-vocal, ambiguity-seeking needs of humanists as well as the conformity and clarity needs of the data-collecting disciplines, how do we go about inventing them?

I would suggest that we focus on the core task of juxtaposition. In the old book-based knowledge culture, we spoke of collocation, of putting like books together on the same shelves. The digital medium poses several challenges to this traditional strategy:

  • too many items to browse, and no common “shelving” system
  • no reliable catalog or index: we cannot get everything that is relevant, and only that which is relevant
  • segmentation by book is no longer valid: we have knowledge in multiple formats and multiple sizes of segments, and we want intellectual access to content at different levels of granularity and across media
  • the same book can be “shelved” in multiple places, since it exists only as bits rather than as paper, raising the expectation that an item will be discoverable under every relevant category

We need to work toward creating new genres that accomplish what we currently accomplish through shelving according to well-developed library classification systems, but that will create these knowledge-based juxtapositions at the right scale and granularity for the giant multi-media archives of the twenty-first century.

Ted Nelson, the visionary technologist who is credited with coining the word “hypertext,” has long pointed to juxtaposition as a key underexploited affordance of digital environments. He finds the current World Wide Web inadequate largely because of its limited ability to allow a user to place one thing beside another, to compare versions side by side, or to bring together related instances of the same object.

Film art is one discipline that offers a particularly appropriate opportunity to shape scholarly discourse in a way that produces new knowledge by supporting juxtapositions that have not been apparent or representable before. A number of projects have explored this area, but copyright restrictions and the formidable legal defenses of the entertainment industry have prevented humanists from coming up with a genre for an electronic edition of a film. One solution is the Casablanca Digital Critical Edition Project[13], a prototype that locates shared resources on a web server available only to those with a legal copy of the film and that brings together a classic American film with the originating play script, shooting script, detailed production reports and memos, and an authoring environment for expert commentary. This prototype model would allow studios to control copyright and scholars to have access to very precise, semantically segmented sequences in the film, with the same precision of reference we expect to have over print materials. It would also give them the ability to juxtapose auxiliary materials like scripts, memos, outtakes, and commentaries with precise moments in the film.
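One way to picture such semantic segmentation is as time-coded segments with annotations keyed to them. The sketch below is an illustration of the idea, not the project’s actual data model; the timecodes, labels, and annotations are invented:

```python
# Segments of a film identified by start and end times (in seconds), plus
# auxiliary materials attached to particular moments. All values are invented.
segments = [
    {"id": "scene-01", "start": 0.0,    "end": 185.0,  "label": "opening montage"},
    {"id": "scene-12", "start": 2710.0, "end": 2905.0, "label": "flashback sequence"},
]
annotations = [
    {"at": 2750.0, "kind": "shooting script", "text": "revised dialogue page"},
    {"at": 2760.0, "kind": "production memo", "text": "note on retakes"},
]

def juxtapose(timecode):
    """Return the segment containing a timecode and everything attached to it."""
    segment = next(s for s in segments if s["start"] <= timecode < s["end"])
    attached = [a for a in annotations
                if segment["start"] <= a["at"] < segment["end"]]
    return segment, attached

print(juxtapose(2750.0))   # a precise citation plus its juxtaposed materials
```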

To do this effectively, however, scholars will need transparent interfaces with well-established conventions for creating and following such juxtapositions. These are examples of the many design elements that go into formulating a new genre. If there are to be digital editions of films, then we will need design solutions at many levels: to guarantee copyright, to provide access to scholars and film buffs, to provide for many kinds of segmentation by authorized and private users, to provide for multiple layers of commentary at varying degrees of formality and authority. If we could establish a common format for film study, then we could also start making connections between films.

The first explorations of hypermedia were focused on the simple linking of documents that retained their legacy formats of pages, movie clips, separate images. The next design effort will focus on the creation of born-digital formats with segmentation and juxtaposition conventions that will lead to the formation of new genres. The Casablanca Digital Critical Edition Project is different from a print or DVD edition of a work of art because it is not just an artifact but an open-ended system, comprised of search tools, authoring tools, and display interfaces. It is part of the collective process of re-imagining older knowledge genres like the variorum text, the production archive, the critical edition, and perhaps the scholarly journal. As a critical edition, it is meant to live within the wider information landscape of a complete digital archive of films.

A More Ambitious Approach: Parameterized Narrative Structures
Juxtaposition of semantically segmented multimedia resources is an extension of the structures of argumentation that have always been a crucial part of knowledge creation. Narrative, like argumentation, is a basic rhetorical structure and one that seems to be among the oldest elements of human cognition. As I have argued elsewhere, new narrative structures carry the promise of expressing knowledge about the world that was not expressible or not as easily expressed in linear format. The new digital medium offers the promise of allowing us to create causal chains of events (narratives) with explanatory power that exist not as a single version but as a set of possibilities. Digital formats, such as games and simulations, let us create a world as a set of parameterized possibilities and run through multiple versions of events by changing the parameters and replaying the scenario.

We use the cognitive structure of the replay simulation in many ways already: in scientific models of the earth as an ecosystem, in videogame entertainments with multiple “lives,” in movies like Back to the Future or Groundhog Day, in military training exercises, in stock market models, and in our everyday thinking about how to spend our money or what is happening in our social relationships. Thus far, games have provided the only open-ended means of exploring parameterized situations and so we tend to think of all such frameworks as games. But it is useful to emphasize the parameterized story as a separate genre, with many overlapping features of games, because the function of a story is to explore chains of causation.
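A parameterized story can be as simple as a causal chain rerun under different settings. The sketch below uses an invented household-budget scenario; the point is only that changing a parameter and replaying yields a different but comparable chain of events:

```python
def run_scenario(savings, monthly_spend, months=12):
    """Replay a simple causal chain under chosen parameters."""
    events = []
    balance = savings
    for month in range(1, months + 1):
        balance -= monthly_spend
        if balance < 0:
            events.append(f"month {month}: funds exhausted")
            break
        events.append(f"month {month}: balance {balance}")
    return events

# Change a parameter and replay to explore an alternative version of events.
print(run_scenario(savings=5000, monthly_spend=600)[-1])
print(run_scenario(savings=5000, monthly_spend=400)[-1])
```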

Parameterized stories, like archives of disparate texts, require semantic segmentation and strategies of juxtaposition to allow us to focus our attention appropriately. These strategies can be present in linear media. For example, in Rosencrantz and Guildenstern Are Dead, the playwright Tom Stoppard gives us signposts, such as a snippet of a famous soliloquy, to let us know where we are in the canonical text of Shakespeare’s Hamlet while we watch the parallel story told from the viewpoint of Hamlet’s foolish school friends, whose unimportant offstage deaths in the original play gain poignancy and existentialist weight in the twentieth-century retelling.

Since digital environments are participatory, juxtaposition can be interactive; the viewer can choose which item to place next to another or which sequence to follow in viewing a narrative. It is the author’s job to present meaningful possibilities, to create opportunities for revealing juxtapositions that are initiated by the viewer.

The design challenges in this emerging, experimental form are similar to those underlying the more practical tasks of film study: how do we keep users aware of the context of each segment while also focusing them on the immediate juxtaposition? How do we segment temporal media so that we can create juxtapositions that help us to grasp something that was out of our cognitive reach in traditional media? How do we allow scholars to build upon one another’s reasoning by bringing relevant information together in the clearest juxtapositions?

Since humanistic knowledge is concerned with contextualized, ambiguous verbal and visual artifacts more often than it is with logical datasets, we need our own genres of representation. They will be of use to other disciplines as well, however, since commentary on temporal media, argumentation by citation across media, close analysis of visual objects, and more complex narrative forms will serve analytical discourse in general. When we think of cyberinfrastructure we have to include these discourse and media analysis tools as well as the number crunchers, optical cables, and compression algorithms. Because media serve to focus our common attention in productive ways, we must exploit all the affordances of this new medium of representation to improve the depth, breadth, and commonality of our focus.

Inventing the devices that provide the technical underpinnings for a new medium is often ascribed to a single person like Gutenberg or a single moment like the display of the first film by the Lumiere brothers in December 1895. But the invention of a genre, like the movie, is a process of collective discovery, usually without such clear moments of demarcation. The necessary ingredients for a humanities-friendly cyberinfrastructure will be clearer looking backwards than they are looking forwards. They will be hastened, however, by approaching the task of designing humanities projects as a collective project of genre-creation.

Notes
[1] G. C. Bowker and S. L. Star, Sorting Things Out: Classification and its Consequences (Cambridge: MIT Press, 1999).
[2] Inventing the Medium is the name of a manuscript textbook I am writing under contract to MIT Press. Some of the following argument derives from that work in progress.
[3] S. Shapin and S. Schaffer, Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life (Princeton: Princeton University Press, 1985).
[4] E. Svenonius, The Intellectual Foundation of Information Organization (Cambridge: MIT Press, 2001).
[5] M. Tomasello, The Cultural Origins of Human Cognition (Cambridge: Harvard University Press, 2001); J. H. Murray, “Toward a Cultural Theory of Gaming: Digital Games and Co-evolution of Media, Mind and Culture,” Popular Communication 4, no. 3 (2006).
[6] J. H. Murray, Hamlet on the Holodeck: The Future of Narrative in Cyberspace (New York: Simon & Schuster/Free Press, 1997).
[7] S. Papert, Mindstorms: Children, Computers, and Powerful Ideas (New York: Basic Books, 1999).
[8] T. Berners-Lee, J. Hendler, et al., “The Semantic Web,” Scientific American (May 2001); T. Berners-Lee, N. Shadbolt, et al., “The Semantic Web Revisited,” IEEE Intelligent Systems 21, no. 3 (2006), 96-101.
[9] T. Memmott, Lexia to Perplexia (2000), http://tracearchive.ntu.ac.uk:80/newmedia/lexia/.
[10] N. K. Hayles, Writing Machines (Cambridge: MIT Press, 2002), 49.
[11] G. Landow, Hypertext 2.0. (Baltimore: Johns Hopkins University Press, 1997).
[12] E. J. Aarseth, Cybertext: Perspectives on Ergodic Literature (Baltimore: Johns Hopkins University Press, 1997); K. Salen and E. Zimmerman, Rules of Play: Game Design Fundamentals (Cambridge: MIT Press, 2003); T. Fullerton, C. Swain, et al., Game Design Workshop: Designing, Prototyping, and Playtesting Games (New York & Lawrence: CMP Books, 2004); I. Bogost, Unit Operations: An Approach to Videogame Criticism (Cambridge: MIT Press, 2006); I. Bogost, Persuasive Games: The Expressive Power of Videogames (Cambridge: MIT Press, 2007).
[13] J. Murray, “Here’s Looking at Casablanca,” Humanities 26, no. 5 (2005), 16-23, http://neh.gov/news/humanities/2005-09/casablanca.html. The Casablanca Digital Critical Edition Project is an NEH-funded collaboration, designed by the author and Nick DeMartino of the American Film Institute (AFI), between the AFI, Warner Home Video, and Georgia Tech.

Open Access and Institutional Repositories: The Future of Scholarly Communications

by Greg Crane, Tufts University

Institutional repositories were the stated topic of a workshop convened in Phoenix, Arizona earlier this year (April 17-19, 2007) by the National Science Foundation (NSF) and the United Kingdom’s Joint Information Systems Committee (JISC). In their report on the workshop, The Future of Scholarly Communication: Building the Infrastructure for Cyberscholarship, Bill Arms and Ron Larsen build out a larger landscape of concern, but institutional repositories remain a crucial topic, one that, without institutional cyberscholarship, will never approach its full potential.

Repositories enable institutions and faculty to offer long-term access to digital objects that have persistent value. They extend the core missions of libraries into the digital environment by providing reliable, scalable, comprehensible, and free access to libraries’ holdings for the world as a whole. In some measure, repositories constitute a reaction against those publishers that create monopolies, charging for access to publications on research they have not conducted, funded, or supported. In the long run, many hope faculty will place the results of their scholarship into institutional repositories with open access to all. Libraries could then shift their business model away from paying publishers for exclusive access. When no one has a monopoly on content, the free market should kick in, with commercial entities competing on their ability to provide better access to that freely available content. Business models could include subscription to services and/or advertising.
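
As a concrete illustration of the kind of access repositories provide, most repository platforms expose their metadata through the OAI-PMH harvesting protocol. The minimal Python sketch below fetches one page of Dublin Core records and prints their titles; the endpoint URL is a placeholder, not a real repository.

```python
# Minimal sketch of harvesting open metadata from an institutional repository
# over OAI-PMH. The base URL below is a hypothetical placeholder.
import urllib.request
import xml.etree.ElementTree as ET

BASE_URL = "https://repository.example.edu/oai"        # hypothetical endpoint
OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"      # OAI-PMH namespace
DC_NS = "{http://purl.org/dc/elements/1.1/}"           # Dublin Core namespace


def list_titles(base_url):
    """Fetch one page of Dublin Core records and print their titles."""
    url = f"{base_url}?verb=ListRecords&metadataPrefix=oai_dc"
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    for record in tree.iter(f"{OAI_NS}record"):
        title = record.find(f".//{DC_NS}title")
        if title is not None:
            print(title.text)


if __name__ == "__main__":
    list_titles(BASE_URL)
```

Because the protocol is open and uniform, the same few lines could harvest from any compliant repository, which is precisely what makes aggregation and competing access services possible.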

Repositories offer one model of a sustainable future for libraries, faculty, academic institutions and disciplines. In effect, they reverse the polarity of libraries. Rather than importing and aggregating physical content from many sources for local use, as their libraries have traditionally done, universities can use repositories to expand access to their own faculty’s digital content, effectively exporting that scholarship. The centers of gravity in this new world remain unclear: each academic institution probably cannot maintain the specialized services needed to create digital objects for every academic discipline. A handful of institutions may well emerge as specialist centers for particular areas (as Michael Lesk suggests in his paper here).

The repository movement has, as yet, failed to exert a significant impact upon intellectual life. Libraries have failed to articulate what they can provide and, far more often, have failed to provide repository services of compelling interest. Repository efforts remain fragmented: small, locally customized projects that are not interoperable–insofar as they operate at all. Administrations have failed to show leadership. Happy to complain about exorbitant prices charged by publishers, they have not done the one thing that would lead to serious change: implement a transitional period by the end of which only publications deposited within the institutional repository under an open access license will count for tenure, promotion, and yearly reviews. Of course, senior faculty would object to such action, content with their privileged access to primary sources through expensive subscriptions. Also, publications in prestigious venues (owned and controlled by ruthless publishers) might be lost. Unfortunately, faculty have failed to look beyond their own immediate needs: verbally welcoming initiatives to open our global cultural heritage to the world but not themselves engaging in any meaningful action that will make that happen.

The published NSF/JISC report wisely skips past the repository impasse to describe the broader intellectual environment that we could now develop. Libraries, administrators and faculty can muddle through with variations on proprietary, publisher-centered distribution. However, existing distribution channels cannot support more advanced scholarship: intellectual life increasingly depends upon open access to large bodies of machine actionable data.
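
A toy example may suggest what “machine actionable” means in practice: an openly licensed plain-text edition can be analyzed directly by a few lines of code, where a scan locked behind a subscription cannot. The file name below is a placeholder.

```python
# Toy illustration of "machine actionable" data: count the most frequent
# words in an openly licensed plain-text edition. The path is a placeholder.
from collections import Counter
import re


def word_frequencies(path, top=10):
    """Return the most frequent words in a plain-text file."""
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    return Counter(words).most_common(top)


if __name__ == "__main__":
    for word, count in word_frequencies("open_access_edition.txt"):
        print(f"{word}\t{count}")
```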

The larger picture depicted by the report demands an environment in which open access becomes an essential principle for intellectual life. The more pervasive that principle, the greater the pressure for instruments such as institutional repositories that can provide efficient access to large bodies of machine actionable data over long periods of time. The report’s authors summarize the goal of the project around which this workshop was created as follows:

To ensure that all publicly-funded research products and primary resources will be readily available, accessible, and usable via common infrastructure and tools through space, time, and across disciplines, stages of research, and modes of human expression.

To accomplish this goal, the report proposes a detailed seven-year plan to push cyberscholarship beyond prototypes and buzzwords, including action under the following rubrics:

  • Infrastructure: to develop and deploy a foundation for scalable, sustainable cyberscholarship
  • Research: to advance cyberscholarship capability through basic and applied research and development
  • Behaviors: to understand and incentivize personal, professional and organizational behaviors
  • Administration: to plan and manage the program at local, national and international levels

For members of the science, technology, engineering, and medical fields, the situation is promising. This report encourages the NSF to take the lead; even if the agency does not pursue the particular recommendations advocated here, it does have an Office of Cyberinfrastructure responsible for such issues and, more importantly, enjoys a budget some twenty times larger than that of the National Endowment for the Humanities. In the United Kingdom, humanists may be reasonably optimistic, since JISC supports all academic disciplines with a healthy budget. Humanists in the US face a much more uncertain future.
