- While cyberinfrastructure was initially seen as support for scientific and engineering research, scholars in nearly every discipline increasingly require the same range of support to enhance their studies.
- The nearly ubiquitous demand for cyberinfrastructure places an especially heavy burden on institutions not in the top tier of the research hierarchy.
- Defining research needs, setting priorities for research support, developing support strategies, developing a funding model, and building partnerships to support research are key steps in building research cyberinfrastructure at small/medium research institutions.
Along with teaching and service, research is a critical component of the mission at most universities. Creating and sharing new knowledge across a broad range of disciplines enhances the intellectual life of both faculty and students, and research productivity often serves as a yardstick by which university reputations are measured. At larger universities, research may be deeply embedded in the institutional culture, while at small/medium research institutions, a research agenda might require incubation, nurturing, and development of appropriate support. Small/medium research institutions might have fewer large projects, less indirect cost recovery, and fewer possible economies of scale than large universities. Nevertheless, research remains important to the well-being of those institutions, and their faculty expect and deserve the best support possible.
A 2006 ECAR study defined cyberinfrastructure as the coordinated aggregate of "hardware, software, communications, services, facilities, and personnel that enable researchers to conduct advanced computational, collaborative, and data-intensive research."1 Further, "…IT professionals who view the future of IT in research merely in terms of network speed or computing cycles are missing the boat."2 While cyberinfrastructure was initially seen as support for scientific and engineering research, scholars in nearly every discipline increasingly require that same range of support as they come to understand the power of computation to enhance their studies. This nearly ubiquitous demand for cyberinfrastructure places an especially heavy burden on institutions not in the top tier of the research hierarchy.
Listen to an interview by Anne Agee, vice provost and CIO, with College of Liberal Arts Dean Donna Kuizenga, University of Massachusetts Boston, on the research needs of humanities scholars.
Small/medium research institutions often lack dedicated staff and departmental structures to support faculty research, so they might need to take a different approach to developing an adequate cyberinfrastructure. The campus culture at these institutions can produce infrastructure ill-suited to research, weak promotion of research support, conflicting priorities for cyberinfrastructure funding, reduced agility in providing researchers the computing resources they need, and researchers unaware of the limits of institutional infrastructure. These realities can leave researchers dissatisfied and delay research activities.
One of the first challenges for a discussion about building research cyberinfrastructure at small/medium research institutions is defining a "small/medium research institution." A loose definition is any institution with significant research activities that isn't classified as RU/VH: Research University (very high research activity) in the Carnegie Basic Classification. "Research cyberinfrastructure" also means different things to different people. A 2008 ECAR research study found varying levels of adoption of the five research cyberinfrastructure categories that the study considered.3 Both factors make it hard to present a single method for building research cyberinfrastructure at small/medium research institutions. Instead, this article presents five key steps for building research cyberinfrastructure.
To build a respectable cyberinfrastructure, the IT organizations at small/medium research institutions need creativity in discovering the needs of their researchers, setting priorities for support, developing support strategies, funding and implementing cyberinfrastructure, and building partnerships to enhance research support. This article presents the viewpoints of four small-to-medium-sized research universities that have struggled with providing appropriate cyberinfrastructure support for their research enterprises. All four universities have strategic goals of raising the level of research activity and increasing extramural funding for research.
Table 1. University Profiles
| | Miami University (Ohio) | Oakland University (Michigan) | University of Massachusetts Boston | University of Wisconsin–Milwaukee |
| --- | --- | --- | --- | --- |
| Research and grant funding | $16 million | $12 million | $42 million (FY09) | $38.4 million (FY10) |
| Special characteristics | Uses a teacher-scholar model that encourages undergraduate involvement in research; strong focus on undergraduate education, with graduate programs in select areas | Supports undergraduate research experience and is opening a medical school | Sees its research enterprise as a vital part of its urban mission, with a focus on "use-inspired basic research"4 that will benefit local communities and industries | Demonstrates a strong organization around a growing research enterprise |
Our four universities employ a variety of approaches based on diverse institutional experience, but with similar challenges in supporting research. Action steps for creating a research support structure include:
- Define research needs
- Set priorities for research support
- Develop support strategies
- Develop a funding model
- Build partnerships for research support
Defining Research Needs
Institutions trying to create a research culture need a process of requirements discovery and elicitation to uncover and define the research requirements of the university community. In well-established research support operations, large research teams either support themselves or have departmental or college-level support. Less research-intensive institutions often lack that departmental support, so researchers must rely on the central IT organization, yet the culture typically has not developed clear processes for obtaining research support. Additionally, the institution might still be working out an overall strategy for its research enterprise. So, in addition to discovering the needs of individual researchers and research clusters, the IT unit must understand the broad strategic directions the institution has in mind for its research programs. Because research needs change constantly as the institution emphasizes different discipline areas and as the disciplines themselves evolve, the discovery and definition of research needs must be an ongoing process, not a one-time exercise.
Miami University restructured the IT organization and hired its first CIO in 2003. One of his first initiatives was a strategic planning exercise during which faculty identified the need for more IT support for faculty research. This resulted in the formation of a Research Computing Support (RCS) group within the IT organization and the purchase of a small high-performance computing (HPC) cluster. At the same time a faculty group had received an internal grant for activities to advance the use of computing in faculty research. This faculty group provided important input into the structure of the RCS group, which initially aimed to provide HPC programming, general scientific programming, statistical support, and support for the use of databases in research. As the RCS group worked with faculty to further understand their needs, the support focus expanded. For example, Miami only has a limited number of graduate programs, so many students involved in research are undergraduates or master's students who might only be involved in a project for a year. The RCS group found that providing personalized training could reduce the time it took for students to become productive members of a research team, and management of software developed for a project could help smooth the transitions as students started and completed their work on a project.
Listen to an interview by David Woods, Assistant Director for Research Computing, with Dr. Jim Kiper, Associate Dean for Research, School of Engineering and Applied Sciences, and Professor, Department of Computer Science and Software Engineering, Miami University, on the value of research-focused support within the IT organization and faculty access to a high-performance computing cluster.
Oakland University made an effort to engage faculty in discussion through a series of discovery meetings, but a lack of understanding about the goals and poor attendance led to cancellation of the meetings. Instead, the university IT organization, University Technology Services (UTS), requested from the Office of Grants, Contracts, and Sponsored Research a list of faculty who received National Science Foundation grants. A senior IT staff member familiar with a broad range of IT services was assigned to interview each faculty member. The interview questions focused on gathering information about each faculty member's research and understanding the IT service needs of researchers that UTS could address. Also, the CIO met with the Research Office and faculty groups to determine other support needs, including administrative support and purchasing functions, that faculty members had identified as cumbersome.
As part of its strategic planning process, UMass Boston established a committee on research and graduate studies to create a vision and recommend goals for a comprehensive plan to enhance the research enterprise. The final report, issued in 2007, included recommendations to improve a number of support services that university researchers found lacking, including enhanced network bandwidth for data-intensive activities, expansion of central data storage, and increased support for multiple operating systems. Additionally, in 2006 the university had commissioned a study from the Battelle Memorial Institute to recommend potential focus areas for strategic expansion of the university's research enterprise. In 2008, four working groups prepared more detailed reports on specific research clusters the university wanted to emphasize: urban health and public policy, STEM education, computational sciences, and developmental sciences. Each working group report identified specific staffing and technology needs for its area; among the most consistently mentioned were database support, statistical analysis support, and storage engineering. Together, these reports gave the IT unit a better understanding of the university's overall research strategy and a starting point for planning how to meet these research needs. Additionally, following the advice of peer institutions with more established research support structures, the IT unit developed a short set of questions for new faculty to obtain information about their specific research needs and expectations, including networking requirements, security requirements, and research equipment they might plan to bring with them to the university.
Listen to the interview by Anne Agee, vice provost and CIO, with Dr. Bala Sundaram, chair of the Physics Department, University of Massachusetts Boston, about the kind of support researchers and scholars need from IT.
Efforts at UWM started with a vision statement created by the CIO and shared with and endorsed by two faculty governance committees, the Research Policy Committee and the IT Policy Committee. Results of a campus-wide survey established broad categories of cyberinfrastructure needs, but a deeper understanding of researcher needs was deemed necessary. Therefore, one of the first activities for the newly established director of cyberinfrastructure was to collect information from campus researchers regarding technology needs across a broad range of disciplines. The CIO and director of cyberinfrastructure met with the deans and other appropriate individuals in each school to discuss the school's significant research directions and goals, and asked the deans to recommend faculty to be interviewed about computing support for their research and scholarly activities. Each interview with the recommended faculty focused on three goals: understanding the faculty member's research, analyzing that faculty member's cyberinfrastructure needs, and collecting information to help inform the campus dialogue about campus-wide needs. The interviews led to a number of successful outcomes, including:
- Identifying existing resources useful to researchers of which they had not previously been aware
- Facilitating collaborations between researchers with similar or complementary research interests
- Assisting the campus with identifying common cyberinfrastructure needs to support research
- Generating good will toward the central IT organization and "high touch" interaction opportunities for IT with faculty
Setting Priorities for Research Support
Given limited resources for research activities at small/medium research institutions, establishing funding priorities for larger scale or centrally provided research activities assumes great importance. However, the wide variety of research and scholarly activities carried out at higher education institutions can make obtaining consensus on priorities challenging.
At Miami, the RCS group's first priority was simply to find faculty projects on which to work, while today the group has a full workload. The university's strategic goals include involving students in research, increasing the level of scholarly accomplishments, contributing to larger scale communities, and maximizing resources. These goals translate into a focus on expanding the number of faculty with whom the RCS group works, helping students learn to use research cyberinfrastructure, providing support for collaborations with researchers at other institutions and in industry, and supporting grant applications. The RCS group also works to transfer skills and knowledge to faculty and students that allow them to expand their use of cyberinfrastructure resources with limited support from RCS.
Following up on what was learned in the private interviews, Oakland University established priorities through the existing University Senate Academic Computing Committee, the faculty committee charged with issues of academic and faculty IT. Further review of priorities was done by the Academic Council, a leadership group with deans and other academic area leaders, led by the provost. Using existing university governance structures supported the goal of building communications lines that expose and prioritize research needs within the university culture. Priorities from these organizations emphasized a need for improved research administration as well as ongoing capacity for network growth and operational stability.
At UMass Boston, knowing that all of the needs identified could not be met at once, the IT leadership team took the directions given in the reports, plus input from faculty researchers and the vice provost for research, and developed a five-year plan for improving research support. Having a written plan gave the CIO a vehicle for discussion and a structure for making budget requests to implement the plan. Two of the highest priorities were increasing available storage and providing more research support staff. As a result, one major initiative in the plan is enhancing storage; another is gradually building a team within IT dedicated to research support, starting with a budget request for a position to coordinate research support and give research faculty a single point of contact. An ongoing governance structure for research support is still in the planning stages.
UWM formed a cyberinfrastructure working group composed primarily of faculty to identify and prioritize services needed to support campus research and scholarly activities, as well as to recommend best practices for funding cyberinfrastructure services. The working group presented its recommendations to the chancellor, provost, CIO, and vice chancellor for research and economic development. As a result, the central IT organization, in collaboration with two schools, has deployed the first shared HPC cluster service for campus and is actively pursuing the other priorities identified by the working group.
Developing Support Strategies
Support strategies need to strike a balance between providing a large amount of support for a small number of projects and providing more general support to a wider range of users. Faculty input or review can be useful when developing support strategies so that faculty have a clear understanding of all the routine activities for which the support staff is responsible and how the remaining support resources will be allocated.
At Miami, the RCS group was created at the same time that the university installed its first HPC cluster, and support strategies for both resources were established simultaneously. An initial focus of the RCS group was finding faculty projects to collaborate on, so large blocks of time were allocated for these projects, with the remaining time allocated for general support of the computing cluster and research software packages. An important aspect of any large faculty collaboration involving software development is identifying a way to develop the skills of the faculty and students on the project so that they can take on much of the ongoing support and development work after the initial development ends. Since many of the students involved in research at Miami are undergraduates or master's students, consideration was given to picking technologies with which students would already have experience. For example, for one mechanical engineering project, a complex parallel simulation framework developed in C++ was linked with simulation code written in MATLAB. With this tool, the simulation code, which will need to change as the research progresses, is written in a language that mechanical engineering students learn as part of their course work, so ongoing support by the RCS group is limited to the simulation framework and the MATLAB integration, which should not need to change very often. Another challenge has been that successful collaborations never end; they just lead to more projects. However, the group needs to respond to requests from other faculty members as well. This challenge has been addressed by having the RCS group staff member handle the high-level software and algorithm design while students work on implementation and testing.
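The division of labor described above, a stable framework maintained by support staff wrapped around simulation code that students revise, can be sketched in outline. The following is a hypothetical illustration in Python rather than the project's actual C++/MATLAB stack; all function names and parameters are invented:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(params):
    """Student-maintained simulation code (a hypothetical stand-in
    for the MATLAB routines): the part expected to change as the
    research evolves."""
    return params["load"] * params["stiffness"]

def run_cases(param_sets, sim=simulate):
    """Framework code (a stand-in for the C++ framework the RCS
    group supports): owns dispatch and result collection, and
    rarely changes. Threads are used here only to keep the sketch
    self-contained."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(sim, param_sets))

cases = [{"load": 2.0, "stiffness": 3.0},
         {"load": 1.5, "stiffness": 5.0}]
results = run_cases(cases)  # [6.0, 7.5]
```

Because only `simulate` changes as the research progresses, students work in a familiar language while the dispatch machinery stays fixed, mirroring the support boundary the RCS group drew.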
At Oakland University, faculty members most frequently stated that their network connection, e-mail account, and desktop computer (or in a few cases a local mini-tower) were their most important computing resources. Some specialized areas in engineering, chemistry, and physics reported using slightly more resources, including data center hosting for research clusters. Faculty did not actively state security requirements, so IT staff members had to probe more about security. Oakland determined that the central IT organization could emphasize five support areas:
- Edge client: Providing consistent support for network connectivity, e-mail, and desktop
- Collaboration: Promoting available shared collaborative document systems including both the campus implementation of Xythos and off-campus tools in Google Apps, and determining when use of either service is appropriate
- Hosting: Offering to host servers in the central data center and reviewing possible cloud opportunities for storage or processing services (see Figures 1–3)
- Security: Providing assistance or implementation of special security requirements
- Administrative: Identifying and implementing an improved research support system
Establishing connections with research faculty based on those support areas provides a foundation for building additional support later. Interviews with research faculty indicate that trust in IT services only extends to one service failure.
Figure 1. New Electrical and UPS Upgrades in the Oakland University Data Center
Figure 2. Working on the Last Cooling Upgrades at Oakland University, Spring 2010
Figure 3. Preparing the Oakland University Data Center for the Next Generation of Research Systems (New Electrical, UPS, HVAC, Floor Tiles, and Server Racks)
Since UMass Boston had very little IT support available for researchers, the IT organization started with some very basic steps — providing physical space for collocated servers and housing a small HPC cluster; both depended on refurbishing the existing data center, as detailed in the next section. IT also developed a collaborative workspace through Xythos that met at least some needs for shared online research materials. More recently, with assistance from the vice provost for research, IT added a modest storage array (4 terabytes) to support research data. The IT organization also worked with faculty to determine priorities for adding software licenses and modestly increased the availability of software to support research, adding, for example, licenses for NVivo, a tool that supports qualitative research. Adding more storage space and creating at least one position dedicated to research support are the next planned strategies, to be implemented as the budget allows. It is also worth noting that the Healey Library at UMass Boston has taken a lead role in increasing the availability and accessibility of electronic resources for researchers and in developing a digital repository that will make it easier for researchers to share either completed research or research in progress. These initiatives, which go beyond hardware and software, can also be key components in building a respectable cyberinfrastructure and a culture of research support.
UWM is developing a research cyberinfrastructure support model that leverages personnel from both college-level IT groups and the central campus IT organization as appropriate. The newly formed HPC cluster service has a hybrid support structure, with use of cluster resources based on the level of each college's investment in the cluster service. Funding for cluster system administration is provided at the college level and supervised through the central IT organization. Cyberinfrastructure facilitators, who assist researchers with developing and running codes on the cluster, are provided by participating colleges and are affiliated with either the colleges or the central IT organization.
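An investment-based allocation of this kind can be sketched with a small calculation. This is a hypothetical illustration only (the article does not detail UWM's actual share policy, and the college names and dollar amounts below are invented): each college's fraction of cluster capacity is made proportional to its contribution.

```python
def compute_shares(investments):
    """Return each college's fraction of cluster capacity,
    proportional to its investment in the cluster service.
    (Hypothetical policy; inputs map college name to dollars.)"""
    total = sum(investments.values())
    return {college: amount / total
            for college, amount in investments.items()}

shares = compute_shares({
    "Engineering": 300_000,       # invented contribution
    "Letters & Science": 100_000, # invented contribution
})
# Engineering would be entitled to 0.75 of available cycles
```

A proportional scheme like this keeps the policy transparent to participating colleges, though a production scheduler would typically enforce it as a fair-share target over time rather than a hard cap.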
Funding and Implementing Cyberinfrastructure
"Seize the day!" should probably be the motto for supporting research at small/medium research institutions, particularly state-funded institutions. Extra state dollars for infrastructure are few and far between, so the enterprising IT unit must be prepared to make the most of any such opportunity.
At Miami, funding for RCS staff is part of the base budget for the IT Services organization. Funding for Miami's initial HPC cluster came from central university funds. Funding for the recent HPC cluster replacement was discussed between IT, the provost, and the deans of the two academic units with the largest number of cluster users, and final funding was provided by IT Services and the two academic units. Several faculty members have included budget line items for "Research Computing Support" in grant proposals, but none of these requests have been funded so far. It is expected that any funds received through grants would be used for replacement of or upgrades to cluster hardware.
Oakland recognized that an updated data center was needed to best support research hosting services. The university has relied on incremental change through allocation of some small portion of the annual budget and occasional support in terms of one-time funds made available for specific projects. With a long-term plan that spanned multiple years, the data center was upgraded with a new fire suppression system, new HVAC units and better cooling plans, a new UPS, updated electrical and security infrastructure, leak mitigation, and standardized racks and floor design. With this investment, the facility is more attractive to researchers. Also, strong presentations for one-time funds resulted in the implementation of Xythos for storage of work-in-progress research papers and data files. A partnership with the university library resulted in the implementation of DSpace for research presentation and data file archiving. Faculty have been active in seeking research support funding for specific technologies.
UMass Boston managed to squeeze a data center upgrade out of a state building maintenance project and took advantage of the upgrade to add a caged collocation space where researchers could house servers in a secure and reliable environment. After years of leaking ceilings and inadequate power and cooling in the data center, researchers were understandably reluctant to trust their servers to that space (see Figure 4). However, after a multimillion-dollar overhaul, the Information Technology Services Division (ITSD) finally had a space that could effectively handle at least some researchers' needs (see Figure 5). The 700-square-foot collocation space offers lockable data cabinets, power feeds protected by UPS and a dedicated generator, and a fault-tolerant cooling infrastructure. PIN access to the space and video surveillance provide security. As researchers request access to the space, IT staff meet with them to assess whether the collocation area can meet their needs. Researchers manage their own systems in the collocation space and have 24-hour secure access. By early 2010, the collocation space housed 30 academic servers and associated backup hardware.
Figure 4. UMass Boston Data Center Before Upgrade
Figure 5. UMass Boston Data Center After Upgrade
The new data center also provided space for the university's first HPC cluster, a 32-node machine with about 1 TB of storage. The IT unit provides support in the form of a half-time position to maintain the cluster and some troubleshooting support for users. The HPC is primarily used by researchers in physics (where faculty start-up money funded the cluster), but other faculty can get at least some computing cycles from it. The relatively small size of the cluster definitely limits the amount of research supported, and the research community is pursuing grant opportunities and partnerships with other institutions to expand the existing cluster and provide more computing cycles (see the next section).
UWM's CIO reallocated internal staffing resources to support cyberinfrastructure and brought in external revenue to fund the director of cyberinfrastructure position and a second data center. The new HPC cluster service was developed through a collaboration between the Office of the Provost and Vice Chancellor for Academic Affairs and the College of Engineering and Applied Science (CEAS). Initial purchase of the cluster hardware and renovations to the data center space to house the cluster were funded using CEAS faculty start-up funds and funds from Academic Affairs. Funding for personnel resources to support the cluster is provided by the participating colleges and the central IT organization. In addition, the provost and the vice chancellor for research and economic development recently agreed to allocate five percent of campus indirect funds per year to the library and central IT to support research.
Building Partnerships for Research Support
Developing an ongoing relationship with stakeholders within the institution as well as with outside agencies is a key to successfully providing and supporting research cyberinfrastructure for all institutions, but especially for those in the early stages of building a culture of research support. The challenges to building relationships will vary as much as the researchers involved, but the goal in all cases should be for IT to be seen as a productive partner or collaborator in the research activity of the institution.
Building partnerships has been the most important factor in the success of research support efforts at Miami. These partnerships have resulted in several interesting projects, but more important have been the efforts that the faculty partners have made to tell their peers how research support has advanced their research programs. While the research support group at Miami has made many efforts to inform faculty about available research support services, referrals by existing faculty partners have been the most effective way of expanding the reach of these services. Faculty partners have also been a key asset in helping department chairs, deans, and IT leaders understand how research cyberinfrastructure services have advanced research efforts at Miami. Partnerships can go beyond just efforts related to faculty research — the opportunity to discuss the research support available at Miami has been a notable factor in several recent faculty searches and candidate interviews.
The importance faculty members place on the network reinforced Oakland University's participation in the state-wide network operated by Merit Network. Ongoing participation allows the university to utilize high-performance research networking capability suited to higher education. Faculty appreciate a quality data center facility, with supporting staff, to which they have access.
Listen to an interview by Brian Paige, Executive Director, Networking and Technology, University Technology Services, with Dr. George Martins, Associate Professor, Physics Department, Oakland University, about partnering with IT in developing research infrastructure.
At UMass Boston, collaboration has been a key to enhancing support for research. The vice provost for IT and the vice provost for research both share the goal of helping the institution develop a long-term plan for research support. Another element of cyberinfrastructure that requires enormous collaboration is information security. Researchers expect to have secure access and secure sharing of their data, even though the data is almost never under the direct control of IT. Besides taking steps to achieve network security, UMass Boston has also put together an Information Security Council that helps the academic units better understand and implement security measures for their research and other data that requires protection.
In addition, the UMass Boston CIO has worked with the CIOs of the other UMass institutions to facilitate collaborative efforts within the system to provide better support for all UMass researchers. Several UMass schools are jointly developing a virtual computing lab infrastructure that they can leverage to provide enhanced computing resources to researchers as well as better access to computing resources for students and faculty. Additionally, UMass Boston joined a consortium of Boston-area higher ed and health care institutions to facilitate data sharing and collaboration among bio-med researchers. Finally, the University of Massachusetts system joined forces with other Massachusetts institutions to develop a regional high-performance computing center that has the potential to provide enhanced resources for the smaller institutions in the consortium.
Collaboration has been and will continue to be key to UWM's research cyberinfrastructure support. Partnerships involving the Office of the Provost and Vice Chancellor for Academic Affairs, the Office of the Vice Chancellor for Research and Economic Development, the central IT organization, the UWM Libraries, college deans, and the faculty provide vital support. In addition, UWM is partnering with other institutions throughout southeast Wisconsin to form a campus grid that will link computing resources across multiple entities.
As the competition for students increases and the budget climate becomes more restrictive, small/medium research universities are looking to increase research activities to raise campus prestige and obtain higher levels of extramural funding. To plan effectively for building and maintaining an appropriate campus research cyberinfrastructure, these institutions must understand the particular challenges they face.
Investments in cyberinfrastructure can also help small/medium research institutions attract faculty with high-quality research programs. Effective cyberinfrastructure includes many elements, from hardware and software to staffing. Appropriately skilled and trained staff who can facilitate the use of technology by researchers are key to success. These individuals must not only possess IT knowledge and skills but also must be able to match researchers' needs to the appropriate technology services. Data security and library support are also part of this equation and should not be overlooked. Additionally, rather than investing in expensive, specialized research equipment and lab facilities for use by a small number of faculty, schools can invest in commodity computing hardware to support research in a wide range of disciplines.
Given the greater resource constraints, additional effort towards collaboration and resource sharing across multiple campus areas will likely be necessary. Although individual principal investigators or schools might be unable to provide sufficient resources for a given research activity, combining resources across multiple researchers or schools and units may yield adequate or superior resources. Additionally, "above-campus" resources, such as services offered by other higher education institutions, consortia, or public cloud vendors, should be leveraged to supplement locally provided services.
As with many other activities in IT, the most important and challenging aspects of building research cyberinfrastructure have little to do with technology and more to do with building trust and establishing good communications with the constituents. Effective outreach from central IT to faculty might be needed. The "build it and they will come" approach is not generally effective. Time must be invested in listening to faculty needs and building trust in, and openness about, central IT facilities and systems. Priority setting is essential and may require new governance structures specifically responsible for research technologies.
Each of our institutions has made progress in establishing and advancing its state of cyberinfrastructure by nurturing relationships and working with the appropriate campus constituencies. Although our approaches differ, common strategies and themes exist. We believe that other small/medium research universities can benefit from our experiences to achieve success in their own research cyberinfrastructure efforts.
- Harvey Blaustein, with Sandra Braman, Richard N. Katz, and Gail Salaway, "IT Engagement in Research" (Roadmap) (Boulder, CO: EDUCAUSE Center for Analysis and Research, July 2006), p. 2.
- Harvey Blaustein, with Sandra Braman, Richard N. Katz, and Gail Salaway, IT Engagement in Research: A Baseline Study (ECAR Research Study, Volume 5) (Boulder, CO: EDUCAUSE Center for Analysis and Research, August 2006), p. 32.
- Mark Sheehan, Higher Education IT and Cyberinfrastructure: Integrating Technologies for Scholarship (ECAR Research Study, Volume 3) (Boulder, CO: EDUCAUSE Center for Analysis and Research, June 2008).
- Battelle Technology Partnership Practice, "Research Reenvisioned for the 21st Century: Expanding the Reach of Scholarship at the University of Massachusetts Boston," Battelle Memorial Institute, 2006, p. 1.
© Anne Agee, Theresa Rowe, Melissa Woo, and David Woods. The text of this article is licensed under the Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 license.
Ideas for Defining Research Needs
- Try to understand the broad institutional strategy for research as well as individual research needs.
- Consider creating a governance structure, such as a standing faculty advisory committee, focused on cyberinfrastructure.
- Conduct individual interviews with researchers to assist with discovery of previously unrequested support needs.
- Take advantage of opportunities such as strategic planning or accreditation activities to gather information.
- Conduct a general survey of faculty research needs, possibly as part of a larger IT strategic planning activity.
- Prepare a questionnaire for newly hired faculty to learn their research needs and expectations.
- Hold follow-up discussions or focus group meetings with key faculty members, deans, and the central research office to identify cyberinfrastructure requirements and goals.
- Include students, post-docs, and research scholars in cyberinfrastructure planning.
- Look for ways that cyberinfrastructure can free researchers to spend more time on research activities.
- Identify resources that can help bridge gaps in knowledge.
Ideas for Setting Priorities for Research Support
- Assess critical priorities coming from researchers currently using research support services.
- Establish a single point of contact for coordinating research support within the IT organization so that one person becomes very knowledgeable about research needs and can help make the case for enhancing support.
- Discover and review information about future research plans and technology trends in individual fields to help define future support needs.
- Develop a written plan for enhancing research support.
- Seek input from academic leaders (provost, deans, chairs, etc.) about new programs and departments so that research support needs can be considered while the programs are being developed.
- Involve advisory groups in balancing future plans and current requirements.
- Set research support priorities using the same methods used to set other institutional priorities.
Ideas for Developing Support Strategies
- Seek opportunities for collaboration, such as sharing high-performance computing clusters or research authoring tools.
- Identify technology management streams that involve students to leverage support.
- Emphasize full use of a high-quality data center for hosting research systems.
- Promote use of digital repositories and shared document management environments.
- Consider developing a hybrid support model by leveraging resources from various areas.
Ideas for Funding and Implementing Cyberinfrastructure
- Note that initial investment funding from central resources might be needed.
- Take advantage of ongoing or new technology or facilities projects that can be leveraged to enhance the overall cyberinfrastructure.
- Promote a research investment return-on-mission tied to the institution's strategic research goals.
- Support faculty seeking grant-funding opportunities.
- Seek collaborative opportunities with researchers and administrators working to develop plans for providing and sustaining funding.
- Develop success metrics, such as the number of faculty members supported, graduate students supported, publications supported, and technology supported.
- Recognize that many success metrics might be too simplistic. Consider incorporating faculty qualitative assessment of cyberinfrastructure into evaluations. The impact of research cyberinfrastructure is hard to see because learning to use new resources and incorporating them into a research program can be a long, gradual process.
Ideas for Building Partnerships for Research Support
- Seek opportunities to overcome negative images of IT based on previous interactions.
- Promote the value of working with IT; approaches that market a new relationship with no assumptions might work best.
- Invest time learning about what researchers are studying and how they work.
- Seek partnering ideas from advisory groups.
- Identify opportunities for introducing cyberinfrastructure that allows researchers to spend more time on research and less on technology or on negotiation with outside vendors or collaborators.
- Identify opportunities to negotiate volume licensing for software packages used in research, improve distribution and installation of the software, or bring vendor representatives in to help researchers learn about all of the features offered in a package.
- Invest in learning about resources available outside the university; help direct faculty researchers to alternative ways of meeting research needs and aid their understanding of the ramifications of using cloud computing services and similar approaches.
Making Research Cyberinfrastructure a Strategic Choice
Growing demands for research computing capabilities call for partnerships to build a centralized research cyberinfrastructure
By Thomas J. Hacker and Bradley C. Wheeler
The commoditization of low-cost hardware has enabled even modest-sized laboratories and research projects to own their own "supercomputers." We argue that this local solution undermines rather than amplifies the research potential of scholars. CIOs, provosts, and research technologists should consider carefully an overall strategy to provision sustainable cyberinfrastructure in support of research activities and not reach for false economies from the commoditization of advanced computing hardware.
This article examines the forces behind the proliferation of supercomputing clusters and storage systems, highlights the relationship between visible and hidden costs, and explores tradeoffs between decentralized and centralized approaches for providing information technology infrastructure and support for the research enterprise. We present a strategy based on a campus cyberinfrastructure that strikes a suitable balance between efficiencies of scale and local customization.
Cyberinfrastructure combines computing systems, data storage, visualization systems, advanced instrumentation, and research communities, all linked by a high-speed network across campus and to the outside world. Careful coordination among these building blocks is essential to enhance institutional research competitiveness and to maximize return on information technology investments.
Trends in Research Cyberinfrastructure
The traditional scientific paradigm of theory and experiment—the dominant approach to inquiry for centuries—is now changing fundamentally. The ability to conduct detailed simulations of physical systems over a wide range of spatial scales and time frames has added a powerful new tool to the arsenal of science. The power of high-performance computing, applied to simulation and coupled with advances in storage and database technology, has made the laboratory-scale supercomputer indispensable research equipment. These new capabilities can bestow a significant competitive advantage to a research group and help a laboratory publish better papers in less time and win more grants.1
Many trends and forces shape research cyberinfrastructure today in academic institutions:
- Rapid rate of commoditization of computation and storage
- Emergence of simulation in the sciences
- Increasing use of IT in the arts and humanities
- Escalating power and cooling requirements of computing systems
- Growing institutional demands for IT in an era of relatively flat levels of funding for capital improvements and research
Commoditization Trends Affecting Cyberinfrastructure
The concept of building cost-effective supercomputers using commodity parts was introduced in 1994.2 From 1994 until today, predictable trends of technology improvement and commoditization have increased the power of off-the-shelf components available for cluster designers (see Table 1). These trends include Moore's law, Gilder's law, and storage density growth.3 Downward trends in technology unit prices for storage and memory have accelerated since 1998.4
Semiconductor memory has experienced a similar price reduction. Complementary to commoditization trends is the growing pervasiveness and reliability of the Linux operating system and of open-source cluster-management tools. Many vendors now offer cluster products that are relatively simple to install and operate.
The research community is actively exploiting these trends to develop laboratory-scale capabilities for simulation and analysis. The growing influence of cluster computing since 1994 is clearly demonstrated by its impact on the distribution of computer architectures in the Top500 supercomputer list.5 Large clusters have displaced all other systems to become the dominant architecture in use for supercomputing today. This trend illustrates how the forces of commoditization have come to dominate high-end computing.
Adoption of IT in the Arts and Humanities
In the arts and humanities, fundamental changes are taking place in the conduct of research and creative activities. Funding is increasing for digital content creation, synthesis of new content from existing digital works, and digitization of traditional works. A recent report from the American Council of Learned Societies on cyberinfrastructure for the humanities6 highlights these trends. The report describes significant unsolved "grand-challenge" problems of using information technology and cyberinfrastructure to reintegrate the fragmented cultural record. Addressing these grand-challenge problems will require institutional commitments to the long-term curation and preservation of digital assets and to providing open Internet access to unique institutional collections.
The digitization project by Google offers one example of this paradigm shift in the arts and humanities. The Google project aims to provide universal access to millions of volumes from research university libraries. As electronic collections grow in scale, new forms of creative expression and scholarship will become possible, further increasing demands for information technology infrastructure and support.
Costs of Cyberinfrastructure for Research
The escalating power and cooling requirements of modern computers are well known. Providing adequate facilities for current and future needs is one of the largest problems facing academic computing centers today.
Unlike hardware costs, environmental and staff costs to operate a research cyberinfrastructure are not driven by the commodity market and represent large recurring expenses. In an era of flat budgets, this situation makes it difficult even for central IT providers to provide adequate facilities or professional staff to support the demand for computational clusters and research computing. These problems are compounded by the last decade of growth in digital and Web-based administrative and instructional services, which has put a strain on physical facilities and staff resources in central IT organizations.
The scarcity of central IT support and facilities for research cyberinfrastructure represents a gap between institution-wide needs and the capacity to deliver services at current funding levels. This capability gap puts the research community at a competitive disadvantage and drives individual researchers to meet their needs through the development of in-house research computing. Few researchers and scholars want to be in the business of developing their own cyberinfrastructure; they are simply seeking to remedy the lack of the cyberinfrastructure they need to support their work.7
It is sensible to leverage commoditization trends to broaden access to research cyberinfrastructure. Universities may promote or tolerate the trends of decentralization, but should understand all the costs involved in operating decentralized research computing. Some costs, such as capital expenditures for the initial purchase of equipment, are simple to quantify. Other costs, such as floor space to house equipment and depreciation, are less obvious and can represent significant hidden costs to the institution.
Case Study: Cost Factors for High-Performance Computing
To understand the tradeoffs between decentralized and centralized research computing, we can break down some of the costs for operating a computational platform, using a supercomputer as an example. Cost factors include:
- Equipment costs—costs for initial acquisition, software licenses, maintenance, and upgrades over the useful lifetime of the equipment.
- Staff costs—operations, systems administration, consulting, and administrative support costs.
- Space and environmental costs—data center space, power, cooling, and security.
- Underutilization and downtime costs—operating over-provisioned resources and loss of resources due to downtime.
Patel described a comprehensive model for calculating the costs of operating a data center.8 To compare operational costs for centralized and distributed research computing, we ask, "Is it less expensive to cover operational costs (space, power, cooling, staff, and so forth) in one central location, or is it cheaper to support many smaller distributed locations?"
Comparing equipment acquisition costs in these two scenarios must take into account significant savings possible through the coordinated purchase of one very large system, compared with many smaller independent purchases. In our analysis, we assume that a large central purchase costs less than the uncoordinated purchase of a number of systems.
Patel described the true total cost of equipment ownership as the sum of the costs for space, power, cooling, and operation. We consider each in turn.
Space, Environmental, and Utility Costs. The costs for providing space depend on how efficiently the space is used (amount of unit resources per square foot of space) and on facility construction costs. Modern data centers can provide highly efficient and dense cooling and conditioned power at a lower unit cost than laboratory-scale computer rooms. This makes it feasible to host computer equipment in a central data center at a much higher density than in a laboratory computer room. Furthermore, operating many small computer rooms that have over-engineered air-conditioning and electrical systems can result in greater aggregate underutilized capacity than a central data center.
In terms of cooling, there is a sizeable difference in cost per ton of cooling capacity between small and large computer room air-conditioning systems. Using data from the 2006 RSMeans cost estimation guide,9 installing a small 6-ton unit costs $4,583 per ton versus $1,973 per ton for a 23-ton cooling unit (commonly used in large data centers).
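As a back-of-the-envelope sketch, the per-ton figures above can be compared for an equivalent cooling load. The four-unit scenario and the 23-ton target load are illustrative assumptions, not figures from the RSMeans guide:

```python
# RSMeans 2006 installed-cost figures quoted in the text (dollars per ton)
SMALL_UNIT_TONS, SMALL_COST_PER_TON = 6, 4583   # laboratory-scale unit
LARGE_UNIT_TONS, LARGE_COST_PER_TON = 23, 1973  # data-center-scale unit

# Illustrative scenario: meeting roughly 23 tons of cooling load either with
# four distributed 6-ton units (24 tons installed) or one central 23-ton unit.
distributed = 4 * SMALL_UNIT_TONS * SMALL_COST_PER_TON  # $109,992
central = LARGE_UNIT_TONS * LARGE_COST_PER_TON          # $45,379
```

Under these assumptions, the distributed option costs roughly 2.4 times as much to install, before counting the stranded spare capacity in each small room.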
A recent development is the return of water cooling, which more effectively removes heat from modern computing equipment. Provisioning water cooling in a large central facility can use chilled water from a utility or a large chilling plant.
Comparing space, environmental, and electrical costs for an equal amount of computing power, we believe that a central data center is less expensive to provision and operate than several smaller decentralized computer rooms.
Operational Costs. Operational costs include personnel, depreciation, and software and licensing costs.
In a central data center, a coterie of qualified professional staff is leveraged across many systems. Although individual staff salaries exceed the costs for graduate students, the staff costs per unit of resource are fairly low.
In the decentralized case, graduate assistants (GAs) often provide support as an added, part-time responsibility. This decentralized staffing model has several inherent drawbacks. First, the GA's primary job is to perform research, teach, and work on completing the requirements for a degree, not to provide systems administration and applications consulting for their group. Second, compared with professional staff, GAs are generally less effective systems administrators. They are hampered by a lesser degree of training and expertise and must distribute their efforts over a smaller number of computers housed in the laboratory in which they work. Third, the average tenure of a GA at a university is (or ideally should be) less than the term of a professional staff member. The lack of continuity and retention adds transition costs for training new graduate students to take over support functions for the laboratory computational resources.
Based on these factors, we believe that personnel costs for decentralized research computing support greatly exceed costs for a central data center. Not only are the obvious costs higher, but the redirection of productive graduate student energies into providing support represents a hidden drain on the vitality of the institutional research enterprise. It makes better sense for graduate students to focus on activities in which they are most productive—research—rather than on activities that could be provided more effectively by professional staff.
Underuse and Downtime Costs. Two hidden costs were not quantified by Patel: underuse and downtime. Underuse occurs when a computational cluster is not fully utilized. If a system sits idle, it delivers no productive work while consuming resources and depreciating in value. Unused time is much less likely on a central shared cluster, which should be adequately provisioned to balance capacity and demand to avoid underuse or oversubscription. Downtime occurs when the system is unavailable due to hardware or software failures or when the lack of a timely security patch forces a system shutdown. Downtime is much more likely in a small laboratory situation in which researchers have limited time available to keep up with security patches. Inadequate cooling and power systems can also increase the probability of system hardware failure.
Although the purely decentralized model potentially provides shorter wait times for resource access, the hidden costs and decreased research productivity borne by the institution from underuse and downtime can be enormous. For example, at electric rates of $0.08 per kilowatt-hour, a 1-teraflop (TF) system consuming 75 kilowatts of electricity will generate an annual utility bill of $52,416. If 20 of these 1-TF systems are distributed over campus, the total annual utility bill will reach $1,048,320. If the total achieved availability and use of these systems reach only 85 percent, then $157,248 in annual utility costs will be wasted powering systems during the 15 percent of the time they sit idle. If a smaller 18-TF system with 95 percent availability (essentially providing the same number of delivered cycles as the 20 distributed 1-TF systems) is supplied by the central IT organization, the university can achieve a power savings of $104,832 per year. The savings can be used to hire professional staff or purchase additional equipment.
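The arithmetic in this example can be reproduced in a few lines of Python. The 8,736-hour (52-week) year and the assumption that the central 18-TF system draws the same 75 kilowatts per teraflop are inferences from the article's figures, not stated parameters:

```python
RATE = 0.08          # dollars per kilowatt-hour
KW_PER_TF = 75       # kilowatts drawn per 1-TF system
HOURS = 24 * 7 * 52  # 8,736-hour (52-week) year, matching the quoted figures

# One 1-TF laboratory system running continuously
annual_one = KW_PER_TF * HOURS * RATE           # $52,416

# Twenty such systems distributed across campus
annual_twenty = 20 * annual_one                 # $1,048,320

# At 85 percent utilization, 15 percent of that spending powers idle systems
wasted = 0.15 * annual_twenty                   # $157,248

# A central 18-TF system (95 percent availability delivers comparable cycles),
# assumed to draw power in proportion to its teraflops
annual_central = 18 * KW_PER_TF * HOURS * RATE  # $943,488
savings = annual_twenty - annual_central        # $104,832
```

The same sketch makes it easy to test other electric rates or utilization levels against local conditions.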
As research computing scales up in both power and pervasiveness within the institution, the cost differential between centralized and decentralized approaches will continue to increase. Based on our analysis of the true costs of equipment ownership, we believe the purely decentralized approach to research computing is not cost effective. Moreover, the decentralized approach has significant hidden costs that can hinder institutional research efforts.
The costs described in this section are incurred to support the research activities of the institution. By nature, universities and research organizations tend toward local or disciplinary specialization, which favors decentralization. The activities and infrastructure within research laboratories are driven by research projects conducted in those labs. The costs of operating this infrastructure are borne by the institution regardless of the existence of a coordinated strategic approach for acquiring and operating this infrastructure.
Acknowledging this situation, we believe it's important to develop a purposeful strategy for guiding and shaping the flow of computational resources into the institution. The strategy should attempt to rationalize investments, eliminate redundancies, and minimize operational costs. If it is possible to reduce costs by even 5 percent, the payoff can easily justify efforts to develop and put into place a campus cyberinfrastructure strategy.
A Purposeful Strategy for Campus Cyberinfrastructure
The trends and forces we have described are a major part of the impetus toward decentralized research computing. The challenge to IT organizations is to formulate a strategy to respond to these changes. Realistically, a completely decentralized or centralized model for research computing won't work. Innovation, autonomy, and discovery happen at the edges, in laboratories and studios where scholars and researchers work. At the same time, economies of scale and scope can only be realized centrally, where it is possible to leverage large-scale systems and professional staff.
A central tension separates these two models. Several questions must be considered to design an effective solution:
- What balance between the two makes the most financial sense for the institution and optimizes research productivity?
- How can institutions best leverage central resources and staff to provide a base infrastructure for research that allows individuals at the edge to focus on building on the central core to add value for their discipline?
- What impacts does a campus strategy for cyberinfrastructure have on faculty, students, and staff?
We argue that the right approach to answering these questions is to create an institutional cyberinfrastructure that synthesizes centrally supported research computing infrastructure and local, discipline-specific applications, instruments, and digital assets. As noted above, cyberinfrastructure combines high-performance computing systems, massive data storage, visualization systems, advanced instrumentation, and research communities, all linked by a high-speed network across campus and to the outside world. These cyberinfrastructure building blocks are essential to support the research and creative activities of scholarly communities. Only through careful coordination can they be linked to attain the greatest institutional competitive advantage. Ideally, a campus cyberinfrastructure is an ongoing partnership between the campus research community and the central IT organization that is built on a foundation of accountability, funding, planning, and responsiveness to the needs of the community.
Specific needs for research computing depend on the prevalence and diffusion of computer use within a discipline. In the arts and humanities, for example, information technology only recently has begun to play a broad and significant role.10 In contrast, science and engineering have a tradition of computer use spanning half a century. Figure 1 illustrates a continuum from shared infrastructure at the bottom of the figure (Networks) up through layers of progressively more specialized components that support domain-specific activities. The transition from shared cyberinfrastructure to discipline-facing technologies operated by researchers depends on the specific needs and requirements of the domain. For example, business faculty may require a well-defined set of common statistics and authoring tools. In contrast, the particle physics community may need to directly attach scientific equipment to computing and storage systems using specialized software. The transition from shared cyberinfrastructure to laboratory-operated systems will thus sit much lower in the figure for physicists than for business faculty. Central IT providers must be sensitive to these disciplinary differences and willing to work alongside the research community to develop specific cyberinfrastructure solutions for each discipline.
Campus Cyberinfrastructure Goals
We believe that a campus cyberinfrastructure strategy must achieve several specific goals to succeed. First, it should empower scholarly communities by reducing the amount of effort required to administer, learn, and use resources, which frees the community to take risks, explore, innovate, and perform research. To meet this goal, institutions should seek to eliminate redundant efforts across campus. They must break down silos and centralize activities that central IT organizations can most effectively provide. By reducing redundancies, local IT providers can focus energies on adding value to the core infrastructure for the research community.
To encourage resource sharing and develop centers of expertise and excellence at local levels, institutions should establish discipline-specific local cyberinfrastructure initiatives. Once a functional campus cyberinfrastructure initiative and local cyberinfrastructure initiatives are established, the next logical step is to broaden external engagement with discipline-specific research communities to create a national discipline-oriented cyberinfrastructure. An example of this approach is the U.S. ATLAS project, which brings together a collaborative community of physicists to search for the Higgs boson.
Second, a campus cyberinfrastructure strategy must develop a central research computing infrastructure through consensus and compromise among university administrators and researchers. To reduce the motivation for units to develop redundant services, the central IT organization must carefully plan and fund infrastructure improvements to meet current and projected needs. Cost savings realized from centralizing base-level services should be captured and reinvested back into expanding basic shared IT facilities and infrastructure, which are essential for the ultimate success of a campus cyberinfrastructure strategy.
The final goal is realignment of existing, disjointed research-computing efforts into a harmonized campus-wide cyberinfrastructure. A crucial aspect of building a consolidated campus cyberinfrastructure is developing a common set of middleware, applications, infrastructure, and standards that are compatible with emerging cyberinfrastructure platforms at other institutions. Adopting a common platform makes it possible to build bridges from campus cyberinfrastructure to regional and national cyberinfrastructure initiatives. If a campus adopts the use of X.509 certificates for authentication and authorization, for example, the campus cyberinfrastructure can easily interoperate with other national cyberinfrastructure initiatives that use X.509.
Another concrete example of this comes from Indiana University's participation in the Sakai project. Several years ago, a strategic decision was made to transition away from several incompatible learning management systems (LMS) to a common LMS based on Sakai. The adoption of a common LMS has made it possible to partner with other institutions using Sakai and to win external funding for collaborative projects that build on the Sakai framework.
An important factor to consider is how these goals will affect how people work. For faculty, graduate students, and researchers, the desired outcome is to increase research productivity by freeing time now spent running low-value activities in their own IT shops and by improving the effectiveness of infrastructure available for their use. For IT staff, as a result of greater coordination and reduction of replicated services, more time should be available to develop and deploy new services that add value to the underlying IT infrastructure.
Building a Campus Cyberinfrastructure
Building a campus cyberinfrastructure for research is not only a technical process but also a political, strategic, and tactical undertaking. It suffers from a "which came first, the chicken or the egg?" causality dilemma: developing political support for making big investments in central systems relies on the perceived trustworthiness of the central IT shop, yet that shop may lack the funding necessary to provide the very high levels of reliability to the campus that are a necessary first step in building trust.
As we described in the section on cost factors, the institution is already making investments in centralized or decentralized computing. We believe the institution must be willing to risk starting the process by making significant strategic investments in core computing. This section describes some steps that could be taken in building a research cyberinfrastructure. These activities are not linear; rather, they are simply areas to consider and address.
The first activity in forging a common cyberinfrastructure is to identify common elements of campus infrastructure that can be centralized. These common elements include computer networks, storage resources, software licenses, centrally managed data centers, backup systems, and computational resources. Many broadly used applications (such as Mathematica or SPSS) could be centrally sponsored and site licensed to keep costs down and guarantee consistent support.
The second activity is to adopt and create common standards for middleware, which is the software that lies between infrastructure and applications. The functions of middleware include authentication, authorization, and accounting systems; distributed file systems; Web portals (such as the Open Grid Collaboration Environment portal11); and grid computing software, such as Globus,12 PBSPro,13 and Condor.14
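To make the grid-middleware layer concrete, the sketch below shows a minimal Condor submit description file; the executable and file names are hypothetical. A researcher hands this file to `condor_submit`, and the centrally supported middleware takes care of scheduling, file transfer, and job logging:

```
# Hypothetical Condor submit description file (job.sub)
universe   = vanilla
executable = analyze.sh
arguments  = input.dat
output     = job.out
error      = job.err
log        = job.log
queue
```

Centrally supporting one such batch system means researchers learn a single job-description idiom instead of each group maintaining its own scheduler.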
The middleware needs of disciplines can vary. One set of disciplines may be actively engaged in developing new middleware tools that require complete access to and control over the middleware layer for development and testing. Other disciplines might not develop new middleware, but may rely entirely on centrally supported middleware systems and services (such as Kerberos). Central IT organizations need to collaborate with these disciplines and learn to accommodate a wide range of support needs. Finding the best balance among openness, security, privacy, and stability may be the most difficult step in building common middleware.
The third activity is to identify and develop a cyberinfrastructure application layer, which relies on coordinated infrastructure and middleware layers. In many respects, this is the "face of the anvil" on which research communities carry out innovation and creative work. Finding the best balance between local and campus cyberinfrastructure depends on the characteristics of the discipline. For example, anthropologists may need significant training and central support to build new metadata models for capturing and archiving field data. Chemists, on the other hand, may only require basic infrastructure to run scientific codes used by a small research community.
One effective way to balance the tension between centralization and localization is to develop a cost-sharing model for funding specialized applications used by a small fraction of the research community. Researchers developing new applications and tools need well-supported development environments, mathematical libraries, secure authorization and authentication frameworks, source code management systems, debugging tools, and training materials. Providing stable and secure development environments for multiple platforms and programming languages frees the research community from the necessity of provisioning their own environment. This allows them to focus on creating new intellectual value in which the university has a vested interest.
The fourth activity is to focus on the social aspects of campus cyberinfrastructure. Scholarly communities form the topmost layer, which is the locus of innovation and research. Cyberinfrastructure frees members of these communities from constraints of physical location and time by facilitating collaborative activities across projects and disciplines. An example of this layer is the Open Science Grid, an open collaboration of researchers, developers, and resource providers who are building a grid computing infrastructure to support the needs of the science community.
Achieving these objectives is not necessarily a sequential process. Formulating a response to the trends shaping research computing requires making a set of choices that carry costs and risks: the time required to build consensus among campus constituencies; the need for leadership awareness of research computing and its costs; the extra effort required of IT staff to collect information for activity-based costing, balanced scorecards, and annual surveys; and the extra diligence required to plan and build cyberinfrastructure proactively (along with the risks of unforeseen change) rather than reacting to specific problems and crises as they arise. Choices that work for one institution may not be effective at others. The ultimate success of a cyberinfrastructure plan depends on organizational context and on the leadership skills applied to developing a strategy and plan.
Engaging the campus community on all these levels while building campus and local cyberinfrastructure is an effective way to seek rough consensus and establish accountability between the research community and central IT organization. By working together rather than independently, the university community has the best chance of creating a working and sustainable infrastructure and support model for research computing.
Campus Cyberinfrastructure at Indiana University
Indiana University is a confederation of two large main campuses and six regional campuses serving more than 90,000 students. The main campuses are in Bloomington and Indianapolis. The Bloomington campus portfolio includes physics, chemistry, biological sciences, informatics, law, business, and arts and humanities. The Indianapolis campus provides undergraduate and graduate programs from Indiana University and Purdue University and includes the IU Schools of Medicine and Dentistry. The six regional campuses provide undergraduate and master's level programs for Indiana residents across the state.
In the mid-1990s, the IT infrastructure of Indiana University spread across eight campuses, with very little sharing of infrastructure or staff expertise. Each campus had a CIO or dean of IT who was responsible for academic and (at some campuses) administrative computing for his or her respective campus. Clearly, a major institutional intervention was required to achieve system-wide efficiency and optimal performance. In 1996, a strategic vision developed for Indiana University included a "university-wide information system that will support communication among campuses..."
In 1998, IU developed a comprehensive five-year IT strategic plan (ITSP)15 that involved nearly 200 faculty, administrators, students, and staff working together in four chartered task forces. The task forces identified critical action items and steps to address existing deficiencies in the IU IT environment. The final ITSP described 68 specific action items and established the basis for planning, redeploying existing funding and resources, and seeking new funds.
Using the ITSP as both a plan and a proposal, IU approached the Indiana Legislature to seek additional funding to make it a reality. The legislature responded by providing a small increase to IU's budget over a period of five years (the lifetime of the ITSP) specifically targeted to building IU's effectiveness and reputation through leveraging IT to enhance teaching, research, economic development, and public service.
The ITSP included a section focused on research computing support across all IU campuses. Within this section, seven specific action items were identified, one for each research computing strategic area:
- Collaboration. Explore and deploy advanced and experimental collaborative technologies within the university's production information technology environment, first as prototypes and then, if successful, more broadly.
- Computational Resources. Plan to continually upgrade and replace high-performance computing facilities to keep them at a level that satisfies the increasing demand for computational power.
- Visualization and Information Discovery. Provide facilities and support for computationally and data-intensive research, for nontraditional areas such as the arts and humanities, as well as for the more traditional areas of scientific computation.
- Grid Computing. Plan to evolve the university's high-performance computing and communications infrastructure so that it has the features to be compatible with and can participate in the emerging national computational grid.
- Massive Data Storage. Evaluate and acquire high-capacity storage systems capable of managing very large data volumes from research instruments, remote sensors, and other data-gathering facilities.
- Research Software Support. Provide support for a wide range of research software including database systems, text-based and text-markup tools, scientific text processing systems, and software for statistical analysis.
- Research Initiatives in IT. Participate with faculty on major research initiatives involving IT where appropriate and of institutional advantage.
Building IU's cyberinfrastructure began with a comprehensive strategic plan and funding. The institution took the risk of developing core computing capabilities to support research across all IU campuses. This leads back to our central thesis: by assessing all the costs, developing a plan to coordinate activities, securing funding, and building political support, IU solved the chicken-and-egg dilemma.
Putting a cyberinfrastructure in place is one part of the solution. Building a sustainable cyberinfrastructure requires additional elements to make the vision a reality. The first element involves using the IT strategic plan as a living document. The second necessary element is accountability.
The central IT organization is a service organization that supports the institution. As such, it must be accountable to clients and customers as well as to university leadership. Accountability to university administration is accomplished through the use of four mechanisms:
- Activity-based costing
- Annual activity and performance reports on strategic plan progress
- Adhering to the strategic plan as a basis for yearly budget and planning activities
- Periodic comprehensive efficiency reviews that seek to reduce redundancies and retire obsolete services
Annual reports on cost and quality of services16 are open and available to the university community. Accountability to customers relies on the use of a comprehensive user satisfaction survey17 sent to more than 5,000 randomly selected staff, faculty, and students across all eight IU campuses. Based on survey responses and individual comments, each unit reviews and makes any necessary changes to services it provides.
The survey results ensure that the central IT organization remains responsive to needs of the university community. Based on survey results, the research computing unit maintains an annual balanced scorecard18 that provides a comprehensive overview of efficiency and user satisfaction with research computing services. These quantitative tools allow IT leadership to monitor user satisfaction, ensure cost-effective service delivery, and retire outdated services that no longer serve user needs or are not cost-effective.
Feedback from the research community on the systems and services provided to meet research needs has been positive. Detailed comments from researchers spanning 16 years of survey results are publicly available on the Web.19 In 2006 alone, more than 430 detailed comments were received from the user community.
One tangible example of this process is a change made several years ago in campus e-mail service. Satisfaction with text-based e-mail was declining, and an investigation determined that the community had a growing unmet need for Web-based mail. In response, the central IT organization formulated a plan and one-time budget expenditure to establish a Web-based mail system. After successful deployment of the system, user satisfaction returned to the previous high levels.
With the firm foundation of reliable services and resources in place, IU is working to build the middleware, application, and collaborative technology layers necessary to construct an excellent campus cyberinfrastructure.20 IU's activities bridge IU campuses within the state and connect IU to national scholarly communities. The projects include Sakai, Kuali, TeraGrid, and regional, national, and international networks, as well as work with communities such as the Global Grid Forum and the Open Science Grid.
Where Is Research Computing Going?
Research computing in the future will be shaped by current trends and forces, as well as by several emerging trends that will take hold over the next three years.
Commoditization trends will continue. With increasing globalization, it is likely that commoditization will move down the value chain. One recent example is Sun Microsystems' announcement of a computing utility service available over the Internet at a price of $1 per CPU per hour. Development will be driven by the home market for computing and entertainment, and new technologies developed for that market (such as the use of artificial intelligence for intelligent game agents) will continue to appear on the commodity market.
Web portals, Web services, and science gateways will likely reach maturity within the next few years. They have the potential to increase the collaborative power of cyberinfrastructure and broaden access to computing for researchers.
Another emerging force is the growing awareness of the significance of data. Data-centric computing seeks to capture, store, annotate, and curate not only the results of research but also all observations, experimental results, and intermediate work products for decades and potentially centuries. An additional trend is the developing need for central IT support in the arts and humanities.
A major force shaping research computing is a tide that ebbs and flows: federal research funding. Historian Roger Geiger21 has observed 10- to 12-year cycles in federal research funding, with peaks of rapid growth followed by periods of relative consolidation. If this trend persists, the current period of decline that began in 200422 may be followed by a period of growth starting in the next few years. An encouraging sign is the recent State of the Union message, in which President Bush proposed doubling funding for basic science research over the next 10 years. Laying the foundations of cyberinfrastructure now will help prepare the institution for potential future growth in the availability of research funds.
We believe the most effective response to the trends and forces in science and IT that are creating tremendous demand for research computing is to build partnerships among scholarly communities and central IT providers to develop campus and discipline-facing cyberinfrastructure capabilities. A successful cyberinfrastructure strategy will help prepare the institution for the coming globalization of the academy and research and for potential future growth in federal research funding. Advances in research and creative activity in the future will most likely come from global collaboration among scholars and scientists. Universities that learn to use cyberinfrastructure effectively to support the needs of their research community will gain a competitive advantage in the race to attract excellent scholars and win external funding to support research.
1. U.S. Department of Energy, "The Challenge and Promise of Scientific Computing," 2003, <http://www.er.doe.gov/sub/Occasional_Papers/1-Occ-Scientific-Computation.PDF> (accessed December 1, 2006).
2. P. Goda and J. Warren, "I'm Not Going to Pay a Lot for This Supercomputer!" Linux Journal, January 1998, p. 45.
3. J. Gray and P. Shenoy, "Rules of Thumb in Data Engineering," in Technical Report MS-TR-99-100 (Redmond, Wash.: Microsoft Research, 1999).
4. E. Grochowski and R. D. Halem, "Technological Impact of Magnetic Hard Disk Drives on Storage Systems," IBM Systems Journal, Vol. 42, No. 2, 2003, pp. 338–346.
5. Top500 Supercomputer Sites, <http://www.top500.org> (accessed November 17, 2006). Architecture distribution over time can be accessed at <http://www.top500.org/lists/2006/11/overtime/Architectures> (accessed December 1, 2006).
6. American Council of Learned Societies, "The Draft Report of the American Council of Learned Societies' Commission on Cyberinfrastructure for Humanities and Social Sciences 2005," American Council of Learned Societies, New York, pp. 1–64, <http://www.acls.org/cyberinfrastructure/acls-ci-public.pdf> (accessed December 1, 2006).
7. K. Klingenstein, K. Morooney, and S. Olshansky, "Final Report: A Workshop on Effective Approaches to Campus Research Computing Cyberinfrastructure," sponsored by the National Science Foundation, Pennsylvania State University, and Internet2, April 25–27, 2006, Arlington, Virginia, <http://middleware.internet2.edu/crcc/docs/internet2-crcc-report-200607.html> (accessed December 1, 2006).
8. C. Patel and A. Shah, "Cost Model for Planning, Development, and Operation of a Data Center in HPL-2005-107(R.1)" (Palo Alto, Calif.: Hewlett-Packard Internet Systems and Storage Laboratory, 2005).
9. RSMeans, Building Construction Cost Data 2006, Vol. 64 (Kingston, Mass.: RSMeans Construction Publisher, 2006).
10. American Council of Learned Societies, op. cit.
11. D. Gannon et al., "Grid Portals: A Scientist's Access Point for Grid Services (DRAFT 1)," GGF working draft Sept. 19, 2003 <http://www.collab-ogce.org/nmi/index.jsp> (accessed March 29, 2006).
12. I. Foster, "Globus Toolkit Version 4: Software for Service-Oriented Systems," in IFIP International Conference on Network and Parallel Computing (Berlin: Springer-Verlag, 2005), pp. 2–13.
13. "Altair Computing Portable Batch System," 1996, <http://www.altair.com/software/pbspro.htm> (accessed November 17, 2006).
14. D. Thain, T. Tannenbaum, and M. Livny, "Distributed Computing in Practice: The Condor Experience," Concurrency and Computation: Practice and Experience, Vol. 17, No. 2–4, pp. 323–356.
15. University Information Technology Committee, "Indiana University Information Technology Strategic Plan," 2001, <http://www.indiana.edu/~ovpit/strategic/> (accessed May 2006).
16. "Indiana University Information Technology Services Annual Report on Cost and Quality of Services," <http://www.iu.edu/~uits/business/report_on_cost_and_quality_of_services.html> (accessed April 2006).
17. "Indiana University Information Technology Services User Satisfaction Survey," <http://www.indiana.edu/~uitssur/> (accessed November 17, 2006); and C. Peebles et al., "Measuring Quality, Cost, and Value of IT Services," EDUCAUSE Annual Conference 2001, <http://www.educause.edu/ir/library/pdf/EDU0154.pdf> (accessed November 14, 2006).
18. "Indiana University Research and Academic Computing Balanced Scorecard," 2005, <http://www.indiana.edu/~rac/scorecard/2005/racscorecard_2005.html> (accessed November 17, 2006).
19. See <http://www.indiana.edu/~uitssur/> and Peebles, op. cit.
20. Klingenstein, Morooney, and Olshansky, op. cit.
21. R. Geiger, Research and Relevant Knowledge: American Research Universities since World War II, transaction series in higher education (New Brunswick, N.J.: Transaction Publishers, 2004), pp. xxi, 411.
22. American Association for the Advancement of Science, Guide to R&D Funding Data—Historical Data, 2006, <http://www.aaas.org/spp/rd/guihist.htm> (accessed November 14, 2006).
Thomas J. Hacker (email@example.com) is Assistant Research Professor, Discovery Park Cyber Center, at Purdue University in West Lafayette, Indiana. Bradley C. Wheeler is the Chief Information Officer at Indiana University and an Associate Professor of Business.