SCRiM research, education, and outreach activities were supported by a variety of desktop computers, cloud-based services, and locally hosted high-performance computing resources. Research computing activities included a mix of data processing, statistical/empirical downscaling, simple earth system modeling, climate modeling with fully coupled ocean-atmosphere GCMs, statistical analysis (MCMC, distribution fitting, etc.), emulation, and multi-objective optimization. These activities were carried out on two separate high-performance computing (HPC) systems, described below. Although hosted at Penn State, these resources were available to all SCRiM participants, regardless of their institutional home.
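
As an illustration of the kind of statistical analysis these systems supported, the sketch below fits the mean and standard deviation of a normal distribution to synthetic data using a minimal Metropolis MCMC sampler (Python with NumPy/SciPy). It is a hypothetical example for orientation only, not code from any SCRiM project; the data, model, and tuning choices are all assumptions.

    # Hypothetical illustration: minimal Metropolis MCMC for distribution fitting.
    # Fits the mean and (log) standard deviation of a normal model to synthetic data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    data = rng.normal(loc=2.0, scale=1.5, size=500)   # synthetic "observations"

    def log_posterior(theta):
        """Normal log-likelihood with flat priors on theta = [mu, log_sigma]."""
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        return np.sum(stats.norm.logpdf(data, loc=mu, scale=sigma))

    n_iter = 20000
    samples = np.empty((n_iter, 2))
    theta = np.array([0.0, 0.0])          # starting point
    log_p = log_posterior(theta)
    step = np.array([0.1, 0.05])          # proposal standard deviations

    for i in range(n_iter):
        proposal = theta + step * rng.standard_normal(2)
        log_p_new = log_posterior(proposal)
        if np.log(rng.uniform()) < log_p_new - log_p:   # Metropolis accept/reject
            theta, log_p = proposal, log_p_new
        samples[i] = theta

    burned = samples[n_iter // 2:]                      # discard burn-in
    print("posterior mean of mu:   ", burned[:, 0].mean())
    print("posterior mean of sigma:", np.exp(burned[:, 1]).mean())

Production analyses were of course far more elaborate (longer chains, multiple walkers, and physically based models), but this is the general style of computation the clusters ran at scale.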

Meteorology Computing Facility

SCRiM’s initial HPC resources were hosted and managed by the Penn State Department of Meteorology and Atmospheric Science beginning in May 2013. The system, which was finally decommissioned in November 2020, consisted of the following components:

Computing

  • woju (woju.scrim.psu.edu), a Silicon Mechanics iServ R420 system with four 8-core 2.2 GHz Xeon E5-4620 processors and 256 GB RAM; primarily for data processing and general interactive use
  • napa (napa.scrim.psu.edu), a six-node, 96-core cluster; each node was a Silicon Mechanics iServ R331.v4 system with two 8-core 3.3 GHz Xeon E5-2667v2 processors and 256 GB RAM; targeted at small-scale parallel computing tasks
  • mizuna (mizuna.scrim.psu.edu), a Dell PowerEdge R710 system with two 6-core 2.4 GHz Xeon E5645 processors and 120 GB RAM; supported various web services and also provided compute capacity for non-SCRiM projects (purchased with institutional funds)

Storage and Backup

  • /mizuna/s0, a 146 TB RAID60 filesystem (via Silicon Mechanics Storform D59J.v2 with 44 4-TB drives; attached to mizuna)
  • /woju/s0, a 219 TB RAID60 filesystem (via Silicon Mechanics Storform D59J.v2 with 44 6-TB drives; attached to woju)
  • /mizuna/s1, a 262 TB RAID60 filesystem (via Silicon Mechanics Storform D59J.v3 with 44 8-TB drives; attached to mizuna); this storage was targeted for synergistic projects external to SCRiM, including PIAMDDI and DairyCAP (purchased with institutional funds and support from other projects)
  • total available storage in excess of 630 TB
  • full backup of all filesystems was handled by Penn State EMS Computing via TSM on a Dell PV3660i

Web and Network Services

  • an RStudio Server Pro instance, running on mizuna; used heavily by our graduate students and postdocs
  • a protected download server, running on mizuna
  • woju, napa, and mizuna were connected to one another via InfiniBand
  • woju, napa, and mizuna had direct connections to the PSU high-speed research network
  • VNC remote desktop access

ICDS Roar Cluster

As the computational needs of SCRiM researchers grew, SCRiM leadership chose to take advantage of a then-new shared campus research cluster managed by the Institute for Computational and Data Sciences (ICDS) rather than upgrading and expanding our existing systems. In early 2016, SCRiM purchased an allocation on this cluster (now known as “ICDS Roar”) comprising 400 cores and 400 TB of storage (later expanded to 800 TB). SCRiM was one of the earliest and largest users of this resource. Although SCRiM has formally ended, the computing and storage allocations remain active through institutional support and contributions from other sponsored projects.

To learn more about ICDS Roar, please visit https://www.icds.psu.edu/computing-services/.