OIT has been providing cooperative cluster computing services to UCI researchers for many years. Comprising at various times MPC (“Medium Performance Computing”), BDUC (“Broadcom Distributed and Unified Cluster”), and even Green Planet (a cluster hosted for the School of Physical Sciences), the service continues to evolve as technology changes.
With the support of Southern California Edison’s Strategic Energy Program (SEP), which offers grants to replace older computers with new, energy-efficient systems (something of a “cash for clunkers” for computers), along with contributions from the Office of Research, OIT has been able to upgrade some of the components of the shared-use computing cluster and has rechristened it HPC (“High Performance Computing”). Further upgrades will take place over the coming year.
With MPC, individual researchers could add computing nodes to the cluster with the understanding that, in exchange for OIT providing the environment and security, 25% of the computing capacity would be made available to the UCI research community. In contrast, HPC uses advanced queuing and scheduling techniques developed by HPC system administrator Joseph Farran, which dynamically make unused capacity in a given researcher’s segment of the cluster available to others. The result is sustained utilization of over 90% of the cluster’s massive computational capacity. Researchers interested in participating in HPC should contact Joseph Farran at x4-5551.
Technical specifications of the upgraded nodes include:
- 64-core AMD CPUs providing an aggregate of over 2000 cores
- 8 Nvidia GPUs (4 Tesla, 4 Fermi)
- 8.8TB RAM
- QDR Infiniband inter-node communication channel
- 500TB storage in a Gluster distributed filesystem
- GridEngine scheduler with 18 private/group queues and 9 free queues
- CUDA development tools
- licensed software including SAS, Stata, CLC Bio, MATLAB, and Mathematica
- GNU, Intel, and PGI compilers; Eclipse and TotalView debuggers
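To give a sense of how the private and free queues are used in practice, here is a minimal GridEngine submission script. The queue name `free` and the parallel environment name `openmp` are illustrative assumptions; actual names are assigned by the HPC administrators.

```shell
#!/bin/bash
# Sample GridEngine job script (names of queues and parallel
# environments below are hypothetical examples).

#$ -N sample_job       # job name as it appears in qstat
#$ -q free             # request a free (shared) queue; owners would
                       #   name their private/group queue instead
#$ -pe openmp 8        # request 8 slots in an assumed "openmp"
                       #   parallel environment
#$ -cwd                # run from the directory where qsub was invoked
#$ -o sample_job.out   # file for standard output
#$ -e sample_job.err   # file for standard error

# The actual work: run a program using the allocated slots.
./my_analysis --threads "$NSLOTS"
```

The script would be submitted with `qsub sample_job.sh` and monitored with `qstat`; jobs sent to a free queue draw on whatever capacity private-queue owners are not currently using.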