Broadcom Donations Support Research Computing

Cluster Computing

The Broadcom Corporation is a generous donor of computing equipment to UCI. Since its contribution of hundreds of rack-mounted compute servers in 2007, two further donations of compute servers and disk storage have benefited UCI and other UC campuses.

The recent Broadcom contributions have made several service improvements possible: research computing equipment for the Bren School of ICS, expanded disk quotas for faculty and staff email storage, augmentation of the MPC and BDUC compute clusters available to all campus researchers, and an upcoming application server. The application server will let researchers run software such as Matlab or SAS remotely through a remote-access tool (e.g., Windows Terminal Services). The server will be highlighted in a subsequent issue of IT News.

In these times of reduced state funding, Broadcom’s ongoing support of UCI is deeply appreciated.

Greenplanet: Cluster Computing for Physical Sciences

Physical Sciences, with support from IAT-NACS, has assembled a high-performance computing cluster for climate modeling and other computation-intensive research.

Called “Greenplanet,” the cluster comprises nodes purchased by faculty in Earth System Science (ESS), Chemistry, and Physics, and Math faculty are also expected to participate.  At this time, Greenplanet includes almost 900 CPUs and is still growing.

IAT provides secure, climate-controlled space in the Academic Data Center, system administration services delivered jointly with Physical Sciences IT staff, and consultation on code parallelization and optimization.

According to Assistant Professor Keith Moore of ESS, Greenplanet is “a flexible cluster, suitable for massively parallel complex computations (such as climate simulations), and for smaller-scale use on a single node as a workstation.”

A typical node features eight 64-bit Intel CPUs.  Greenplanet runs the Load Sharing Facility (LSF) for job management and the Lustre parallel file system for extremely high-performance access to the large datasets typical of climate modeling.  Two parallel-programming models are available: OpenMP for shared-memory parallelism among the CPUs within a node, and MPI for message passing between CPUs on different nodes.  Nodes are linked by a high-performance InfiniBand interconnect for high-speed communication.  Extensive instrumentation is available for tuning jobs to optimal execution speed and full use of the cluster's computational capacity.
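
To make the division of labor concrete, the sketch below shows the hybrid pattern described above: MPI carries data between nodes while OpenMP spreads work across the CPUs within each node.  It is a minimal illustrative example, not code from Greenplanet's documentation.

    /* Minimal hybrid sketch: one MPI rank per node, OpenMP threads
       across the CPUs within that node.  Illustrative only. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, nranks;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which rank am I?  */
        MPI_Comm_size(MPI_COMM_WORLD, &nranks); /* how many ranks?   */

        /* Fan out across the node's CPUs with OpenMP. */
        #pragma omp parallel
        printf("rank %d of %d: thread %d of %d\n",
               rank, nranks,
               omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }

On an LSF-managed cluster, a binary like this would typically be built with an MPI compiler wrapper (e.g., mpicc -fopenmp) and submitted with bsub; exact queue names and launch flags depend on the local configuration.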

Software includes the Climate Systems Modeling package, parallel Matlab, and quantum chemistry packages such as Gaussian and Turbomole.

New Computing Cluster

Last year, Broadcom graciously donated over 400 compute servers to UC Irvine. While the majority of the servers were distributed to campus researchers, NACS and the Bren School of Information and Computer Sciences have collaborated to bring a new general-purpose campus computing solution to researchers and graduate students at no charge.

Initially, the Broadcom Distributed Unified Cluster (BDUC) comprises 80 nodes: 40 nodes with 32-bit Intel processors and 40 nodes with 64-bit AMD processors. Broadcom is expected to donate newer servers over time, allowing nodes to be upgraded.  NACS and ICS plan to further expand the cluster as well, subject to available staff and Data Center resources.
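
Because BDUC mixes 32-bit and 64-bit nodes, programs that make assumptions about word size should verify them rather than hard-code them.  A minimal, hypothetical C sketch (not part of any BDUC documentation):

    /* Report the word size of the node a job lands on.  On a mixed
       32-/64-bit cluster, pointer and long sizes differ by node type. */
    #include <stdio.h>

    int main(void)
    {
        printf("pointer: %zu bytes, long: %zu bytes (%s node)\n",
               sizeof(void *), sizeof(long),
               sizeof(void *) == 8 ? "64-bit" : "32-bit");
        return 0;
    }

Jobs that require one architecture or the other can then be steered to the matching node type through the cluster's batch system.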

BDUC includes standard open-source compilers, debuggers, and libraries; in addition, the MATLAB Distributed Computing Engine (DCE) will soon be available.  In the near future, BDUC will offer priority queues for research groups that provide financial support or hardware to the cluster.

BDUC is now available to all faculty, staff, and graduate students with a UCInetID and password. To request an account, send an e-mail to bduc-request@uci.edu.  A new-user how-to guide is available on the NACS website: http://www.nacs.uci.edu/computing/bduc/newuser.html

Cluster Computing

NACS hosts and manages the Middle Performance Computing (MPC) “Beowulf” Cluster on behalf of campus researchers who need substantial computational power.

MPC comprises private nodes and shared nodes, including a part-time shared cluster using NACS PC lab systems. MPC systems feature a mix of architectures to provide high computational throughput.

A feature of the MPC service is the opportunity for researchers to join their own systems to the cluster. In exchange for system administration, housing, and 24/7 oversight provided by NACS, researchers allow 25% of their systems to be configured as part of a campus computational resource; the remaining 75% is configured as a cluster, or “queue”, dedicated to the owner. For example, a researcher contributing 16 nodes would see 4 join the campus pool and 12 remain in the owner's dedicated queue. Contributors may, of course, make use of systems designated for campus use.

Contributors also become voting members of the MPC Advisory Board. The purpose of the MPCAB is to advise NACS on the governance, policies, procedures, and technical aspects of the MPC cluster.

Researchers may request accounts on MPC (and other NACS resources) online. Any future changes that impact MPC users will be posted on the MPC website.

NACS also hosts the GradEA Beowulf Cluster for the exclusive use of UCI graduate students.

MPC web site:
http://www.nacs.uci.edu/computing/mpc/

MPC account requests:
http://www.nacs.uci.edu/rcs/resources.html

GradEA web site:
http://www.nacs.uci.edu/computing/gradea/