
Information Technology News Archive

1996 - 2017


Cluster Computing

High Performance Computing Cluster

December 7, 2012 by Allen Schiano

[Photo: Data Center]

OIT has been providing cooperative cluster computing services to UCI researchers for many years.  The service, which has at various times included MPC (“Medium Performance Computing”), BDUC (the “Broadcom Distributed Unified Cluster”), and even Greenplanet (a cluster hosted for the School of Physical Sciences), continues to evolve as technology changes.

With the support of Southern California Edison’s Strategic Energy Program (SEP), which offers grants to replace older computers with new, energy-efficient systems (something of a “cash for clunkers” for computers), along with contributions from the Office of Research, OIT has been able to upgrade some of the components of the shared-use computing cluster and has rechristened it HPC (“High Performance Computing”).  Further upgrades will take place over the coming year.

With MPC, individual researchers could add computing nodes to the cluster with the understanding that, in exchange for OIT providing the hosting environment and security, 25% of the computing capacity would be made available to the UCI research community.  In contrast, HPC uses advanced queuing and scheduling techniques developed by HPC system administrator Joseph Farran.  These techniques dynamically make unused capacity in a given researcher’s segment of the cluster available to others, resulting in sustained use of over 90% of the cluster’s massive computational capacity.  Researchers interested in participating in HPC should contact Joseph Farran at x4-5551.

Technical specifications of the upgraded nodes include:

  • 64-core AMD CPUs providing an aggregate of over 2000 cores
  • 8 Nvidia GPUs (4 Tesla, 4 Fermi)
  • 8.8TB RAM
  • QDR InfiniBand inter-node communication channel
  • 500TB storage in a Gluster distributed filesystem
  • GridEngine scheduler via 18 private/group queues and 9 free queues
  • CUDA development tools
  • licensed software including SAS, STATA, CLCBio, MATLAB, Mathematica
  • GNU, Intel, and PGI compilers, plus Eclipse and TotalView debuggers
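
To give a sense of how nodes like these are used, the sketch below is a minimal OpenMP program in C that spreads a simple array reduction across all cores of a single node.  It is a generic illustration rather than OIT-supplied code, and it assumes only a C compiler with OpenMP support, such as the GNU compiler listed above (built with "gcc -fopenmp").

    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Minimal OpenMP sketch: sum an array using every core on one node.
       Generic illustration only; not specific to the HPC cluster. */
    int main(void) {
        const long n = 10000000;          /* 10 million doubles, about 80 MB */
        double *x = malloc(n * sizeof *x);
        if (x == NULL) return 1;

        /* Initialize the array in parallel across the node's cores. */
        #pragma omp parallel for
        for (long i = 0; i < n; i++)
            x[i] = 0.5 * (double)i;

        /* Sum in parallel; OpenMP combines the per-thread partial sums. */
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; i++)
            sum += x[i];

        printf("threads available: %d, sum = %g\n", omp_get_max_threads(), sum);
        free(x);
        return 0;
    }

On a 64-core node the same binary simply runs with more threads; the thread count is normally set through the standard OMP_NUM_THREADS environment variable or by the batch scheduler.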

Filed Under: Cluster Computing, High Performance Computing, Research Support Tagged With: BDUC, Cluster, HPC, MPC

Greenplanet: Cluster Computing for Physical Sciences

July 22, 2009 by Francisco Lopez

[Photo: Greenplanet]

Physical Sciences, with support from IAT-NACS, has assembled a high-performance computing cluster for climate modeling and other computationally intensive research.

Called “Greenplanet,” the cluster comprises nodes purchased by faculty in Earth System Science (ESS), Chemistry, and Physics, and Mathematics faculty are expected to participate as well.  At this time, Greenplanet includes almost 900 CPUs and is still growing.

IAT provides secure, climate-controlled space in the Academic Data Center, system administration services (delivered jointly with Physical Sciences IT staff), and consultation on code parallelization and optimization.

According to Assistant Professor Keith Moore of ESS, Greenplanet is “a flexible cluster, suitable for massively parallel complex computations (such as climate simulations), and for smaller-scale use on a single node as a workstation.”

A typical node features eight 64-bit Intel CPUs.  Greenplanet uses the Load Sharing Facility (LSF) for job management and the Lustre parallel file system for extremely high-performance access to the large datasets typical of climate modeling.  Two parallel programming models are available for parallel code: OpenMP for communication among CPUs within a node, and MPI for communication between CPUs on different nodes.  Greenplanet also has a high-performance InfiniBand interconnect between nodes for high-speed communications.  Extensive instrumentation is available for tuning jobs for optimal execution speed and full use of the cluster’s computational capacity.
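
As a rough sketch of how these two models combine, the hybrid C program below uses MPI for communication between ranks placed on different nodes and OpenMP for the CPUs within each node.  It is an illustrative example rather than Greenplanet-specific code; the rank and thread counts are simply whatever the job scheduler assigns.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    /* Hybrid MPI + OpenMP sketch: MPI ranks on different nodes,
       OpenMP threads across the CPUs within each node. Illustrative only. */
    int main(int argc, char **argv) {
        int provided, rank, nranks;

        /* MPI handles inter-node communication (e.g. over InfiniBand). */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* OpenMP handles parallelism among the CPUs on this node. */
        #pragma omp parallel
        {
            printf("rank %d of %d, thread %d of %d\n",
                   rank, nranks, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

A program like this would typically be built with an MPI compiler wrapper (for example, "mpicc -fopenmp") and launched through the LSF job manager described above.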

Software includes the Climate Systems Modeling package, parallel MATLAB, and quantum chemistry packages such as Gaussian and Turbomole.

Filed Under: Academic Data Center, Cluster Computing, High Performance Computing, Research Support, System Administration Tagged With: Cluster Computing, High Performance Computing, Research Computing

New Computing Cluster

February 23, 2009 by Francisco Lopez

[Photo: Computer Cluster]

Last year, Broadcom graciously donated over 400 compute servers to UC Irvine. While the majority of the servers were distributed to campus researchers, NACS and the Bren School of Information and Computer Sciences have collaborated to bring a new general-purpose campus computing solution to researchers and graduate students at no charge.

Initially, the Broadcom Distributed Unified Cluster (BDUC) comprises 80 nodes: 40 nodes with 32-bit Intel processors and 40 nodes with 64-bit AMD processors.  Broadcom is expected to donate newer servers over time, allowing nodes to be upgraded.  NACS and ICS plan to further expand the cluster as well, subject to available staff and Data Center resources.

BDUC includes standard open-source compilers, debuggers, and libraries; in addition, the MATLAB Distributed Computing Engine (DCE) will soon be available.  In the near future, BDUC will offer priority queues for research groups that provide financial support or hardware to the cluster.

BDUC is now available to all faculty, staff, and graduate students using their UCInetID and password.  To request an account, send an e-mail to bduc-request@uci.edu.  A new-user how-to guide is available on the NACS website: http://www.nacs.uci.edu/computing/bduc/newuser.html.

Filed Under: Cluster Computing, High Performance Computing, Research Computing, Research Support Tagged With: Cluster Computing, Research Computing
