ZotPortal: Online Resources for Students


After an extensive campus-wide planning process, the student portal “ZotPortal” went live on April 27 of this year.  IAT-NACS worked with Student Affairs to design the high-reliability, high-performance system hardware, provides ongoing network and system administration services, and houses elements of ZotPortal in separate data centers.

Through ZotPortal, students can access academic and administrative information, connect to a Facebook account, subscribe to UCI campus news, student media, and entertainment feeds, check the UCI Libraries catalog, and even search for people and campus web sites from a single search box.

Students can arrange ZotPortal’s look and layout flexibly through a user-friendly drag-and-drop interface, subscribing to the particular information channels they want.

ZotPortal runs on hardware intended to provide maximal service continuity.  There are duplicate servers, connected through IAT’s DMRnet.  In the event one server becomes unavailable (say, due to a power failure), its twin automatically assumes all portal activity.  Within each physical server are many CPUs, configured as a flexible group of virtual servers so that ZotPortal can support very large numbers of simultaneous requests.  Data is stored on a disk cluster configured with Sun’s ZFS (Zettabyte File System), which provides both redundancy (data protection) and high-performance parallel access.

Greenplanet: Cluster Computing for Physical Sciences


Physical Sciences, with support from IAT-NACS, has assembled a high-performance computing cluster for climate modeling and other computation-intensive research.

Called “Greenplanet,” the cluster comprises nodes purchased by faculty in Earth System Science (ESS), Chemistry, and Physics; Mathematics faculty are also expected to participate.  At this time, Greenplanet includes almost 900 CPUs and is still growing.

IAT provides secure, climate-controlled space in the Academic Data Center, system administration services in partnership with Physical Sciences IT staff, and consultation on code parallelization and optimization.

According to Assistant Professor Keith Moore of ESS, Greenplanet is “a flexible cluster, suitable for massively parallel complex computations (such as climate simulations), and for smaller-scale use on a single node as a workstation.”

A typical node features eight 64-bit Intel CPUs.  Greenplanet runs the Load Sharing Facility (LSF) for job management and the Lustre parallel file system for extremely high-performance access to the large datasets typical of climate modeling.  Two parallel programming models are available: OpenMP for shared-memory communication among the CPUs within a node, and MPI for message passing between CPUs on different nodes.  Greenplanet also has a high-performance InfiniBand interconnect between nodes for high-speed communications.  Extensive instrumentation is available for tuning jobs for optimal execution speed and full use of the cluster’s computational capacity.

Software includes the Climate Systems Modeling package, parallel Matlab, and quantum chemistry packages such as Gaussian and Turbomole.

ESMF Open to Campus Researchers

UCI’s Earth System Modeling Facility (ESMF) offers accounts to all UCI researchers and students interested in high-performance computing. The ESMF presently consists of a cluster of 88 IBM Power4 CPUs, in seven 8-way SMP nodes and one 32-way SMP node, running AIX 5.1L. The VisualAge compilers fully support OpenMP and MPI. The computational environment is batch-oriented and is suitable for large-scale numerical simulations. The environment is similar to that found at many national supercomputer centers, but, we hope, with less bureaucracy. Instructions for obtaining an ESMF account are at


Idle ESMF CPU time is wasted capacity, and the goal is to minimize it. We have constructed a batch-queue environment that gives priority to the earth-system simulations that are the facility’s primary task and places other jobs in lower-priority (standby) queues. This allows other UCI researchers to benefit from idle CPU time without penalizing ESMF’s core users.
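The two-tier idea can be illustrated with a small sketch.  This is not ESMF’s actual queue configuration (which is not described here), and all names are hypothetical; it simply shows the rule the paragraph describes: a standby job is dispatched only when no primary job is waiting, and jobs within a tier run in submission order.

```python
# Illustrative two-tier batch queue: primary jobs always dispatch before
# standby jobs; within a tier, jobs dispatch first-in, first-out.
import heapq

PRIMARY, STANDBY = 0, 1   # lower number = higher priority

class BatchQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker: preserves FIFO order within a tier

    def submit(self, job, priority):
        heapq.heappush(self._heap, (priority, self._counter, job))
        self._counter += 1

    def next_job(self):
        # Pop the highest-priority (then oldest) waiting job, if any.
        return heapq.heappop(self._heap)[2] if self._heap else None

q = BatchQueue()
q.submit("other-research-job", STANDBY)
q.submit("earth-system-sim", PRIMARY)
q.submit("another-standby-job", STANDBY)

order = [q.next_job(), q.next_job(), q.next_job()]
# The primary simulation dispatches first even though it was submitted
# second; the standby jobs then run in the order they arrived.
```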

Grad Student Computing Cluster

NACS provides and supports various computing resources and services for the UCI community. One noteworthy resource, GradEA, has been developed for the exclusive use of UCI graduate students. The intent is to provide graduate students with access to high-speed CPUs, large data-storage capacity, and advanced software.

GradEA consists of 11 dual-CPU Intel Xeon 2.0 GHz nodes running the Red Hat Linux 8.0 operating system. The most recent hardware upgrade to GradEA occurred in January of this year, and more are planned for the coming year. Software available on GradEA includes Mathematica 4.2, MATLAB 6.5, IDL 5.4, SAS, IMSL, S+/R, the Portland Group compiler suite, MPICH (an implementation of MPI), and OpenPBS.

Programs on GradEA can be run in single- or multi-CPU mode; the cluster is interconnected by Gigabit Ethernet. Users also have access to 700 GB of disk-storage workspace.

By default, all graduate students have accounts on the GradEA cluster; try logging into gradea.uci.edu with your UCInetID and password. Further information is available at: http://www.nacs.uci.edu/computing/gradea

Upgrade of Convex C3840

Network & Academic Computing Services will soon upgrade the Convex C3840, the principal NACS numerical computation server, to a Hewlett-Packard Exemplar SPP2000 with 16 CPUs and 2 gigabytes of memory. The SPP2000 will deliver more than ten times the computing performance of the C3840 and, for the first time within NACS, will provide a comprehensive parallel computing environment for research and educational applications.

Major application software currently on the C3840 will also be available on the SPP2000, including MARC/MENTAT, Gaussian-94, GCG, and IMSL. In addition to these applications and compilers (Fortran 90, C, and C++), the new machine will have MPI (Message Passing Interface) and PVM (Parallel Virtual Machine), two widely used libraries for developing parallel applications based on message passing. The Global Shared Memory (GSM) model for developing parallel applications is also supported in Fortran 90. NACS will assist current C3840 users in migrating their user-developed codes to the new machine.

NACS will offer a variety of workshops and other educational opportunities on parallel computing with the SPP2000 in the coming months. Please check the NACS Web page for the latest information about the SPP2000. For additional information, send e-mail to NACS@UCI.EDU, or contact Donald Frederick of NACS, FREDERIK@UCI.EDU or (949) 824-3200.