UCI Lightpath: A High-Speed Network for Research

OIT has built a dedicated high-performance network infrastructure to meet the needs of researchers who must transfer large quantities of data within and beyond campus. This network, called UCI Lightpath, is funded by a grant from the National Science Foundation's Campus Cyberinfrastructure – Network Infrastructure Engineering program (NSF CC-NIE).

UCI Lightpath is composed of a Science DMZ with a 10 Gbps connection to the science and research community on the Internet, and a dedicated 10 Gbps network infrastructure on campus.  A science DMZ is a portion of the network that is designed so that the equipment, configuration, and security policies are optimized for high-performance scientific applications rather than for general-purpose business systems or “enterprise” computing.
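To illustrate why a dedicated 10 Gbps path matters for data-intensive research, here is a quick back-of-the-envelope estimate (the 100 TB dataset size is chosen arbitrarily for illustration, and protocol overhead is ignored):

```python
# Rough transfer-time estimate for a large dataset at different link speeds.
# Ignores protocol overhead, so real transfers are somewhat slower.

def transfer_hours(terabytes, gbps):
    """Hours to move `terabytes` of data over a `gbps` link at line rate."""
    bits = terabytes * 1e12 * 8          # TB -> bits (decimal units)
    seconds = bits / (gbps * 1e9)        # link speed in bits per second
    return seconds / 3600.0

if __name__ == "__main__":
    for speed in (1, 10):
        print(f"100 TB at {speed} Gbps: {transfer_hours(100, speed):.1f} hours")
    # 100 TB at 1 Gbps: 222.2 hours
    # 100 TB at 10 Gbps: 22.2 hours
```

At 1 Gbps a 100 TB dataset ties up a link for more than nine days; at 10 Gbps the same transfer finishes in under a day.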

The initial infrastructure covers eight campus locations, including the OIT Data Center, where computing clusters such as HPC and Greenplanet reside.  The UCI Lightpath network infrastructure is separate from the existing campus network (UCINet).  The diagram shows the current status of UCI Lightpath.

For more information on UCI Lightpath and its access policy, please refer to the OIT website: http://www.oit.uci.edu/network/lightpath/


UCI’s Internet Connections Upgraded

OIT recently improved UCI's connection to the Internet, increasing bandwidth from 6 Gbps (billion bits per second) to 20 Gbps. This upgrade enhances connections from the main campus, the UCI Medical Center, and the residential network, providing faster access both to the research Internet and to the general commodity Internet.

UCI connects to the Internet via CENIC, a regional network service provider serving California research and education organizations. CENIC provides two connections for the campus: CalREN-HPR and CalREN. CalREN-HPR supplies researchers with high-speed connectivity to other research networks, such as Internet2 and the Energy Sciences Network (ESnet). CalREN provides general commodity Internet service.

Last July, when OIT began work on the UCI Lightpath project, our CalREN-HPR network connection was upgraded from 1 Gbps to 10 Gbps with a 1 Gbps diversified backup link. (Lightpath is a dedicated science network funded by the National Science Foundation.) This February, our CalREN general Internet connection was upgraded from five 1 Gbps connections to a single 10 Gbps connection.

OIT is also working with CENIC to establish additional fiber infrastructure between UCI and UCLA, which will enable us to upgrade our diversified backup paths from 1 Gbps to higher bandwidth. Our goal is to upgrade the backup links of both CalREN-HPR and CalREN to 10 Gbps in the near future.

High Performance Computing Cluster


OIT has been providing cooperative cluster computing services to UCI researchers for many years.  Comprising at various times MPC (“Medium Performance Computing”), BDUC (“Broadcom Distributed and Unified Cluster”), and even Green Planet (a cluster hosted for the School of Physical Sciences), the service continues to evolve as technology changes.

With the support of Southern California Edison's Strategic Energy Program (SEP), which offers grants to replace older computers with new, energy-efficient systems (something of a "cash for clunkers" for computers), along with contributions from the Office of Research, OIT has been able to upgrade some components of the shared-use computing cluster and has rechristened it HPC ("High Performance Computing").  Further upgrades will take place over the coming year.

Under MPC, individual researchers could add computing nodes to the cluster with the understanding that, in exchange for OIT providing the environment and security, 25% of the computing capacity would be made available to the UCI research community.  In contrast, HPC uses advanced queuing and scheduling techniques developed by HPC system administrator Joseph Farran, which dynamically make unused capacity in a given researcher's segment of the cluster available to others.  This results in sustained use of over 90% of the cluster's massive computational capacity.  Researchers interested in participating in HPC should contact Joseph Farran at x4-5551.
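The article does not describe the actual Grid Engine configuration, but the owner-priority/backfill idea can be sketched conceptually: jobs from a node's owner always claim that owner's nodes first, and any nodes left idle pick up work from a shared free queue. All node and job names below are illustrative, not HPC's real setup.

```python
# Conceptual sketch of owner-priority scheduling with backfill.
# Node and job names are illustrative, not HPC's actual configuration.

def schedule(nodes_by_owner, owner_jobs, free_jobs):
    """Assign each node to its owner's jobs first; idle nodes backfill free jobs."""
    assignment = {}
    free_queue = list(free_jobs)
    for owner, nodes in nodes_by_owner.items():
        pending = list(owner_jobs.get(owner, []))
        for node in nodes:
            if pending:                      # owner work takes priority
                assignment[node] = pending.pop(0)
            elif free_queue:                 # otherwise backfill from the free queue
                assignment[node] = free_queue.pop(0)
    return assignment

if __name__ == "__main__":
    nodes = {"smith": ["n01", "n02", "n03"], "lee": ["n04"]}
    owner_jobs = {"smith": ["smith-job1"]}
    free_jobs = ["free-job1", "free-job2", "free-job3"]
    print(schedule(nodes, owner_jobs, free_jobs))
```

Because backfilled nodes are reclaimed whenever their owner submits work, owners lose nothing while otherwise-idle cycles are put to use, which is how sustained utilization above 90% becomes possible.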

Technical specifications of the upgraded nodes include:

  • 64-core AMD CPUs providing an aggregate of over 2,000 cores
  • 8 NVIDIA GPUs (4 Tesla, 4 Fermi)
  • 8.8 TB RAM
  • QDR InfiniBand inter-node communication channel
  • 500 TB of storage in a Gluster distributed filesystem
  • Grid Engine scheduler with 18 private/group queues and 9 free queues
  • CUDA development tools
  • licensed software including SAS, Stata, CLCBio, MATLAB, and Mathematica
  • GNU, Intel, and PGI compilers; Eclipse and TotalView debuggers

Broadcom Donations Support Research Computing


The Broadcom Corporation is a generous donor of computing equipment to UCI.  Following its contribution of hundreds of rack-mounted compute servers in 2007, two subsequent donations of compute servers and disk storage have benefited UCI and other UC campuses.

Among the service improvements made possible by the recent Broadcom contributions are research computing equipment for the Bren School of ICS, expansion of the space available for faculty and staff email storage (increased disk quotas), augmentation of the MPC and BDUC compute clusters available to all campus researchers, and an upcoming application server.  The server will allow the use of a remote-access tool (e.g. Windows Terminal) to run research software (e.g. Matlab or SAS) remotely, and will be highlighted in a subsequent issue of IT News.

In these times of reduced state funding, Broadcom’s ongoing support of UCI is deeply appreciated.

ZotPortal: Online Resources for Students


After an extensive campus-wide planning process, the student portal “ZotPortal” went live on April 27 of this year.  IAT-NACS worked with Student Affairs to design the high-reliability and high-performance system hardware, and provides ongoing network and system administration services, as well as housing elements of ZotPortal in separate data centers.

Through ZotPortal, students can access academic and administrative information, connect to a Facebook account, subscribe to UCI campus news, student media, and entertainment feeds, check the UCI Libraries catalog, and even search for people and campus web sites from a single search box.

Students can arrange ZotPortal’s look and layout flexibly through a user-friendly drag-and-drop interface, subscribing to the particular information channels they want.

ZotPortal runs on hardware intended to provide maximal service continuity.  There are duplicate servers, connected through IAT's DMRnet.  In the event one server becomes unavailable (say, due to a power failure), its twin automatically assumes all portal activity.  Within each physical server are many CPUs, configured as a flexible group of virtual servers so that ZotPortal can support very large numbers of simultaneous requests.  Data is stored on a disk cluster configured with Sun's ZFS (Zettabyte File System), which provides both redundancy (data protection) and high-performance parallel access.
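The article does not detail the failover mechanism itself. As a purely conceptual sketch of the active/standby idea behind the duplicate-server design (the hostnames and health probe here are hypothetical, not ZotPortal's actual implementation):

```python
# Conceptual active/standby failover selection, in the spirit of ZotPortal's
# duplicate-server design.  Hostnames and the health probe are illustrative only.

def pick_active(servers, is_healthy):
    """Return the first healthy server; traffic would be directed there."""
    for server in servers:
        if is_healthy(server):
            return server
    return None  # no healthy twin remains: total outage

if __name__ == "__main__":
    servers = ["portal-a.example.edu", "portal-b.example.edu"]
    down = {"portal-a.example.edu"}            # simulate a power failure on the primary
    active = pick_active(servers, lambda s: s not in down)
    print(active)                              # the twin takes over the portal traffic
```

In a real deployment this decision is typically made by a load balancer or cluster manager probing each server continuously, rather than by application code.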