OIT Data Center

During October, OIT, Facilities Management, and Design and Construction Services completed the process of expanding the OIT Data Center (OITDC) facility in Engineering Gateway.  The OITDC consolidates systems from the Law Building Data Center (LBDC) and saves the campus energy and money.  It also allows the space formerly occupied by the LBDC to be used to meet the needs of the Law School.

Focus has now turned to improving engineering aspects of the data center (power, cooling, and control systems).

OIT is also redesigning the housing for the cluster systems hosted in the OITDC (HPC and GreenPlanet) and planning expanded capacity for them.  These projects are expected to continue into 2014.

OIT Data Center

During October, OIT, Facilities Management, and Design and Construction Services will complete the process of expanding the OIT Data Center facility in Engineering Gateway.  When the work is done, the expanded data center will consolidate systems from the Law Building Data Center (LBDC) and save the campus energy and money.  It will also allow the space formerly occupied by the LBDC to be used to meet the needs of the Law School.

This process will require two interruptions to services.  The first, on October 6, will include:

  • A&BS websites including EH&S, Police, Facilities Management and others
  • Zotmail
  • Campus data warehouse
  • The HR Jobs site and other HR web applications (QuickReq, FastClass, etc.)
  • Campus Time Reporting System
  • The SNAP portal and services available through it
  • Campus cashiering, billing and credit card systems, including the Student Billing System, and DEFT for disbursement
  • PayQuest, PALweb, Equipment Management (EQS), and Permanent Budget (PBS)
  • All Graduate Division Applications
  • All Office of Research and ULAR Applications
  • FacServ Facilities ERP applications (work orders and billing), Facilities Tririga system, Facilities iPool, Fleet and Tiscor systems
  • All Kuali applications
  • Campus imaging and document storage applications (including Rapid Return, EROS, Exfiles)
  • Cascade Content Management System
  • OIT Confluence wiki
  • SAMS and “Administrative” LDAP

The second interruption will take place the weekend of October 20-21 and (in addition to the services listed above) will include:

  • University Advancement systems (including Advance) and file server
  • A&BS file server
  • All Design & Construction systems
  • All Parking & Transportation systems including MyCommute, Permit Registration System; main Parking website and associated services
  • Access to the UC Learning Center (www.uclc.uci.edu)

Services hosted in Engineering Gateway and other facilities will continue to operate as normal; this includes the campus network and telephone system, email services, EEE, Academic Personnel systems, and research compute clusters.

ZotPortal: Online Resources for Students

After an extensive campus-wide planning process, the student portal “ZotPortal” went live on April 27 of this year.  IAT-NACS worked with Student Affairs to design the high-reliability, high-performance system hardware, provides ongoing network and system administration services, and houses elements of ZotPortal in separate data centers.

Through ZotPortal, students can access academic and administrative information, connect to a Facebook account, subscribe to UCI campus news, student media, and entertainment feeds, check the UCI Libraries catalog, and even search for people and campus websites from a single search box.

Students can arrange ZotPortal’s look and layout flexibly through a user-friendly drag-and-drop interface, subscribing to the particular information channels they want.

ZotPortal runs on hardware intended to provide maximal service continuity.  There are duplicate servers, connected through IAT’s DMRnet.  In the event one server becomes unavailable (say, due to a power failure), its twin automatically assumes all portal activity.  Within each physical server are many CPUs, configured as a flexible group of virtual servers so that ZotPortal can support very large numbers of simultaneous requests.  Data is stored on a disk cluster running Sun’s ZFS (Zettabyte File System), which provides both redundancy (data protection) and high-performance parallel access.
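
The failover behavior can be pictured with a small sketch.  The C program below probes a pair of twin servers and uses whichever one answers.  The hostnames and port are hypothetical, and in the real deployment the switchover happens in the network infrastructure, invisibly to clients, rather than in client code.

    /* failover_probe.c -- a minimal sketch of failing over between
       twin servers.  Hostnames are invented for the example; the
       production failover is handled by the network layer.
       Build: cc failover_probe.c -o failover_probe */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    /* Try to open a TCP connection; return the socket, or -1 on failure. */
    static int try_connect(const char *host, const char *port) {
        struct addrinfo hints, *res, *p;
        int fd = -1;
        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, port, &hints, &res) != 0)
            return -1;
        for (p = res; p != NULL; p = p->ai_next) {
            fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
            if (fd < 0)
                continue;
            if (connect(fd, p->ai_addr, p->ai_addrlen) == 0)
                break;
            close(fd);
            fd = -1;
        }
        freeaddrinfo(res);
        return fd;
    }

    int main(void) {
        /* Twin portal servers in separate data centers (hypothetical names). */
        const char *hosts[] = { "portal-a.example.edu", "portal-b.example.edu" };
        for (int i = 0; i < 2; i++) {
            int fd = try_connect(hosts[i], "443");
            if (fd >= 0) {
                printf("serving from %s\n", hosts[i]);
                close(fd);
                return 0;
            }
            fprintf(stderr, "%s unreachable, failing over\n", hosts[i]);
        }
        fprintf(stderr, "both servers unreachable\n");
        return 1;
    }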

Greenplanet: Cluster Computing for Physical Sciences

Physical Sciences, with support from IAT-NACS, has assembled a high-performance computing cluster for climate modeling and other computation-intensive research.

Called “Greenplanet,” the cluster comprises nodes purchased by faculty in Earth System Science (ESS), Chemistry, and Physics, and Math faculty are expected to participate as well.  At this time, Greenplanet includes almost 900 CPUs and is still growing.

IAT provides secure, climate-controlled space in the Academic Data Center, system administration services in partnership with Physical Sciences IT staff, and consultation on code parallelization and optimization.

According to Assistant Professor Keith Moore of ESS, Greenplanet is “a flexible cluster, suitable for massively parallel complex computations (such as climate simulations), and for smaller-scale use on a single node as a workstation.”

A typical node features eight 64-bit Intel CPUs.  Greenplanet uses the Load Sharing Facility (LSF) for job management and the Lustre parallel file system for extremely high-performance access to the large datasets typical of climate modeling.  Two parallel programming models are available: OpenMP for shared-memory parallelism among the CPUs within a node, and MPI for message passing between CPUs on different nodes.  Greenplanet also has a high-performance InfiniBand interconnect for high-speed communication between nodes.  Extensive instrumentation is available for tuning jobs for optimal execution speed and full use of the cluster’s computational capacity.
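
As a concrete, if minimal, sketch of how the two models combine, the C program below starts MPI ranks that each fan out into OpenMP threads.  The build and launch commands in the comment are typical of MPI installations in general, not Greenplanet-specific.

    /* hybrid.c -- minimal MPI + OpenMP sketch.  Each MPI rank would
       typically run on its own node; OpenMP spreads work across the
       CPUs within that node.  Typical build and launch (site-specific):
         mpicc -fopenmp hybrid.c -o hybrid
         mpirun -np 4 ./hybrid                                        */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?       */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes total? */

        /* Fan out across this node's CPUs with shared-memory threads. */
        #pragma omp parallel
        {
            printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
                   rank, size, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

On an LSF-managed cluster such as Greenplanet, a job like this would normally be submitted through LSF’s bsub command rather than launched interactively.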

Software includes the Climate Systems Modeling package, parallel MATLAB, and quantum chemistry packages such as Gaussian and Turbomole.

DMRnet Keeps You Up

NACS and AdCom have jointly developed a network infrastructure for units with mission-critical computing services.  “DMRnet” (short for “Dual Modular Redundant Network”) allows you to create twin servers and to locate them separately in the NACS and AdCom Data Centers.

With this arrangement, an interruption in service (power, network, etc.) at one physical location automatically transfers services (fails over) to the server at the other location.  Users of your critical services will see no interruption.
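
A toy sketch of the idea, seen from the monitoring side: watch the active server and promote its twin after repeated failed health checks.  The server names are hypothetical and the health check is a stub; DMRnet’s actual failover logic lives in the network infrastructure, not in a program like this.

    /* dmr_monitor.c -- illustrative sketch of a dual-modular-redundant
       pair: probe the active server and promote the standby after
       several consecutive failed health checks. */
    #include <stdio.h>
    #include <unistd.h>

    /* Placeholder health check; a real monitor would open a TCP
       connection or issue an HTTP request to the server. */
    static int is_healthy(const char *server) {
        (void)server;
        return 1;  /* assume healthy in this sketch */
    }

    int main(void) {
        const char *active  = "server-nacs.example.edu";   /* hypothetical */
        const char *standby = "server-adcom.example.edu";  /* hypothetical */
        int failures = 0;

        for (;;) {
            if (is_healthy(active)) {
                failures = 0;
            } else if (++failures >= 3) {
                /* Fail over: the standby twin takes all traffic. */
                fprintf(stderr, "%s down; failing over to %s\n",
                        active, standby);
                const char *tmp = active;
                active = standby;
                standby = tmp;
                failures = 0;
            }
            sleep(5);  /* probe interval in seconds */
        }
        return 0;  /* not reached */
    }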

DMRnet was designed and developed in response to the need to keep UCI’s main website, www.uci.edu, continuously available.  The upcoming Student Portal will be the latest client of the DMRnet system.

NACS staff are available to consult with interested departments on the options, cost, and fitness of DMRnet for your particular need.  DMRnet is intended only for the most critical campus services.