
Information Technology News Archive

1996 - 2017


High Performance Computing

UCI Lightpath: a High-Speed Network for Research

June 29, 2015 by Jessica Yu

OIT has built a dedicated high-performance network infrastructure to help meet the needs of researchers who must transfer large quantities of data within and beyond campus. This network, called UCI Lightpath, is funded by a grant from the National Science Foundation's Campus Cyberinfrastructure – Network Infrastructure Engineering program (NSF CC-NIE).

UCI Lightpath is composed of a Science DMZ with a 10 Gbps connection to the science and research community on the Internet, and a dedicated 10 Gbps network infrastructure on campus.  A science DMZ is a portion of the network that is designed so that the equipment, configuration, and security policies are optimized for high-performance scientific applications rather than for general-purpose business systems or “enterprise” computing.

The initial infrastructure covers eight campus locations, including the OIT Data Center where computing clusters such as HPC and Greenplanet reside.  The UCI Lightpath network infrastructure is separate from the existing campus network (UCINet).  The diagram shows the current status of UCI Lightpath.

For more information on UCI Lightpath and its access policy, please refer to the OIT website: http://www.oit.uci.edu/network/lightpath/

 

Filed Under: About OIT, High Performance Computing, Network, Research Computing, Research Support Tagged With: High Speed Network, LightPath, Research Computing

Greenplanet: Cluster Computing for Physical Sciences

July 22, 2009 by Francisco Lopez


Physical Sciences, with support from IAT-NACS, has assembled a high-performance computing cluster for climate modeling and other computationally intensive research.

Called “Greenplanet,” the cluster comprises nodes purchased by faculty in Earth Systems Sciences (ESS), Chemistry, and Physics, and it is expected that Math faculty will also participate.  At this time, Greenplanet includes almost 900 CPUs and is still growing.

IAT provides secure, climate-controlled space in the Academic Data Center, system administration services in partnership with Physical Sciences IT staff, and consultation on code parallelization and optimization.

According to Assistant Professor Keith Moore of ESS, Greenplanet is “a flexible cluster, suitable for massively parallel complex computations (such as climate simulations), and for smaller-scale use on a single node as a workstation.”

A typical node features eight 64-bit Intel CPUs.  Greenplanet uses the Load Sharing Facility (LSF) for job management and the Lustre parallel file system for extremely high-performance access to the large datasets typical of climate modeling.  Two parallel programming interfaces are available: OpenMP for shared-memory parallelism among the CPUs on a node, and MPI for message passing between CPUs on different nodes.  Greenplanet also has a high-performance InfiniBand interconnect between nodes for high-speed communication.  Extensive instrumentation is available for tuning jobs for optimal execution speed and full use of the cluster's computational capacity.
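As a rough illustration of the message-passing model mentioned above, the minimal sketch below distributes a sum across MPI ranks using the mpi4py Python bindings. This is a hypothetical example, not Greenplanet-specific code; it assumes mpi4py and an MPI launcher such as mpirun are available in your environment.

    # Minimal MPI sketch (assumes the mpi4py package and an MPI launcher
    # such as mpirun; not specific to Greenplanet's configuration).
    # Each rank sums a disjoint slice of 0..n-1; the partial sums are
    # reduced back to rank 0, which prints the total.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    n = 1_000_000
    partial = sum(range(rank, n, size))

    total = comm.reduce(partial, op=MPI.SUM, root=0)
    if rank == 0:
        print("total =", total)

Launched with, for example, "mpirun -np 16 python sum_demo.py" (a placeholder file name), the same code runs unchanged whether the 16 ranks land on one node or several; on Greenplanet such a job would typically be submitted through LSF rather than run interactively.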

Software includes the Climate Systems Modeling package, parallel Matlab, and quantum chemistry packages such as Gaussian and Turbomole.

Filed Under: Academic Data Center, Cluster Computing, High Performance Computing, Research Support, System Administration Tagged With: Cluster Computing, High Performance Computing, Research Computing

New Computing Cluster

February 23, 2009 by Francisco Lopez


Last year, Broadcom graciously donated over 400 compute servers to UC Irvine. While the majority of the servers were distributed to campus researchers, NACS and the Bren School of Information and Computer Sciences have collaborated to bring a new general-purpose campus computing solution to researchers and graduate students at no charge.

Initially, the Broadcom Distributed Unified Cluster (BDUC) comprises 80 nodes: 40 with 32-bit Intel processors and 40 with 64-bit AMD processors. Broadcom is expected to donate newer servers over time, allowing nodes to be upgraded.  NACS and ICS plan to expand the cluster further as well, subject to available staff and Data Center resources.

BDUC includes standard open-source compilers, debuggers, and libraries; in addition, the MATLAB Distributed Computing Engine (DCE) will soon be available.  In the near future, BDUC will offer priority queues for research groups that provide financial support or hardware to the cluster.

BDUC is now available to all faculty, staff, and graduate students with a UCInetID and password. To request an account, send an e-mail to bduc-request@uci.edu.  A new-user how-to guide is available on the NACS website: http://www.nacs.uci.edu/computing/bduc/newuser.html.

Filed Under: Cluster Computing, High Performance Computing, Research Computing, Research Support Tagged With: Cluster Computing, Research Computing

Moving Bulk Data

January 23, 2009 by Harry Mangalam


Data transfer is a routine activity for most faculty, whether it’s sharing research data with colleagues, downloading research databases, or backing up vital data.  When the volume of data you’re transferring is in the tens or hundreds of megabytes, any tool can get the job done.  When you have gigabytes, or tens of gigabytes of data to move, more strategy is called for.

The tool and strategy you should use depends on the kind of data you have, the size of the data, whether you need to do the transfer once or repeatedly, and the computer and tools you’re most comfortable with.  Some ideas are outlined below, but NACS’s Research Computing Support maintains a detailed discussion with links to sites from which you can get data transfer tools.

Two basic strategies can reduce the actual volume of data you need to transfer: compression and synchronization.  Unless your data is already in a compressed form (say, MP3 files), compression can save a great deal of time and network capacity.  Many transfer tools can even do on-the-fly compression.  If your files contain sensitive information, you may wish to consider encrypting the data you’re transferring, although this imposes a small time penalty.
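For example, a gzip-compressed tar archive can be built with nothing more than the Python standard library before you hand the file to your transfer tool of choice. The directory and file names below are placeholders.

    # Hypothetical example: bundle a data directory into a gzip-compressed
    # tarball before transfer (Python standard library only).
    import tarfile

    with tarfile.open("results.tar.gz", "w:gz") as tar:
        tar.add("simulation_output")  # placeholder directory name

    # Already-compressed data (MP3, JPEG, and similar formats) will not
    # shrink much further, so skip this step for such files.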

The second strategy, particularly when you’re regularly moving the same data, is to use a synchronization tool that recognizes that only part of your data is new and needs to be transferred.  This can be particularly convenient if you have an entire directory tree you wish to send over the network.
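rsync is one widely used synchronization tool: it walks the directory tree and sends only the files (and portions of files) that have changed since the last run. The sketch below simply wraps an rsync call from Python; the host name and paths are placeholders.

    # Hypothetical example: mirror a local directory tree to a remote host,
    # transferring only data that has changed since the last run.
    import subprocess

    subprocess.run(
        [
            "rsync",
            "-az",        # archive mode, with compression in transit
            "--partial",  # keep partial files if the transfer is interrupted
            "simulation_output/",
            "user@remote.example.edu:/data/simulation_output/",
        ],
        check=True,  # raise an error if rsync exits non-zero
    )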

A final technique which might apply in some cases is to make the best possible use of the network, either by setting up multiple parallel data-transfer streams, or even creating a special-purpose GridFTP node.  RCS staff can help you analyze your data transfer needs, choose a method, and set up your system.
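As a very simple form of the parallel-stream idea, the hypothetical sketch below copies several subdirectories at once, one scp process per stream; the subdirectory names and destination are placeholders. A GridFTP node is a heavier-weight solution that RCS staff would help set up rather than something scripted this way.

    # Hypothetical example: run several transfers side by side, one scp
    # process per subdirectory, to make fuller use of the network path.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    SUBDIRS = ["run01", "run02", "run03", "run04"]  # placeholder names
    DEST = "user@remote.example.edu:/data/"         # placeholder destination

    def copy(subdir):
        subprocess.run(["scp", "-r", subdir, DEST], check=True)

    with ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(copy, SUBDIRS))  # list() surfaces any scp failure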

RCS staff will also coordinate with NACS Network Engineers to ensure they are aware of research data transfer needs in various campus locations.  This will help inform future network upgrade plans.  In addition, in a few cases, it may be possible to upgrade network connections to higher speed to support critical research requirements.

Filed Under: High Performance Computing, Research Computing, Research Support Tagged With: data transfer, Research Computing

High Speed Academic Networking

November 10, 2008 by Jessica Yu


The Corporation for Education Network Initiatives in California (CENIC, http://www.cenic.org/) and UC have been discussing a possible new network infrastructure.  The intent is to facilitate ad-hoc, point-to-point, gigabit research network connections among UC campuses and other institutions (including Stanford and USC) connected to CENIC’s High Performance Research (HPR) Network.

This new infrastructure would parallel the existing production network links and could provide two distinct services: dedicated, low-latency bandwidth to researchers’ labs for special applications, and optical connections for network protocol development or similar activities.

A Zotmail recently went out to all faculty to identify those with needs in this area.  Faculty input is sought to guide NACS on how to proceed and at what priority relative to other network needs. For more information, please join the discussion mailing list: high-speed-networking@uci.edu.

Meanwhile, if you transfer research data sets over the network and the speed of doing so is impeding your work, we want to hear from you.  Please contact NACS at x42222 or email nacs@uci.edu.

Filed Under: High Performance Computing, Network Planning & Consulting Tagged With: Networking, Research Support

