Computing Centre

The interim Computing Centre at DMSC is a Danish in-kind contribution to ESS.

The Computing Centre is currently in both an operational and a design phase, and its mission is therefore twofold:

  • To provide IT services, in particular scientific computing services, to the ESS divisions during the planning and construction phases of the ESS facility in Lund;
  • To develop expertise in areas relevant to the future data management and software centre.

The Computing Centre provides a number of services, which can be split into two different categories.

The primary activity is the operation of the high performance computing cluster, which is used by scientists who rely on computer modelling to support the design of the ESS facility. It consists of two main parts:

  1. High performance scientific computing cluster

  2. High performance storage and backup system

The current activities also include more traditional software development support and infrastructure services:

  • Environments for software development
  • Data centre IT infrastructure
  • Hosting of various IT pilot projects
  • Technical user support

Should you, as an employee of ESS, require access to these services, or wish to discuss other services which we are able to provide, please e-mail us.

The cluster consists of the following components:

  • 65 compute nodes with a total of 892 cores
  • 146 TB of storage, of which 66 TB is fast parallel storage
  • A management network, used for maintenance
  • An InfiniBand network, providing the interconnect between the nodes and between the nodes and the storage system
  • A batch system for handling jobs

Nodes

The nodes of the cluster are divided into four queues, each made up of different hardware (an example batch script targeting a specific queue is sketched after the queue descriptions below):

The Express Queue

The single compute node of the express queue is a DELL PowerEdge 410 system with the following specifications:

  • Processor: 2x Intel Xeon 2.66 GHz with six cores (only 8 cores are available for the express queue)
  • Memory: 48 GB (6x 8 GB dual-rank modules)
  • System disk: 2x 300 GB SAS 6 Gbps
  • Ethernet: Gigabit Ethernet controller, used for management
  • InfiniBand: QLogic HPA InfiniBand controller, used for production interconnect

The Short Queue

The 4 compute nodes of the short queue are Lenovo x3550 M5 systems with the following specifications:

  • Processor: 2x Intel Xeon E5-2630 v3 2.40 GHz with 8 cores each
  • Memory: 128 GB
  • System disk: 1x 300 GB SAS
  • Ethernet: Gigabit Ethernet controller, used for management
  • InfiniBand: QLogic HPA InfiniBand controller, used for production interconnect

The Long Queue

The 37 compute nodes of the long queue are DELL PowerEdge 410 systems with the following specifications:

  • Processor: 2x Intel Xeon 2.66 GHz with six cores
  • Memory: 48 GB (6x 8 GB dual-rank modules)
  • System disk: 2x 300 GB SAS 6 Gbps
  • Ethernet: Gigabit Ethernet controller, used for management
  • InfiniBand: QLogic HPA InfiniBand controller, used for production interconnect

The Very Long Queue

The 24 compute nodes of the very long queue are DELL PowerEdge C8220 systems with the following specifications:

  • Processor: 1x Intel Xeon E5-2650 2.00 GHz with 8 cores
  • Memory: 64 GB
  • System disk: 1x 100 GB SAS 6 Gbps
  • Ethernet: Gigabit Ethernet controller, used for management
  • InfiniBand: QLogic HPA InfiniBand controller, used for production interconnect
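
The queues correspond to partitions in the SLURM batch system described under Software below. As a minimal sketch, assuming the partitions are simply named after the queues (the actual partition names, limits and defaults are site-specific), a job targeting the long queue could be submitted with a batch script like the following; SLURM accepts a Python file as a batch script as long as the #SBATCH directives appear as leading comments:

    #!/usr/bin/env python
    #SBATCH --job-name=demo         # name shown in the queue listing
    #SBATCH --partition=long        # target queue; "long" is an assumed partition name
    #SBATCH --nodes=1               # one compute node
    #SBATCH --ntasks=12             # up to 12 cores on a long-queue node (2x six-core CPUs)
    #SBATCH --time=24:00:00         # requested wall-clock time; site limits may differ

    # Minimal payload: report where SLURM placed the job.
    import os
    import socket

    print("Running on host: " + socket.gethostname())
    print("SLURM job id: " + str(os.environ.get("SLURM_JOB_ID")))
    print("Cores on this node: " + str(os.environ.get("SLURM_CPUS_ON_NODE")))

Such a script would be submitted with sbatch and monitored with squeue; sinfo lists the partitions that are actually configured on the cluster.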

Storage

The cluster's storage systems provide a total of 146 TB, split across two systems:

  • 66 TB fast parallel storage accessible through the Lustre distributed file system. The storage servers are connected to the InfiniBand network, and the file system is mounted on all compute nodes.
  • 80 TB of slower ZFS-based storage, available to the nodes via NFS over Ethernet.
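
In practice this means that bulk job input and output belong on the fast Lustre file system, while the NFS-mounted ZFS space suits smaller and less performance-critical data. The following is a minimal sketch of that staging pattern, assuming the hypothetical mount points /lustre/scratch and /nfs/home (the real paths on the cluster are site-specific):

    import os
    import shutil

    # Hypothetical paths; the actual mount points are cluster-specific.
    LUSTRE_RUN_DIR = "/lustre/scratch/myuser/run01"
    NFS_HOME = "/nfs/home/myuser"

    def stage_in():
        """Copy input data onto the fast parallel Lustre file system before the run."""
        if not os.path.isdir(LUSTRE_RUN_DIR):
            os.makedirs(LUSTRE_RUN_DIR)
        shutil.copy(os.path.join(NFS_HOME, "input.dat"), LUSTRE_RUN_DIR)

    def stage_out():
        """Copy only the final results back to the slower NFS-based storage."""
        shutil.copy(os.path.join(LUSTRE_RUN_DIR, "results.dat"), NFS_HOME)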

Software

The cluster has a suite of software available for use in scientific computation, as well as a batch system for controlling the cluster resources.

Currently the following software is installed on the cluster:

  • Simple Linux Utility for Resource Management (SLURM) as the batch system.
  • McStas in various versions
  • Mantid versions 3.4.0, 3.5.1 and 3.6.0
  • Various scientific software packages, such as scipy, matplotlib, numpy, CERNlib, CERN ROOT and Boost
  • MPI libraries for parallel execution (a usage sketch follows after this list):
    1. Open MPI 1.4.3
    2. MVAPICH2-1.6-qlc for kernel-supported MPI over InfiniBand
    3. MPICH 1.2.6
  • GCC compilers in versions 4.4.5 (system compiler), 4.6.2, 4.9.2 and 5.4.0
  • Intel compiler version 12.1.2
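
To illustrate how the MPI libraries are used for parallel execution, here is a minimal sketch in Python. It assumes the mpi4py package is available on top of one of the MPI libraries listed above (mpi4py itself is not in the list, so treat it as an assumption); the same pattern applies to C or Fortran codes linked against Open MPI or MVAPICH2:

    # hello_mpi.py - minimal MPI sketch; assumes mpi4py is installed.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's id within the MPI job
    size = comm.Get_size()   # total number of MPI processes

    # Each rank contributes its rank number; rank 0 collects the sum.
    total = comm.reduce(rank, op=MPI.SUM, root=0)

    if rank == 0:
        print("%d ranks, sum of ranks = %d" % (size, total))

Such a script would typically be launched inside a batch job with, for example, mpirun -np 24 python hello_mpi.py, so that the processes communicate over the InfiniBand interconnect.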

The Computing Centre provides a number of vital software services for the development of the ESS.

In addition to running the high performance cluster, the Computing Centre also provides a number of infrastructure services for the use of ESS staff. If you require any of these services please e-mail us.

Development

Software development is a vital part of the development of the European Spallation Source. Developers working on the ESS project are based in a number of different countries, so it is essential that they have the right software at their disposal for collaboration. The Computing Centre therefore hosts a number of services that enable this collaboration to take place successfully:

  • Repository managers (Gitlab)
  • Continuous integration (Jenkins)
  • Build servers for Linux, Windows and Mac OS X

Communication

Internal Services

Finally, the Computing Centre hosts a number of internal services that are required to run a modern data centre. These include:

  • A backup system located at an offsite location
  • Support software (Request Tracker)
  • Virtualization system (oVirt)
  • Foreman and Puppet for deployment and provisioning of servers and services
  • Monitoring (using Monit, Icinga and OSSEC)
  • High-performance and resilient storage systems (Lustre and ZFS)
  • DNS
  • LDAP for user management
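
As an illustration of the LDAP-based user management listed above, a directory lookup against such a server could look like the following sketch; the ldap3 Python package, the server name and the directory layout are all assumptions, not a description of the actual setup:

    from ldap3 import Server, Connection, ALL

    # Hypothetical server and distinguished names; the real directory layout is internal.
    server = Server("ldap.example.org", get_info=ALL)
    conn = Connection(server,
                      user="uid=jdoe,ou=people,dc=example,dc=org",
                      password="secret",
                      auto_bind=True)

    # Look up a user's common name and e-mail address.
    conn.search("ou=people,dc=example,dc=org", "(uid=jdoe)",
                attributes=["cn", "mail"])
    for entry in conn.entries:
        print(entry.cn, entry.mail)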

The interim Computing Centre is hosted at the Niels Bohr Institute, University of Copenhagen.

The ESS DMSC hardware is co-located with the Danish Centre for Scientific Computing at the H. C. Ørsted Institute, University of Copenhagen.