
The Computing Centre at DMSC
The Computing Centre is currently in both an operational and a design phase, and its mission is therefore twofold:

- To provide IT services, in particular scientific computing services, to the ESS divisions during the planning and construction phases of the ESS facility in Lund;
- To develop, design and build expertise and systems in areas relevant to the future Data Management and Software Centre.
The Computing Centre is located in Copenhagen at the Data Management and Software Centre (DMSC). The ESS DMSC hardware is housed in an on-site data centre facility at the DMSC premises, built and operated by the DMSC.
The cluster consists of the following components:
- 196 compute nodes with a total of 2512 cores
- 466 TB of storage
- A management network, used for maintenance
- An InfiniBand network (QDR and HDR), providing the interconnect between the nodes and between the nodes and the storage system
- A batch system for handling jobs
The nodes of the cluster are divided into several queues, each of which is made up of different hardware. The node types are as follows:
The first group of 4 compute nodes are Lenovo x3550 M5 systems with the following specifications:
- Processor: 2x Intel Xeon E5-2630 v3, 2.40 GHz, with 8 cores each
- Memory: 128 GB
- System disk: 1x 300 GB SAS
- Ethernet: Gigabit Ethernet controller, used for management
- InfiniBand: QLogic InfiniBand HCA, used for the production interconnect
The second group of 24 compute nodes are DELL PowerEdge C8220 systems with the following specifications:
- Processor: 1x Intel Xeon E5-2650, 2.00 GHz, with 8 cores
- Memory: 64 GB
- System disk: 1x 100 GB SAS (6 Gbps)
- Ethernet: Gigabit Ethernet controller, used for management
- InfiniBand: QLogic InfiniBand HCA, used for the production interconnect
The third group of 24 compute nodes are DELL PowerEdge FC430 systems with the following specifications:
- Processor: 2x Intel Xeon E5-2680 v4, 2.40 GHz, with 14 cores each
- Memory: 132 GB
- System disk: 1x 100 GB SAS (6 Gbps)
- Ethernet: 10 Gb/s Ethernet controller, used for management
- InfiniBand: QLogic InfiniBand HCA, used for the production interconnect
The cluster's storage systems provide a total of 466 TB on a mix of Lustre and ZFS file systems. In addition, a 110 TB off-site backup system exists.
The cluster has a suite of software available for use in scientific computation, as well as a batch system for controlling the cluster resources.
Currently the following software is installed on the cluster:
- Simple Linux Utility for Resource Management (SLURM) as the batch system.
- McStas in various versions
- Mantid versions 3.4.0, 3.5.1 and 3.6.0
- Various scientific software packages, such as NumPy, SciPy, Matplotlib, CERNLIB, CERN ROOT and Boost
- MPI libraries for parallel execution (a minimal example is sketched after this list):
  - Open MPI versions 3.0 and 4.0
  - MVAPICH2 1.6-qlc for kernel-supported MPI over InfiniBand
  - MPICH 1.2.6
- GCC compilers in various versions
- Intel compiler versions 12.1.2 and 17.1
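
As an illustration of how the MPI libraries, compilers and batch system fit together, below is a minimal MPI program of the kind that could be built with the GCC or Intel compilers and one of the MPI stacks listed above and then run under SLURM. This is only a sketch for orientation, not part of the cluster documentation; the file name and the exact build and launch commands are assumptions and may differ on the cluster.

```c
/* hello_mpi.c - minimal MPI "hello world" sketch (illustrative only).
 * Each rank reports which node of the cluster it is running on. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank = 0, size = 0, name_len = 0;
    char node_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                       /* start the MPI runtime     */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);         /* this process's rank       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);         /* total number of ranks     */
    MPI_Get_processor_name(node_name, &name_len); /* name of the host node     */

    printf("Rank %d of %d running on %s\n", rank, size, node_name);

    MPI_Finalize();                               /* shut down the MPI runtime */
    return 0;
}
```

With one of the listed MPI stacks available, such a program would typically be compiled with the stack's mpicc wrapper and launched across nodes from within a SLURM batch script using srun or mpirun; the specific environment modules and queue names to use on this cluster are not covered here.
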
The Computing Centre provides a number of vital software services for the development of the ESS.
In addition to running the high-performance cluster, the Computing Centre also provides a number of infrastructure services for the use of ESS staff. If you require any of these services, please e-mail us.
Software development is a central part of the ESS project. Developers working on the project are based in a number of different countries, so it is vital that they have the right tools at their disposal for collaboration. The Computing Centre therefore hosts a number of services that enable this collaboration:
- Repository managers (Gitlab)
- Continuous integration (Jenkins)
- Build servers for Linux, Windows and Mac OS X
Finally, the Computing Centre hosts a number of internal services that are required to run a modern data centre. These include:
- A backup system located at an off-site location.
- Support software (Request Tracker)
- Virtualization system (oVirt)
- Foreman and Puppet for deployment and provisioning of servers and services.
- Monitoring (using Monit, Icinga and OSSEC)
- High performance and resilient storage systems (Lustre and ZFS)
- DNS
- LDAP for user management