
HPC Newsletter 06/15


Dear colleagues,

Welcome to our 6th newsletter of 2015. The procurement process for the bwForCluster ENM is well on track: we have just completed the first round of negotiations with the vendors. Meanwhile, ENM's sibling, the bwForCluster MLS&WISO in Mannheim/Heidelberg, is about to start operation and will provide additional HPC resources to researchers in Molecular Life Science, Economics and Social Sciences.

With best regards,

Your HPC Team, Rechenzentrum, Universität Freiburg

Table of Contents

Upcoming Events and Important Dates

bwForCluster ENM procurement

NEMO status

bwForCluster MLS&WISO about to start

bwUniCluster status

bwForCluster ENM governance

Construction works in the Rechenzentrum

Towards 100GbE BelWue interconnect

High-performance I/O in storage

Publications

Upcoming Events and Important Dates

26.10.2015 - 28.10.2015  GPU Programming using CUDA (2015-CUDA (2)), HLRS
28.10.2015               Introduction to bwUniCluster, HLRS
05.11.2015 - 06.11.2015  Scientific Visualization (2015-VIS2), HLRS
07.12.2015 - 11.12.2015  Fortran for Scientific Computing, HLRS


bwForCluster ENM procurement

The procurement process for the bwForCluster ENM is well on track. After receiving and evaluating the first indicative offers from the vendors, we invited them to a first round of negotiations. Each vendor was given two hours to present its solution and answer our questions. The discussions were very fruitful and provided valuable input for clarifying and improving our hardware and configuration specification. The rules of the procurement process do not allow us to disclose more detailed information at this point. However, we are currently resolving the remaining open questions and working towards a final specification. We are optimistic that the final result will fully meet the expectations of our scientific communities.

NEMO status

The preliminary bwForCluster NEMO in Freiburg saw fluctuating load over the last couple of weeks, including phases of fairly low usage. NEMO will be kept operational until the bwForCluster ENM starts production; to this end, we have secured an extensive collection of spare parts. The software stack is being continuously extended according to your requirements. Please note that NEMO already uses the software environment of the forthcoming bwForCluster ENM, so every additional software package and every improvement to its configuration will be available immediately once the more powerful new hardware arrives.

bwForCluster MLS&WISO about to start

In Mannheim and Heidelberg, the combined bwForCluster MLS&WISO is almost ready to start. It is dedicated to research in Molecular Life Science, Economics and Social Sciences, as well as to scientific computing in methods development. In accordance with the state-wide HPC strategy (bwHPC), its resources are available to all scientists in Baden-Württemberg working in these fields.

The production part of the bwForCluster MLS&WISO offers a total of 616 compute nodes, distributed between the two sites in Mannheim and Heidelberg. The two parts of the cluster are linked via a special InfiniBand interconnect that aggregates four 40 Gbit/s links into a single 160 Gbit/s connection bridging the 28 km distance. Latency is only slightly above the hard limit set by the speed of light, effectively merging the two parts into a single high-performance computing resource.
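As a back-of-the-envelope check on the latency claim, the following minimal Python sketch computes the one-way propagation delay over the 28 km link. The refractive index of roughly 1.47 is our assumption for standard single-mode fiber, not a figure from the vendors:

    # One-way propagation delay over the 28 km Mannheim-Heidelberg link.
    # Assumption: standard single-mode fiber with refractive index ~1.47.
    C_VACUUM = 299792458.0      # speed of light in vacuum, m/s
    N_FIBER = 1.47              # assumed refractive index of the fiber
    DISTANCE_M = 28000.0        # link length taken from the text above

    t_vacuum_us = DISTANCE_M / C_VACUUM * 1e6
    t_fiber_us = DISTANCE_M / (C_VACUUM / N_FIBER) * 1e6

    print("vacuum limit: %.0f microseconds one-way" % t_vacuum_us)  # ~93 us
    print("in fiber:     %.0f microseconds one-way" % t_fiber_us)   # ~137 us

Light in glass travels at roughly two thirds of its vacuum speed, and switching adds further overhead, so an observed latency in the low hundreds of microseconds is indeed close to the physical bound.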

For more technical details, and more importantly, for information on how to access the resources once they become generally available, please take a look at the bwHPC Wiki.

bwUniCluster status 

We would like to remind you that the high-performance computing resource bwUniCluster (hosted by KIT in Karlsruhe) is available to all students, scientists and employees of the University of Freiburg without any formal project proposal. For those already using the bwUniCluster, we are always interested in feedback on whether it meets your expectations and on the current queuing times of your jobs. For information on access to the bwUniCluster and usage statistics for the past 12 months, please take a look at http://www.hpc.uni-freiburg.de/bwunicluster.

bwForCluster ENM governance 

In mid-September the Computing Center of the University of Freiburg hosted the ZKI Autumn Meeting, a conference attended by nearly 300 senior staff members of computing centers from all over Germany. The HPC group within the eScience department of the Computing Center, together with colleagues from Mannheim, organized a half-day workshop on “Governance issues in cooperations” in the afternoon before the official start of the main conference. We used this platform for the exchange of ideas to refine the governance structures for the bwForCluster ENM. In preparation for the workshop, we held numerous discussions with members of our scientific communities. We face the challenge of satisfying a broad range of communities with diverse requirements and expectations. In addition, we need to consider the interests of the shareholders who contributed to the overall investment.

Construction works in the Rechenzentrum

“Die Mauer muss weg!” (“The wall must go!”)

The wall that separated machine halls IIa and IIb has been torn down. Not only is this aesthetically more pleasing, it also allows us to implement consolidated concepts for cooling, power supply and access control. A few minor construction tasks remain to be completed, but in principle the infrastructure to receive the new bwForCluster ENM is already in place.

Towards 100GbE BelWue interconnect

After a long and strenuous evaluation period, the Computing Center acquired a new border router. The new router will allow transfers of up to 100 Gbit/s in the future, enabling fast access to the bwForCluster ENM from the outside as well as fast access to data hosted at external sites. Funds have been provided to purchase a module with multiple 40 Gbit/s ports, allowing the cluster to be connected at 80 Gbit/s to the core switch and to the storage components of the BFG cluster. To make full use of the new bandwidth, our BelWue peering sites need to upgrade their uplinks as well, so increasing the bandwidth will be a gradual process.
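To put these figures into perspective, here is a small sketch of raw transfer times at the bandwidths mentioned above. The 10 TB dataset size is an arbitrary example, and protocol overhead is ignored, so the results are lower bounds:

    # Lower-bound transfer times at various link bandwidths.
    # The 10 TB dataset is an illustrative assumption; protocol and
    # file system overhead are deliberately ignored.
    def transfer_time_minutes(data_bytes, bandwidth_gbit_s):
        return data_bytes * 8 / (bandwidth_gbit_s * 1e9) / 60

    dataset_bytes = 10e12  # a hypothetical 10 TB dataset
    for gbit in (10, 40, 80, 100):
        minutes = transfer_time_minutes(dataset_bytes, gbit)
        print("%3d Gbit/s: %6.1f minutes" % (gbit, minutes))

At 10 Gbit/s the example dataset needs over two hours; at 80 Gbit/s it moves in well under twenty minutes.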

High-performance I/O in storage

Mass storage still mostly relies on traditional spinning disks. They are usually deployed in massively parallel configurations, e.g. by striping data over several storage targets. This works well when data is processed in large streams that are written or read linearly, but performance drops significantly when the read/write patterns become random or when many small files need to be processed. Solid State Drives, thanks to their non-mechanical nature, cope with such usage patterns much better than spinning disks. However, they are also significantly more expensive and do not offer the same capacities.

We have therefore been investigating a hybrid solution: combining a traditional spinning-disk backend with an SSD-based block-cache frontend. We acquired a number of SSDs and built such a combined block device. The initial experiments and first test results look very promising: the SSD cache significantly speeds up file system performance, especially for random writes. This could very well become the standard solution for the $HOME directories in the cluster.
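The gap between linear and random access that motivates this hybrid setup can be reproduced with a very simple microbenchmark. In the sketch below, the file name, block size and file size are arbitrary choices, and absolute numbers depend entirely on the device, the file system and the page cache; it writes the same amount of data once sequentially and once in shuffled order:

    # Minimal sequential-vs-random write microbenchmark (illustrative only).
    import os, random, time

    PATH = "testfile.bin"   # hypothetical scratch file
    BLOCK = 4096            # 4 KiB blocks
    NBLOCKS = 25000         # ~100 MiB in total

    def run(pattern):
        buf = os.urandom(BLOCK)
        offsets = list(range(NBLOCKS))
        if pattern == "random":
            random.shuffle(offsets)
        fd = os.open(PATH, os.O_WRONLY | os.O_CREAT)
        start = time.time()
        for i in offsets:
            os.lseek(fd, i * BLOCK, os.SEEK_SET)
            os.write(fd, buf)
        os.fsync(fd)        # force the data out to the device
        os.close(fd)
        return NBLOCKS * BLOCK / (time.time() - start) / 2**20

    for pattern in ("sequential", "random"):
        print("%10s: %8.1f MiB/s" % (pattern, run(pattern)))
    os.unlink(PATH)

On a spinning disk the random case is typically an order of magnitude slower than the sequential case, while an SSD, or an SSD-backed cache, narrows the gap considerably.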

Publications

Please inform us about any scientific publication or other published work that was produced using bwHPC resources (bwUniCluster, bwForCluster JUSTUS, bwForCluster MLS&WISO or the pre-bwForCluster NEMO). An informal e-mail to publications@bwhpc-c5.de is all it takes. Thank you!

Your publication will be referenced on the bwHPC-C5 website:

http://www.bwhpc-c5.de/en/user_publications.php

We would like to stress that it is in our mutual interest to promote all accomplishments achieved with bwHPC resources. We are required to report to the funding agencies during and at the end of the funding period, and for these reports scientific publications are the most important success indicators. Further funding will therefore strongly depend on both the quantity and the quality of these publications.


HPC Team, Rechenzentrum, Universität Freiburg
http://www.hpc.uni-freiburg.de

bwHPC-C5 Project
http://www.bwhpc-c5.de

To subscribe to our mailing list, please send an e-mail to hpc-news-subscribe@hpc.uni-freiburg.de
If you would like to unsubscribe, please send an e-mail to hpc-news-unsubscribe@hpc.uni-freiburg.de

Previous newsletters: http://www.hpc.uni-freiburg.de/news/newsletters

For questions and support, please use our support address enm-support@hpc.uni-freiburg.de
