
HPC Newsletter 1801

Dear colleagues,

Everybody is likely very busy towards the end of the year, so we will keep this newsletter brief.

The quiet period between Christmas and New Year is a great opportunity to run large or lengthy computations on NEMO, the bwUniCluster and the other bwForClusters. Remember that thousands of jobs can be queued for later execution, so NEMO will not run low on work while everybody is on vacation. As an additional benefit, high utilization during this time provides a strong argument for the NEMO II grant application that we will start preparing next year.
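If you want to queue a whole batch of jobs before heading off, a small submission script can take care of it. The sketch below is only an illustration: it assumes the msub command of the MOAB scheduler documented on the bwHPC wiki, and the job script run_case.sh, the resource request and the parameter list are placeholders you would need to adapt.

    #!/usr/bin/env python3
    # Queue a parameter sweep ahead of the holidays (illustrative sketch).
    # Assumes the MOAB 'msub' command; 'run_case.sh' and the parameter
    # values are placeholders.
    import subprocess

    parameters = range(1, 101)  # 100 hypothetical cases

    for p in parameters:
        # Each call submits one batch job; the scheduler starts jobs
        # whenever resources become free, even while you are on vacation.
        subprocess.run(
            ["msub",
             "-l", "nodes=1:ppn=20,walltime=24:00:00",  # adjust to your needs
             "-v", f"CASE={p}",                         # pass the parameter to the job script
             "run_case.sh"],
            check=True,
        )

Once the jobs are queued, the scheduler works through them on its own, so the cluster stays busy while you are away.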

The NEMO Cluster-Beirat (advisory board) met on December 14. We are planning a larger Nutzerversammlung (general assembly) for the first half of 2019, with the option to attend remotely via teleconference.

The Cluster-Beirat recommends enforcing quotas on the parallel filesystem. To this end, the current “virtual” per-user quotas will be replaced by hard group quotas in 2019. A separate announcement with more details will follow before these quotas take effect. Until then, please respect the individual user quotas of 20 terabytes and 1 million files: https://www.bwhpc-c5.de/wiki/index.php/BwForCluster_NEMO_Hardware_and_Architecture#Limits_and_best_practices
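If you are unsure how close you are to these limits, a rough self-check is easy to script. The sketch below is only an illustration: the workspace path is a placeholder, and walking a large directory tree can take a while, so it is best run inside a batch job rather than on a login node.

    #!/usr/bin/env python3
    # Rough self-check against the 20 TB / 1 million file limits
    # (illustrative sketch; the path below is a placeholder).
    import os

    WORKSPACE = "/path/to/your/workspace"  # placeholder: point at your own data

    total_bytes = 0
    total_files = 0
    for root, _dirs, files in os.walk(WORKSPACE):
        for name in files:
            try:
                total_bytes += os.lstat(os.path.join(root, name)).st_size
                total_files += 1
            except OSError:
                pass  # file vanished or is unreadable; skip it

    print(f"{total_files} files, {total_bytes / 1e12:.2f} TB used")
    print("Per-user limits: 1,000,000 files, 20 TB")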

NEMO has been extended by 20 additional compute nodes, thanks to an investment by Markus Schumacher and Joachim Dzubiella. Under our uniform operating model for large-scale scientific research infrastructures, these nodes were made available immediately after delivery. Please keep all 920 NEMO nodes busy over the holidays, since they will not be taking a vacation. With NEMO becoming busier all the time, the holidays are also your best chance of getting large computations done quickly.

Early in 2019, NEMO will offer a small number of GPUs (8x Nvidia Tesla V100). These are not meant for large production jobs; instead, our communities are invited to evaluate whether larger investments in this technology are feasible. Further procurements should allow for a multi-vendor strategy, i.e. CUDA support will not be guaranteed.

NEMO is at times extremely busy, with many jobs queued up, which makes rapid prototyping of jobs on NEMO difficult. To address this, an express queue is now available. It offers only very limited resources in terms of walltime, CPU cores and memory, but jobs submitted to it start significantly faster: https://www.bwhpc-c5.de/wiki/index.php/BwForCluster_NEMO_Specific_Batch_Features#Express_Jobs
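A quick test submission to the express queue could look like the sketch below. This is only an illustration: it assumes the MOAB msub command and a queue selected with "-q express"; the actual queue name and its walltime, core and memory limits are documented on the wiki page above, and the values here are merely guesses.

    #!/usr/bin/env python3
    # Submit a short test job to the express queue (illustrative sketch).
    # The queue name 'express', the resource request and 'test_job.sh'
    # are assumptions/placeholders; see the wiki page for the real limits.
    import subprocess

    subprocess.run(
        ["msub",
         "-q", "express",                          # assumed queue name
         "-l", "nodes=1:ppn=4,walltime=00:15:00",  # deliberately small request
         "test_job.sh"],                           # placeholder job script
        check=True,
    )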

We wish you a Merry Christmas and a Happy New Year.

Your NEMO team


HPC Team, Rechenzentrum, Universität Freiburg
http://www.hpc.uni-freiburg.de

bwHPC initiative and bwHPC-S5 project
http://www.bwhpc.de

To subscribe to our mailing list, please send an e-mail to hpc-news-subscribe@hpc.uni-freiburg.de
If you would like to unsubscribe, please send an e-mail to hpc-news-unsubscribe@hpc.uni-freiburg.de

Previous newsletters: http://www.hpc.uni-freiburg.de/news/newsletters

For questions and support, please use our support address enm-support@hpc.uni-freiburg.de
