HPC Newsletter 01/16

Dear colleagues,

Welcome to our first newsletter of 2016. The HPC Team is happy to announce that the purchase decision for the forthcoming bwForCluster NEMO has been made: the contract with the vendor was signed on March 7th. This has been a very long and complex procedure, but we are confident that the final result will justify the effort. Unfortunately, we are not yet ready to disclose the technical details. We currently aim for general availability of the bwForCluster NEMO in July 2016.

Please note that "bwForCluster NEMO" will be the official name of the new cluster, replacing the old name "bwForCluster ENM".

Wishing you a Happy Easter!

Your HPC Team, Rechenzentrum, Universität Freiburg

Table of Contents

Upcoming events and important dates

Outlook for 2016

ForHLR II in Karlsruhe officially started

bwForCluster NEMO

HPC-Team Reorganization

ViCE and Citable Methods

DFG-Proposal "Performance engineering for scientific software"

HPC Visualization in the context of the bwVisu project

Storage Solutions for Scientific Data: LSDF-II

Publications

Upcoming events and important dates

30.03.2016: Neuroscience HPC CodeJam, Bernstein Center, Universität Freiburg

05.04.2016: Mathematica-Wissenstransfertag (Mathematica knowledge transfer day), SCC Campus Süd, KIT

18.04.2016: Kick-Off meeting for the state-funded ViCE project in Mannheim

14-16.06.2016: NVIDIA Hackathon in Strasbourg (to be confirmed)

For a list of upcoming course opportunities, please see http://www.bwhpc-c5.de/en/course_opportunities.php

Outlook for 2016

The arrival of the new bwForCluster NEMO has significant consequences for the overall HPC strategy of the University. Together with the bwHPC clusters in Karlsruhe, Heidelberg/Mannheim, Ulm and Tübingen, there will finally be enough computational power available to begin decommissioning the old Black Forest Grid. We will start with the parts of the BFG that are older than five to six years. In the long run, the Black Forest Grid hardware will remain active exclusively as an LHC Tier-2 production site.

The bwForCluster NEMO has been designed with expansion in mind. To this end, a significant amount of money has been reserved for extensions and upgrades in the first year. With a very diverse mix of scientific communities and no usage statistics yet, it is difficult to estimate how this money can be invested most efficiently. We will therefore closely monitor the system during the first few months of usage. Our scientific communities and shareholders will be involved in the decision process.

We strive to adopt a continuous renewal mechanism for the bwForCluster NEMO, which means that new investments by prospective shareholders are welcome at any time. This is in contrast to the traditional strategy of replacing everything at once after five years of usage. In the new model, shareholders get virtually exclusive usage of the part of the cluster that is covered by their investment.

In parallel to the technical work, we have started to establish additional governance mechanisms. We plan to hold the first user assembly shortly after the official start of the bwForCluster NEMO. In addition, a cluster advisory board will be formed, consisting of representatives of the scientific communities, the shareholders and the cluster operating team.

ForHLR II in Karlsruhe officially started

Since March 2016, a new HPC resource has been available to researchers from Baden-Württemberg: the high performance compute cluster "ForHLR II" at KIT in Karlsruhe. While the bwForCluster NEMO and its siblings are entry-level HPC resources, the ForHLR II represents the next level for advanced HPC users. Requesting compute resources therefore requires a detailed application, which undergoes a full scientific review process.

The new ForHLR II also has a compute partition with GPU co-processors.

For further details (in German), please consult this page.

bwForCluster NEMO

Please take note of the following naming convention:
  • Production cluster on new hardware: "bwForCluster NEMO"
  • Preliminary cluster on old hardware: "pre-bwForCluster NEMO", "pre-Cluster NEMO" or "test cluster NEMO"
Arrival and installation of the new cluster hardware are projected for the second quarter of 2016. Installation and setup will proceed in three phases; during the first two phases, the old pre-cluster (also called NEMO) will run in parallel and remain available. First, the new cluster hardware will be installed and provisioned by the vendor and then verified to satisfy the procurement specifications. After that, the new hardware will be provisioned with our designated software environment and integrated into the bwHPC infrastructure. In the final step, the "bwForCluster NEMO" will be made generally available to its designated user communities. The final transition from the pre-cluster to the production cluster should take only a day. After that, the old hardware will be decommissioned.

HPC-Team Reorganization

In preparation for the start of the bwForCluster NEMO, the HPC-Team was restructured. Dr. Raphaël Pesché has left the team and will take on other responsibilities at the Rechenzentrum. To compensate, the core HPC-Team is now supplemented by colleagues from the closely related projects "Virtual Research Environments" and "Citable Methods". Both the core HPC-Team and the colleagues from these projects are members of the eScience department of the computing center. The new structure was further strengthened by a reorganization of office space in the Rechenzentrum; the general rule for the eScience department is now: same project, same office.

There has always been tight cooperation between the administrators operating the Black Forest Grid and the administrators running the forerunner of the bwForCluster NEMO, the bwGRiD. As such, the notion of a "virtual" HPC-Team has always existed. This has now been taken one step further by starting to merge resources (wiki, ticket system, monitoring) and establishing a weekly jour fixe.

ViCE and Citable Methods

Two state-funded projects, "Virtual Research Environments" (ViCE) and "Citable Methods", are about to start. To this end, the eScience department of the Rechenzentrum has teamed up with partners from other universities and with scientific work groups in Freiburg. The emphasis of the first project is the creation of templates for research environments; these templates should be usable on HPC clusters, in cloud environments and on desktop computers alike. The second project explores strategies for making scientific results acquired through computer simulations or data analysis reproducible and citable. Virtualization is used as a key technology to separate the research environment from the underlying hardware.

DFG-Proposal "Performance engineering for scientific software"

Together with partners from Ulm and Stuttgart, the Rechenzentrum Freiburg has answered a call for proposals from the German Research Foundation (DFG) related to efficiency considerations for software used on high performance compute clusters. Our proposal focuses on generally applicable strategies to improve code efficiency rather than individual optimizations for very special use cases. Furthermore, to make the effort sustainable, the proposal also aims at defining metrics that make the improvements measurable and comparable.
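To give an idea of the kind of metric we have in mind, the sketch below derives a speedup and a parallel-efficiency figure from measured wall-clock times. This is a minimal, purely illustrative Python example; the function names and numbers are our own assumptions and are not taken from the proposal itself.

    # Illustrative only: two simple performance-engineering metrics
    # derived from measured wall-clock times.

    def speedup(t_baseline, t_optimized):
        """Factor by which the optimized code outperforms the baseline."""
        return t_baseline / t_optimized

    def parallel_efficiency(t_serial, t_parallel, cores):
        """Fraction of ideal scaling achieved on the given number of cores."""
        return t_serial / (t_parallel * cores)

    # Hypothetical measurements: 1200 s before and 400 s after optimization,
    # plus a parallel run that takes 100 s on 16 cores.
    print("speedup:", speedup(1200.0, 400.0))                     # 3.0
    print("efficiency:", parallel_efficiency(1200.0, 100.0, 16))  # 0.75

Metrics of this kind are attractive because they can be compared across codes and machines, which is exactly what is needed to make efficiency improvements measurable and comparable.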

In many scientific communities, code efficiency is underestimated and considered a by-product rather than a notable scientific contribution in its own right. One key contribution from Freiburg to the proposal was therefore the idea of providing a proper publication platform for such efficiency improvements.

HPC Visualization in the context of the bwVisu project

For visualization purposes, TurboVNC is available on the pre-bwForCluster and will also be installed on the new bwForCluster NEMO. VirtualGL can be made available on the pre-bwForCluster if there is sufficient demand, albeit only on rather old GPU hardware. There are no plans to acquire additional GPU hardware as a computational resource for the new bwForCluster NEMO, since this is covered by our colleagues in Tübingen and Karlsruhe with the upcoming bwForCluster BinAC and the new Tier-2 cluster "ForHLR II". Instead, Freiburg will investigate the MIC architecture of the Xeon Phi processors.

Before scientific data can be visualized, it needs to be pre-processed and prepared for visualization. This raises the question of where the raw data is to be stored and how it is transported to the pre-processing engine - possibly an HPC cluster - and to the visualization service. There may be inherent limitations in this workflow, such as bandwidth and security related restrictions. These issues can, of course, be solved individually on a case-by-case basis. However, with the virtual research environments investigated in the ViCE project, template solutions could be created, thereby promoting best-practice approaches. The two projects will therefore work closely together in this area.
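As a purely illustrative example of such a pre-processing step, the following Python sketch reduces a large raw data set to a much smaller aggregate that is cheap to transfer to a visualization service. The array shape, block size and file name are assumptions made for this example and are not part of the bwVisu project.

    # Illustrative only: shrink a large simulation result on the cluster
    # before it is handed over to a visualization service.
    import numpy as np

    # Hypothetical raw data: a 256^3 field produced by a simulation.
    raw = np.random.rand(256, 256, 256)

    # Downsample by averaging over 8x8x8 blocks, reducing the volume
    # of data by a factor of 512.
    reduced = raw.reshape(32, 8, 32, 8, 32, 8).mean(axis=(1, 3, 5))

    # Store the reduced field for later transfer and rendering.
    np.save("reduced_field.npy", reduced)

Whether such a reduction runs on the HPC cluster, on a dedicated pre-processing node or close to the visualization service is exactly the kind of workflow question raised above.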

Storage Solutions for Scientific Data: LSDF-II

Finding solutions for storing an ever-growing amount of scientific data is an ongoing challenge. Offering storage resources as a service stands in stark contrast to offering computational resources: computational resources are no longer needed once a computation is done and can then be reused for other computations. Storage resources are, in most cases, used differently. What goes into central storage may well end up staying there forever, or at least until the hardware is decommissioned. This is the primary reason why, every five years, people with large storage requirements (or the people administering said resources) get slightly nervous.

Managing storage can be a tedious task for the operators of the service as well as for its users. In a perfect world, scientific data would be available in a storage cloud with unlimited capacity and unlimited bandwidth. Users authorized to access the data would simply map it into the file system of their local workstation or HPC cluster. Designated parts of the data could be synced to other devices and exchanged with colleagues. Ideally, scientific data would also be versioned, so one could have a time-based view of the development of the data sets.

With the LSDF-II grant proposal, we try to address these issues in a fashion similar to the way the demand for HPC resources was addressed. Since the ideal solution outlined in the previous paragraph is not economically feasible, we will consult our scientific communities to define priorities and find solutions that can realistically be deployed.

Publications

Please inform us about any scientific publication and any published work that was achieved by using bwHPC resources (bwUniCluster, bwForCluster JUSTUS, bwForCluster MLS&WISO or pre-bwForCluster NEMO). An informal E-Mail to publications@bwhpc-c5.de is all it takes. Thank you!

Your publication will be referenced on the bwHPC-C5 website:

http://www.bwhpc-c5.de/en/user_publications.php

We would like to stress that it is in our mutual interest to promote all accomplishments which have been made using bwHPC resources. We are required to report to the funding agencies during and at the end of the funding period. For these reports, scientific publications are the most important success indicators. Further funding will therefore strongly depend on both quantity and quality of said publications.


HPC Team, Rechenzentrum, Universität Freiburg
http://www.hpc.uni-freiburg.de

bwHPC-C5 Project
http://www.bwhpc-c5.de

To subscribe to our mailing list, please send an e-mail to hpc-news-subscribe@hpc.uni-freiburg.de
If you would like to unsubscribe, please send an e-mail to hpc-news-unsubscribe@hpc.uni-freiburg.de

Previous newsletters: http://www.hpc.uni-freiburg.de/news/newsletters

For questions and support, please use our support address enm-support@hpc.uni-freiburg.de
