Current status of bwSFS and extensions
The hardware installation of the bwSFS core and the individual extensions has been largely completed. The system was installed and connected to the network in stages over the last few months. The individual components (storage grid and file-system heads) have now been set up and networked with our partners in Tübingen via a GRE tunnel.
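The site link mentioned above can be pictured as a plain Linux GRE tunnel between the two site routers. The following is an illustrative sketch only: the interface name and all addresses are placeholders, not the actual bwSFS production configuration.

```shell
# Illustrative sketch only: interface names and addresses are placeholders,
# not the actual bwSFS production configuration.

# Create a GRE tunnel between the two site routers
# (192.0.2.10 = local endpoint, 198.51.100.20 = remote endpoint; example addresses).
ip tunnel add sfs-gre0 mode gre local 192.0.2.10 remote 198.51.100.20 ttl 255

# Address the tunnel as a small point-to-point transfer network and bring it up.
ip addr add 10.10.0.1/30 dev sfs-gre0
ip link set sfs-gre0 up

# Route the remote storage network through the tunnel.
ip route add 10.20.0.0/16 dev sfs-gre0
```

In practice such a tunnel would typically be terminated on dedicated routers with encryption (e.g. IPsec) on top; the sketch only shows the basic encapsulation.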
Various agreements on the entry points, e.g. for the S3 part, are still in progress (shared certificates with Tübingen, primary entry points on the hyperconverged infrastructure, failover between modules and sites, ...).
Currently, the first large-scale failover tests are scheduled to be carried out by the vendor on January 20/21. (These were originally planned for mid-December but were delayed somewhat by the difficult situation caused by the restrictions.)
In parallel with the development of bwSFS, an additional caching component (also NetApp; all-flash) was procured within de.NBI for special applications. These include NFS shares and the cloud storage components (as a replacement for, or supplement to, the existing Ceph).
For the de.NBI part, the component sits upstream of bwSFS, which was designed primarily for stability and security rather than performance, and it can usefully accelerate other applications, such as the OMERO database. Further setup of the system will take place from January 11, when all participants will be back on board. For this component, some work remains on the network and the new backbone infrastructure for Scientific Computing (stacked Alcatel C32 100 GbE switches).
At this point we can also begin to set up the first accounts and to configure the system against the desired AD structure. The goal is convenient management of the project, user, and group data of the different parties. The administrators of the central groups and the co-applicants should be involved so that everything fits together, and they should then take part in the upcoming training sessions, depending on their intended role. After installation, configuration, and adaptation, bwSFS will be used to support the active handling of research data at the university, for the NEMO and bioinformatics communities (e.g. within the framework of SDC BioDATEN and NFDI DataPLANT) across the state, and for larger projects such as de.NBI/Galaxy. The same applies to ongoing activities and discussions on building the system and higher-level services, such as the data publication service InvenioRDM or the (data) versioning service GitLab. The extensions for de.NBI have been installed, distributed across row 1 and near the cloud in cabinet 48. Further configuration is being prepared and will continue promptly in the new year.
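As a rough sketch of what configuring accounts against the AD structure involves, user and group data can be queried with a standard LDAP lookup. The server name, bind account, base DN, and user below are invented for illustration and are not the actual directory layout.

```shell
# Hypothetical example: server, bind DN, base DN, and user are placeholders.
# Look up the AD group memberships of a user account; such memberships would
# then be mapped to project and share permissions on the storage side.
ldapsearch -LLL \
    -H ldaps://ad.example.uni-freiburg.de \
    -D 'svc-bwsfs@example.uni-freiburg.de' -W \
    -b 'dc=example,dc=uni-freiburg,dc=de' \
    '(sAMAccountName=jdoe)' memberOf
```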
On the Freiburg side, the coordination with Tübingen as well as the application and tender process were led by colleagues from the eScience department (bwHPC-S5, RDMG). An overview and discussion of RDM and of the considerations and developments around bwSFS can also be found on the RDMG pages in the continuing-education ILIAS.