
File server / central storage system: maintenance work for Wednesday, 23.06.21 from 9 a.m. onwards

On Wednesday, 23.06.21, a system update of the file server / central storage system will be performed. The maintenance work will start at around 9 a.m. and is expected to last about 9 hours. Individual interruptions must be expected at different times during this window. The affected services and the effects are described below.

==================
Services:
==================

All services that use the central storage system are affected. These services include: Ilias, web server, workgroup server, BSCW, NEMO, bwLehrpool, home directories, login server, shares / group drives and profiles (Windows).


==================
General impact:
==================

Depending on how the various services are connected, outages of different lengths will occur. Login, session and storage problems can therefore occur at any time within the maintenance window. If necessary, please refer to the protocol-specific notes below.

Each storage node is updated and restarted individually, one after the other (36 storage nodes are in use; only 16 of them accept client connections, while the remaining 20 work in the background and have no customer contact). This process takes about 8-9 hours, and the next storage node is updated only after the previous one has been updated successfully. Each storage node is therefore unavailable for the duration of its update (approx. 20-30 minutes). Services whose protocol binding automatically switches to another storage node will only be affected briefly. Services whose protocol connection does not allow automatic switching may therefore be unavailable for up to approx. 40 minutes. Since the individual storage nodes are updated at arbitrary points in time, it is not possible to say in advance when a particular service will be affected.
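As a rough cross-check of these figures, the following back-of-the-envelope sketch in Python uses only the approximate numbers quoted above (16 customer-facing nodes, 20-30 minutes per node):

    # Back-of-the-envelope check of the figures quoted above (all values approximate).
    nodes_customer_facing = 16   # storage nodes that accept client connections
    minutes_per_node = 30        # upper end of the quoted 20-30 minute update time

    # Updates run strictly one after the other, so the customer-facing nodes alone
    # already account for roughly the announced window:
    window_hours = nodes_customer_facing * minutes_per_node / 60
    print(f"rolling updates: approx. {window_hours:.0f} hours")         # approx. 8 hours

    # A single service is only affected while "its" node reboots:
    print(f"per-node outage: up to approx. {minutes_per_node} minutes")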

Note for home directories / shares / group drives: for these services the directory may be unavailable for a short time and access may hang. Depending on the timeout, access may become possible again after just a few minutes, so simply wait a short while. If access is still not possible after a longer period, you may have to establish a new connection manually.
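If you want to check whether a share is merely waiting out the timeout or is genuinely stuck, a minimal Python sketch along the following lines can help; the mount point and timeout are placeholders, not official values:

    """Check whether a possibly hanging network mount still responds (sketch only)."""
    import os
    import threading

    MOUNT_POINT = "/path/to/your/group/drive"   # placeholder - adjust to your share
    TIMEOUT_SECONDS = 15                        # example value

    result = {}

    def probe():
        # A listing on a hard-mounted share blocks while the storage node reboots,
        # so the call runs in a daemon thread and we enforce our own timeout.
        try:
            result["entries"] = os.listdir(MOUNT_POINT)
        except OSError as exc:
            result["error"] = exc

    worker = threading.Thread(target=probe, daemon=True)
    worker.start()
    worker.join(TIMEOUT_SECONDS)

    if worker.is_alive():
        print("Mount is still hanging - wait a few minutes, then remount manually.")
    elif "error" in result:
        print(f"Mount reported an error ({result['error']}) - a manual remount may help.")
    else:
        print("Mount answered - access should recover on its own.")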

In general, since each storage node is updated individually, services may be affected several times within the maintenance window.


Notes for the different protocols:

==================
Impact for NFSv3 customers using
ufr-dyn.isi1.public.ads.uni-freiburg.de
==================

Customers who mount our storage system using NFSv3 via the address ufr-dyn.isi1.public.ads.uni-freiburg.de should be minimally affected by this procedure. The reason is that the IP address of a storage node is automatically taken over by another node as soon as the original node becomes unavailable. We therefore expect only a brief latency to be noticeable.
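On the client side, such a failover typically shows up as a single slow or failed operation; a simple retry loop like the sketch below is usually enough to ride it out (the file path and retry values are purely illustrative):

    import time

    PATH = "/nfs/ufr-dyn/some/file"   # placeholder path on the NFS mount
    ATTEMPTS = 5                      # example values, not recommendations
    DELAY_SECONDS = 10

    for attempt in range(1, ATTEMPTS + 1):
        try:
            with open(PATH, "rb") as handle:
                handle.read(4096)     # any small read is enough as a probe
            print(f"Read succeeded on attempt {attempt}")
            break
        except OSError as exc:
            print(f"Attempt {attempt} failed ({exc}), retrying in {DELAY_SECONDS}s")
            time.sleep(DELAY_SECONDS)
    else:
        print("Storage still unreachable - the node may still be rebooting")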


==================
Impact for all other clients (SMB + NFSv3/v4),
which use ufr.isi1.public.ads.uni-freiburg.de
(also applies to ufr2, fnet and phfr)
==================

For all customers who mount the storage area via ufr.isi1.public.ads.uni-freiburg.de (both SMB and NFSv3/v4), this procedure means that at some point the node they are connected to will be unavailable for the duration of its reboot/update (approx. 30 minutes). With 16 nodes at approx. 20-30 minutes each, this amounts to a potential outage of up to approx. 30 minutes within a time window of approx. 8-9 hours. If necessary, a new connection to the storage system can be established manually or automatically right away in order to connect to a different node. This keeps the downtime to a minimum, although it can of course happen that the new connection ends up on a node that will be updated later.
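On a Linux client, re-establishing the connection manually usually comes down to detaching the hanging mount and mounting it again. The following Python sketch assumes a placeholder mount point, an existing /etc/fstab entry for it, and sufficient privileges to (un)mount:

    """Re-establish a share connection by remounting it (Linux sketch only)."""
    import subprocess

    MOUNT_POINT = "/mnt/groupdrive"   # placeholder - adjust to your share

    # Lazily detach the possibly hanging mount, then mount it again.
    # "mount <mountpoint>" takes the server and options from /etc/fstab,
    # so the new connection may land on a different storage node.
    subprocess.run(["umount", "-l", MOUNT_POINT], check=False)
    subprocess.run(["mount", MOUNT_POINT], check=True)
    print(f"Remounted {MOUNT_POINT}")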

NFS/SMB: in the case of a hard mount, the connection will naturally hang until the storage node is available again.
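Whether a given NFS mount is hard-mounted can usually be read from /proc/mounts on Linux clients; the following purely illustrative sketch lists NFS mounts and their hard/soft setting (hard is the NFS default):

    # List NFS mounts and whether they are hard- or soft-mounted (Linux only).
    # On a hard mount, I/O simply blocks until the storage node is back.
    with open("/proc/mounts", encoding="utf-8") as mounts:
        for line in mounts:
            device, mountpoint, fstype, options, *_ = line.split()
            if fstype.startswith("nfs"):
                mode = "soft" if "soft" in options.split(",") else "hard (default)"
                print(f"{mountpoint}: {fstype}, {mode}")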

We apologize for any inconvenience this may cause and will do our best to keep the disruption to a minimum.

Yours sincerely,
Your Storage Team