Migration from bwGRiD
SHUTDOWN: bwGRiD and Lustre Storage in Freiburg
Unfortunately, we have to inform you that the bwGRiD cluster in Freiburg will go offline on December 1st (computation service) and December 18th (file service).
The last months have seen numerous failures of the LustreFS that resulted in day-long outages of the bwGRiD cluster, despite great efforts to keep the service alive. The bwGRiD service has been running far longer than originally expected, but with its current degree of unreliability, which cannot easily be remedied, we can no longer afford to keep the service running.
New jobs will only start if they can finish by Monday, December 1st (4 weeks from now). After December 1st, no new jobs can be submitted. Login will be possible until Thursday, December 18th. We provide instructions on how to transfer the data to another service (e.g. bwFileStorage) below.
Possible alternatives:
- Early adopters are invited to use our testbed for the new bwForCluster ENM infrastructure, starting December 2nd
- GPU machines will be consolidated into the BFG infrastructure
- bwUniCluster in Karlsruhe is available as a general computational resource (Freiburg's share is still not fully utilized, so queue waiting times should be short)
- If your scientific domain is Theoretical Chemistry, you are also eligible to use the forthcoming bwForCluster JUSTUS in Ulm (terms and conditions apply: entitlement and acceptance by user board)
Migrating to NEMO (pre)
The NEMO (pre) cluster will only be available for bwIDM users. There are different migration paths depending on your user type on the bwGRiD Freiburg:
- BFG user with local university accounts
- bwIDM user (user name starting with "ou_")
- Old certificate user (user name starting with "dgbw")
Getting an Account on the bwForCluster ENM (pre) - NEMO
Skip this part if you already have a bwIDM account on the bwGRiD Freiburg. BFG and certificate users first have to get an entitlement and register for the new service. See www.bwhpc-c5.uni-freiburg.de/nemo for more information (THIS INFORMATION WILL FOLLOW SOON).
Migrating Data to the bwForCluster ENM (pre) - NEMO
There is a new $HOME and $WORK for all users, so you will have to copy all needed scripts to the new cluster. The storage is currently not that big, so please do not transfer terabytes of files.
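A minimal sketch for copying job scripts to the new cluster, assuming plain SSH access works; the NEMO login host is written as a placeholder here (use the address from the NEMO documentation), and ~/scripts is just an example directory:
# Copy a directory of job scripts from the old $HOME to the new cluster
# <nemo-login-node> is a placeholder for the actual NEMO login host
~$ rsync -avh ~/scripts/ fr_mj0@<nemo-login-node>:scripts/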
Saving old Data from the bwGRiD Freiburg to the bwFileStorage
When using bwGRiD Freiburg, bwForCluster ENM (pre) or the bwUniCluster you are eligible to use the central storage bwFileStorage. First you'll need an entitlement for either "bwgrid", "bwunicluster" or "bwLSDF-FileService". If you can already log in to the bwGRiD cluster in Freiburg or the bwUniCluster, you do not need to do anything. The next step depends on your user account on the bwGRiD cluster.
BFG User
You do not need to save your $HOME directory, since this service will still be available on the BFG cluster and already has backups and snapshots. You will need to save your $WORK directory, which should be in /scratch/home/. Run 'echo $WORK' in the Linux console to verify.
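Before copying anything, it can help to confirm where $WORK points and how much data it holds; a short sketch using standard tools (the output will of course differ per user):
# Show the location and total size of your $WORK directory
~$ echo $WORK
~$ du -sh $WORK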
bwIDM User
You will have to save both your $HOME and $WORK directories. For Freiburg users the directories are located in /home/fr/fr_fr/ and /work/fr/fr_fr/; the layout should be equivalent at other sites. Run 'echo $HOME; echo $WORK' in the Linux console to verify.
Certificate User
You will have to save both your $HOME and $WORK directories. The directories are located in /gridhome/ and /scratch/gridhome. Run 'echo $HOME; echo $WORK' in the Linux console to verify.
Copying files from the login node to bwFileStorage
Files can be accessed through different protocols (SCP/SSH/SFTP/HTTPS). If you just want to access your directory on bwFileStorage or want to verify that your files have been saved, you can use a web browser or the login host.
# Example for HTTPS access
https://bwfilestorage.lsdf.kit.edu/fr/fr_fr/fr_mj0
# Example for SSH access
ssh fr_mj0@bwfilestorage-login.lsdf.kit.edu
For transferring data, please use one of the following examples.
SCP Transfer
SCP can be used for single files. If you want to transfer directories, you'll have to tar them first. For faster transfers, adapt the cipher (see the example). You can expect to transfer about 50 MBytes per second.
# Copy a 1 GByte and a 10 GByte file to bwFileStorage
~$ scp -c arcfour128 testfile-1g fr_mj0@bwfilestorage.lsdf.kit.edu:
testfile-1g                          100% 1000MB  58.8MB/s   00:17
~$ scp -c arcfour128 testfile-10g fr_mj0@bwfilestorage.lsdf.kit.edu:
testfile-10g                         100%   10GB  44.8MB/s   03:43
# Tar test directory before transferring
tar cvf test.tar test
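The archive can then be transferred the same way; a short sketch continuing the example above (test.tar and the fr_mj0 account are just the example names used so far):
# Transfer the archive to bwFileStorage and check that it arrived
~$ scp -c arcfour128 test.tar fr_mj0@bwfilestorage.lsdf.kit.edu:
~$ ssh fr_mj0@bwfilestorage-login.lsdf.kit.edu ls -lh test.tar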
Mounting bwFileStorage Directory
You can mount the directory via SSHFS/FUSE and then copy files as if they were local on the bwGRiD cluster.
# Create directory on login node login.bwgrid.uni-freiburg.de
~$ mkdir bwfilestorage
# Mount directory from the bwFileStorage
~$ sshfs fr_mj0@bwfilestorage.lsdf.kit.edu: bwfilestorage
# Do whatever you want, e.g. list directory content
~$ ls bwfilestorage
snapshots  temp  testfile  testfile-big
# Umount directory
~$ fusermount -u bwfilestorage
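With the mount in place, whole directories can be copied with standard tools; a minimal sketch using rsync, where the work-backup target directory is just an arbitrary example name:
# Copy the contents of $WORK into a subdirectory of the mounted bwFileStorage share
~$ rsync -avh --progress $WORK/ bwfilestorage/work-backup/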
Further Information
- bwHPC Wiki: www.bwhpc-c5.de/wiki/index.php/BwFileStorage
- Registration: https://bwidm.scc.kit.edu
- User's Manual (German): www.scc.kit.edu/downloads/sdm/Nutzerhandbuch.pdf
- RZ Information (German): www.rz.uni-freiburg.de/services/bwdienste/bwfilestorages