EESSI (Admin guide)
Latest revision as of 15:33, 28 March 2024

Streaming scientific software with EESSI requires some preparation by HPC administrators to ensure good performance and reliability.

If multiple users on many clients are expected to use EESSI, the site should definitely install at least one Squid proxy that caches all CVMFS requests at the local network level. It is easy to set up, but must be configured in all CVMFS client configurations as the first point of access.

If a local copy of the entire EESSI software stack is required, a "Stratum 1" server can be installed. It can either be part of the public EESSI distribution network, or just serve private clients on your local network. Stratum 1 servers synchronize the software stack with the main copy (Stratum 0) and help your site be a "good citizen" in the EESSI community by reducing the load on the public EESSI servers.

CVMFS Installation and Configuration

Following the CVMFS installation guide for RHEL-based Linux distributions, the CVMFS software repository can be added to the local package manager:

sudo yum install https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm
sudo yum install -y cvmfs

Packages for other Linux distributions or container setups are available as well.
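For Debian-based distributions, the procedure looks similar; a sketch following the upstream CVMFS installation instructions (the package URL may change between releases, so verify against the CVMFS documentation):

```shell
# Download and install the CVMFS release package (Debian/Ubuntu),
# which adds the CVMFS repository to apt, then install CVMFS itself
wget https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb
sudo dpkg -i cvmfs-release-latest_all.deb
sudo apt-get update
sudo apt-get install -y cvmfs
```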

Since the configuration for EESSI is shipped with the cvmfs-config-default package, no additional configuration is required to access the EESSI software stack.
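To check that a client can actually reach the EESSI software stack, a quick smoke test might look like this (using the repository name documented by EESSI):

```shell
# Write a minimal default client configuration (one-time step)
sudo cvmfs_config setup
# Probe the EESSI repository; this mounts it on demand under /cvmfs
cvmfs_config probe software.eessi.io
# List the top level of the software stack to confirm access
ls /cvmfs/software.eessi.io
```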

There are some CVMFS configuration best practices in the MultiXScale CVMFS tutorial. Most importantly, consider the following:

  • take special care of the CVMFS cache on diskless worker nodes
  • set up routing for offline worker nodes
  • configure autofs to never unmount repositories due to inactivity, or use static mounts
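These recommendations typically end up in the client configuration. A sketch of /etc/cvmfs/default.local and a static mount entry, with placeholder values to adapt to your site (cache location and size in particular matter on diskless nodes):

```shell
# /etc/cvmfs/default.local -- example client settings (adapt to your site)
CVMFS_QUOTA_LIMIT=20000          # local cache size limit in MB
CVMFS_CACHE_BASE=/var/lib/cvmfs  # on diskless nodes, point this at RAM-backed or network-backed storage
CVMFS_HTTP_PROXY="http://proxy.example.org:3128"  # your local Squid proxy, if any

# Static mount instead of autofs: an /etc/fstab entry per repository, e.g.
# software.eessi.io /cvmfs/software.eessi.io cvmfs defaults,_netdev,nodev 0 0
```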

Caching with SQUID Proxies

It is recommended to set up a dedicated Squid forward proxy server to cache common HTTP(S) requests in the local network. This reduces latency and improves the user experience when using software distributed with EESSI.

The proxy should have a fast network connection to all client systems, a decent amount of memory, and fast local storage. Corresponding packages are likely available in common Linux distributions.
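A minimal Squid configuration for this purpose could look as follows; the network range, ports, and sizes are placeholders to adapt to your site:

```shell
# /etc/squid/squid.conf -- minimal forward proxy for CVMFS clients
acl local_nodes src 10.0.0.0/8       # your cluster network (placeholder)
http_access allow local_nodes        # only serve the cluster
http_access deny all
http_port 3128

cache_mem 4096 MB                    # in-memory cache for hot objects
maximum_object_size 1024 MB          # CVMFS file chunks stay well below this
cache_dir ufs /var/spool/squid 50000 16 256  # 50 GB on fast local disk
```

Clients are then pointed at the proxy via CVMFS_HTTP_PROXY in their CVMFS configuration.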

For more detailed installation instructions, follow the MultiXScale documentation.

Cluster-local Copy through a Stratum-1 Server

CVMFS distributes software by maintaining a central main copy on the "Stratum 0" server and replicating it on a second level, the "Stratum 1" servers. This way, the load is distributed, and closer physical distance to a replica yields lower latency.

A Stratum 1 server can be kept private to your local cluster and provides a full copy of the EESSI software stack. It also reduces the load on the public EESSI servers.
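On the server side, a replica is created with `cvmfs_server add-replica` and kept in sync with periodic snapshots; clients are then configured to try the local Stratum 1 first. A sketch, where the upstream URL, key location, and hostname are placeholders to verify against the EESSI and CVMFS documentation:

```shell
# On the Stratum 1 host: create a replica of the EESSI repository
# (verify the upstream Stratum 0/1 URL and public key path in the EESSI docs)
sudo cvmfs_server add-replica -o $(whoami) \
    http://aws-eu-central-s1.eessi.science/cvmfs/software.eessi.io /etc/cvmfs/keys/eessi.io
sudo cvmfs_server snapshot software.eessi.io   # synchronize; run periodically, e.g. via cron

# On the clients: prefer the local Stratum 1 (e.g. in /etc/cvmfs/domain.d/eessi.io.local)
CVMFS_SERVER_URL="http://stratum1.example.org/cvmfs/@fqrn@;$CVMFS_SERVER_URL"
```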

Follow the MultiXScale documentation for more detailed instructions on how to install your own Stratum 1 server.

Parallel to Local Software Installations

Sites can provide access to EESSI while still maintaining their own software installations, regardless of how those are distributed. It is also possible to operate a Stratum 0 server to distribute the local software stack with the same technology as EESSI. This can be a good approach to reduce the load on parallel file systems, which often suffer from accesses to many small files (a common pattern for software installations).
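Operating a site-local Stratum 0 follows the standard `cvmfs_server` workflow: create a repository once, then open a transaction, copy files in, and publish. A sketch with a placeholder repository name and paths:

```shell
# One-time setup on the Stratum 0 host (repository name is a placeholder)
sudo cvmfs_server mkfs -o $(whoami) software.example.org

# Publishing workflow for each update of the local software stack
cvmfs_server transaction software.example.org   # open the repository for writing
cp -r /opt/sw/mytool /cvmfs/software.example.org/  # stage a local installation (placeholder path)
cvmfs_server publish software.example.org       # sign and publish the new revision
```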

Mixing modules of different software stacks is not likely to work without conflicts.

Additional Resources

  • The EESSI project provides support: https://www.eessi.io/docs/support/
  • https://www.eessi.io/
  • https://www.eessi.io/docs/
  • https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices/