How to Use Jupyter with virtual environments
This page describes how to set up Jupyter on the HPC clusters using virtual environments installed with pip, and how to run notebook files: 1) as an interactive job, or 2) using slurm scripts with Papermill.
Some sites already have JupyterLab instances available. If so, we recommend using these instances instead of setting up your own interactive job. The user interface may differ from place to place, but besides choosing the desired kernel to run, it should not require any setup and should launch a JupyterLab web interface automatically.
This guide uses Python, with TensorFlow as an example.
Setup Virtual Environment
Make a folder on the cluster where you want to save the virtual environment and go into that folder. For a detailed description of virtual environments, see also How_to_virtual_environments.
Load Python using the installed modules. The example below uses the current latest Python module on the Influx cluster:
- module load python/3.11
Available modules can typically be found using
- module avail
Create the virtual environment. If you don't want to inherit the packages already installed in the server's Python, remove "--system-site-packages". myenv is the name of the virtual environment; feel free to change it to whatever fits you, but remember to replace the name in the other scripts as well.
- virtualenv --system-site-packages myenv
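If the virtualenv command is not available on your cluster, Python's built-in venv module can create the environment with the same option (an equivalent sketch):

- python -m venv --system-site-packages myenv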
The virtual environment can then be activated using:
- source myenv/bin/activate
Jupyter can then be installed using the pip command:
- pip install jupyterlab
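Optionally, if your site already provides a JupyterLab instance (see above), the environment can be registered there as a selectable kernel using the ipykernel package (a sketch; the --name and --display-name values are arbitrary labels):

- pip install ipykernel
- python -m ipykernel install --user --name myenv --display-name "Python (myenv)"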
If a needed package is missing, it can be installed in the same way. For instance, TensorFlow is already installed on most clusters, but if "--system-site-packages" is not used, you need to install it yourself. At the time of writing, the basic functionality needed is:
- pip install tensorrt
- pip install tensorflow[and-cuda]
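A quick way to check that the installation is importable is a one-liner from the shell (note that GPUs are only visible inside a GPU job, not on a login node):

- python -c "import tensorflow as tf; print(tf.__version__)"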
When finished, deactivate the virtual environment with the command:
- deactivate
Run Jupyter using an interactive job
Here we set up a Jupyter notebook running in your own web browser, connected to an interactive job on the cluster. We assume that the cluster uses Slurm to manage jobs.
First, start a job using the Slurm command srun:
- CPU: srun --account=AccountName --partition=PartitionName --time=00:30:00 --pty $SHELL -i
- GPU: srun --account=AccountName --ntasks=1 --partition=PartitionName --qos=QOS --nodes=1 --time=00:30:00 --gpus-per-task=1 --cpus-per-task=1 --pty $SHELL -i
Some HPC clusters do not require an account; the partition and qos options specify where to run the jobs. This depends on the specific cluster.
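The partitions available to you, and (if Slurm accounting is enabled) your account and qos associations, can usually be listed with standard Slurm tools; output formats vary between clusters:

- sinfo -s
- sacctmgr show assoc user=$USER format=account,partition,qos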
Then load the needed modules. For instance, on Influx this is currently:
- module load python/3.11
- GPU: module load cuda/12.1
Note that cuda/11.0 is too old for the newest version of TensorFlow and will produce an error when running a model.
Activate the virtual environment:
- source /path_to_virtual_environment/myenv/bin/activate
Start Jupyter using:
- jupyter-lab --no-browser --ip $(hostname -f)
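By default Jupyter tries port 8888 and falls back to the next free port if that one is taken. The port can also be fixed explicitly, which makes the tunnel setup below predictable (any free port number works):

- jupyter-lab --no-browser --ip $(hostname -f) --port 8888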
Press Enter once more if nothing happens after a couple of seconds. Press Ctrl+C to exit when you don't need it anymore.
In the output, while the server is still running, look for a URL like http://c01.clusternet:8888/lab?token=813d3bf71988d0a21284c77ffa310446a7b5b9a80c1a579c. You need to connect c01.clusternet:8888 to a local port on your own machine.
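If the URL has scrolled out of view, the running server and its token can usually be reprinted from another shell on the same node (a sketch, assuming the jupyter-server bundled with JupyterLab and an activated environment):

- jupyter server list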
On your own machine's terminal, set up a tunnel to the cluster using:
- ssh -L 9999:c01.clusternet:8888 username@hostname
9999 can be changed to another local port. c01.clusternet:8888 needs to match what the above URL showed.
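To keep the tunnel running in the background instead of occupying a terminal, ssh's -N (run no remote command) and -f (go to background) flags can be added, with the same placeholders as above:

- ssh -N -f -L 9999:c01.clusternet:8888 username@hostname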
Go to the address in a browser:
- http://localhost:9999/lab?token=....
token=... is the last part of the URL we found above; in this example, token=813d3bf71988d0a21284c77ffa310446a7b5b9a80c1a579c
Often the token is not needed, and one can simply use
- http://localhost:9999/lab
Run ipynb files using a slurm script with Papermill
There are different ways to run a Jupyter notebook on the cluster. One can, for instance, export it to a Python file and then run that (as sketched below). Here we show how to run the ipynb file directly using Papermill.
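For reference, the export route mentioned above can look like this (a sketch; nbconvert is typically installed together with JupyterLab, otherwise pip install nbconvert):

- jupyter nbconvert --to script in.ipynb
- python in.py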
First, we need to install Papermill. Activate the virtual environment (with the Python module loaded) and install it using pip:
- source /path_to_virtual_environment/myenv/bin/activate
- pip install papermill
On the cluster, make a slurm script using an editor, for instance vim:
- vim runCPU.sh
Inside the slurm script, add the options needed to run. Below is an example of a 1-CPU setup for the bash shell.
#!/bin/bash

#Job parameters
#SBATCH --job-name=TensorTest

#Resources
####SBATCH --account=If_needed
#SBATCH --time=00:30:00
#Partition name depends on cluster, for example cpu_compute
#SBATCH --partition=PartitionName
#SBATCH --ntasks=1
#SBATCH --nodes=1
#SBATCH --tasks-per-node=1
#SBATCH --cpus-per-task=1
####SBATCH --gpus-per-task=1

#Load modules available on cluster, for example
module load python/3.11
#module load cuda/12.1

#Load virtual environment
source /path_to_virtual_environment/myenv/bin/activate

#Job step(s)
srun papermill in.ipynb out.ipynb
This will take in.ipynb, run the file, and output out.ipynb.
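Papermill can also inject parameters into the notebook, which is useful for running the same notebook over several inputs. For this to work cleanly, one notebook cell should carry the tag "parameters"; the variable names below are hypothetical examples:

- srun papermill in.ipynb out.ipynb -p learning_rate 0.01 -p epochs 10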
The progress can be seen in the slurm-JOBID.out file
Run the slurm script using:
- sbatch runCPU.sh
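The job can be monitored with standard Slurm commands, and the notebook's progress followed live (replace JOBID with the id printed by sbatch):

- squeue -u $USER
- tail -f slurm-JOBID.out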