Keras
Overview
Keras is a high-level neural networks API, written in Python, which runs on top of the TensorFlow installations available on the CSF.
Restrictions on use
There are no restrictions on accessing this software on the CSF. The software is released under the MIT License and all usage must adhere to that license.
Set up procedure
Since version 2.4, Keras has been packaged within Tensorflow as tensorflow.keras. To use Keras, load one of the following modulefiles:
# Backend is Tensorflow 2.4.0 (Python 3.7)
module load apps/binapps/tensorflow/2.4.0-37-cpu
module load apps/binapps/tensorflow/2.4.0-37-gpu
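With these modulefiles Keras is imported from within TensorFlow rather than as a separate keras package. As a quick, illustrative check (run inside an interactive session or batch job, not on the login node):

python
>>> import tensorflow as tf
>>> print(tf.__version__)          # should report 2.4.0
>>> print(tf.keras.__version__)    # the Keras version bundled with this TensorFlow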
To access the older versions of this software you must first load one of the following modulefiles:
# Backend is Tensorflow 1.14.0 (Python 3.6) and the tensorflow modulefile will be loaded for you:
module load apps/binapps/keras/2.2.4-tensorflow-gpu      # GPU version of tensorflow 1.14.0
module load apps/binapps/keras/2.2.4-tensorflow-cpu      # CPU version of tensorflow 1.14.0

# Backend is Tensorflow 1.11.0 (Python 3.6) and the tensorflow modulefile will be loaded for you:
module load apps/binapps/keras/2.2.2-tensorflow-gpu      # GPU version of tensorflow 1.11.0
module load apps/binapps/keras/2.2.2-tensorflow-cpu      # CPU version of tensorflow 1.11.0

# Backend is Tensorflow but no tensorflow modulefile is loaded. You must
# load a tensorflow modulefile first. Check the list of available versions on the CSF.
module load apps/binapps/keras/2.2.4
module load apps/binapps/keras/2.2.2
The keras modulefile will automatically load the tensorflow and anaconda python modulefiles for you unless otherwise indicated.
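Once a keras modulefile (and, for the plain 2.2.4 / 2.2.2 modulefiles, a tensorflow modulefile) is loaded, a quick illustrative check of the version and backend that Python picks up is:

python
>>> import keras                   # prints "Using TensorFlow backend."
>>> print(keras.__version__)
>>> from keras import backend as K
>>> print(K.backend())             # should report "tensorflow"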
Running the application
Please do not run Keras (via python) on the login node. Jobs should be submitted to the compute nodes via the batch system.
Interactive use on a Backend Node
To request an interactive session on a backend compute node run:
# CPU-only jobs
qrsh -l short

# Wait until you are logged in to a compute node, then:
module load apps/binapps/keras/2.2.4-tensorflow-cpu
python
>>> [type your python code interactively]

# or run a python script you are developing, for example
python myscript.py


# GPU jobs (you may use up to 4 GPUs depending on your granted access to the GPUs)
qrsh -l v100=1 bash

# Wait until you are logged in to a GPU node, then:
module load apps/binapps/keras/2.2.4-tensorflow-gpu
python
>>> [type your python code interactively]

# or run a python script you are developing, for example
python myscript.py
An example Keras session is given below.
If there are no free interactive resources the qrsh command will ask you to try again later. Please do not run Keras (python) on the login node. Any jobs running there will be killed without warning.
Example Script (for CPU or GPU)
The following skeleton script can be used in an interactive session or in a batch job (from a jobscript). It ensures tensorflow does not use more cores than you have requested in your jobscript. If run on a GPU node, it will automatically use the GPU.
# Some of the following code has been taken from the Keras examples at:
# https://keras.io/getting-started/sequential-model-guide/
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense, Dropout
import numpy as np
import tensorflow as tf
import os

# Get number of cores reserved by the batch system
# ($NSLOTS is set by the batch system, or use 1 otherwise)
NUMCORES = int(os.getenv("NSLOTS", 1))
print("Using", NUMCORES, "core(s)")

# Create TF session using correct number of cores.
# NOTE:
# If you are using the GPU version, this will also automatically use
# the GPU and set up the Keras CUDA libraries. Please see the CSF3
# tensorflow page for other GPU-specific tf.ConfigProto() entries.
sess = tf.Session(config=tf.ConfigProto(inter_op_parallelism_threads=NUMCORES,
                                        allow_soft_placement=True,
                                        device_count={'CPU': NUMCORES}))

# Set the Keras TF session
K.set_session(sess)

# Replace the rest of the script with your own code

# Generate dummy data
x_train = np.random.random((1000, 20))
y_train = np.random.randint(2, size=(1000, 1))
x_test = np.random.random((100, 20))
y_test = np.random.randint(2, size=(100, 1))

# MLP for binary classification example
model = Sequential()
model.add(Dense(64, input_dim=20, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=20, batch_size=128)
score = model.evaluate(x_test, y_test, batch_size=128)
print("Score", score)
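Note that the skeleton above targets the Keras 2.2.x / TensorFlow 1.x modulefiles. With the tensorflow/2.4.0 modulefiles the tf.Session / tf.ConfigProto API is no longer available and thread limits are set through tf.config.threading instead. The following is a minimal sketch of the equivalent setup (the data and layer sizes are illustrative only):

# Minimal tf.keras sketch for the Tensorflow 2.4.0 modulefiles (illustrative only)
import os
import numpy as np
import tensorflow as tf

# Respect the core count granted by the batch system ($NSLOTS), defaulting to 1
NUMCORES = int(os.getenv("NSLOTS", 1))
tf.config.threading.set_intra_op_parallelism_threads(NUMCORES)
tf.config.threading.set_inter_op_parallelism_threads(NUMCORES)

# Dummy binary-classification data, as in the example above
x_train = np.random.random((1000, 20))
y_train = np.random.randint(2, size=(1000, 1))

# The same small MLP, written with the Keras API bundled inside TensorFlow
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=20, batch_size=128)

# On a GPU node the GPU is used automatically if one is visible
print("GPUs visible:", tf.config.list_physical_devices('GPU'))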
Serial CPU with a GPU batch job submission
Make sure you have the modulefile loaded (or load it in the jobscript, as shown below), then create a batch submission script, for example:
#!/bin/bash --login
#$ -cwd                   # Job will run from the current directory

# GPU jobs may use up to 4 GPUs depending on your access.
#$ -l v100=1

# We now recommend loading the modulefile in the jobscript (change version as required)
module load apps/binapps/keras/2.2.4-tensorflow-cpu      # or use the -gpu version

# $NSLOTS is automatically set to the number of cores requested on the pe line
# (1 for this serial job) and can be read by your python code.
export OMP_NUM_THREADS=$NSLOTS

python my-script.py
Submit the jobscript using:
qsub scriptname
where scriptname is the name of your jobscript.
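If you requested a GPU and loaded a -gpu modulefile, it can be worth confirming in the job output that TensorFlow actually found the device. A minimal, illustrative check for the TensorFlow 1.x based modulefiles, which you could add near the top of my-script.py:

# Optional GPU check (TensorFlow 1.x modulefiles)
import tensorflow as tf
print("GPU available:", tf.test.is_gpu_available())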
Parallel CPU with a GPU batch job submission
Ensure you have loaded the correct modulefile and then create a jobscript similar to the following:
#!/bin/bash --login
#$ -cwd                   # Run job from directory where submitted

#$ -pe smp.pe 8           # Number of cores (single compute node). Can be 2-32 on CPU-only nodes.
                          # GPU jobs can use up to 8 cores PER GPU (see below)

# GPU jobs may use up to 4 GPUs depending on your access.
# In this case you may also use up to 8 CPU cores per GPU.
#$ -l v100=1

# We now recommend loading the modulefile in the jobscript (change version as required)
module load apps/binapps/keras/2.2.4-tensorflow-cpu      # or use the -gpu version

# $NSLOTS is automatically set to the number of cores requested on the pe line
# and can be read by your python code.
export OMP_NUM_THREADS=$NSLOTS

python my-script.py
The above my-script.py example will get the number of cores to use from the $NSLOTS environment variable.
Submit your jobscript using
qsub jobscript
where jobscript is the name of your jobscript.
Further info
Updates
None.