Instructions for preparing your Docker

Please use our template (see this repo) and the instructions below for preparing your Docker. This template has been made using the grand-challenge template (described in this post) and adapted for our challenge. You can also check the instructions given in this post, but please keep to our template. For example versions of the template, see the following branches of our template repo: the TF/Keras example version and the PyTorch example version.

In the explanations below, the parts that are underlined and in bold should be adapted to your specific requirements.

General Preparation of Docker Image


Clone the repo and rename the folder to your teamname:

$ git clone

$ mv valdo-docker-example TeamName

Go into this folder and check that you're on the main branch (this is the default branch):

$ cd TeamName

$ git branch

This should output: * main

Filling in the template

General Guidelines

  • Replace teamname/TeamName everywhere with your actual teamname (please follow the indicated capitalization conventions)
  • Please do all tasks indicated by the TODO comments
  • Please check/adapt all files! (Note: If you're on Linux you can ignore the *.bat files, if you're on Windows you can ignore all *.sh files)
  • As a final check, search for "teamname", "TODO" and "..." in all files to verify that you have completed all necessary tasks

Step 1 Dockerfile:

Open the Dockerfile. To configure your Dockerfile, fill in all "..." and do all tasks in the #TODO comments. You will need to adapt the file as follows (filling in the bold underlined parts):

  • Choose your preferred base image at the top:
    FROM BaseImage:BaseImageVersion

    E.g. FROM tensorflow/tensorflow:2.5.0-gpu
  • Add commands that copy your model into your docker, e.g.:
    • COPY model_weights.h5 /home/
    • COPY model_architecture.json /home/
  • Change the following labels in the Dockerfile:
    • fill in your teamname
    • ..hardware.cpu.count:  fill in required number of CPU cores
    • ..hardware.memory:  fill in the required amount of RAM, e.g. 10G (when your method needs 10 GB) or 200M (when your method needs 200 MB) (later, fill in this same value in the file for the --memory variable when running the docker)
    • ..hardware.gpu.count:  fill in number of GPUs required to run (max 2) (if your method does not use a GPU, please fill in 0 here)
    • ..hardware.gpu.memory :  fill in minimal amount of GPU memory required to run (e.g. 10G (when your method needs 10 GB) or 200M (when your method needs 200 MB))
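Taken together, the edits above might look like the following Dockerfile sketch. This is only an illustration: the base image, model file names, and resource values are placeholders, and the label keys (abbreviated with ".." above) should be copied from the template rather than from here.

```dockerfile
# Hypothetical base image; pick the image and tag that match your framework
FROM tensorflow/tensorflow:2.5.0-gpu

# Copy your model files into the docker (placeholder file names)
COPY model_weights.h5 /home/
COPY model_architecture.json /home/

# Fill in the resource labels from the template here: the teamname label,
# the CPU count, the RAM (same value as used later for --memory),
# the GPU count, and the GPU memory labels.
```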

Step 2 Add Algorithm

Open the file. Please do all tasks indicated by the # TODO comments in this file. This includes the following:

  • Indicate the required input modalities (self.input_modalities)
  • Indicate if you will save an uncertainty map (self.flag_save_uncertainty)
    • Fill in False for Task 1 (PVS) and Task 2 (MB)
    • Fill in True for Task 3 (Lacunes)
  • Load any models/files in the __init__ function
  • Add your algorithm in the predict function
    • Please only change the code as indicated by the # TODO comments and please do not change the process_case and _load_input_image functions.
    • You can also check the instructions given by this grand-challenge post (see the section "Bring your own Algorithm"), but keep to our template (we have changed the template so that input_images is a list of images and predict should return a list of images as well).
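As a sketch of what the predict function has to do (a list of images in, a list of output images out), here is a minimal stand-in; the function name matches the template, but the NumPy arrays and the thresholding "model" are placeholders for your actual network inference (the template passes SimpleITK images, which you would first convert to arrays as described further below):

```python
import numpy as np

def predict(input_images):
    """Hypothetical sketch: takes a list of arrays (one per input modality)
    and returns a list of binary segmentation maps of the same shape."""
    outputs = []
    for image in input_images:
        # Normalize intensities to [0, 1] per image
        lo, hi = image.min(), image.max()
        if hi > lo:
            norm = (image - lo) / (hi - lo)
        else:
            norm = np.zeros_like(image, dtype=float)
        # Placeholder "model": a fixed threshold; replace this with
        # your network's inference
        outputs.append((norm > 0.5).astype(np.uint8))
    return outputs
```

Your real predict should run your model instead of the threshold and return one output image per expected output file.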

Step 3 Additional tasks

  • Copy your model files into your TeamName directory (e.g. model_weights.h5  and model_architecture.json).

  • Update the requirements.txt file (no need to add libraries that are included in your base Docker image e.g. TensorFlow/PyTorch if you are using a TensorFlow/PyTorch base Docker image)

  • Test if you can build your docker image by running the build script; if the build fails, fix the error(s) and try again.

  • In the expected_output.json file, fill in the filename variable for both inputs and outputs; each should be a list of filenames
  • Change teamname to your actual teamname in the indicated scripts (or in the .bat versions of these files instead of the .sh versions if you're on Windows). Also change the required amount of RAM where the docker is run (e.g. use --memory=10g for 10 GB of RAM)
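For illustration only, the filename lists in expected_output.json might look along these lines; the surrounding structure comes from the template file and should be kept as-is, and the subject id and filenames here are hypothetical placeholders:

```json
{
  "inputs": {
    "filename": ["sub-101_space-T2_desc-masked_T1.nii.gz"]
  },
  "outputs": {
    "filename": ["sub-101_prediction.nii.gz"]
  }
}
```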

Testing your docker image

Add a training case (include all necessary image modalities) to the "test" directory and update the expected_output.json file in this directory accordingly (ignore the image type; you can just use metaio_image as the type even if the image type is NIfTI). Run the following to check if the Docker works:

$ ./ 

This saves the predictions in the folder ./output/images/.

Docker export

Once you are happy with your checks, you will need to export your docker image. Run the following:

$ ./ 

This will save your docker image as TeamName.tar.gz. At submission time, please send us your exported Docker image and the corresponding Dockerfile. See the submission page.

Check Docker Image

Once you have submitted your files, we will run your Docker on a case of the training set and send the result to you. Please check that this is exactly the same output as you get in your own environment. Please send us an e-mail to indicate whether or not the output is as you expected.

Further remarks

Please use only the official base images from TensorFlow, PyTorch, or SciPy. If you want to use a different base image, please let us know by e-mail.
If you click on the "Tags" tab, you can select the docker image with your required version. If you use TensorFlow, choose a docker image version with "-gpu" if you want to be able to run on a GPU.

We recommend the following base images:

If your model was trained with an older version of TensorFlow or PyTorch, you can use a newer version if you load in the weights. For TensorFlow, loading full models with a newer version will probably not work, but loading the architecture from a .json file and then loading the weights from a .h5 file does work in a newer TensorFlow version.
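A minimal sketch of the .json + .h5 loading pattern described above, assuming a Keras model (the helper name load_model and the file paths are placeholders, not part of the template):

```python
from tensorflow.keras.models import model_from_json

def load_model(arch_json_path, weights_h5_path):
    # Rebuild the architecture from the .json file...
    with open(arch_json_path) as f:
        model = model_from_json(f.read())
    # ...then load the trained weights into it
    model.load_weights(weights_h5_path)
    return model
```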

To try out your Docker in your own environment, you need to install Docker, and if your method requires a GPU you also need to install the Nvidia Container Toolkit. If your method does not require a GPU, remove --gpus="device=0" from the docker run call so you can run without the Nvidia Container Toolkit.


SimpleITK and numpy

Important notes about converting a SimpleITK image to numpy and back:

  • The order of dimensions is different in SimpleITK and numpy, so don't forget to reorder the axes from (z,y,x) to (x,y,z) (and back when you convert the result back to a SimpleITK image)
  • When converting back to SimpleITK, don't forget to copy the image header information from the input image (this includes e.g. voxel spacing, origin, and direction).

SimpleITK to numpy:
  image = SimpleITK.GetArrayFromImage(img)    # numpy array in (z, y, x) order
  image = np.moveaxis(image, [0, 2], [2, 0])  # reorder to (x, y, z)

Numpy to SimpleITK:

  outimg = np.moveaxis(outimg, [0, 2], [2, 0])  # back to (z, y, x)
  outitk = SimpleITK.GetImageFromArray(outimg)
  outitk.CopyInformation(img)                   # copy the header from the input image
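The axis reordering can be checked with plain NumPy (the shapes below are arbitrary example sizes):

```python
import numpy as np

# Dummy volume in SimpleITK's (z, y, x) order, with distinct sizes per axis
vol_zyx = np.arange(3 * 4 * 5).reshape(3, 4, 5)

# Reorder to (x, y, z) for processing ...
vol_xyz = np.moveaxis(vol_zyx, [0, 2], [2, 0])   # shape (5, 4, 3)

# ... and back to (z, y, x) before converting to a SimpleITK image again
back = np.moveaxis(vol_xyz, [0, 2], [2, 0])      # shape (3, 4, 5)
```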

"Expected output was not found.."

The error message "Expected output was not found..." indicates that either the output was not saved, or that you need to adapt the expected_output.json file so it lists the filenames that should be saved (adapt it to the subject ids you are using to test your docker). This file is used as a check that the output was saved and under the correct filename(s).

"Unknown runtime specified nvidia"

This indicates you have not installed the Nvidia Container Toolkit (or the toolkit is not working).  If your method does not require a GPU, you can remove --gpus="device=0" in the docker run call so you can run without the Nvidia Container Toolkit. If your method does require a GPU, you need to install the Nvidia Container Toolkit.