West-Life applications on the HTC Accelerated Platform

  • A docker image with OpenCL, the NVIDIA drivers, and the DisVis application, plus a second one with the PowerFit application, both ready to run on GPU servers, were prepared for Ubuntu with the goal of checking the performance described here.
  • In collaboration with EGI-Engage task JRA2.4 (Accelerated Computing), the DisVis and PowerFit docker images were then rebuilt to work on the SL6 GPU servers, whose NVIDIA™ driver 319.x supports only CUDA 5.5. These servers form the grid-enabled cluster at CIRMMP; successful grid submissions to this cluster through the enmr.eu VO were carried out on 1 December, with the expected performance.
  • On 14 July the CIRMMP servers were updated to the latest NVIDIA™ driver, 352.93, which supports CUDA 7.5. The DisVis and PowerFit images used are now the ones maintained by the INDIGO-DataCloud project and distributed via dockerhub in the indigodatacloudapps repository.
  • The INDIGO-DataCloud "udocker" tool is used to run the containers. Its advantage is that docker containers run entirely in user space, so the grid user never obtains root privileges on the WN, which avoids any security concern (see the local udocker sketch after the job script below).
  • Here follows a description of the scripts and commands used to run the test:
$ voms-proxy-init --voms enmr.eu
$ glite-ce-job-submit -o jobid.txt -a -r cegpu.cerm.unifi.it:8443/cream-pbs-batch disvis.jdl
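
Job status and the output sandbox can afterwards be retrieved with the matching CREAM CLI commands, reading the job IDs saved in jobid.txt:

$ glite-ce-job-status -i jobid.txt
$ glite-ce-job-output -i jobid.txt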

where disvis.jdl is:

[
  executable = "disvis.sh";
  arguments = "10.0 2";
  inputSandbox = { "disvis.sh", "O14250.pdb", "Q9UT97.pdb", "restraints.dat" };
  stdoutput = "out.txt";
  stderror = "err.txt";
  outputsandboxbasedesturi = "gsiftp://localhost";
  outputsandbox = { "out.txt", "err.txt", "results.tgz" };
  GPUNumber = 2;
]

and disvis.sh is:

#!/bin/sh
# Detect the NVIDIA driver version on the worker node, so the matching
# container image can be selected.
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')
echo hostname=$(hostname)
echo user=$(id)
export WDIR=`pwd`
echo udocker run disvis...
echo starttime=$(date)
# Fetch the udocker tool (runs containers in user space, no root needed).
git clone https://github.com/indigo-dc/udocker
cd udocker
# Prefer a pre-staged image from CVMFS; fall back to pulling from dockerhub.
image=/cvmfs/wenmr.egi.eu/BCBR/DisVis/disvis-nvdrv_$driver.tar
[ -f $image ] && ./udocker load -i $image
[ -f $image ] || ./udocker pull indigodatacloudapps/disvis:nvdrv_$driver
echo time after pull = $(date)
# Create a uniquely named container and run DisVis on the sandbox inputs,
# writing results to $WDIR/out (mounted as /home/out inside the container).
rnd=$RANDOM
./udocker create --name=disvis-$rnd indigodatacloudapps/disvis:nvdrv_$driver
echo time after udocker create = $(date)
mkdir $WDIR/out
./udocker run --hostenv --volume=$WDIR:/home disvis-$rnd \
    disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat \
    -g -a $1 -vs $2 -d /home/out
# Report per-process GPU accounting, then clean up the container and image.
nvidia-smi --query-accounted-apps=timestamp,pid,gpu_serial,gpu_name,gpu_utilization,time --format=csv
echo time after udocker run = $(date)
./udocker rm disvis-$rnd
./udocker rmi indigodatacloudapps/disvis:nvdrv_$driver
cd $WDIR
# Pack the results for the output sandbox.
tar zcvf results.tgz out/
echo endtime=$(date)
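
The same container can also be exercised locally, outside the grid, which illustrates the user-space operation of udocker mentioned above. A minimal sketch, assuming the three input files sit in the current directory; the untagged image name is an assumption (the grid script above selects a driver-matched nvdrv_* tag), and -g is omitted so the run also works on CPU-only hosts:

#!/bin/sh
# Local smoke test of the DisVis container via udocker; no root privileges
# are needed at any point.
git clone https://github.com/indigo-dc/udocker
cd udocker
./udocker pull indigodatacloudapps/disvis
./udocker create --name=disvis-test indigodatacloudapps/disvis
mkdir -p "$OLDPWD/out"
./udocker run --hostenv --volume="$OLDPWD:/home" disvis-test \
    disvis /home/O14250.pdb /home/Q9UT97.pdb /home/restraints.dat -a 10.0 -vs 2 -d /home/out
./udocker rm disvis-test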

The example input files specified in the jdl script above were taken from the DisVis GitHub repository: https://github.com/haddocking/disvis
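
For reference, one way to fetch them is to clone that repository; the files are assumed to live in its example/ directory:

$ git clone https://github.com/haddocking/disvis.git
$ cp disvis/example/O14250.pdb disvis/example/Q9UT97.pdb disvis/example/restraints.dat .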

The performance measured on the GPGPU grid resources matches what is expected for each card type.

The timings were compared with those from an in-house GPU node at Utrecht University (GTX680 card), although they may differ from those obtained with the latest update of the DisVis code:

GPGPU type    Timing [minutes]
GTX680        19
M2090 (VM)    15.5
1xK20 (VM)    13.5
2xK20         11
1xK20         11
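
The wall-clock numbers can be recomputed from the starttime=/endtime= lines that disvis.sh echoes into out.txt, e.g. with GNU date:

$ start=$(grep '^starttime=' out.txt | cut -d= -f2-)
$ end=$(grep '^endtime=' out.txt | cut -d= -f2-)
$ echo "elapsed minutes: $(( ($(date -d "$end" +%s) - $(date -d "$start" +%s)) / 60 ))"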

This indicates that DisVis, as expected, is currently unable to use both available GPGPUs (the 1xK20 and 2xK20 timings are identical). However, we plan to use this testbed to parallelize it over multiple GPUs, which should be relatively straightforward.
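
One way to verify this on the worker node is to poll the per-card utilization with nvidia-smi while a job is running; with DisVis as it stands, only one of the two cards should show load:

$ nvidia-smi --query-gpu=index,name,utilization.gpu --format=csv -l 5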

To run PowerFit instead, just replace disvis.jdl with a powerfit.jdl such as:

[
  executable = "powerfit.sh";
  inputSandbox = { "powerfit.sh", "1046.map", "GroES_1gru.pdb" };
  stdoutput = "out.txt";
  stderror = "err.txt";
  outputsandboxbasedesturi = "gsiftp://localhost";
  outputsandbox = { "out.txt", "err.txt", "results.tgz" };
  GPUNumber = 2;
]

with powerfit.sh as here (a hedged sketch is given below) and input data taken from here (courtesy of Mario David).
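
In case the script link above does not render, here is a hedged sketch of what such a powerfit.sh could look like, modeled on disvis.sh; the nvdrv_* tag of the indigodatacloudapps/powerfit image and the handling of the map resolution (taken as $1 here, whereas the jdl above passes no arguments) are assumptions:

#!/bin/sh
# Hedged sketch only: mirrors disvis.sh above. PowerFit takes the density map,
# its resolution (in Angstrom), and the structure as positional arguments.
res=${1:?usage: powerfit.sh <map resolution in Angstrom>}
driver=$(nvidia-smi | awk '/Driver Version/ {print $6}')
export WDIR=`pwd`
echo starttime=$(date)
git clone https://github.com/indigo-dc/udocker
cd udocker
# A driver-matched tag is assumed to exist, as for the DisVis image.
./udocker pull indigodatacloudapps/powerfit:nvdrv_$driver
rnd=$RANDOM
./udocker create --name=powerfit-$rnd indigodatacloudapps/powerfit:nvdrv_$driver
mkdir $WDIR/out
./udocker run --hostenv --volume=$WDIR:/home powerfit-$rnd \
    powerfit /home/1046.map $res /home/GroES_1gru.pdb -g -d /home/out
./udocker rm powerfit-$rnd
cd $WDIR
tar zcvf results.tgz out/
echo endtime=$(date)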