Submit VisIt jobs with a Python script

Submit local job

Assume we have a binary raw dataset named A01.bin. We first write a BOV header A01.bov so VisIt can load the data. A minimal sketch is shown below; the grid size, data format, and endianness are assumptions for illustration (only the variable name polymerblends is taken from the script that follows):
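# A01.bov: BOV header describing the raw binary file
# (grid size, format, and endianness below are assumed values)
DATA_FILE: A01.bin
DATA_SIZE: 100 100 100
DATA_FORMAT: FLOAT
VARIABLE: polymerblends
DATA_ENDIAN: LITTLE
CENTERING: zonal
BRICK_ORIGIN: 0. 0. 0.
BRICK_SIZE: 1. 1. 1.

Then we add a Pseudocolor plot and an Isovolume operator, and use a for loop to change the Isovolume attributes. Here is the Python script A01.py: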

import sys


# OpenComputeEngine("localhost", ("-np", "4"))

# open the database
OpenDatabase("A01.bov", 0)


# Configure how rendered images are saved.
s = SaveWindowAttributes()
s.format = s.JPEG
s.outputToCurrentDirectory = 0
s.outputDirectory = "./images/"
s.fileName = "A01_pseudocolor_isovolume"
s.width = 1600
s.height = 1600
s.screenCapture = 0
s.progressive = 1
SetSaveWindowAttributes(s)

# add plot and operator
AddPlot("Pseudocolor", "polymerblends", 1, 0)
AddOperator("Isovolume", 0)


# set up isovolume attributes
IsovolumeAtts = IsovolumeAttributes()

# define variables for iso-volume animation
dmin = 1000
dmax = 2000
dstep = 500

volume_start = dmin
volume_end = dmin + dstep
nframes = (dmax - dmin) // dstep  # integer division: range() needs an int; here (2000 - 1000) // 500 = 2

print "will save ", nframes, "images"

for n in range(nframes):
  IsovolumeAtts.lbound = volume_start
  IsovolumeAtts.ubound = volume_end

  IsovolumeAtts.variable = "polymerblends"
  SetOperatorOptions(IsovolumeAtts, 0)

  DrawPlots()
  SaveWindow()

  volume_start += dstep
  volume_end += dstep


sys.exit()

To submit this Python script so that VisIt runs it locally, from a terminal:

$ visit -cli -nowin -s A01.py

VisIt will launch the database server and compute engine, and save the images to the folder specified in the Python script. The flag “-nowin” tells VisIt not to open the viewer window.

To run the same script with a 4-processor parallel compute engine:

$ visit -np 4 -cli -nowin -s A01.py

Note

Alternatively, if OpenComputeEngine() is called in the Python script to open the parallel compute engine before OpenDatabase(), VisIt’s component launcher (VCL) will call its internal launcher, which uses MPI to start the parallel engines on localhost.
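For example, a minimal sketch mirroring the commented-out line in A01.py:

# start a 4-processor parallel compute engine on localhost,
# then open the database with that engine
OpenComputeEngine("localhost", ("-np", "4"))
OpenDatabase("A01.bov", 0)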

Submit remote job

Here is the qsub script, named A01.pbs:

#!/bin/bash

#PBS -l nodes=1:ppn=8
#PBS -l walltime=01:00:00
#PBS -N A01
#PBS -o A01_output.txt
#PBS -e A01_error.txt
#PBS -q priority
#PBS -m e
#PBS -M email@address

# the dataset and scripts were saved in the home directory (see below)
cd ~
visit -nn 1 -np 8 -cli -nowin -s A01.py

Now suppose you are logged into a remote cluster’s head node and have saved the dataset files A01.bov and A01.bin, the Python script A01.py, and the qsub script A01.pbs in your home directory. You can then submit the job through a job scheduler such as the Portable Batch System (PBS):

$ qsub A01.pbs

PBS will submit the job to the queue you specified (the priority queue in this example) and return a jobId.

Monitor job status

You can then monitor the job status with commands such as:

$ qstat -q           --- list all queues
$ qstat -a           --- list all jobs
$ qstat -au userid   --- list jobs for userid
$ qstat -r           --- list running jobs
$ qstat jobId        --- list status for jobId
$ qstat -f jobId     --- list full information about jobId
$ qstat -Qf queue    --- list full information about queue
$ qshow jobId        --- monitor node / CPU / memory usage for jobId