Hosts

class puq.InteractiveHost(cpus=1, cpus_per_node=0)

Create a host object that runs all jobs on the local machine.

Parameters:
  • cpus – Number of cpus each process uses. Default=1.
  • cpus_per_node – How many cpus to use on each node. Default=all cpus.
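As a hedged sketch, an InteractiveHost is typically created inside a puq control script's run() function and passed to a Sweep. The parameter, UQ method, and test program shown below (UniformParameter, Smolyak, TestProgram, model.py) are illustrative assumptions, not part of this class's API:

    from puq import *

    def run():
        # One uncertain input parameter (illustrative)
        x = UniformParameter('x', 'x', min=0, max=10)

        # Run all jobs on the local machine, one cpu per job
        host = InteractiveHost(cpus=1)

        # Hypothetical external test program, run once per sample
        prog = TestProgram('./model.py', desc='example model')

        uq = Smolyak([x], level=2)
        return Sweep(uq, host, prog)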
class puq.InteractiveHostMP(cpus=1, cpus_per_node=0, proc_pool=None)

This is a multiprocessing version of InteractiveHost. It can only be used when the test program is a Python function. See example 4 of TestProgram.

Unlike InteractiveHost, which launches a separate Python instance for each run, this class executes Python functions directly using the multiprocessing module and a process pool. This significantly reduces overhead.

Parameters:
  • cpus – Number of cpus to assign to a single job.
  • cpus_per_node – Total number of cpus to use. If cpus is 1, this is the number of concurrent jobs that will run on the machine.
  • proc_pool – An externally created multiprocessing.Pool to use as the process pool. If not specified, a new pool is created.
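A minimal sketch of InteractiveHostMP driving a Python function, assuming TestProgram accepts a func keyword for this purpose (see example 4 of TestProgram for the authoritative usage); the function and parameter names are illustrative:

    from puq import *

    def model(x):
        # Evaluated directly in a multiprocessing worker; no separate
        # Python instance is launched per run
        return x * x

    def run():
        x = UniformParameter('x', 'x', min=0, max=10)

        # With cpus=1, cpus_per_node=4 means four concurrent jobs
        host = InteractiveHostMP(cpus=1, cpus_per_node=4)

        # Assumed func keyword; see example 4 of TestProgram
        prog = TestProgram(func=model, desc='square function')

        uq = Smolyak([x], level=2)
        return Sweep(uq, host, prog)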
class puq.PBSHost(env, cpus=0, cpus_per_node=0, qname='standby', walltime='1:00:00', modules='', pack=1, qlimit=200)

Queues jobs using the PBS batch scheduler. Supports Torque and PBSPro.

Parameters:
  • env (str) – Bash environment script (.sh) to be sourced.
  • cpus (int) – Number of cpus each process uses. Required.
  • cpus_per_node (int) – How many cpus to use on each node. Required.
  • qname (str) – The name of the queue to use.
  • walltime (str) – How much time to allow for the process to complete. Format is HH:MM:SS. Default is 1 hour.
  • modules (list) – Additional required modules. Default is none.
  • pack (int) – Number of sequential jobs to run in each PBS script. Default is 1.
  • qlimit (int) – Max number of PBS jobs to submit at once. Default is 200.
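A sketch of a PBSHost configured for a cluster; the environment script, queue name, module list, and surrounding control-script objects (UniformParameter, Smolyak, TestProgram, Sweep) are site-specific or illustrative assumptions:

    from puq import *

    def run():
        x = UniformParameter('x', 'x', min=0, max=10)

        # Source env.sh for each job; give each job 8 of the 16 cpus
        # on a node, pack 4 sequential jobs into each PBS script, and
        # keep at most 100 jobs in the 'standby' queue, each with a
        # 2-hour walltime.
        host = PBSHost(env='/home/user/env.sh',
                       cpus=8,
                       cpus_per_node=16,
                       qname='standby',
                       walltime='2:00:00',
                       modules=['openmpi'],
                       pack=4,
                       qlimit=100)

        prog = TestProgram('./model', desc='cluster model')
        uq = Smolyak([x], level=2)
        return Sweep(uq, host, prog)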