
Translating between Torque, Slurm, and HTCondor

Submit Options

| Description | Torque/Moab | Slurm | HTCondor |
| --- | --- | --- | --- |
| Script directive | #PBS | #SBATCH | NA |
| Queue/Partition | -q queue | -p partition | |
| Node count | -l nodes=count | -N min OR min-max | NA |
| Core count | -l ppn=count | -n count | request_cpus = count |
| Wall clock limit | -l walltime=hh:mm:ss | -t min OR -t days-hh:mm:ss | |
| Stdout | -o filename | -o filename | output = filename |
| Stderr | -e filename | -e filename | error = filename |
| Copy environment | -V | --export=ALL OR NONE OR variables | getenv = true |
| Email notification | -m abe | --mail-type=[ALL, END, FAIL, BEGIN, NONE] | notification = [Always, Complete, Error, Never] |
| Email address | -M user_list | --mail-user=user_list | notify_user = address |
| Job name | -N name | -J name OR --job-name=name | batch_name = name |
| Working directory | -d path | -D path | NA |
| Memory per node | -l mem=count[kb, mb, gb, tb] | --mem=count[K, M, G, T] | request_memory = count G |
| Memory per core | -l pmem=count[kb, mb, gb, tb] | --mem-per-cpu=count[K, M, G, T] | NA |
| Virtual memory per node | -l vmem=count[kb, mb, gb, tb] | NA | request_virtualmemory (seems to make jobs idle now) |
| Virtual memory per core | -l pvmem=count[kb, mb, gb, tb] | NA | NA |
| Memory per job | -L tasks=1:memory=count[kb, mb, gb, tb] | --mem=count[K, M, G, T] | request_memory = count G |
| Job arrays | -t arrayspec | --array=arrayspec | |
| Variable list | -v var=val[,var=val] | --export=var=val[,var=val] | environment = "var=val" |
| Script args | -F arg1_[,_arg2,...] | sbatch script arg1_[,_arg2,...] | |
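
To see how the submit options line up in practice, below is the same hypothetical job written three ways: one node, four cores, a one-hour wall clock limit, 8 GB of memory, named stdout/stderr files, and email notification. The job name (demo), queue/partition name (batch), file names, executable (myprog), and email address are placeholders rather than site defaults, so substitute values appropriate to your cluster.

Torque/Moab batch script (demo.pbs):

  #!/bin/bash
  #PBS -N demo
  #PBS -q batch
  #PBS -l nodes=1:ppn=4
  #PBS -l walltime=01:00:00
  #PBS -l mem=8gb
  #PBS -o demo.out
  #PBS -e demo.err
  #PBS -m abe
  #PBS -M user@example.edu
  #PBS -V
  ./myprog

Slurm batch script (demo.slurm):

  #!/bin/bash
  #SBATCH -J demo
  #SBATCH -p batch
  #SBATCH -N 1
  #SBATCH -n 4
  #SBATCH -t 01:00:00
  #SBATCH --mem=8G
  #SBATCH -o demo.out
  #SBATCH -e demo.err
  #SBATCH --mail-type=ALL
  #SBATCH --mail-user=user@example.edu
  #SBATCH --export=ALL
  ./myprog

HTCondor submit description (demo.sub); HTCondor keeps these settings in a separate submit file rather than as directives inside the script it runs:

  executable     = myprog
  batch_name     = demo
  request_cpus   = 4
  request_memory = 8 GB
  output         = demo.out
  error          = demo.err
  log            = demo.log
  getenv         = true
  notification   = Complete
  notify_user    = user@example.edu
  queue

Submit these with qsub demo.pbs, sbatch demo.slurm, and condor_submit demo.sub respectively. The log line in the HTCondor example has no Torque or Slurm counterpart; it is optional but customary.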

Commands

| Description | Torque/Moab | Slurm | HTCondor |
| --- | --- | --- | --- |
| Job alter | qalter | scontrol update | condor_qedit |
| Job array (10, 15, 20, 25, 30) | #PBS -t 10-30%5 | | queue seq 10 5 30 \| |
| Job connect to | | srun --jobid jobid --pty bash -l | condor_ssh_to_job jobid |
| Job delete | qdel jobid | scancel jobid | condor_rm jobid |
| Job info detailed | qstat -f jobid | scontrol show job jobid | condor_q -long jobid |
| Job info detailed | checkjob jobid | scontrol show job jobid | condor_q -analyze -verbose jobid |
| Job info detailed | checkjob jobid | scontrol show job jobid | condor_q -better-analyze -verbose jobid |
| Job info detailed | checkjob jobid | scontrol show job jobid | condor_q -better-analyze -reverse -verbose jobid |
| Job show all | qstat -1n | squeue | condor_q -g -all |
| Job show all verbose | qstat -1n | squeue -all | condor_q -g -all -nobatch |
| Job show all verbose | qstat -1n | squeue -all | condor_q -g -all -nobatch -run |
| Job show DAGs | | | condor_q -dag -nobatch |
| Job submit | qsub | sbatch | condor_submit |
| Job submit simple | echo "sleep 27" \| qsub | srun sleep 27 | condor_run "sleep 27" & |
| Job submit interactive | qsub -I | srun --pty bash -l | condor_submit -i |
| Node show free nodes | | nodesfree | condor_status -const 'PartitionableSlot && Cpus == TotalCpus' |
| Node show state | pbsnodes -l all | sinfo -Nl | condor_status -state |
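
As a quick end-to-end illustration of the command translations above, the same submit/inspect/delete cycle looks like this in each system. The job IDs (12345, 12345.0) are placeholders for whatever identifier the scheduler prints at submission, and the submit files are the sketches from the Submit Options section.

  # Torque/Moab
  qsub demo.pbs            # prints a job ID such as 12345.servername
  qstat -f 12345           # detailed job info
  qdel 12345               # delete the job

  # Slurm
  sbatch demo.slurm        # prints "Submitted batch job 12345"
  scontrol show job 12345  # detailed job info
  scancel 12345            # delete the job

  # HTCondor
  condor_submit demo.sub   # prints the cluster ID, e.g. 12345
  condor_q -long 12345.0   # detailed info for job 0 in cluster 12345
  condor_rm 12345.0        # delete the job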
