  1. Connecting
  2. Managing your Account
  3. Storage
  4. Data Transfer
  5. Managing Your Environment (Modules)
  6. Programming Environment
  7. Running Jobs
  8. Job Dependencies
  9. Job Arrays
  10. Running MATLAB Batch Jobs
  11. Running Mathematica Batch Jobs
  12. R Software
  13. HPC & Other Tutorials
  14. Investor Specific Information

1. Connecting

The Campus Cluster can be accessed via Secure Shell (SSH) to the head nodes using your official University NetID and password. Unix/Linux-based systems generally include an SSH client by default; Windows users, however, will need to install third-party software to access the Campus Cluster. Please see this non-exhaustive list of SSH clients that can be used to access the Campus Cluster.

Below is a list of hostnames that provide round-robin access to head nodes of the Campus Cluster instances as indicated:

Access Method   Hostname                              Head Node
SSH             cc-login.campuscluster.illinois.edu   namehN (ex. golubh1)

Network Details for Illinois Investors

The Campus Cluster is interconnected with the University of Illinois networks via the Campus Advanced Research Network Environment (CARNE) and is addressed out of fully-accessible public IP space, located outside of the Illinois campus firewall. This positioning of the Campus Cluster outside the campus firewall enables access to regional and national research networks at high speeds and without restrictions. This does mean, however, that for some special use cases where it is necessary for Campus Cluster nodes to initiate communication with hosts on the Illinois campus network (e.g., you are hosting a special license server behind the firewall), you will need to coordinate with your department IT pro to ensure that your hosts are in the appropriate Illinois campus firewall group. Outbound communication from Illinois to the Campus Cluster should work without issue, as well as any communications from the Campus Cluster outbound to regional and national research networks.

2. Managing your Account

When your account is first activated, the default shell is set to bash.

The tcsh shell is also available. To change your shell to tcsh, add the following line:

exec -l /bin/tcsh

to the end of the file named .bash_profile, located in your home ($HOME) directory. To begin using this new shell, you can either log out and then log back in, or execute exec -l /bin/tcsh on your command line.
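For example, the line can be appended from the command line (a sketch; the grep guard simply avoids adding a duplicate entry if the command is run more than once):

```shell
# Append the shell switch to ~/.bash_profile unless it is already there
# (a duplicate line would attempt to start tcsh twice at login).
grep -qxF 'exec -l /bin/tcsh' "$HOME/.bash_profile" 2>/dev/null || \
    echo 'exec -l /bin/tcsh' >> "$HOME/.bash_profile"
```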

The Campus Cluster uses the module system to set up the user environment. See the section Managing Your Environment (Modules) for details.

You can reset your NetID password at the Password Management page.

3. Storage

Home Directory

Your home directory is the default directory you are placed in when you log on. You should use this space for storing files you want to keep long term, such as source code, scripts, etc. Every user has a 2 GB home directory quota.

The soft limit is 2 GB and the hard limit is 4 GB. Under the quota rules, if the amount of data in your home directory is over the soft limit of 2 GB but under the hard limit of 4 GB, there is a grace period of 7 days to get under the soft limit. When the grace period expires, you will not be able to write new files or update any current files until you reduce the amount of data to below 2 GB.

The command to see your disk usage and limits is quota.


    [jdoe@cc-login1 ~]$ quota
    Directories quota usage for user jdoe:

    |      Fileset       |  User   |  User   |  User   |  Project |  Project |   User   |   User   |   User   |
    |                    |  Block  |  Soft   |  Hard   |  Block   |  Block   |   File   |   Soft   |   Hard   |
    |                    |  Used   |  Quota  |  Limit  |  Used    |  Limit   |   Used   |   Quota  |   Limit  |
    | home               | 501.1M  | 2G      | 4G      |          |          | 14       | 0        | 0        |
    | scratch            | 1.249G  | 10T     | 10T     |          |          | 206088   | 0        | 0        |
    | ncsa               | 1.425G  | 64T     | 64T     | 25.88T   | 64T      | 10       | 20000000 | 20000000 |

Please note that the quota command reads user disk usage information from a file that is updated multiple times a day, so the output may not reflect current disk usage immediately after you delete or move files from one of the reported spaces.

Additionally, to view disk usage in real time, the du command can be used:

      du -sh $HOME

Depending on how many directories and files you have, it can take a couple of minutes for the du command to return output.

For a sorted list (smallest to largest) of directories and files, including hidden ones (names preceded by a '.'), you can run the following du command:

      du -sh $HOME/* $HOME/.[a-z]* $HOME/.[A-Z]* $HOME/.[0-9]* | sort -h

Home directories are backed up using snapshots.

Scratch Directory

The scratch filesystem is shared storage space available to all users. It is intended for short term use and should be considered volatile. No backups of any kind are performed for this storage. There is a soft link named scratch in your home directory that points to your scratch directory.

Scratch Purge Policy:
All files located in scratch (/scratch/users) that are older than 30 days will be purged (deleted).

Project Space

For investors that have project space (/projects/investor_group_name), usage and quota information is available with the command:

[golubh1 ~]$ projectquota <project_directory_name or file_set_name>

Please consult with your investor technical representative regarding availability and access.


Nightly snapshots of the home and project filesystems are available for the last 30 days in the following locations:

  • Home Directory: /gpfs/iccp/home/.snapshots/home_YYYYMMDD*/$USER
  • Investor Project Directory: /gpfs/iccp/projects/<project_directory_name>/.snapshots/<project_directory_name>_YYYYMMDD*

Note: Since snapshots are created nightly, there is a window of time between snapshots when recent file changes are NOT recoverable if accidentally deleted, overwritten, etc.

No off-site backups for disaster recovery are provided for any storage. Please make sure to do your own backups of any important data on the Campus Cluster to permanent storage as often as necessary.

Data Compression

To reduce space usage in your home directory, an option for files that are not in active use is to compress them. The gzip utility can be used for file compression and decompression. Another alternative is bzip2, which usually yields a better compression ratio than gzip but takes longer to complete. Additionally, files that are typically used together can first be combined into a single file and then compressed using the tar utility.


    • Compress a file largefile.dat using gzip:
      gzip largefile.dat
      The original file is replaced by a compressed file named largefile.dat.gz
    • To uncompress the file:
      gunzip largefile.dat.gz (or: gzip -d largefile.dat.gz)
    • To combine the contents of a subdirectory named largedir and compress it:
      tar -zcvf largedir.tgz largedir
      [convention is to use extension .tgz in the file name]
      Note: If the files to be combined are in your home directory and you are close to the quota, you can create the tar file in the scratch directory (since the tar command may fail prior to completion if you go over quota):
      tar -zcvf ~/scratch/largedir.tgz largedir
    • To extract the contents of the compressed tar file:
      tar -xvf largedir.tgz

See the manual pages (man gzip, man bzip2, man tar) for more details on these utilities.


    • ASCII text and binary files like executables can yield good compression ratios. Image file formats (gif, jpg, png, etc.) are already natively compressed, so further compression will not yield much gain.
    • Depending on the size of the files, the compression utilities can be compute intensive and take a while to complete. Use the compute nodes via a batch job for compressing large files.
    • With gzip, the file is replaced by one with the extension .gz. When using tar the individual files remain—these can be deleted to conserve space once the compressed tar file is created successfully.
    • Use of tar and compression could also make data transfers between the Campus Cluster and other resources more efficient.
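As noted above, bzip2 usually achieves a better compression ratio than gzip at the cost of longer runtimes; it follows the same replace-the-file pattern (a sketch, with largefile.dat as a placeholder file name):

```shell
# Compress: largefile.dat is replaced by largefile.dat.bz2
bzip2 largefile.dat
# Uncompress: restores largefile.dat (or: bzip2 -d largefile.dat.bz2)
bunzip2 largefile.dat.bz2
```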

4. Data Transfer

The Illinois Campus Cluster supports a wide variety of data transfer and movement services and is actively expanding its offerings in this area. Information on various data transfer methods is outlined in the sections below.

Globus GridFTP

The Illinois Campus Cluster Program recommends using Globus for Campus Cluster large data transfers. Globus manages the data transfer operation for the user: monitoring performance, retrying failures, auto-tuning and recovering from faults automatically where possible, and reporting status. Email is sent when the transfer is complete.

Globus implements data transfer between machines through a web interface using the GridFTP protocol. There is a predefined GridFTP endpoint (illinois#iccp) for the Illinois Campus Cluster Program to allow data movement between the Campus Cluster and other resources registered with Globus.

To transfer data between the Campus Cluster and a non-registered resource, Globus Online provides a software package called Globus Connect that allows for the creation of a personal GridFTP endpoint on virtually any local (non Campus Cluster) resource.

CLI Transfer Services

For initiating data transfers from the Campus Cluster, sftp, scp, rsync, curl, wget, rclone, or bbcp can all be used. All CLI-based transfers should target or source the cluster's dedicated CLI transfer host pool, cc-xfer.campuscluster.illinois.edu, and NOT the login nodes.
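For instance, scp and rsync transfers through the dedicated transfer pool look like the following (the NetID, file names, and paths are illustrative placeholders):

```shell
# Push a single file to your scratch directory via the transfer pool.
scp ./input.dat netid@cc-xfer.campuscluster.illinois.edu:~/scratch/

# Mirror a local directory to scratch; rsync only sends changed files,
# so interrupted transfers can be resumed by re-running the command.
rsync -av ./dataset/ \
    netid@cc-xfer.campuscluster.illinois.edu:~/scratch/dataset/
```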

For SSH-based transfer tools (scp, sftp, rsync), a variety of SSH clients are available for initiating transfers from your local system. There are two types of SSH clients: those that support both remote login and data transfer, and those that support data transfer only.

SSH Client (Remote Login / Data Transfer / Installs On)

  • MobaXterm (Yes / Yes / Windows): an enhanced terminal with an X server and a set of Unix commands (GNU/Cygwin) packaged in the application.
  • Terminal (Yes / Yes / Mac OS): the built-in SSH client on Mac OS based machines.
  • SSH Secure Shell (Yes / Yes / Windows): allows you to securely log in to remote host computers, execute commands safely on a remote computer, and provides secure, encrypted, and authenticated communication between two hosts in an untrusted network.
  • PuTTY (Yes / Yes* / Windows, Mac OS): an open source terminal emulator application which can act as a client for the SSH, Telnet, rlogin, and raw TCP computing protocols and as a serial console client.
  • FileZilla (No / Yes / Windows, Mac OS): a fast and reliable cross-platform FTP, FTPS, and SFTP client with lots of useful features and an intuitive graphical user interface. Note: Please visit the Show additional download options link for FileZilla software free of additional sponsor-bundled software/packages.
  • WinSCP (No / Yes / Windows): an open source, free SFTP, SCP, FTPS, and FTP client for Windows. Its main function is file transfer between a local and a remote computer; beyond this, WinSCP offers scripting and basic file manager functionality.

* PuTTY’s scp and sftp data transfer functionality is implemented via Command Line Interface (CLI) by default.

5. Managing Your Environment (Modules)

The module command is a user interface to the Modules package. The Modules package provides for the dynamic modification of the user’s environment via modulefiles (a modulefile contains the information needed to configure the shell for an application). Modules are independent of the user’s shell, so both tcsh and bash users can use the same commands to change the environment.

Useful Module commands:

Command                               Description
module avail                          List all available modules
module list                           List currently loaded modules
module help modulefile                Get help on module modulefile
module display modulefile             Display information about modulefile
module load modulefile                Load modulefile into the current shell environment
module unload modulefile              Remove modulefile from the current shell environment
module swap modulefile1 modulefile2   Unload modulefile1 and load modulefile2

To include particular software in the environment for all new shells, edit your shell configuration file ($HOME/.bashrc for bash users and $HOME/.cshrc for tcsh users) by adding the module commands to load the software that you want to be a part of your environment. After saving your changes, you can source your shell configuration file or log out and then log back in for the changes to take effect.
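For example, a bash user's $HOME/.bashrc might end with lines like the following (the module names are illustrative; check the output of module avail for what is actually installed):

```shell
# Illustrative module loads for a bash user's ~/.bashrc.
module load intel/18.0
module load mvapich2/2.3-intel-18.0
```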

Note: Order is important. With each module load, the changes are prepended to your current environment paths.

For additional information on Modules, see the module and modulefile man pages or visit the Modules SourceForge page.

6. Programming Environment

The Intel compilers are available on the Campus Cluster.
module load intel/18.0
[Older versions of the Intel compiler are also available. See the output from the command module avail intel for the specific modules.]

The GNU compilers (GCC) version 4.4.7 are in the default user environment. Version 7.2.0 is also available; load this version with the command:
module load gcc/7.2.0

Compiler Commands


To build (compile and link) a serial program in Fortran, C, or C++, enter:

            GCC                 Intel Compiler
Fortran:    gfortran myprog.f   ifort myprog.f
C:          gcc myprog.c        icc myprog.c
C++:        g++ myprog.cc       icpc myprog.cc


To build (compile and link) an MPI program in Fortran, C, or C++, first load the modulefile for the desired MPI implementation and compiler combination (MVAPICH2, Open MPI, or Intel MPI; see the output of module avail for the specific modules, and the implementations' home pages/documentation for details), then use the build commands below:

             GCC                   Intel Compiler
Fortran 77:  mpif77 myprog.f       mpiifort myprog.f
Fortran 90:  mpif90 myprog.f90     mpiifort myprog.f90
C:           mpicc myprog.c        mpiicc myprog.c
C++:         mpicxx myprog.cc      mpiicpc myprog.cc

For example, use the following command to load MVAPICH2 v2.3 built with the Intel 18.0 compiler:

module load mvapich2/2.3-intel-18.0


To build an OpenMP program, use the -fopenmp / -qopenmp option:

            GCC                            Intel Compiler
Fortran:    gfortran -fopenmp myprog.f     ifort -qopenmp myprog.f
Fortran 90: gfortran -fopenmp myprog.f90   ifort -qopenmp myprog.f90
C:          gcc -fopenmp myprog.c          icc -qopenmp myprog.c
C++:        g++ -fopenmp myprog.cc         icpc -qopenmp myprog.cc

Hybrid MPI/OpenMP

To build an MPI/OpenMP hybrid program, use the -fopenmp / -qopenmp option with the MPI compiling commands:

            GCC                             Intel Compiler
Fortran 77: mpif77 -fopenmp myprog.f        mpiifort -qopenmp myprog.f
Fortran 90: mpif90 -fopenmp myprog.f90      mpiifort -qopenmp myprog.f90
C:          mpicc -fopenmp myprog.c         mpiicc -qopenmp myprog.c
C++:        mpicxx -fopenmp myprog.cc       mpiicpc -qopenmp myprog.cc


NVIDIA GPUs are available as a purchase option of the Campus Cluster. CUDA is a parallel computing platform and programming model from NVIDIA for use on their GPUs. These GPUs support CUDA compute capability 2.0.

Load the CUDA Toolkit into your environment using the following module command:

module load cuda


The Intel Math Kernel Library (MKL) contains the complete set of functions from the basic linear algebra subprograms (BLAS), the extended BLAS (sparse), and the complete set of LAPACK routines. In addition, there is a set of fast Fourier transforms (FFT) in single- and double-precision, real and complex data types with both Fortran and C interfaces. The library also includes the cblas interfaces, which allow the C programmer to access all the functionality of the BLAS without considering C-Fortran issues. ScaLAPACK, BLACS and the PARDISO solver are also provided by Intel MKL. MKL provides FFTW interfaces to enable applications using FFTW to gain performance with Intel MKL and without changing the program source code. Both FFTW2 and FFTW3 interfaces are provided as source code wrappers to Intel MKL functions.

Load the Intel compiler module to access MKL.

Use the following -mkl flag options when linking with MKL using the Intel compilers:

Sequential libraries: -mkl=sequential
Threaded libraries: -mkl=parallel

To use MKL with GCC, consult the Intel MKL link advisor for the link flags to include.

OpenBLAS, an optimized BLAS library based on GotoBLAS2, is also available. Load the module for version 0.3.12 (built with GCC 7.2.0) with the following command:

module load openblas/0.3.12_sandybridge

Link with the OpenBLAS library using

-L /usr/local/src/openblas/0.3.12/gcc/Sandy.Bridge/lib -lopenblas

7. Running Jobs

User access to the compute nodes for running jobs is available via batch jobs. The Campus Cluster uses the Slurm Workload Manager to run batch jobs. See the Batch Commands section for details on batch job submission with sbatch.

Please be aware that the interactive nodes are a shared resource for all users of the system and their use should be limited to editing, compiling and building your programs, and for short non-intensive runs.

Note: User processes running on the interactive nodes are killed automatically if they accrue more than 30 minutes of CPU time or if more than 4 identical processes owned by the same user are running concurrently.

An interactive batch job provides a way to get interactive access to a compute node via a batch job. See the srun or salloc section for information on how to run an interactive job on the compute nodes. Also, a test queue with a very short time limit provides quick turnaround for debugging purposes.

To ensure the health of the batch system and scheduler, users should refrain from having more than 1,000 batch jobs in the queues at any one time.

See the document “Running Serial Jobs Efficiently on the Campus Cluster” regarding information on expediting job turnaround time for serial jobs.

See the Running MATLAB / Mathematica Batch Jobs sections for information on running MATLAB and Mathematica on the campus cluster.

Running Programs

On successful building (compilation and linking) of your program, an executable is created that is used to run the program. The table below describes how to run different types of programs.

Program Type: Serial
    To run serial code, specify the name of the executable:

    ./a.out

Program Type: MPI
    MPI programs are run with the srun command followed by the name of the executable.
    Note: The total number of MPI processes is the {number of nodes} x {cores/node} set in the batch job resource specification.

    srun ./a.out

Program Type: OpenMP
    The OMP_NUM_THREADS environment variable can be set to specify the number of threads used by OpenMP programs. If this variable is not set, the number of threads defaults to one under the Intel compiler; under GCC, the default is one thread for each core available on the node. To run an OpenMP program, set the thread count and then specify the name of the executable:

    In bash: export OMP_NUM_THREADS=16
    In tcsh: setenv OMP_NUM_THREADS 16
    ./a.out

Program Type: MPI/OpenMP
    As with OpenMP programs, the OMP_NUM_THREADS environment variable can be set to specify the number of threads used by the OpenMP portion of a mixed MPI/OpenMP program; the same default behavior applies. Use the srun command followed by the name of the executable.
    Note: The number of MPI processes per node is set in the batch job resource specification for number of cores/node.

    In bash: export OMP_NUM_THREADS=4
    In tcsh: setenv OMP_NUM_THREADS 4
    srun ./a.out

Primary Queues

Each investor group has unrestricted access to a dedicated primary queue with concurrent access to the number and type of nodes in which they invested.

Secondary Queues

One of the advantages of the Campus Cluster Program is the ability to share resources. A shared secondary queue will allow users access to any idle nodes in the cluster. Users must have access to a primary queue to be eligible to use the secondary queue.

While each investor has full access to the number and type of nodes in which they invested, those resources not fully utilized by each investor will become eligible to run secondary queue jobs. If there are resources eligible to run secondary queue jobs but there are no jobs to be run from the secondary queue, jobs in the primary queues that fit within the constraints of the secondary queue may be run on any otherwise appropriate idle nodes. The secondary queue uses fairshare scheduling.

The current limits in the secondary queues are below:

Queue Max Walltime Max # Nodes
secondary 4 hours 304
secondary-Eth 4 hours 18


    • Jobs are routed to the secondary queue when no queue is specified; i.e., the secondary queue is the default queue on the Campus Cluster.

    • The difference between the secondary and secondary-Eth queues is the interconnect: compute nodes associated with the secondary queue are
      interconnected via InfiniBand (IB), while compute nodes associated with the secondary-Eth queue are interconnected via Ethernet. Ethernet is
      currently slower than InfiniBand, but this only matters for performance if your batch jobs use multiple nodes and need to communicate
      between them (as with MPI codes).

Test Queue

A test queue is available to provide quick turnaround time for very short jobs.

The current limits in the test queue are:

Queue Max Walltime Max # Nodes
test 4 hours 2

Batch Commands

Below are brief descriptions of the primary batch commands. For more detailed information, refer to the individual man pages.


Note: On Wednesday, September 23, 2020, the Campus Cluster completed its transition from the MOAB/Torque (PBS) batch system to the SLURM batch system.

Batch jobs are submitted through a job script using the sbatch command. Job scripts generally start with a series of SLURM directives that describe the job's requirements, such as the number of nodes and the wall time required, to the batch system/scheduler (SLURM directives can also be specified as options on the sbatch command line; command line options take precedence over those in the script). The rest of the batch script consists of user commands.

Sample batch scripts are available in the directory /projects/consult/slurm.

The syntax for sbatch is:

sbatch [list of sbatch options] script_name

The main sbatch options are listed below. See the sbatch man page for all options.

  • The common resource options are:

    ‑‑time=maximum wall clock time (d-hh:mm:ss) [default: the maximum limit of the queue (partition) submitted to]

    ‑‑nodes=n  Number of 16/20/24/28/40-core nodes [default: 1 node]

    ‑‑ntasks=p  Total number of cores for the batch job

    ‑‑ntasks-per-node=p  Number of cores per node (same as ppn under PBS)

    where n is the number of nodes and p is how many cores (ntasks) per job or per node (ntasks-per-node) to use (1 through 40) [default: 1 core]




    Memory needs: For investors that have nodes with varying amounts of memory or to run in the secondary queue, nodes with a specific amount of memory can be targeted. The compute nodes have memory configurations of 64GB, 128GB, 192GB, 256GB or 384GB. Not all memory configurations are available in all investor queues. Please check with the technical representative of your investor group to determine what memory configurations are available for the nodes in your primary queue.




    Note: Do not use the memory specification unless absolutely required since it could delay scheduling of the
    job; also, if nodes with the specified memory are unavailable for the specified queue the job will never run.

    Specifying nodes with GPUs: To run jobs on nodes with GPUs, add the resource specification TeslaM2090 (for Tesla M2090), TeslaK40M (for Tesla K40M), K80 (for Tesla K80), P100 (for Tesla P100) or V100 (for Tesla V100) if your primary queue has nodes with multiple types of GPUs, nodes with and without GPUs or if you are submitting jobs to the secondary queue. Through the secondary queue any user can access the nodes that are configured with any of the specific GPUs. Please check with the technical representative of your investor group to determine if GPUs are available on the nodes in your primary queue.


    Note: For investors with all GPU nodes wishing to run in their primary queue, only the queue name specification (via the sbatch -p option below) is required.
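Putting the common options above together, a minimal job script might look like the following sketch (the queue, resources, and executable name are illustrative; adjust them for your own primary queue):

```shell
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=secondary
#SBATCH --time=00:10:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --output=example.o%j

# Jobs start in the directory the job was submitted from by default;
# the cd is shown here only for clarity.
cd "$SLURM_SUBMIT_DIR"
srun ./a.out
```

Saved as example.sbatch, it would be submitted with sbatch example.sbatch.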

Useful Batch Job Environment Variables

Description                SLURM Environment Variable   Detail Description                                                     PBS Environment Variable (no longer valid)
JobID                      $SLURM_JOB_ID                Job identifier assigned to the job                                     $PBS_JOBID
Job Submission Directory   $SLURM_SUBMIT_DIR            By default, jobs start in the directory the job was submitted          $PBS_O_WORKDIR
                                                        from, so the cd $SLURM_SUBMIT_DIR command is not needed.
Machine (node) list        $SLURM_NODELIST              Variable that contains the list of nodes assigned to the batch job     $PBS_NODEFILE
Array JobID                $SLURM_ARRAY_TASK_ID         Each member of a job array is assigned a unique identifier             $PBS_ARRAYID
                                                        (see the Job Arrays section)

See the sbatch man page for additional environment variables available.


The srun command initiates an interactive job on the compute nodes.

For example, the following command:

[golubh1 ~]$ srun --partition=ncsa --time=00:30:00 --nodes=1 --ntasks-per-node=16 --pty /bin/bash

will run an interactive job in the ncsa queue with a wall clock limit of 30 minutes, using one node and 16 cores per node. You can also use other sbatch options such as those documented above.

After you enter the command, you will have to wait for SLURM to start the job. As with any job, your interactive job will wait in the queue until the specified number of nodes is available. If you specify a small number of nodes for smaller amounts of time, the wait should be shorter because your job will backfill among larger jobs. You will see something like this:

srun: job 123456 queued and waiting for resources

Once the job starts, you will see:

srun: job 123456 has been allocated resources

and will be presented with an interactive shell prompt on the launch node. At this point, you can use the appropriate command to start your program.

When you are done with your runs, you can use the exit command to end the job.


Commands that display the status of batch jobs.

    SLURM Example Command Command Description Torque/PBS
    Example Command
    squeue -a List the status of all jobs on the system. qstat -a
    squeue -u $USER List the status of all your jobs in the batch system. qstat -u $USER
    squeue -j JobID List nodes allocated to a running job in addition to basic information. qstat -n JobID
    scontrol show job JobID List detailed information on a particular job. qstat -f JobID
    sinfo -a List summary information on all the queues. qstat -q

See the man page for other options available.


The scancel command (which replaces the PBS qdel command) deletes a queued job or kills a running job.

  • scancel JobID deletes/kills a job.

8. Job Dependencies

SLURM job dependencies allow users to set the order in which their queued jobs run. Job dependencies are set by using the ‑‑dependency option with the syntax being ‑‑dependency=<dependency type>:<JobID>. SLURM holds the dependent job until it becomes eligible to run.

The following are examples of how to specify job dependencies using the afterany dependency type, which indicates to SLURM that the dependent job should become eligible to start only after the specified job has completed.

On the command line:

[golubh1 ~]$ sbatch --dependency=afterany:<JobID> jobscript.sbatch

In a job script:

#SBATCH --time=00:30:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --job-name="myjob"
#SBATCH --partition=secondary
#SBATCH --output=myjob.o%j
#SBATCH --dependency=afterany:<JobID>

In a shell script that submits batch jobs:

JOB_01=`sbatch jobscript1.sbatch |cut -f 4 -d " "`
JOB_02=`sbatch --dependency=afterany:$JOB_01 jobscript2.sbatch |cut -f 4 -d " "`
JOB_03=`sbatch --dependency=afterany:$JOB_02 jobscript3.sbatch |cut -f 4 -d " "`

Note: Generally the recommended dependency types to use are after, afterany, afternotok and afterok. While there are additional dependency types, those types that work based on batch job error codes may not behave as expected because of the difference between a batch job error and application errors. See the dependency section of the sbatch manual page for additional information (man sbatch).
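The cut pipeline above parses the human-readable "Submitted batch job <JobID>" line; sbatch also has a --parsable option that prints just the job ID, which can make such scripts less fragile. A sketch, with the script names as placeholders:

```shell
# Submit a chain of dependent jobs; --parsable makes sbatch print only
# the job ID (plus the cluster name, on federated clusters).
JOB_01=$(sbatch --parsable jobscript1.sbatch)
JOB_02=$(sbatch --parsable --dependency=afterany:$JOB_01 jobscript2.sbatch)
JOB_03=$(sbatch --parsable --dependency=afterany:$JOB_02 jobscript3.sbatch)
```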

9. Job Arrays

If a need arises to submit the same job to the batch system multiple times, instead of issuing one sbatch command for each individual job, users can submit a job array. Job arrays allow users to submit multiple jobs with a single job script using the ‑‑array option to sbatch. An optional slot limit can be specified to limit the number of jobs that can run concurrently in the job array. See the sbatch manual page for details (man sbatch). The file names for the input, output, etc. can be varied for each job using the job array index value defined by the SLURM environment variable SLURM_ARRAY_TASK_ID.

A sample batch script that makes use of job arrays is available in /projects/consult/slurm/jobarray.sbatch.
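For instance, a job array script might use SLURM_ARRAY_TASK_ID to pick a distinct input file per array member (a sketch; the queue, file names, and executable are placeholders):

```shell
#!/bin/bash
#SBATCH --job-name=array-example
#SBATCH --partition=secondary
#SBATCH --time=00:30:00
#SBATCH --ntasks=1
#SBATCH --array=1-10
#SBATCH --output=array-example.o%A_%a

# Each array member reads/writes files named after its own index.
./a.out < input.${SLURM_ARRAY_TASK_ID} > output.${SLURM_ARRAY_TASK_ID}
```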


  • Valid specifications for job arrays are
    ‑‑array 1-10
    ‑‑array 1,2,6-10
    ‑‑array 8
    ‑‑array 1-100%5
    (a limit of 5 jobs can run concurrently) 

  • You should limit the number of batch jobs in the queues at any one time to 1,000 or less. (Each job within a job array is counted as one batch job.)
  • Interactive batch jobs are not supported with job array submissions.
  • For job arrays, use of any environment variables relating to the JobID (e.g., $SLURM_JOB_ID) must be enclosed in double quotes.
  • To delete job arrays, see the scancel command section.

10. Running MATLAB Batch Jobs

See the Using MATLAB on the Campus Cluster page for information on running MATLAB batch jobs.

11. Running Mathematica Batch Jobs

Standard batch job

A sample batch script that runs a Mathematica script is available in /projects/consult/slurm/mathematica.sbatch. You can copy and modify this script for your own use. Submit the job with:

[golubh1 ~]$ sbatch mathematica.sbatch

In an interactive batch job

  • For the GUI (which will display on your local machine), use the --x11 option with the srun command:
    srun --x11 --export=All --time=00:30:00 --nodes=1 --ntasks-per-node=16 --partition=secondary --pty /bin/bash

    Once the batch job starts, you will have an interactive shell prompt on a compute node. Then type:

    module load mathematica

    Note: An X-Server must be running on your local machine with X11 forwarding enabled within your ssh connection in order to display X-Apps, GUIs, etc. back on your local machine. Generally, users on Linux-based machines only have to enable X11 forwarding by using the -X option with the ssh command, while users on Windows machines will need to ensure that their ssh client has X11 forwarding enabled and an X-Server is running. A list of ssh clients (including a combo-packaged ssh client and X-Server) can be found in the ssh section. Additional information about running X applications can be found on the Using the X Window System page.


    For the command line interface:

    srun --export=All --time=00:30:00 --nodes=1 --ntasks-per-node=16 --partition=secondary --pty /bin/bash

    Once the batch job starts, you will have an interactive shell prompt on a compute node. Then type:

    module load mathematica

12. R Software

See the R on the Campus Cluster page for versions available and information on installing add-on packages.

13. HPC & Other Tutorials

The NSF-funded XSEDE program offers online training on various HPC topics—see XSEDE Online Training for links to the available courses.

Introduction to Linux offered by the LINUX Foundation.

14. Investor Specific Information

See here for the technical representative of each investor group and links to investor web sites (if available).