Overview

The GIGA cluster is a high-performance computing (HPC) system provided by the GIGA to allow its members to perform computationally intensive tasks.

It is composed of:

  • computing nodes to perform analyses, hosted at the CHU (B34),

  • a scratch disk to store temporary results,

  • master nodes to connect the different parts and manage the jobs running on the computing nodes, located at the SEGI (B26).

The GIGA cluster is directly connected to the mass storage to allow fast and easy migration of data from one to the other.

Note

The GIGA cluster is optimised for tasks that require a lot of memory. If your analysis is mainly CPU intensive, and hence requires a large number of cores, or if you need GPU accelerated nodes, you should consider using the CÉCI clusters.

Specifications

Hint

Depending on your affiliation, you may have access to more partitions and/or nodes. For details about their respective specifications, refer to the Slurm documentation.

Here are the specifications of the default nodes and partitions available on the GIGA cluster.

Nodes

The GIGA cluster features several types of computing nodes. Here are the specifications for each of them.

Node     Node ID       CPU type   CPU count[1]   RAM      Storage type   Storage[2]
-------  ------------  ---------  -------------  -------  -------------  ----------
chugen   001-002       intel      20             128 GB   SSD            1.8 TB
chugen   003-004       intel      24             128 GB   SSD            1.8 TB
ptfgen   001-004       intel      16             260 GB   SSD            1.5 TB
ptfgen   006,009,012   amd        32             128 GB   SSD            1.8 TB
ptfgen   007           amd        32             128 GB   HDD            450 GB
ptfgen   013-014       intel      24             256 GB   SSD            1.8 TB
ptfgen   015-016       intel      24             380 GB   SSD            1.8 TB
urtgen   001-004       intel      24             560 GB   SSD            1.8 TB
urtgen   005-008       intel      40             192 GB   SSD            1.8 TB
urtgen   009-012       intel      36             128 GB   SSD            1.8 TB

Partitions

Warning

The default queue, defq*, contains no nodes: a job submitted without an explicit partition will remain pending forever. Always specify a partition when submitting a job.

Several partitions are available, each serving a different purpose.

Partition   Time limit   Node count   Node list
----------  -----------  -----------  ------------------------------------------
all_24hrs   1 day        6            ptfgen[002-003,015-016], urtgen[005,008]
all_5days   5 days       8            ptfgen[006-007,009,012-016]
all_5hrs    5 hours      24           chugen[001-004], ptfgen[002-003,006-007,009,012-016], urtgen[003-012]
kosmos      infinite     4            ptfgen[006-007,009,012]
nextflow    infinite     1            chugen001
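Since a partition must always be specified, a minimal batch script might look like the sketch below. The job name, resource values, and workload are illustrative assumptions, not recommendations; only the `--partition` requirement and the partition names come from this page.

```shell
#!/bin/bash
#SBATCH --job-name=demo            # illustrative job name
#SBATCH --partition=all_5hrs       # required: the default queue defq* has no nodes
#SBATCH --time=04:00:00            # must fit within the partition's 5-hour limit
#SBATCH --cpus-per-task=4          # illustrative resource request
#SBATCH --mem=16G                  # illustrative memory request

# The actual workload goes here. The #SBATCH lines above are plain comments
# to the shell, so Slurm reads them while bash ignores them.
echo "Job running on $(hostname)"
```

The script would be submitted with `sbatch script.sh`; for an interactive session, the same `--partition` option applies to `srun`.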

Usage

Other resources