Computing Resources
As a member of the uWaterloo Applied Math Fluids Group you have access to a fairly substantial array of computing resources. These are correct as of August 2024.
Digital Research Alliance of Canada
The Digital Research Alliance of Canada (formerly Compute Canada) is perhaps the largest supplier of computing services available to us, and it provides significant computing power. To begin, you will need to create a Compute Canada account (follow the instructions on the Compute Canada page). Account creation will require approval from your supervisor. The main components of Compute Canada that are relevant to us are SHARCNet, SciNet, and WestGrid.
SciNet
Niagara
Niagara is another large system run by Compute Canada. Our group does not have dedicated resources, but that need not stop you from running anything there. For information on the Niagara system and hardware, see documentation here and here.
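As a rough sketch of what running there looks like (the login hostname is taken from the SciNet documentation, and the job-script name is a placeholder; check the linked pages and your own allocation for specifics):

 # Log in to a Niagara login node
 ssh USERNAME@niagara.scinet.utoronto.ca

 # Submit a batch job script and check its place in the queue
 sbatch my_job.sh
 squeue -u USERNAME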
WestGrid
Cedar
We have essentially no experience with this system, but it's a thing. Documentation is here.
MFCF-administered Systems
The Math Faculty Computing Facility (MFCF) provides a central computing environment for the Faculty (excluding Computer Science). It also maintains several Linux/Unix servers that belong to the Fluids group. These servers are much smaller in scale than the Compute Canada clusters (typically tens of processors), but they are hosted locally and can be very useful tools for running smaller simulations or getting your code working properly before scaling up to Compute Canada. The MFCF staff are also very helpful regarding software installation and general computing problems / questions.
It is important to understand that these machines do not all run the same operating system. They also have different compilers, numerical libraries, MPI libraries, and so on. In order to build and run models such as SPINS, MITgcm, and IGW, your environment must be set up appropriately for each machine. The recommended shell control files (.cshrc, .profile) provided by MFCF do that for you automatically. In some cases you may wish to choose different options (e.g. different version of gcc, different MPI library), but the recommended shell control files align with the model configuration files published in this wiki.
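For example, before building one of the models on a given MFCF machine, it is worth confirming which compiler and MPI stack the recommended shell control files have put on your path (a minimal check; the exact versions and locations will differ from machine to machine):

 # Show which compiler and MPI wrappers the current environment provides
 which gcc mpicc mpirun

 # Confirm the versions picked up by the recommended shell control files
 gcc --version
 mpirun --version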
Fluids-owned machines
hood
More information can be found here.
bow, minnewanka, waterton (the "mountain lakes")
These (2017) systems have high-speed interconnects and are managed through the SLURM scheduler (see Graham Tips for some useful SLURM-related commands). More information can be found here.
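As a minimal sketch of a SLURM batch script for these machines (the job name, node and task counts, time limit, and executable path are all placeholders; see Graham Tips and the machine-specific page for values appropriate to your run):

 #!/bin/bash
 #SBATCH --job-name=test_run        # name shown by squeue (placeholder)
 #SBATCH --nodes=1                  # number of nodes (placeholder)
 #SBATCH --ntasks-per-node=16       # MPI ranks per node (placeholder)
 #SBATCH --time=02:00:00            # wall-clock limit, hh:mm:ss

 # Launch the MPI executable (path is a placeholder for your own build)
 mpirun ./my_model.x

Submit the script with sbatch and monitor it with squeue, just as on the Alliance clusters.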
kesagami, kuujjua
These (2018) systems are intended primarily to run the IGW model.
sutton, pelee, rondeau (the "provincial parks")
See details here.
Faculty-wide machines
The central shared computing environment comprises numerous Linux and Windows servers and a small number of GPU servers. Even if you do not use those machines, the central file server is a smart place to store a copy of your important work, such as your thesis in progress, for safety. The central file service "files.math" is accessible via the Mac mini provided to you by the department, as well as via ssh/scp to the shared Linux machines.
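For example, a periodic backup of a thesis directory can be copied over ssh to your math account (the host name below, linux.math.uwaterloo.ca, is an assumption for the shared Linux environment; confirm the correct host with MFCF):

 # One-off copy of a local directory to your math home directory
 scp -r ~/thesis USERNAME@linux.math.uwaterloo.ca:~/thesis-backup

 # Or keep the backup up to date incrementally
 rsync -av ~/thesis/ USERNAME@linux.math.uwaterloo.ca:~/thesis-backup/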
Lab Systems
Lab machines are maintained by lab members (currently Andrew Grace), not MFCF. Accounts should exist for Fluids Lab members (if you don't have one but would like one, ask Aaron / your supervisor). Some standard software is installed.
Note: these machines do not have a queue system, so please compute responsibly and do not swamp the machines.
Belize2 (formerly Boogaloo)
belize2 can be accessed either in person in the Fluids Lab or via ssh to yourUserID@boogaloo.math.uwaterloo.ca. See our wiki page for information on the available hardware. The machine has both CPU and GPU capabilities.
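For example, after logging in you can check what hardware is free before starting a run (nvidia-smi assumes the NVIDIA driver utilities are installed, which is an assumption here):

 # Log in over ssh using the address above
 ssh yourUserID@boogaloo.math.uwaterloo.ca

 # Check CPU count, current load, and GPU status
 nproc
 uptime
 nvidia-smi   # requires the NVIDIA driver utilities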
Onyx
Onyx (2018) is a Windows machine that is primarily intended for visualization. Both VisIt and ParaView are installed and should be GPU-aware. This machine should be used in person to perform high-powered visualization of your datasets. Onyx is not intended for heavy computation, but is capable of running CUDA models.
Belize3
belize3.math.uwaterloo.ca (2021) is a high-power Linux workstation co-managed by MFCF. It has a moderate GPU in addition to two server-class multi-core CPUs.