QRIScompute
This service allows users to access Nectar cloud compute infrastructure (also known as the Australian Research Cloud) to facilitate computational research, data processing, modelling, analysis and collaboration. The research cloud operates as 'infrastructure as a service' (IaaS) and is suitable for users with skills in Linux system administration.
QRIScompute facilities
The QRIScompute service provides users with access to:
- Virtual machines
Users of the Nectar research cloud can create virtual machines (instances) with up to 16 virtual CPUs. Once you have been granted an allocation, you can create instances in any of the data centres in the Nectar federation.
- Nectar storage
Nectar instances can access Nectar Volume Storage and Object Storage, typically in the order of gigabytes up to a terabyte. You can request this storage when requesting your Nectar research cloud allocation. Volume storage must be requested on the node on which your instances will run (see the sketch following this list for attaching a volume to an instance).
- Nectar Operating Systems
Available operating systems include mainstream Linux distributions, such as CentOS, Ubuntu, Debian, Fedora, and Scientific Linux. Unfortunately, operating systems that require licenses, such as Windows Enterprise, are not available through QRIScloud.
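Volume storage is normally created and attached through the Nectar dashboard, but the same operations are available programmatically. The following is a minimal sketch using the openstacksdk Python library; the cloud profile name, volume size, availability zone, and instance name are illustrative assumptions rather than QRIScloud-specific values.

```python
import openstack

# Connect using a clouds.yaml entry or OS_* environment variables.
conn = openstack.connect(cloud="nectar")  # "nectar" is an assumed profile name

# Create a volume in the availability zone (node) where the instance runs.
volume = conn.create_volume(
    size=100,                        # size in GB, illustrative
    name="project-data",
    availability_zone="QRIScloud",   # assumed zone name; use your allocation's node
)

# Attach the volume to an existing instance.
server = conn.get_server("my-analysis-vm")  # assumed instance name
conn.attach_volume(server, volume)
```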
Get a Virtual Machine
To create a virtual machine, apply through the Nectar dashboard, which is also where you manage and access your cloud compute resources once they are allocated. You can apply for up to 16 virtual CPUs (vCPUs) and 64 GB of memory for short- or longer-term research use. The Nectar dashboard provides many other management functions and is secured using your home university's credentials via the Australian Access Federation (AAF). Click here to get started. If you need assistance with your application, please consult your university's eResearch Analyst.
If you do not have AAF credentials through an institution, you can ask us to create an account for you to use with QRIScloud. The first time you visit the Nectar dashboard, a project trial (PT) is created automatically so that you can try out the facilities, whether or not you want to make use of this option. (Use of your PT is not a prerequisite for requesting a longer-term allocation.) The PT lasts three months and provides resources for creating instances with a total of 2 vCPUs. When you are ready, apply for a Nectar allocation, which lets you use more Nectar resources for a longer time. As project manager, you will be able to grant access to colleagues and research students who are also AAF users.
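Once an allocation (or PT) is active, instances can be launched from the dashboard; the same can be done from Python with the openstacksdk library, as in the sketch below. The image, flavor, network, and key pair names are assumptions for illustration rather than Nectar-specific values.

```python
import openstack

# Connect using a clouds.yaml entry or OS_* environment variables.
conn = openstack.connect(cloud="nectar")  # assumed profile name

# Look up an image, flavor and network by name (names here are illustrative).
image = conn.compute.find_image("NeCTAR Ubuntu 22.04 LTS (Jammy) amd64")
flavor = conn.compute.find_flavor("m3.small")
network = conn.network.find_network("qld")

# Launch the instance and wait until it is ACTIVE.
server = conn.compute.create_server(
    name="my-analysis-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
    key_name="my-keypair",           # an SSH key pair registered in the dashboard
)
server = conn.compute.wait_for_server(server)
print(server.status, server.access_ipv4)
```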
Institutional HPC
ZODIAC – HPC Cluster at James Cook University (JCU)
The Zodiac HPC cluster contains 1000 AMD Opteron processors, with three compute node configurations available:
- Standard compute nodes have 24 x 2.3GHz CPU cores and 128GB of memory
- Big memory compute nodes have 48 x 2.3GHz CPU cores and 256GB of memory
- Fast compute nodes have 32 x 3.0GHz CPU cores and 256GB of memory.
Zodiac utilises 128TB of disk storage split over 3 filesystems in conjunction with 400TB of tape storage.
Job prioritisation favours new and small users. Jobs requiring more than 48 CPU cores will not run on Zodiac, as the job management system has been designed to prevent multi-node jobs from being executed. Zodiac uses Torque+Maui for job management and scheduling, and the GNU compilers are available for software compilation.
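As a rough illustration of the single-node constraint, a Torque job on a cluster like Zodiac can be submitted from a login node as sketched below; the job name, walltime, memory request, and executable are illustrative assumptions, and 24 cores corresponds to one standard compute node.

```python
import subprocess

# A minimal Torque/PBS script for a single-node job; resource values are illustrative.
job_script = """#!/bin/bash
#PBS -N example-job
#PBS -l nodes=1:ppn=24
#PBS -l walltime=04:00:00
#PBS -l mem=64gb
cd $PBS_O_WORKDIR
./my_analysis --threads 24
"""

# Submit from a login node; qsub reads the script from stdin and prints the job ID.
result = subprocess.run(["qsub"], input=job_script, text=True,
                        capture_output=True, check=True)
print("Submitted job:", result.stdout.strip())
```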
JCU HPC services also provide Infrastructure-as-a-Service (IaaS) for researchers with special requirements, including Windows compute, web services, and databases.
Further information about Zodiac can be found at https://secure.jcu.edu.au/confluence/display/Public/Home.
If you would like to find out more about Zodiac, you can contact:
Dr. Whitney Mallett
Phone: +61 7 4781 5084
Isaac Newton – HPC Cluster at CQUniversity (CQUni)
The Isaac Newton HPC cluster contains 544 Intel and AMD CPU cores, with three compute node configurations available:
- 28 standard compute nodes, each with dual Intel E5-2670 2.6 GHz 8-core CPUs (16 CPU cores per node) and 128GB of memory
- 2 GPU compute nodes, each with dual Intel E5-2670 2.6 GHz 8-core CPUs (16 CPU cores per node), 128GB of memory and 1 x NVIDIA M2075 GPU
- 1 large compute node with quad AMD Opteron 2676 2.3 GHz 16-core CPUs (64 CPU cores in total) and 512GB of memory
The Isaac Newton HPC cluster has 240 TB of raw shared storage. For more details on CQUniversity’s HPC system, you are encouraged to visit https://my.cqu.edu.au/web/eresearch/hpc-systems.
A list of HPC software available can be found at https://my.cqu.edu.au/web/eresearch/hpc-software
Live HPC utilisation graphs can be found at https://my.cqu.edu.au/web/eresearch/usage-graphs
Further information about CQUniversity’s HPC infrastructure can be found at https://my.cqu.edu.au/web/eresearch/hpc.
If you would like to find out more about CQU’s Isaac Newton HPC system, you can contact:
Jason Bell
Phone: +61 (7) 4930 9229 (x59229)
HPC Cluster at the University of Southern Queensland (USQ)
The current USQ HPC cluster comprises:
- 30 compute nodes, each with 2 x quad-core 2.7GHz AMD Opteron CPUs and 16GB of memory
- 1 visualisation node consisting of a Sun X4440 system with 4 x six-core 2.4GHz Opteron CPUs, 64GB of memory, and an NVIDIA graphics card.
There is 180TB of shared storage.
A new HPC cluster will be deployed soon, comprising 29 compute nodes, 1 administration node, and 1 login and file server node. The compute nodes come in three configurations:
- Standard - 2 x Intel E5-2650 v3 processors and 128GB of memory
- Large memory - 2 x Intel E5-2650 v3 processors and 256GB of memory
- GPU node - 2 x Intel E5-2650 v4 processors, 128GB of memory, and 2 x NVIDIA K80 GPUs.
Further information on USQ's HPC can be found at: http://www.usq.edu.au/research/support-development/development/eresearch/hpc
Documentation and support
QRIScloud support can be accessed in several ways. Additionally, QRIScloud staff have created the Virtual Wranglers portal to provide information about many aspects of Nectar instance operations, and you are welcome to use and contribute to this store of information. Step-by-step instructions on how to set up a Nectar instance can be found here, and background information explaining what Nectar images are, and how to use them, can be found here. You are also welcome to consult your university's eResearch Analyst.
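As a small illustration of working with Nectar images, the sketch below lists the public images visible to your project whose names mention Ubuntu, using the openstacksdk Python library; the cloud profile name and the name filter are assumptions for illustration.

```python
import openstack

conn = openstack.connect(cloud="nectar")  # assumed clouds.yaml profile name

# List public images mentioning Ubuntu; any of these can be used to boot an instance.
for image in conn.image.images(visibility="public"):
    if "ubuntu" in (image.name or "").lower():
        print(image.id, image.name)
```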
National Computational Infrastructure (NCI)
Australia’s national research computing service, the National Computational Infrastructure (NCI), provides world-class, high-end services to Australia’s researchers. Its primary objectives are to raise the ambition, impact, and outcomes of Australian research through access to advanced computational and data-intensive methods, support, and high-performance infrastructure.
NCI's peak system, Gadi, is a Fujitsu Australia high-performance, distributed-memory cluster, which entered production use in November 2019. It is significantly faster than its predecessor, Raijin, and as of November 2019, it was the fastest supercomputer in the southern hemisphere.
Gadi
- Intel Xeon Platinum 8274 (Cascade Lake) processors
- Two physical processors per node
- 3.2 GHz clock speed
- 48 cores per node
- 4915 GFLOPs per node (theoretical peak)
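As a rough cross-check, the quoted peak is consistent with each core retiring 32 double-precision floating-point operations per cycle (two AVX-512 FMA units, a reasonable assumption for Cascade Lake):

\[
48\ \text{cores} \times 3.2\ \text{GHz} \times 32\ \tfrac{\text{FLOP}}{\text{cycle}} \approx 4915\ \text{GFLOPs per node.}
\]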
The National Computational Merit Allocation Scheme (NCMAS) provides researchers with access to Australia’s major national computational facilities, including Gadi. The main call for applications is made annually in October, for allocations starting the following January and running for up to 12 months. QCIF has a share of time on Gadi and accepts applications all year round.
Bunya
Bunya is a research computer designed for general-purpose use. Details about Bunya are available here.