Governance of Atlas is handled by the EAS faculty who buy into Atlas, with advisory recommendations from EAS Computing and OIT's PACE. The faculty's decisions are then conveyed to PACE, which performs the hands-on administration of the system.
Docs & Help
Support email address: email@example.com
firstname.lastname@example.org (General announcement/discussion list for Atlas)
email@example.com (Availability announcements for HPC environment @ GT)
firstname.lastname@example.org (General HPC discussion)
http://www.pace.gatech.edu/user-guide (User Guide)
http://pace.gatech.edu/home (PACE @ GT Main Website)
http://blog.pace.gatech.edu/ (PACE Blog - Great source for information on the systems)
ATLAS is the Earth and Atmospheric Sciences shared High Performance Computing resource. The goal of ATLAS is to provide an HPC resource for the current and future needs of EAS.
Because it is based on commodity components, the ATLAS system can be expanded by adding nodes, networking, and file storage.
Due to cooling and electrical concerns, it was no longer possible to support many single user machines. This cluster represents a new chapter in EAS computing, one of shared resources and collaboration.
ATLAS was made possible by the EAS faculty pooling resources to purchase a single machine. The process started in Spring '08 with meetings between EAS Faculty, EAS Computing, CoS, and OIT. These meetings gathered requirements and expectations for the system.
In June '08, work with several vendors resulted in an initial configuration that would meet the requirements described by the faculty, as well as offer added capability for testing, debugging, and user expansion. Utilizing state contracts, the hardware order process began in July '08. After some legal wrangling, an acceptable "acceptance document" was agreed upon in August '08.
Parts of the cluster began arriving soon afterwards, with the balance of the hardware due October 30, 2008.
Current State of Atlas
Atlas has been expanded over the last several years, and as of Spring 2012, has the following specs:
- 3328 Cores
- 120 Terabytes of primary storage
- Access to high speed storage for runs (limited per user)
- QDR and SDR Infiniband
- Gigabit Interconnects
- 10 Gigabit storage Interconnects
Currently, 1024 cores of the original Atlas are being migrated from RedHat 5.X to 6.X.
Atlas is built and utilized by Earth & Atmospheric Sciences @ GT. To test on Atlas, simply request a test account through EAS Computing. If you would like to use Atlas for short projects or small runs, EAS maintains a pool of processors and storage that may be used at no cost.
If you need larger assets available on Atlas, you can purchase them in the basic building blocks of Compute Nodes, Storage Nodes, and Post Processing Nodes. Please contact EAS Computing for details on current options and cost.
Atlas Usage by Group