Specifications include, but are not limited to:

- 32 x CPU nodes: 128 GB RAM, dual Intel Xeon Ice Lake 24-core CPUs.
- 8 x high-memory CPU nodes: 1 TB RAM, dual Intel Xeon Ice Lake 24-core CPUs.
- 8 x quad-GPU nodes: 512 GB RAM, dual Intel Xeon Ice Lake 24-core CPUs, 4 x NVIDIA A100 (80 GB) GPUs with a 4-way NVLink mesh.
- The x86 nodes above are connected through an HDR100 InfiniBand fabric. Preferred Intel SKUs: 6342 or 6326.
- 2 x Power9 AC922 nodes: 256 GB RAM, dual 16-core Power9 CPUs (2.7 GHz base), no GPUs.
- 1 x cluster head node, configured with basic cluster monitoring and management software, the SLURM scheduler, and the Intel oneAPI & HPC Toolkit with community support (an example partition layout follows this list).
- All nodes connected with a 25GbE network. The solution must include appropriate switching to deliver 25GbE links to all systems.
- An out-of-band management network is required and must be accessible from the 25GbE network.
- All servers running the Ubuntu Linux OS.
- Each node should have a 1 TB SSD drive (NVMe preferred); no other storage.
- The cluster is to be installed by the vendor at the MGHPCC; prior experience working at the MGHPCC is preferred. Racks are not needed.
- The system design should minimize rack space and power consumption while maximizing performance.
- A 5-year onsite service and warranty is required.
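For illustration only, a minimal SLURM node and partition layout mapping onto the node mix above might look like the sketch below. The hostnames (cpu[01-32], himem[01-08], gpu[01-08], ppc[01-02]) and memory figures are placeholders, and a complete slurm.conf would also require GresTypes=gpu plus matching gres.conf entries for the A100 GPUs; the vendor's actual configuration may differ.

    # Hypothetical node definitions (hostnames and RealMemory values are placeholders)
    NodeName=cpu[01-32]   CPUs=48 Sockets=2 CoresPerSocket=24 ThreadsPerCore=1 RealMemory=128000
    NodeName=himem[01-08] CPUs=48 Sockets=2 CoresPerSocket=24 ThreadsPerCore=1 RealMemory=1024000
    NodeName=gpu[01-08]   CPUs=48 Sockets=2 CoresPerSocket=24 ThreadsPerCore=1 RealMemory=512000 Gres=gpu:a100:4
    NodeName=ppc[01-02]   CPUs=32 Sockets=2 CoresPerSocket=16 ThreadsPerCore=1 RealMemory=256000

    # Hypothetical partitions grouping each node class for scheduling
    PartitionName=cpu    Nodes=cpu[01-32]   Default=YES MaxTime=INFINITE State=UP
    PartitionName=himem  Nodes=himem[01-08] MaxTime=INFINITE State=UP
    PartitionName=gpu    Nodes=gpu[01-08]   MaxTime=INFINITE State=UP
    PartitionName=power9 Nodes=ppc[01-02]   MaxTime=INFINITE State=UP

Keeping the Power9 nodes in their own partition is one way to prevent jobs from being scheduled across the x86 and ppc64le architectures unintentionally.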