● Mellanox single-port 100 Gbps HDR InfiniBand adapter
  ○ 5 year UFM licenses for each node
● Single 1 Gbps Ethernet port
● Must have Out-of-Band Management support for remote "virtual KVM" console, remote media support over Ethernet, power management, and firmware management. Out-of-Band Management is a solution that provides a secure, dedicated alternate access path into an IT network infrastructure to administer servers without relying on the primary LAN or on the server running an OS.
  ○ The system firmware and BMC firmware must be manageable through out-of-band management. If the vendor does not offer out-of-band management, they must provide a Linux-based command line utility to update the system and BMC firmware that can be deployed programmatically.
  ○ Remote consoles must be available at all stages of system operation, including after the OS has loaded.
  ○ Out-of-band management must be capable of running shared on the system Ethernet jack, and must support management traffic on a tagged VLAN over the shared link.
  ○ HTML/Redfish interface; no Java (see the Redfish sketch at the end of this section)
● 5 year or perpetual centralized out-of-band management console (can read and update BIOS/controllers/firmware/etc., manage licenses, provide remote monitoring, etc.)
  ○ Dell OpenManage / HP Insight / Lenovo XClarity / etc.
● Lowest-cost offering of a 1 TB local disk for the OS; mirror or RAID protection not required. No performance requirements.
● ~4 TB total NVMe (RAID 0 provided locally in your hardware if multiple drives are used; this is the only RAID option), optimized for reads, for hosting in-use data sets.
  ○ Data sets will be staged to this space from large tars that exist on network storage (see the staging sketch at the end of this section).
  ○ A specific NVMe form factor is not required, e.g. 2.5", U.2, etc. Hot-swap access is not required.
  ○ If the vendor provides multiple drives, a RAID card must be included or similar functionality provided pre-boot.
  ○ For customers: we do not provide backups or data recovery. Data should be housed in a resilient location. This space will be mounted as /tmp_data.
● Systems should conform to our existing industry-standard 19" 1200 mm racks, air cooled, with 240 V and 208 V input power.
  ○ Please include the power connector required.
  ○ Nodes should have a minimum of two power supplies. If more than two power supplies are required for normal operation, include N+1. State the total number of power connections required.
  ○ Include the typical expected power of the node to be allocated in the rack.
  ○ If your solution provides any sort of power capping/throttling, please describe it, e.g. per GPU, per total chassis, per PSU/power connection.
● The vendor is to send a list of MAC addresses (Ethernet and management) and the remote management password when shipping nodes, by email if possible.
● If the nodes are part of an HPC cluster handling PHI, the drive replacement policy must clearly state that return of failing drives is not required.
● Minimum 8 cores/GPU, fastest clock for that configuration; include both an AMD and an Intel configuration if available.
● 1.5 TB memory per node (2x device memory)
● 5 year NBD parts warranty for all components, including GPUs
● 8-way RTX 6000 Pro GPUs

Quantity: We are estimating 8 machines, or as many as the available budget will allow.
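
As an illustration of the HTML/Redfish out-of-band requirement above, the following is a minimal sketch that lists a BMC's firmware inventory over the standard DMTF Redfish REST API. The BMC address and credentials are hypothetical placeholders, and the exact fields returned can vary slightly between vendor implementations; this is a sketch of the kind of programmatic access expected, not a definitive tool.

```python
# Minimal sketch: list firmware inventory from a BMC over Redfish (DMTF standard).
# The BMC address and credentials below are placeholders for illustration only.
import requests

BMC = "https://10.0.0.50"      # hypothetical BMC address
AUTH = ("admin", "changeme")   # placeholder credentials

# BMCs commonly ship with self-signed certificates, hence verify=False here.
session = requests.Session()
session.auth = AUTH
session.verify = False

# Standard Redfish collection of firmware components managed by the BMC.
inventory = session.get(f"{BMC}/redfish/v1/UpdateService/FirmwareInventory").json()
for member in inventory.get("Members", []):
    item = session.get(f"{BMC}{member['@odata.id']}").json()
    print(item.get("Name"), item.get("Version"))
```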
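
To illustrate the intended use of the /tmp_data NVMe scratch space described above, here is a minimal staging sketch, assuming a data set tar already exists on mounted network storage. The source and destination paths are hypothetical examples, not fixed mount points.

```python
# Minimal sketch of staging a data set from network storage into local NVMe scratch.
# Paths are hypothetical examples used only for illustration.
import tarfile
from pathlib import Path

SOURCE_TAR = Path("/mnt/network_storage/datasets/example_dataset.tar")  # hypothetical
SCRATCH = Path("/tmp_data")  # local RAID 0 NVMe scratch, per the requirement above

dest = SCRATCH / SOURCE_TAR.stem
dest.mkdir(parents=True, exist_ok=True)

# Extract the archive onto the read-optimized local NVMe before the job runs.
with tarfile.open(SOURCE_TAR) as tar:
    tar.extractall(path=dest)

print(f"Staged {SOURCE_TAR.name} to {dest}")
```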