High Performance (Spiedie) Computing


The High-Performance Computing Cluster, aptly named "Spiedie," is housed at the Thomas J. Watson College of Engineering and Applied Science's data center in the Engineering and Science Building. This research facility offers compute capabilities for researchers within the Watson College and across Binghamton University.

Raw Stats

  • 16-core, 96 GB head node
  • 312 TB of InfiniBand-connected NFS storage
  • 144 compute nodes
  • 3,372 native compute cores
  • 12x NVIDIA H100 NVL GPUs, 8x NVIDIA A40 GPUs, 10x NVIDIA A5000 GPUs
  • 40, 56, and 200/400 Gb/s InfiniBand network
  • Ethernet network to all nodes for management and OS deployment

Since its initial deployment, the Spiedie cluster has gone through several expansions, growing from 32 compute nodes to 151 compute nodes as of October 2025. Most of these expansions have come from individual researcher grant awards; these researchers recognized the importance of the cluster in advancing their work and helped grow this valuable resource.

Watson College continues to pursue opportunities to enhance the Spiedie cluster and to expand its outreach to researchers in other transdisciplinary areas. Support for the cluster has come from the Watson College Dean's office, the School of Computing, the Electrical and Computer Engineering and Mechanical Engineering departments, and researchers from the Chemistry and Physics departments.

Head Node

The head node is a Dell R660 running a hypervisor that hosts the head node itself and other cluster services as discrete virtual machines.

Storage Node

A common file system accessible by all nodes is hosted on a Red Barn HPC server providing 312 TB, with the ability to add additional storage drives. Storage is accessible via NFS over 56 and 400 Gb/s InfiniBand interfaces.

Compute Nodes

The 152 compute nodes are a heterogeneous mixture of Intel-based processors spanning multiple generations and capacities.

Management and Network

Networking between the head, storage, and compute nodes uses InfiniBand for inter-node communication and Ethernet for management. OpenHPC and Warewulf provide monitoring and management of the nodes, with SLURM handling job submission, queuing, and scheduling. The cluster currently supports MATLAB, VASP, COMSOL, R, and almost any *nix-based application.
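
As an illustration of the SLURM workflow described above, a minimal batch script might look like the following sketch. The resource requests, module name, and file names are hypothetical placeholders rather than Spiedie-specific values; actual limits and available software modules depend on your access level and group configuration.

    #!/bin/bash
    #SBATCH --job-name=example_job      # name shown in the queue
    #SBATCH --ntasks=8                  # number of tasks (e.g., MPI ranks)
    #SBATCH --mem=16G                   # memory request for the job
    #SBATCH --time=24:00:00             # requested wall time (HH:MM:SS)
    #SBATCH --output=example_%j.out     # output file; %j expands to the job ID

    # Load the application's environment module (module name is a placeholder)
    module load R

    # Run the analysis from the shared NFS storage visible on all nodes
    Rscript analysis.R

The script would then be submitted and monitored with the standard SLURM tools:

    sbatch example_job.sh   # submit the job to the scheduler
    squeue -u $USER         # show your queued and running jobs
    sinfo                   # list partitions and the heterogeneous node mix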

Cluster Policy

High-Performance Computing at Binghamton University is a collaborative environment where computational resources have been pooled together to form the Spiedie cluster.

Access Options

Yearly subscription access 

  • Currently $1,536.51 per year, per faculty research group
  • 4TB Storage Quota
  • 122 hr wall time limit (see the example after this list)
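
In SLURM terms, a 122-hour ceiling corresponds to a wall-time request of at most 5 days and 2 hours (5 × 24 + 2 = 122). A subscription user would cap the request accordingly; the script name here is only a placeholder:

    sbatch --time=5-02:00:00 example_job.sh   # 5 days + 2 hours = 122 hours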

Condo access

Purchase your own nodes to integrate into the cluster

  • Priority and pre-emption on your nodes (see the sketch after this list)
  • No wall time limit on job submissions to your nodes
  • 4TB Storage Quota
  • Your nodes are accessible to others when not in use
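
As a sketch of how condo access typically works under SLURM, purchased nodes are usually exposed as a dedicated partition that the owning group can target directly. The partition name below is purely hypothetical and would be assigned when the nodes are integrated:

    # Submit to a hypothetical owner partition: jobs here run with priority,
    # have no wall-time limit, and may pre-empt other users' jobs on these nodes.
    sbatch --partition=mygroup example_job.sh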

Watson Computing will assist with quoting, acquisition, integration, and maintenance of purchased nodes. For more information on adding nodes to the Spiedie cluster, email Phillip Valenta.