Data Center

Our privately owned and operated Tier III data center gives our research projects on-demand, scalable, high-performance compute, network, and storage resources in a fault-tolerant architecture, ready to handle a wide range of HPC, distributed computing, and cloud technologies. The data center supports testing, experimentation, and validation activities in cloud computing and blockchain technologies.

Facility Key Features

Secure Physical Access Controls

Redundant Power Systems

Automated Fire Suppression System

Fault-Tolerant High-Speed WAN Connectivity

Automation-Driven Cooling Systems

Quickly expandable to meet new workloads

Fault Tolerant by Design (at least N+1)

Compute, Network and Storage

Our bare-metal servers are built on Supermicro’s latest-generation compute units. With Intel Xeon Gold series processors in a dual-CPU configuration per node, multi-gigabit link aggregation, and NVMe storage, our server farm handles our moderately complex and heavy workloads efficiently. Supermicro open switches provide fault-tolerant, multi-terabit switching capacity that interconnects the compute and storage nodes. In total, the cluster comprises 10 Intel Xeon Gold 5218 processors, 320 GB of RAM, 9.6 TB of NVMe storage, and multi-gigabit node interconnects over two Supermicro open switches.

Expandability and Upgrade Paths

Our data center infrastructure is architected for drop-in, plug-and-play hardware upgrades that expand our capabilities rapidly.


Our GPU upgrade path provides immediate, scalable resources for demanding workloads: up to 24.6 teraflops of double-precision GPU compute capacity and a density of over 15,000 GPU cores per rack unit. On mainstream research workloads, these resources can deliver performance ratios of up to 1:135 compared with a regular server-class CPU.


In line with our drop-in upgrade approach, our already high-performance networking stack supports plug-and-play upgrades to InfiniBand and multi-100-Gbit Ethernet inter-node connectivity for even higher performance.


Our storage tiers serve primary workloads from NVMe SSDs and secondary workloads from high-performance SAS SSDs. All drives are redundant (N+1) across our servers. Both tiers expand plug-and-play, up to a maximum of 100 TB of redundant high-performance storage.