Research Servers and Clusters

The Research Computing group assists in maintaining certain research assets. Researchers may have funds to procure various hardware, and we assist them with specifications, shipping, installation, configuration, and ongoing maintenance.

University of Memphis High Performance Research Computing Facility


Overview

The University of Memphis High Performance Computing Facility is dedicated to supporting computationally intensive research in science, technology, engineering, and mathematics fields. The facility consists of a Linux cluster with 88 Dell PowerEdge compute nodes comprising 3,520 Xeon Gold 6148 CPU cores, 20.25 TB of total DDR4 memory, 12 NVidia V100 16 GB data center GPUs, and 480 TB of DDN GPFS storage accessible to cluster users. Of the 88 nodes, 78, each with 40 CPU cores and 192 GB of memory, are designated as general compute nodes; 6, each with 2 GPUs in addition to 40 CPU cores and 192 GB of memory, are designated as GPU compute nodes; 2, each with 40 CPU cores and 736 GB of memory, are designated as big memory nodes; and 2, each with 40 CPU cores and 1.5 TB of memory, are designated as big big memory nodes. In addition to the compute nodes, there are 2 head nodes with failover support, 2 login nodes, 4 storage nodes, 2 InfiniBand switches, and 3 Ethernet switches. InfiniBand EDR connects all nodes, providing a 100 Gb per second node-to-node data rate. In all, the cluster is theoretically capable of 236.5 trillion floating point operations per second (Tflops) for CPU double precision workloads and 89.4 Tflops for GPU double precision workloads. Additionally, the GPU nodes can theoretically perform up to 1,500 Tflops for deep learning workloads.
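As a rough check on the deep learning figure: NVidia rates each V100 at 125 Tflops of mixed-precision Tensor Core throughput, and 12 GPUs x 125 Tflops = 1,500 Tflops. The double precision figures depend similarly on per-device peak rates and, for the CPUs, on the sustained AVX-512 clock frequency assumed.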


Node Hardware

(2) Dell PowerEdge R640 Master Nodes:

  • 2x12 Intel Xeon Gold 6126 CPU cores @ 2.60 GHz
  • 12x8 GB DDR4 SK Hynix RDIMM @ 2.666 GHz
  • 2x240 GB SSD on Dell PERC H740P Mini in RAID 1

(2) Dell PowerEdge R640 Login Nodes:

  • 2x12 Intel Xeon Gold 6126 CPU cores @ 2.60 GHz
  • 12x8 GB DDR4 SK Hynix RDIMM @ 2.666 GHz
  • 2x240 GB SSD on Dell PERC H740P Mini in RAID 1

(88) Compute Nodes:

(78) Dell PowerEdge C6420 General CPU Nodes:

  • 2x20 Intel Xeon Gold 6148 CPU cores @ 2.40 GHz
  • 12x16 GB DDR4 SK Hynix RDIMM @ 2.666 GHz

(6) Dell PowerEdge R740 GPU Nodes:

  • 2x20 Intel Xeon Gold 6148 CPU cores @ 2.40 GHz
  • 2 NVidia 16 GB Tesla Volta V100 GPUs with 5120 CUDA cores @ 1.455 GHz
  • 12x16 GB DDR4 SK Hynix RDIMM @ 2.666 GHz

(4) Dell PowerEdge R740 Big Memory Nodes:

  • 2x20 Intel Xeon Gold 6148 CPU cores @ 2.40 GHz
  • (2 big memory nodes) 24x32 GB DDR4 Samsung RDIMM @ 2.666 GHz
  • (2 big big memory nodes) 24x64 GB DDR4 Samsung RDIMM @ 2.666 GHz

Storage

(4) DDN GS7K GRIDScaler GPFS Storage Nodes:

  • 120x4 TB hard disk drives in total across all nodes @ 8.5 GB per second aggregate throughput

Software

Job Scheduler

  • SchedMD Slurm (17)
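
As a minimal sketch of how work is submitted to the scheduler, the batch script below requests one of the GPU nodes under Slurm. The partition name ("gpu") and the application name are assumptions for illustration only; actual partition names, accounts, and limits are site-specific.

    #!/bin/bash
    #SBATCH --job-name=example        # name shown in the queue
    #SBATCH --partition=gpu           # hypothetical partition name; site-specific
    #SBATCH --nodes=1                 # one R740 GPU node
    #SBATCH --ntasks-per-node=40      # all 40 CPU cores on the node
    #SBATCH --gres=gpu:2              # both V100 GPUs on the node
    #SBATCH --time=02:00:00           # wall-clock limit (HH:MM:SS)

    srun ./my_gpu_application         # hypothetical executable, launched as a Slurm job step

The script would be submitted with "sbatch script.sh" and monitored with "squeue -u $USER".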

Cluster Manager

  • Bright Computing Cluster Manager (8.1)

Compilers

  • Charm++ (6.8)
  • Intel (2019), UPC++, and GNU (4, 7, & 8) compilers with support for C/C++ (03, 11, & 14), Fortran (77 & 95), and OpenMP (3.1 & 4.5)
  • NVidia CUDA toolkit (8 & 9) for CUDA and OpenCL support
  • OpenMPI (1, 2, & 3) and Intel MPI (2019)
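
To sketch how these toolchains are typically used together, the shell session below compiles and runs an MPI program under Slurm. The module names are assumptions; environment module naming varies by site.

    # Load a compiler and an MPI implementation (module names are hypothetical)
    module load gcc/8 openmpi/3

    # Compile with the MPI wrapper compiler
    mpicc -O2 -o hello_mpi hello_mpi.c

    # Run 80 ranks across two 40-core general compute nodes
    srun --nodes=2 --ntasks=80 ./hello_mpi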

Scripting

  • Anaconda, Bash, IPython, Julia, MATLAB (R2018a), Python (2 & 3), R (3), SAS (9.4), and Tcl

Simulation

  • ADF, ADMB, Amber, Ansys (AFD, CFX, EKM, Fluent, Icepak, TurboGrid), AutoDock, CNS, CONVERGE, COMSOL, fastsimcoal, GAMESS, Gaussian, GROMACS, LAMMPS, MD++, Molpro, MOE, MOOSE, MOPAC, NAMD, NWChem, OpenFOAM, PETSc, Psi4, PyLith, Quantum ESPRESSO, QUIP, Rosetta, SIMULIA (Abaqus, fe-safe, Isight, Tosca), SUMO, VMD, and WEKA

Data Analysis, Access & Modelling

  • ABySS, Admixtools, Admixture, Allpaths-LG, Augustus, Beast, Bedtools, Blast, Blat, Bowtie, BWA, Cactus, Cap3, cd-hit, Connectome, COPE, EigenSoft, ExaBayes, FastQC, FLASH-modified, GARLI, GMT, GnuPlot, hMETIS, HMMER, HybPiper, HyPhy, JELLYFISH, Jupyter, jModelTest, LASTZ, LattE, MAFFT, Modeller, MrBayes, mtMETIS, netCDF, novoAlign, OpenBUGS, ORTHOGRAPH, PandaSeq, ParaView, PartitionFinder, PhyML, Picard, PLINK, ProgressiveCactus, RAxML, SAC, Salmon, Samtools, SPAdes, TreeMix, TRIMAL, Trimmomatic, Trinity, VCFtools, Velvet, xmGrace, and Yasra

Libraries

  • Atlas, Autogrid, Automake, BamTools, BCFtools, Beagle, BLACS, BLAS, BOOST, Caffe, C-BLOSC, CMAKE, CNS, CURL, EXONERATE, FAISS, FFMPEG, FFTW, FOX, GCC, GD, GDAL, GFLAGS, GIT, GLOG, GSL, HDF4, HDF5, HTSLIB, IMSL, Keras, LAPACK, LEVELDB, LIBINT, miniconda, MUMPS, NCCL, NCL, OpenBLAS, OpenSEES, ParallelGNU, QT, SCALAPACK, SCONS, SNAPPY, SQLite, SuperLU, TensorFlow, and tie-array-packed

University of Memphis Communication Networks Supporting Research

University of Memphis Network Services provides networking services to over 20,000 nodes, connecting the main and regional campuses, facilities, on-campus residences, and the Internet. The University supports a gigapop site for regional connections to Internet2 and is a connector for the State of Tennessee SEGP program for Internet2. In addition, the University developed a citywide research and education network consortium that provides connectivity to St. Jude Children’s Research Hospital, the University of Tennessee Health Science Center, LeMoyne-Owen College, and Southwest Tennessee Community College.

The Network Services’ Network Operations Center produces daily and nightly updates and reports for the University.