Computational Cluster

 

BMI maintains a Linux-based computational cluster that is accessible from outside the Cincinnati Children's network. The cluster currently has over 700 processing cores and is heterogeneous, with both large-memory SMP nodes and low-cost processing nodes.

Accessing the cluster

The cluster is Linux-based; the preferred methods of access are SSH and NX.

Instructions to access the HPC login node via SSH:

  • If you are on the CCHMC network or connected to the CCHMC VPN - ssh yourusername@bmiclusterp.chmcres.cchmc.org
  • If you are outside of the CCHMC network (a sample SSH configuration that simplifies this two-hop login follows this list)
    • 1. ssh yourusername@research.cchmc.org
    • 2. Once you are connected to "research.cchmc.org", run "ssh yourusername@bmiclusterp.chmcres.cchmc.org"
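
If you connect from outside the network often, an SSH ProxyJump configuration turns the two-hop login into a single command. The snippet below is a minimal sketch, assuming a reasonably recent OpenSSH client (7.3 or later for ProxyJump); the host aliases are arbitrary and yourusername is a placeholder.

    # ~/.ssh/config - hop through research.cchmc.org to reach the HPC login node
    Host research
        HostName research.cchmc.org
        User yourusername

    Host bmicluster
        HostName bmiclusterp.chmcres.cchmc.org
        User yourusername
        ProxyJump research

With this in place, "ssh bmicluster" from outside the network should land you on the login node after both password prompts.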

Instructions to access the HPC login node via NX:

  • Configure your NX client to connect to nx.research.cchmc.org
  • Follow the rest of the configuration explained here.

To request access to the cluster, please send an email to Cluster Support.

CCHMC employees: After your access is enabled, you should be able to log in to the cluster with your network credentials.

External users: Please use the username and password that were sent to you via email to log in to the cluster.

All users have a default disk (home directory) quota of 10 GB and a job walltime quota of 10,000 hours per quarter at no charge. Both of these can be increased by sending an email to Cluster Support. An annual storage charge-back has been implemented, so a valid budget number must accompany all requests for a quota increase. For home directories, the charge is $0.20/GB/year, and quota increases are granted in multiples of 10 GB, so a 10 GB increase incurs an annual cost of $2. Charges are billed annually at the end of the fiscal year.
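
Before requesting an increase, you can check how much of the default 10 GB you are using with standard Linux tools. The du command below works anywhere; whether the quota command reports anything depends on how quotas are enforced on the cluster filesystem, so treat it as an assumption.

    # Total size of your home directory (may take a while for many small files)
    du -sh ~

    # If filesystem quotas are enabled, show your limit and current usage
    quota -s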

Sample uses

This cluster can be used for interactive and batch processing of computationally intensive tasks. In particular, it can be used to perform

  • protein-protein and protein-ligand docking using AutoDock, NAMD and other protocols
  • protein secondary structure prediction, membrane domain prediction and other protein structure prediction and visualization problems using SABLE, MINNOU and other servers
  • microarray analysis using tools such as GeneSpring, RMAExpress or BioConductor (R)
  • genome-wide association studies using plink or the Wake Forest analysis suite (a sample plink invocation appears after this list)
  • other memory-intensive or processor-intensive statistical analyses using R
  • genomics analyses using GATK, Bowtie, TopHat, etc.
  • many other applications
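
As a concrete example from the list above, a basic case/control association test with plink could look like the sketch below. The input file names are hypothetical, and the exact plink module name on this cluster is an assumption (see the Modules note under "Current configuration").

    # Load plink into your environment (module name is an assumption; check "module avail")
    module load plink

    # Association test on binary PLINK files mydata.bed/.bim/.fam (hypothetical inputs)
    plink --bfile mydata --assoc --out mydata_assoc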

Current configuration

All nodes run 64-bit CentOS Linux.

The cluster batch system is currently managed by the LSF resource manager and scheduler. You can find examples of submitting jobs using LSF here. Please read the job scheduling policies page for recent policy changes.
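
For orientation, a minimal LSF batch script might look like the sketch below. The core count, walltime, and memory request are placeholders, and any queue choices are governed by the job scheduling policies page, so adjust accordingly.

    #!/bin/bash
    # example_job.sh - minimal LSF batch script (resource requests are placeholders)
    #BSUB -J example_job           # job name
    #BSUB -n 4                     # number of cores
    #BSUB -W 2:00                  # walltime limit (hh:mm)
    #BSUB -M 8000                  # memory limit (units depend on the cluster's LSF configuration)
    #BSUB -o example_job.%J.out    # standard output file (%J expands to the job ID)

    module load R                  # load software via Modules (see below); module name is an assumption
    Rscript analysis.R             # hypothetical analysis script

Submit the script with "bsub < example_job.sh" and monitor it with "bjobs".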

For managing per-user software environments on the cluster, we use Environment Modules. Please refer to the modules page for more information and examples.
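
In practice this means software is loaded into your shell session on demand. The commands below are the standard Environment Modules interface; the specific module names available (R is used here only as an example) are assumptions you can verify with module avail.

    module avail      # list the software packages available on the cluster
    module load R     # add a package to your environment (name is an assumption)
    module list       # show the modules currently loaded
    module unload R   # remove a module when you are done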

Usage statistics

You can obtain snapshot and historical information about the load on the cluster.
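
Because the batch system is LSF, the standard LSF query commands also give a quick command-line snapshot of the current load; these are generic LSF commands rather than cluster-specific tools.

    bhosts            # per-host status and job slot usage
    bqueues           # queue-level load with pending and running job counts
    bjobs -u all      # jobs currently in the system for all users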

Help and support

For technical support, software installation requests, etc., please contact Research IT Support.

 
