Supercomputer at Home


Second-hand InfiniBand adapters are surprisingly cheap if you want to experiment with the second-to-latest generation of high-speed interconnects. I was able to purchase two InfiniBand QDR adapters for roughly $25 each on eBay. The adapters are agnostic to the type of cable used (copper or optical). Since the optical transmitters and receivers have to be housed in the cable ends themselves, fibre-optic cables are significantly more expensive. For copper cables, the limiting factor is distance (7 meters maximum). For my application this is no problem, so I went with QSFP copper cables, which cost me $20.

Drivers and Software

Many adapter models are specific to a certain application, for example a mainframe or a particular vendor's server setup. In my case this meant I had to downgrade my servers to Windows Server 2008 R2, for which I was able to obtain a working set of drivers. On Windows Server 2008 R2, the only working MPI implementation as far as I know is MS-MPI, which is part of the High Performance Computing Pack.


When developing in C++ I use Boost.MPI so my code works on multiple MPI-enabled clusters: my MS-MPI cluster at home or the Open MPI Linux cluster at university. The MPI part of Boost is not built by default, so it needs to be enabled by hand with the --with-mpi flag when compiling. Oftentimes the build will not be able to find the correct installation of MS-MPI or HPC Pack 2008.
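As a quick sanity check that the toolchain works end to end, a minimal Boost.MPI program can look like the sketch below (the program itself is illustrative, not part of my actual workload):

```cpp
#include <iostream>
#include <boost/mpi/environment.hpp>
#include <boost/mpi/communicator.hpp>

namespace mpi = boost::mpi;

int main(int argc, char* argv[])
{
    // Initializes the MPI runtime (MPI_Init) and finalizes it on destruction.
    mpi::environment env(argc, argv);

    // Default-constructed communicator refers to MPI_COMM_WORLD.
    mpi::communicator world;

    // Each process reports its rank; the same source runs unchanged
    // under MS-MPI at home or Open MPI at university.
    std::cout << "Process " << world.rank()
              << " of " << world.size() << std::endl;
    return 0;
}
```

The point of going through Boost.MPI is exactly this portability: the code above does not reference any MS-MPI-specific headers or calls.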

Edit Boost.Build's MPI configuration file, mpi.jam. Below is the relevant section, with all the parts not relevant to my current setup stripped out:

```
if [ os.on-windows ]
{
    # Paths for Microsoft MPI
    local ms_mpi_path_native = "C:\\Program Files\\Microsoft HPC Pack 2008 R2" ;
    local ms_mpi_sdk_path_native = "C:\\Program Files\\Microsoft HPC Pack 2008 R2" ;

    # Path for Microsoft Compute Cluster Pack
    local cluster_pack_path_native = "C:\\Program Files\\Microsoft HPC Pack 2008 R2" ;

    ECHO "os.on-windows" ;

    if [ GLOB $(cluster_pack_path_native)\\Inc : mpi.h ]
    {
        ECHO "Found Microsoft Compute Cluster Pack: $(cluster_pack_path_native)" ;

        local cluster_pack_path = [ path.make $(cluster_pack_path_native) ] ;

        options = <include>$(cluster_pack_path)/Inc ;

        # Setup the "mpirun" equivalent (mpiexec)
        .mpirun = "\"$(cluster_pack_path_native)\\Bin\\mpiexec.exe\"" ;
        .mpirun_flags = -n ;

        ECHO "MS-MPI Configured" ;
        ECHO "$(options)" ;
    }
    else if $(.debug-configuration)
    {
        ECHO "Did not find Microsoft MPI in $(ms_mpi_path_native)" ;
        ECHO "      and/or Microsoft MPI SDK in $(ms_mpi_sdk_path_native)." ;
        ECHO "Did not find Microsoft Compute Cluster Pack in $(cluster_pack_path_native)." ;
    }
}
```

Edit your b2 configuration files:

In project-config.jam add mpi:

```
using mpi ;
```
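For reference, the relevant part of project-config.jam then ends up looking something like this (the msvc line is whatever your bootstrap generated; shown here only to indicate where the mpi line goes):

```
import option ;

using msvc ;
using mpi ;
```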

With multiprocessor compilation (the -j flag), the build time is reduced significantly:

```
b2 install variant=release link=static runtime-link=static threading=multi address-model=64 --toolset=msvc -j56
```
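Once everything is built, an MPI binary is launched through the HPC Pack's mpiexec rather than run directly (the executable name below is just a placeholder):

```shell
REM Launch 8 MPI processes on the local node; mpiexec assigns the ranks
"C:\Program Files\Microsoft HPC Pack 2008 R2\Bin\mpiexec.exe" -n 8 my_mpi_app.exe
```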