Molecular Dynamics Packages Installation
NAFlex offers a variety of setup workflows that prepare molecules to run Molecular Dynamics simulations with the three most widely used MD packages available today: AMBER, GROMACS and NAMD. For now, and mainly due to limitations in computing power and disk space, the MD simulations allowed by the server are short ones (up to 0.5 ns), intended just to test the correctness of the prepared system. Users can then download all the necessary data to run a longer simulation on their own machines, as explained in the NAFlex setup tutorial help section and in the MDWeb Run tutorial help section.
To run the simulations on a local machine, the corresponding MD package must be correctly installed and configured. This section of the NAFlex help pages offers information, references and useful links about the installation and configuration of the AMBER, GROMACS and NAMD MD packages.
1. AMBER Package Installation & Configuration
- Current version: Amber 12
- Homepage: https://ambermd.org
AMBER (Assisted Model Building with Energy Refinement) is a Molecular Dynamics package divided into two main parts: AmberTools and Amber. AmberTools is a completely free package of programs to prepare molecules for MD simulations and to perform post-trajectory analysis; it is the package used by NAFlex when working with Amber forcefields. Amber, on the other hand, is the package of programs that actually runs MD simulations, and this part is not free. Fees range from $400 for academic/non-profit/government use to $20,000 for industrial (for-profit) new licensees. More information can be found in: How to obtain Amber package.
Installation instructions are given in Section 1.2 of the AmberTools Reference Manual; they refer to both the Amber and AmberTools parts. There is also a section in the AmberTools Reference Manual about automatically applying bug fixes that should be read (Section 1.5).
The Amber web page (https://ambermd.org) has specific instructions and hints for various common operating systems; just look for the “Running Amber on ....” links. The following lines briefly describe the steps needed to install and configure Amber/AmberTools on a local machine. For more specific and extended information, please refer to the Amber Reference Manual.
AMBER Installation & Configuration Steps (click to show):
- First, extract the files in some location:
cd /home/myname
tar xvfj AmberTools12.tar.bz2
tar xvfj Amber12.tar.bz2 # (only if you have licensed Amber 12!)
- Next, set your AMBERHOME environment variable:
export AMBERHOME=/home/myname/amber12 # (for bash, zsh, ksh, etc.)
setenv AMBERHOME /home/myname/amber12 # (for csh, tcsh)
Be sure to change the “/home/myname” above to whatever directory is appropriate for your machine, and be sure that you have write permissions in the directory tree you choose. You should also add $AMBERHOME/bin to your PATH.
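For example, one way to make these settings permanent for a bash shell is sketched below (this assumes your login shell reads ~/.bashrc, and /home/myname is just a placeholder path):
# Append the Amber environment setup to your bash startup file
echo 'export AMBERHOME=/home/myname/amber12' >> ~/.bashrc
echo 'export PATH=$AMBERHOME/bin:$PATH' >> ~/.bashrc
# Reload so the current shell picks up the changes
source ~/.bashrc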
- Next, you may need to install some compilers and other libraries. Details depend on what OS you have and what is already installed. Package managers can greatly simplify this task. For example, for Debian-based Linux systems (such as Ubuntu), the following command should get you what you need:
sudo apt-get install csh flex gfortran g++ xorg-dev \
zlib1g-dev libbz2-dev
Other Linux distributions will have a similar command, but with a package manager different than apt-get. For example, the following should work for Fedora Core and similar systems:
sudo yum install gcc flex tcsh zlib-devel bzip2-devel \
libXt-devel libXext-devel libXdmcp-devel
For Mac OS X, MacPorts (https://www.macports.org) serves a similar purpose. Download and install the port program, then issue commands like this:
sudo port install gcc46
MacPorts is useful because the “Xcode” compilers provided by Apple will not work to compile Amber, since no Fortran compiler is provided. Amber cross-links Fortran and C/C++ code, so a “full” GCC installation is necessary.
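Once the packages are installed, a quick sanity check can confirm the toolchain is visible (a sketch; assumes a GNU toolchain on your PATH):
# Verify that the compilers and tools Amber needs are installed
gcc --version
g++ --version
gfortran --version
flex --version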
- Now, in the AMBERHOME directory, run the configure script:
cd $AMBERHOME
./configure --help
That will show you the configuration options. Choose the compiler and flags you want; for most systems, the following should work:
./configure gnu
Don’t choose any parallel options at this point. (You may need to edit the resulting config.h file to change any variables that don’t match your compilers and OS. The comments in the config.h file should help.) This step will also check whether there are any bugfixes that have not been applied to your installation, and will apply them (unless you ask it not to). If the configure step finds missing libraries, go back to the compilers and libraries step above.
- Then,
make install
will compile the codes. If this step fails, try to read the error messages carefully to identify the problem.
- This can be followed by
make test
which will run tests and will report successes or failures. Refer to the Amber Reference Manual if "possible FAILURE" messages are found.
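If the test output scrolls by too quickly, one simple way to review it afterwards is to capture it in a log file and search for failures (the log file name here is arbitrary):
# Capture the test output to a log file while still printing it
make test 2>&1 | tee amber_test.log
# Afterwards, list any reported problems
grep -n "possible FAILURE" amber_test.log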
- If you wish to compile parallel (MPI) versions of Amber:
cd $AMBERHOME
./configure -mpi <....other options....>
make install
# Note the value below may depend on your MPI implementation
export DO_PARALLEL="mpirun -np 2"
make test
# Note, some tests, like the replica exchange tests, require more
# than 2 threads, so we suggest that you test with either 4 or 8
# threads as well
export DO_PARALLEL="mpirun -np 8"
make test
This assumes that you have installed MPI, that you have set your MPI_HOME environment variable to the MPI installation path, and that mpicc and mpif90 are in your PATH. Some MPI installations are tuned to particular hardware (such as InfiniBand), and you should use those versions if you have such hardware. Most people can use standard versions of either mpich2 or openmpi. To install one of these, use one of the following simple scripts:
cd $AMBERHOME/AmberTools/src
./configure_mpich2
or
./configure_openmpi
Follow the instructions of these scripts, then return to the beginning of this parallel-installation step.
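Before re-running ./configure -mpi, it may help to verify that the MPI toolchain is actually visible (a sketch; assumes a standard mpich2 or openmpi installation):
# Check that the MPI compiler wrappers and launcher are on the PATH
which mpicc mpif90 mpirun
# Check that MPI_HOME points to the MPI installation
echo $MPI_HOME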
Useful links:
- Downloading AmberTools 12
- Amber12 and AmberTools 12 Reference Manuals
- Amber Tutorials
- Amber-related links (Tips for installing and running Amber on various architectures)
- Amber on GPUs
2. GROMACS Package Installation & Configuration
- Current version: Gromacs 4.6
- Homepage: https://www.gromacs.org
GROMACS (GROningen MAchine for Chemical Simulations) is a Molecular Dynamics software package that includes an impressive set of small programs to prepare and run MD simulations and to analyse the resulting trajectories. Each of these programs contains brief built-in help describing what it does, how to run it, and its parameters, inputs and outputs. GROMACS also has extensive documentation, both in on-line format (https://www.gromacs.org/Documentation) and as a PDF Manual.
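For instance, the built-in help of any tool can be printed with the -h flag (the tool names below are those of the Gromacs 4.x series):
# Print the built-in help of some common GROMACS 4.x tools
pdb2gmx -h   # converts a PDB structure into GROMACS topology/coordinates
grompp -h    # pre-processor that assembles a run input (.tpr) file
mdrun -h     # the actual MD engine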
The entire GROMACS package is Free Software, licensed under the GNU Lesser General Public License. Installation instructions are given in the on-line documentation and in Appendix A of the Reference Manual.
The following lines briefly describe the steps needed to install and configure GROMACS on a local machine. For more specific and extended information, please refer to the Gromacs Documentation.
GROMACS Installation & Configuration Steps (click to show):
- Get the latest version of your compiler.
GROMACS requires an ANSI C compiler that complies with the C89 standard. For best performance, the GROMACS team strongly recommends you get the most recent version of your preferred compiler for your platform (e.g. GCC 4.7 or Intel 12.0 or newer on x86 hardware).
- Check you have CMake version 2.8.x or later.
From version 4.6, GROMACS uses the build system CMake. GROMACS requires CMake version 2.8.0 or higher. Lower versions will not work. You can check whether CMake is installed, and what version it is, with cmake --version. If you need to install CMake, then first check whether your platform's package management system provides a suitable version, or visit https://www.cmake.org/cmake/help/install.html for pre-compiled binaries, source code and installation instructions. The GROMACS team recommends you install the most recent version of CMake you can. If you need to compile CMake yourself and have a really old environment, you might first have to compile a moderately recent version (say, 2.6) to bootstrap version 2.8. This is a one-time job, and you can find lots of documentation on the CMake website if you run into problems.
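For example (anything reported from 2.8.0 upwards is sufficient for GROMACS 4.6):
# Check whether CMake is installed and which version it is
cmake --version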
- Unpack the GROMACS tarball.
tar xfz gromacs-4.6.tar.gz
cd gromacs-4.6
- Make a separate build directory and change to it.
mkdir build
cd build
- Run CMake with the path to the source as an argument.
cmake .. -DGMX_BUILD_OWN_FFTW=ON
- Run make and make install.
make
sudo make install
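After make install finishes, the GROMACS tools are not automatically on your PATH. A minimal sketch, assuming the default install prefix /usr/local/gromacs (change it with -DCMAKE_INSTALL_PREFIX at the cmake step if you prefer another location):
# Source the GMXRC script that GROMACS installs to set up the environment
source /usr/local/gromacs/bin/GMXRC
# Quick check that the freshly built binaries work
mdrun -version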
Useful links:
- Downloading GROMACS
- GROMACS Documentation
- GROMACS Manuals
- GROMACS Tutorials
- GROMACS Installation instructions
3. NAMD Package Installation & Configuration
- Current version: Namd 2.9
- Homepage: https://www.ks.uiuc.edu/Research/namd/
NAMD (Not [just] Another Molecular Dynamics program) is an MD simulation program based on Charm++ parallel objects, developed by the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana-Champaign. NAMD is distributed free of charge with source code.
NAMD uses the popular molecular graphics program VMD for simulation setup and trajectory analysis, but is also file-compatible with AMBER, CHARMM, and X-PLOR.
The following lines briefly describe the steps needed to install and configure NAMD on a local machine. For more specific and extended information, please refer to the Namd Documentation.
NAMD Installation & Configuration Steps (click to show):
- Unpack NAMD and matching Charm++ source code and enter directory.
tar xzf NAMD_2.9_Source.tar.gz
cd NAMD_2.9_Source
tar xf charm-6.4.0.tar
cd charm-6.4.0
- Build and test the Charm++/Converse library (multicore version):
./build charm++ multicore-linux64 --with-production
cd multicore-linux64/tests/charm++/megatest
make pgm
./pgm +p4 (multicore does not support multiple nodes)
cd ../../../../..
- Build and test the Charm++/Converse library (MPI version):
env MPICXX=mpicxx ./build charm++ mpi-linux-x86_64 --with-production
cd mpi-linux-x86_64/tests/charm++/megatest
make pgm
mpirun -n 4 ./pgm (run as any other MPI program on your cluster)
cd ../../../../..
- Download and install TCL and FFTW libraries:
(cd to NAMD_2.9_Source if you're not already there)
wget https://www.ks.uiuc.edu/Research/namd/libraries/fftw-linux-x86_64.tar.gz
tar xzf fftw-linux-x86_64.tar.gz
mv linux-x86_64 fftw
wget https://www.ks.uiuc.edu/Research/namd/libraries/tcl8.5.9-linux-x86_64.tar.gz
wget https://www.ks.uiuc.edu/Research/namd/libraries/tcl8.5.9-linux-x86_64-threaded.tar.gz
tar xzf tcl8.5.9-linux-x86_64.tar.gz
tar xzf tcl8.5.9-linux-x86_64-threaded.tar.gz
mv tcl8.5.9-linux-x86_64 tcl
mv tcl8.5.9-linux-x86_64-threaded tcl-threaded
- Optionally edit various configuration files:
(not needed if charm-6.4.0, fftw, and tcl are in NAMD_2.9_Source)
vi Make.charm (set CHARMBASE to full path to charm)
vi arch/Linux-x86_64.fftw (fix library name and path to files)
vi arch/Linux-x86_64.tcl (fix library version and path to TCL files)
- Set up build directory and compile:
multicore version: ./config Linux-x86_64-g++ --charm-arch multicore-linux64
network version: ./config Linux-x86_64-g++ --charm-arch net-linux-x86_64
MPI version: ./config Linux-x86_64-g++ --charm-arch mpi-linux-x86_64
cd Linux-x86_64-g++
make (or gmake -j4, which should run faster)
- Quick tests using one and two processes (network version):
(this is a 66-atom simulation so don't expect any speedup)
./namd2
./namd2 src/alanin
./charmrun ++local +p2 ./namd2
./charmrun ++local +p2 ./namd2 src/alanin
(for MPI version, run namd2 binary as any other MPI executable)
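Once the tests pass, a real simulation is launched the same way. A minimal sketch for the network version, where myjob.namd is a hypothetical NAMD configuration file and the process count should match your machine:
# Run a NAMD job on 4 cores, keeping the log for later inspection
# (myjob.namd is a hypothetical configuration file)
./charmrun ++local +p4 ./namd2 myjob.namd > myjob.log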
Useful links:
- Downloading NAMD Source Code
- Downloading NAMD Precompiled Binaries
- NAMD Manual / User's Guide
- NAMD Tutorials
- NAMD Installation instructions