Project author: pdidev

Project description:
Spack packages for PDI and dependencies
Primary language: Python
Repository address: git://github.com/pdidev/spack.git
Created: 2020-03-30T07:43:00Z
Project community: https://github.com/pdidev/spack

License: Other

PDI spack repository

This is a spack repository with recipes for PDI and its plugins.
It can be installed in a few simple steps:

  1. set up spack,
  2. (optional) reuse already installed packages,
  3. (optional) set up a non-default compiler,
  4. install.

Step #1: Setup

To use it, you should first set up spack:

  # 1. Get and enable Spack
  git clone https://github.com/spack/spack.git
  . spack/share/spack/setup-env.sh
  # 2. Get and enable this spack repo
  git clone https://github.com/pdidev/spack.git spack/var/spack/repos/pdi
  spack repo add spack/var/spack/repos/pdi
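To check that the repository was registered correctly, you can list the repositories known to your spack instance; after the commands above, the pdi repo should appear alongside the builtin one:

```shell
# List all package repositories known to this spack instance.
spack repo list
```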

Step #2 (optional): reuse already installed packages

Option #2.1: Reuse already installed packages on super-computers that use spack

You can tell your local spack instance that there is an upstream spack instance by editing spack/etc/spack/defaults/upstreams.yaml.

  upstreams:
    name-of-spack-instance:
      install_tree: /path/to/spack/opt/spack

This lets spack use the already installed packages when it sees fit (which is almost never).
You can, however, force spack to reuse already installed packages by passing the --reuse flag to spack install.
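For example, assuming an upstream is configured as above (the package name here is just one of the plugins installed later in this guide):

```shell
# Prefer already-installed packages (including upstream ones)
# over rebuilding them when resolving this spec.
spack install --reuse pdiplugin-trace
```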

For example on Ruche, you can do:

  cat <<EOF > spack/etc/spack/defaults/upstreams.yaml
  upstreams:
    ruche-system:
      install_tree: /gpfs/softs/spack/opt/spack/
  EOF

Option #2.2: Reuse already installed packages on super-computers that do not use spack

If there are packages already present on the machine that you want spack to use (e.g. MPI), you can specify them as externals through the packages.yaml file, found either in a Spack installation's etc/spack/ directory or in a user's ~/.spack/ directory. Here's an example of an external configuration:

  packages:
    openmpi:
      externals:
      - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
        prefix: /opt/openmpi-1.4.3
      - spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
        prefix: /opt/openmpi-1.4.3-debug
      - spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
        prefix: /opt/openmpi-1.6.5-intel

It is recommended to only declare MPI implementations, CMake and openssl as externals and let spack take care of the rest.
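A minimal sketch of that recommendation could look like the following; the specs and prefixes below are placeholders that must be replaced with what is actually installed on your machine:

```yaml
packages:
  openmpi:
    externals:
    - spec: "openmpi@4.0.3%gcc@9.2.0"  # placeholder: your actual MPI spec
      prefix: /usr                     # placeholder: your actual install prefix
    buildable: false  # never rebuild MPI; always use the external
  cmake:
    externals:
    - spec: "cmake@3.16.3"  # placeholder
      prefix: /usr
  openssl:
    externals:
    - spec: "openssl@1.1.1"  # placeholder
      prefix: /usr
```

Setting buildable: false for the MPI entry forces spack to use the external rather than ever building its own, which is usually what you want on a cluster with a tuned MPI.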

Step #3 (optional): set up a non-default compiler

For compilers, you can specify them in the user’s ~/.spack/linux/compilers.yaml.

  compilers:
  - compiler:
      spec: gcc@4.8.5
      paths:
        cc: /usr/bin/gcc
        cxx: /usr/bin/g++
        f77: /usr/bin/gfortran
        fc: /usr/bin/gfortran
      flags: {}
      operating_system: centos7
      target: x86_64
      modules: []
      environment: {}
      extra_rpaths: []

This can be done automatically by calling spack compiler find when the compilers are loaded.
This is needed if you want to use the Intel compilers.

For example on Ruche, you can do:

  spack load gcc@9.2.0
  spack compiler find
  spack unload gcc@9.2.0
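Once a compiler is registered, it can be requested explicitly in a spec with the % syntax (gcc@9.2.0 here stands in for whichever compiler spack compiler find registered on your machine):

```shell
# Build PDI with a specific, non-default compiler.
spack install pdi %gcc@9.2.0
```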

Step #4: Installation

You can install PDI and most of its plugins using the following instructions after you’ve done the setup:

  # Install PDI and most of its plugins
  spack install pdiplugin-decl-hdf5 pdiplugin-decl-netcdf pdiplugin-mpi pdiplugin-pycall pdiplugin-serialize pdiplugin-set-value pdiplugin-trace pdiplugin-user-code

If you only need some of the plugins, you can adapt the last line.
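For example, a sketch installing just two of the plugins listed above and then making them available in the current shell (pdi itself is pulled in as a dependency of its plugins):

```shell
# Install only the plugins you need.
spack install pdiplugin-decl-hdf5 pdiplugin-trace
# Load the installed packages into the current environment.
spack load pdiplugin-decl-hdf5 pdiplugin-trace
```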