HiFiMagnet Developer documentation

Welcome to the HiFiMagnet Developer documentation!

HiFiMagnet Salome plugin

This section describes the HiFiMagnet Salome plugin installation from scratch. You can also use the official Salome binaries to build the plugin, but this is not detailed here.

Building Salome from scratch is performed within a Docker image to avoid any conflicts with a pre-existing configuration. If the build succeeds, the resulting binaries are exported at the end of the build process as a compressed archive. This archive is later used when creating the containers.

All the scripts and Docker or Singularity recipes may be found in the HiFiMagnet GitHub repository. In particular, see the following directories:

  • docker: for all docker related stuff

  • singularity: same for singularity

  • package: for all debian/ubuntu packaging questions

1. Using official binaries

This is not the recommended installation method, and it is only supported starting from Salome 8.3.0.

If a Salome tarball exists for your OS, just retrieve it from the Salome website. Note that you need to be a registered user to download the official binaries. Otherwise, you can try to build Salome from scratch following the instructions given in the Building from scratch section.

Once you have the binary, just untar it and try to run Salome from the command line.
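For instance, assuming the 8.3.0 tarball for your distribution has been downloaded (the archive name below is only an illustration, adapt it to the file you actually retrieved):

tar xzf SALOME-8.3.0.tgz        # hypothetical archive name
cd SALOME-8.3.0
./salome                        # launcher shipped with the official binaries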

To build a container from the official Salome binaries, you can create a custom Dockerfile-official from the template Dockerfile-official.in in the HiFiMagnet docker repository. Do not forget to create an account on the Salome website.

cp Dockerfile-official.in Dockerfile-official
perl -pi -e "s|DIST|$DIST|" Dockerfile-official

cp WELCOME-official WELCOME
perl -pi -e "s|VERSION|8.3.0|" WELCOME
perl -pi -e "s|SVERSION|V8_3_0|" WELCOME
perl -pi -e "s|DIST|$DIST|" WELCOME

docker build \
   --build-arg HTTP_USER=... \
   --build-arg HTTP_PASSWD=... \
   --build-arg VERSION=8.3.0 \
   --build-arg SVERSION=V8_3_0 \
   --build-arg GRAPHICS=nvidia \
   --build-arg DIST=jessie \
   --build-arg MESHGEMS=0 \
   -t salome-8.3.0:new -f Dockerfile-official .

rm -f WELCOME
rm -f Dockerfile-official
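A hedged example of running the resulting image with X11 forwarding from a Linux host (the exact options depend on your setup; with GRAPHICS=nvidia you would typically go through nvidia-docker or the --gpus option):

xhost +local:docker                      # allow local containers to reach the X server
docker run -it --rm \
   -e DISPLAY=$DISPLAY \
   -v /tmp/.X11-unix:/tmp/.X11-unix \
   salome-8.3.0:new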

To install the HiFiMagnet Salome plugin you then have two options:

  • use salomeTools if they are available, …

  • use the installer provided in the HiFiMagnet Salome plugin sources, …

2. Building from scratch

  • First create a directory to hold the Salome tarball releases:

mkdir -p $HOME/Salome_Packages

  • Prepare the archives directory SALOME-${VERSION}/ARCHIVES that contains the source tarballs of the prerequisites

  • Check or create a new application in PROJECT/applications, see for instance SALOME-8.3.0.pyconf

  • Launch the Salome build:

./build_salome_new.sh -v $VERSION -p $DIST -m [-d]

where -m enables MPI support and -d is used for debugging purposes.
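For instance, to build Salome 8.3.0 for Debian stretch with MPI support (the values below are illustrative):

VERSION=8.3.0
DIST=stretch
./build_salome_new.sh -v $VERSION -p $DIST -m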

The actual build process involves:

  • the creation of a temporary Docker image holding the salomeTools suite that performs the installation. The prerequisites are:

    • pyconf file describing the requested configuration

    • the directory SALOME_$VERSION, holding the archive tarballs of the dependencies,

  • the Salome tarball is written into the $HOME/Salome_Packages directory,

  • the creation of a Docker image holding the requested VERSION of Salome plus a graphics driver.

Only the nvidia driver is installed.

The image can then be pushed to the feelpp/salome dockerhub repository:

docker login
docker tag ...
docker push
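For instance, assuming the image was built as salome-8.3.0:new with the nvidia driver (the target tag is an assumption, adapt it to your naming scheme):

docker login
docker tag salome-8.3.0:new feelpp/salome:8.3.0-nvidia
docker push feelpp/salome:8.3.0-nvidia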

On Windows 10 Pro you have to:

  • install MobaXterm to get an X11 server,

  • install WSL,

  • install Debian/Stretch from Windows Store,

To install Salome in WSL Debian/Stretch:

  • start Debian,

  • set up the lncmi debian repository,

  • install the Salome binary for Debian/Stretch in the Debian WSL.
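A hedged sketch of these steps inside the Debian terminal; the repository URL and package name below are placeholders, use the actual lncmi repository settings:

# hypothetical repository entry, replace with the real lncmi repository
echo "deb http://lncmi.example.org/debian stretch main" | sudo tee /etc/apt/sources.list.d/lncmi.list
sudo apt-get update
sudo apt-get install -y salome    # package name may differ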

Then to run Salome:

  • start MobaXterm to get X11 up and running

  • launch Debian from menu or by typing debian.exe in a powershell

  • from the "debian" terminal, run Salome

Note that you may have to add some missing packages to get Salome working.
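When Salome does not start, the usual suspects are the DISPLAY variable and missing X11/OpenGL runtime libraries; a hedged sketch (the package names are typical candidates, not an exhaustive list):

# point X11 clients at the MobaXterm X server running on Windows
export DISPLAY=localhost:0.0

# typical runtime libraries missing from a fresh WSL Debian install
sudo apt-get install -y libgl1-mesa-glx libxrender1 libxcursor1 libxft2 libxinerama1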

3. Singularity image

  • Supported Versions: 7.8.0 to 8.4.0

  • Supported Graphic card: nvidia

The most common procedure to create a Singularity image for Salome relies on the Docker image and depends on the Singularity version:

  • for Singularity 2.3.1:

DIST=$(docker run -it --rm trophime/salome-$VERSION:$TAG lsb_release -cs)
sudo singularity create --size 6144 ./salome-$VERSION-$DIST-$TAG.simg
sudo singularity --verbose import ./salome-$VERSION-$DIST-$TAG.simg docker://trophime/salome-$VERSION:$TAG

where VERSION and TAG stand for the Salome version and the graphics driver, respectively.

  • for Singularity 2.4.1 and higher:

DIST=$(docker run -it --rm trophime/salome-$VERSION:$TAG lsb_release -cs)
sudo -E singularity -vvv build --force --notest [--writable] "./salome-$VERSION-$DIST-$TAG.simg" "./salome-docker.def"

An alternative is to build the image directly from the Salome binaries; see the stretch.def recipe for details:

sudo -E singularity -vvv build --force --notest [--writable] "./salome-$VERSION-$TAG.simg" "./stretch.def"

We also provide a bash script to automate the creation of the Salome Singularity image:

#! /bin/bash -x

usage(){
   echo ""
   echo "Description:"
   echo "               Builds Salome Singularity container from official release "
   echo ""
   echo "Usage:"
   echo "               build.sh [ <option> ... ]"
   echo ""
   echo "Options:"
   echo ""
   echo "-i <dockerimage> Specify the name of docker image. Default is hifimagnet-from-scratch"
   echo ""
   echo "-t <tag>         Specify tag to use"
   echo ""
   echo "-b <bootstrap.def> Specify the name of bootstrap.def file. Default is hifimagnet-docker.def"
   echo ""
   echo "-s               Enable the test suite"
   echo ""
   echo "-u <user>        Specify the dockerhub username (default: DOCKER_LOGIN env var)"
   echo ""
   echo "-p <passwd>      Specify the dockerhub password (default: DOCKER_PASSWORD env var)"
   echo ""
   echo "-h               Prints this help information"
   echo ""
   exit 1
}

TESTSUITE=0

#########################################################
## parse parameters
##########################################################
while getopts "hb:t:i:su:p:" option ; do
   case $option in
       h ) usage ;;
       b ) BOOTSTRAP=$OPTARG ;;
       t ) DOCKERTAG=$OPTARG ;;
       i ) DOCKERIMAGE=$OPTARG ;;
       s ) TESTSUITE=1 ;;
       u ) DOCKER_USER=$OPTARG ;;
       p ) DOCKER_PASSWD=$OPTARG ;;
       ? ) usage ;;
   esac
done
# shift away the parsed options, leaving only positional arguments
shift $((OPTIND - 1))

# Optionally set VERSION and others if none is defined.
: ${RELEASE:="yakkety"}
: ${BOOTSTRAP:="hifimagnet-docker.def"}
: ${DOCKERFILE="Dockerfile"}
: ${BRANCH:="develop"}
: ${DOCKERTAG=${BRANCH}-${RELEASE}}
: ${DOCKERIMAGE="hifimagnet"}

: ${DOCKER_USER=${DOCKER_LOGIN}}
: ${DOCKER_PASSWD=${DOCKER_PASSWORD}}

if [ -f  hifimagnet-${DOCKERTAG}.simg ]; then
    echo "hifimagnet-${DOCKERTAG}.simg already exists"
    exit 1
fi

# Check dockerhub credential
if [ ! -z ${DOCKER_USER} ] && [ ! -z ${DOCKER_PASSWD} ] ; then
    docker login --username=${DOCKER_USER} --password="${DOCKER_PASSWD}"
    isOK=$?
    if [ "$isOK" != "0" ]; then
        echo "credentials for dockerhub are invalid"
        echo "username=${DOCKER_USER}"
        echo "password=${DOCKER_PASSWD}"
        exit 1
    fi
else
    echo "Docker environment variables not set: [ DOCKER_PASSWORD , DOCKER_LOGIN ]!"
    exit 1
fi

# Set credentials for singularity
export SINGULARITY_DOCKER_USERNAME=${DOCKER_USER}
export SINGULARITY_DOCKER_PASSWORD=${DOCKER_PASSWD}

# Check existence of docker image
# if [[ "$(docker images -q feelpp/${DOCKERIMAGE}:${DOCKERTAG} 2> /dev/null)" == "" ]]; then
#     docker pull feelpp/${DOCKERIMAGE}:${DOCKERTAG}
# fi

# TODO:
# Get Release from Dockerfile: requires lsb-release to be installed
# Set Os to Release in BOOTSTRAP file or Check Os/Release are consistent

# Get singularity version
SINGULARITY_BIN=$(which singularity)

SINGULARITY_MAJOR_VERSION=`${SINGULARITY_BIN} --version | sed 's/\([0-9]*\)[.]*[0-9]*[.]*[0-9]*.*/\1/'`
SINGULARITY_MINOR_VERSION=`${SINGULARITY_BIN} --version | sed 's/[0-9]*[.]*\([0-9]*\)[.]*[0-9]*.*/\1/'`
SINGULARITY_PATCH_VERSION=`${SINGULARITY_BIN} --version | sed 's/[0-9]*[.]*[0-9]*[.]*\([0-9]*\).*/\1/'`

# Get image size if singularity is less than 2.4
if [ $SINGULARITY_MAJOR_VERSION -eq 2 ] && [ $SINGULARITY_MINOR_VERSION -le 3 ] ; then
    SAFETY=1.1
    SIZE_MB=""
    SIZE=$(docker images feelpp/${DOCKERIMAGE}:${DOCKERTAG}  --format "{{.Size}}")
    isGB=$(echo $SIZE | grep GB)
    if [ -n "$isGB" ] ; then
        SIZE=$(echo $SIZE | tr -d "GB")
        echo "feelpp/${DOCKERIMAGE}:${DOCKERTAG}: ${SIZE} GB"
        SIZE_MB=$(echo "(${SAFETY}*${SIZE}*1024+0.5)/1" | bc)
    else
        isMB=$(echo $SIZE | grep MB)
        if [ -n "$isMB" ] ; then
            SIZE_MB=$(echo "(${SAFETY}*${SIZE}+0.5)/1" | bc)
            echo "feelpp/${DOCKERIMAGE}:${DOCKERTAG}: ${SIZE_MB} MB"
        fi
    fi

    ${SINGULARITY_BIN} create --size ${SIZE_MB} ${DOCKERIMAGE}-${DOCKERTAG}.simg
    ${SINGULARITY_BIN} -vvv import ${DOCKERIMAGE}-${DOCKERTAG}.simg docker://feelpp/${DOCKERIMAGE}:${DOCKERTAG}

    # NB user should be a sudoer at least for singularity
    # sudo -E ${SINGULARITY_BIN} -vvv  bootstrap ${DOCKERIMAGE}-${DOCKERTAG}.img ${BOOTSTRAP}

else
    # echo "singularity 2.4 and above are not supported right now"
    # exit 1

    # NB user should be a sudoer at least for singularity
    sudo -E ${SINGULARITY_BIN} -vvv build --force --notest --writable "./${DOCKERIMAGE}-${DOCKERTAG}.simg" "./${BOOTSTRAP}"

    # add option -w to have a "compatible" singularity image
fi

# where to store Singularity images??

# clear singularity cache ??
rm -rf $HOME/.singularity
#sudo rm -rf /root/.singularity???

docker logout

!! Watch out for kernel limitations when using the latest Debian/Ubuntu releases !! (e.g. on cesga, Ubuntu 18.04 LTS images won’t run, and the same goes for Debian buster)

The obtained Singularity images may be pushed to an Sregistry service such as the cesga registry:

export SREGISTRY_CLIENT=registry
export SREGISTRY_CLIENT_SECRETS=~/.sregistry-cesga
[export SREGISTRY_STORAGE=...]

sregistry push --name hifimagnet/salome --tag 8.4.0 salome-8.4.0.simg

The token that needs to be stored in the $SREGISTRY_CLIENT_SECRETS file can be obtained as follows:

  • connect to the Sregistry service (e.g. the cesga sregistry uses the FiWare authentication),

  • in the top-right drop-down menu with your user id, select Token,

  • copy the line into the $SREGISTRY_CLIENT_SECRETS file in your home directory.

The token file looks like this:

{ "registry": { "token": "xxxxxxx", "username": "user", "base": "http://sregistry.lcmi.local" } }

To download the image from Sregistry:

export SREGISTRY_CLIENT=registry
export SREGISTRY_CLIENT_SECRETS=~/.sregistry-cesga
[export SREGISTRY_STORAGE=...]

sregistry pull --name salome-8.4.0.simg hifimagnet/salome:8.4.0

The image may be renamed using:

[sregistry rename hifimagnet/salome:8.4.0 salome-8.4.0.simg]

On cesga, as sregistry is not installed, you should instead use the sregistry-cli image:

singularity run -B /mnt shub://sregistry.srv.cesga.es/mso4sc/sregistry pull hifimagnet/salome:8.4.0
singularity run -B /mnt shub://sregistry.srv.cesga.es/mso4sc/sregistry rename hifimagnet/salome:8.4.0 salome-8.4.0.simg

Note that you may receive some harmless warnings.

To run Salome from the image:

singularity exec [--nv] "./salome-$VERSION-$DIST-$TAG.simg" salome

or

singularity run [--nv] "./salome-$VERSION-$DIST-$TAG.simg"

The --nv option uses the host nvidia driver (it is only valid with Singularity 2.4.1 and later).
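For example, with a Salome 8.4.0 image built from the stretch Docker image with the nvidia driver (the file name is an assumption matching the naming convention above):

singularity run --nv ./salome-8.4.0-stretch-nvidia.simg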

MagnetTools

.1. Prerequisites

MagnetTools relies on the following software (see the installation sketch after the list):

  • popt,

  • gsl, freesteam,

  • spherepack, expokit,

  • sundials,

  • yamlcpp, json-spirit

  • nag (only required for Optimization)
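A hedged sketch of installing the dependencies that are packaged in Debian/Ubuntu; the package names are assumptions, adjust them to your distribution:

# assumed Debian/Ubuntu package names, adjust to your distribution
sudo apt-get update
sudo apt-get install -y libpopt-dev libgsl-dev libsundials-dev \
   libyaml-cpp-dev libjson-spirit-dev
# spherepack, expokit, freesteam and nag usually have to be built or installed separately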

.2. From scratch

HiFiMagnet

.1. Prerequisites

  • MagnetTools: for Optimization and Analytical calculations

  • Feel++: for Numerical Axi and 3D Modeling

    • toolbox: CSM

    • RB framework

  • Salome: for GUI/TUI CAD and Meshing features

  • gmsh: for converting mesh formats (see the conversion sketch after this list)

  • MeshGems: for Meshing features
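As an illustration of the mesh conversion step, one common gmsh invocation to turn a MED mesh produced by Salome into the MSH format read by Feel++ (the file names are placeholders and the exact options depend on your gmsh version):

# read the MED mesh and write it back in MSH version 2 format
gmsh magnet.med -3 -o magnet.msh -format msh2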

2. Using containers

2.1. Docker

2.1.1. Feel++ Containers

2.1.2. HiFiMagnet Containers

2.2. Singularity

2.2.1. Feel++ Containers

2.2.2. HiFiMagnet Containers

3. From scratch

MSO4SC Portal

.1. Creating an App in MSO Portal

An application for the MSO4SC portal is basically a workflow described using TOSCA. The application actually consists of:

  • a blueprint file, describing the workflow

  • some scripts to perform specific operations at bootstrap and end

A helper script is also provided to:

  • package the app for the portal,

  • …​

.1.1. blueprint: workflow TOSCA

.1.2. Running in the orchestrator CLI

  • Create a local-blueprint-inputs.yaml from the template file.

    • Define the HPC resource you are planning to use.

    • Enter your credentials (user/passwd)

    • Setup input parameters

  • Connect to the cloudify client via docker:

docker pull mso4sc/msoorchestrator-cli
docker run -it --rm \
    -v $HOME/MSO4SC/resources:/resources \
    mso4sc/msoorchestrator-cli

  • Within the container:

cfy profiles use ORCHESTRATOR_IP -t default_tenant -u USER -p PASSWD

Then you can simply deploy your application using:

./deploy up

To undeploy on the orchestrator, just:

./deploy down

If the application fails during one step of the deploy up process, you will have to force-cancel the execution by hand: run cfy executions list and cfy executions cancel -f <id> in the orchestrator client (docker).
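For instance (the execution id is the one reported by the list command):

cfy executions list
cfy executions cancel -f <execution-id>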

Some useful commands:

  • cfy status

  • cfy deployments list

  • cfy deployments delete -f <name>

  • cfy executions list

  • cfy executions cancel -f <id>

  • cfy blueprints list

  • cfy blueprints delete <name>

See here for details

.1.3. Package workflow for MSO4SC portal

To package the workflow for later use in the portal, just create a gzipped archive using:

./deploy pkg

.1.4. Register App in MSO4SC portal

To register the app, follow the instructions in this section. Remember that you have a choice:

  • add the app in the Marketplace

  • or make it available to every user of the platform

.2. Creating your own MSO Portal