Project author: IBM

Project description:
Example scripts and configuration files to install and configure IBM Spectrum Scale in a Vagrant environment
Primary language: Shell
Repository: git://github.com/IBM/SpectrumScaleVagrant.git
Created: 2019-01-24T21:13:45Z
Project community: https://github.com/IBM/SpectrumScaleVagrant

License: Apache License 2.0


Storage Scale Vagrant

Example scripts and configuration files to install and configure IBM Storage Scale in a Vagrant environment.

Installation

The scripts and configuration files provision a single-node IBM Storage Scale cluster using Vagrant.

Get the scripts and configuration files from GitHub

Open a Command Prompt and clone the GitHub repository:

  git clone https://github.com/IBM/StorageScaleVagrant.git
  cd StorageScaleVagrant

Get the Storage Scale self-extracting installation package

The creation of the Storage Scale cluster requires the Storage Scale
self-extracting installation package. The developer edition can be downloaded
from the Storage Scale home page.

Download the Storage_Scale_Developer-5.2.2.1-x86_64-Linux-install package and
save it to directory StorageScaleVagrant/software on the host.

Please note that if the Storage Scale Developer version you downloaded is
newer than the one listed here, you can still use the newer version. Before
continuing, update the $SpectrumScale_version variable in Vagrantfile.common
to match the version you downloaded.
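
As a minimal sketch, assuming you downloaded a hypothetical version 5.2.3.0, checking and updating the variable from the repository root could look like this (adjust the version strings to what you actually downloaded):

  # Show the currently configured version string in Vagrantfile.common
  grep SpectrumScale_version Vagrantfile.common

  # Update it in place (example values only; GNU sed shown, on macOS use: sed -i '' ...)
  sed -i 's/5.2.2.1/5.2.3.0/g' Vagrantfile.common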

During provisioning, Vagrant copies this file from the host to the directory
/software on the management node m1.
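
Once the provisioning described below has completed, you can optionally verify the copy from the provider subdirectory you provisioned from (a quick check, not part of the original instructions):

  # Show the installation package on the management node m1
  vagrant ssh m1 -c 'ls -lh /software'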

Install Vagrant

Follow the Vagrant Getting Started Guide to install Vagrant and to get
familiar with it.
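
A quick way to confirm that Vagrant is installed and available on your PATH:

  # Print the installed Vagrant version
  vagrant --version

  # List installed Vagrant plugins (e.g. vagrant-libvirt, if you plan to use the libvirt provider)
  vagrant plugin list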

Provisioning

Storage Scale Vagrant supports the creation of a single-node Storage Scale
cluster on VirtualBox, libvirt, and AWS. There is a subdirectory for each
supported provider. Follow the instructions in the subdirectory of your
preferred provider to install and configure a virtual machine.

Directory    Provider
aws          Amazon Web Services
virtualbox   VirtualBox
libvirt      libvirt (KVM/QEMU)

Please note that for AWS you may prefer the new “Cloudkit” Storage Scale
capability, which is also available with the Storage Scale Developer Edition.
For more details about Cloudkit, please refer to the documentation.

Once the virtual environment is provisioned, Storage Scale Vagrant uses the same
scripts to install and configure Storage Scale. Storage Scale Vagrant executes
those scripts automatically during the provisioning process (vagrant up) for
your preferred provider.
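
As an illustrative sketch (the provider subdirectories contain the authoritative instructions), provisioning with VirtualBox would look like this:

  cd virtualbox      # or: cd libvirt / cd aws
  vagrant up         # creates the VM and runs the setup scripts automatically
  vagrant ssh m1     # log in afterwards (plain 'vagrant ssh' also works for a single VM)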

Directory       Description
setup/install   Perform all steps to provision a Storage Scale cluster
setup/demo      Perform all steps to configure the Storage Scale cluster for demo purposes

Storage Scale Management Interfaces

Storage Scale Vagrant uses the Storage Scale CLI and the Storage Scale REST API
to install and configure Storage Scale. In addition, it configures the Storage
Scale GUI to allow interested users to explore its capabilities.

[!IMPORTANT]
By default, the Storage Scale GUI is mapped to port 8888 on the host.
This might conflict with other software using that port (see issue #54).
You can configure the port yourself here
for VirtualBox and here for libvirt.
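
Before provisioning, you can optionally check on the host whether something is already listening on port 8888 (a sketch, not from the original instructions):

  # Linux: list listeners on port 8888 (prints nothing if the port is free)
  ss -tlnp | grep :8888

  # macOS alternative
  lsof -iTCP:8888 -sTCP:LISTEN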

Storage Scale CLI

Storage Scale Vagrant configures the shell $PATH variable and the sudo
secure_path to include the location of the Storage Scale executables.
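
You can verify this from a shell on m1; the Storage Scale commands are typically installed under /usr/lpp/mmfs/bin (an optional check, assuming the default installation location):

  # Locate a Storage Scale command for the vagrant user
  command -v mmlscluster

  # Confirm that sudo resolves the command via secure_path as well
  sudo which mmlscluster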

  [vagrant@m1 ~]$ sudo mmlscluster
  GPFS cluster information
  ========================
    GPFS cluster name:         demo.example.com
    GPFS cluster id:           4200744107440960413
    GPFS UID domain:           demo.example.com
    Remote shell command:      /usr/bin/ssh
    Remote file copy command:  /usr/bin/scp
    Repository type:           CCR

   Node  Daemon node name  IP address  Admin node name  Designation
  ------------------------------------------------------------------
     1   m1.example.com    10.1.2.11   m1m.example.com  quorum-manager-perfmon
  [vagrant@m1 ~]$

Storage Scale REST API

To explore the Storage Scale REST API, enter
https://localhost:8888/ibm/api/explorer (for AWS, use https://<AWS Public IP>/ibm/api/explorer)
in a browser. The Storage Scale REST API uses the
same accounts as the Storage Scale GUI. There's also a blog post available which
contains more details on how to explore the REST API using the IBM API Explorer
URL:

Trying out and exploring the Storage Scale REST API using “curl” and/or the IBM API Explorer website

Configuration of Storage Scale Cluster:

  [vagrant@m1 ~]$ curl -k -X GET --header 'Accept: application/json' -u admin:admin001 'https://localhost:8888/scalemgmt/v2/cluster'
  {
    "cluster" : {
      "clusterSummary" : {
        "clusterId" : 4200744107441232322,
        "clusterName" : "demo.example.com",
        "primaryServer" : "m1.example.com",
        "rcpPath" : "/usr/bin/scp",
        "rcpSudoWrapper" : false,
        "repositoryType" : "CCR",
        "rshPath" : "/usr/bin/ssh",
        "rshSudoWrapper" : false,
        "uidDomain" : "demo.example.com"
      }
    },
    .....
    "status" : {
      "code" : 200,
      "message" : "The request finished successfully."
    }
  }
  [vagrant@m1 ~]$

Cluster nodes:

  [vagrant@m1 ~]$ curl -k -X GET --header 'Accept: application/json' -u admin:admin001 'https://localhost:8888/scalemgmt/v2/nodes'
  {
    "nodes" : [ {
      "adminNodeName" : "m1.example.com"
    } ],
    "status" : {
      "code" : 200,
      "message" : "The request finished successfully."
    }
  }
  [vagrant@m1 ~]$

Storage Scale GUI

To connect to the Storage Scale GUI, enter https://localhost:8888 (AWS:
https://<AWS Public IP>) in a browser. The GUI is configured with a
self-signed certificate. After accepting the certificate, the login screen
appears. The user admin has the default password admin001.
To be able to use the GUI early in the installation process, a user
performance with the default password monitor is created.
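
If you just want to confirm that the GUI is reachable before logging in, a quick optional check from the host (-k skips verification of the self-signed certificate):

  # Expect an HTTP response such as 200 or a redirect to the login page
  curl -k -I https://localhost:8888/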

Cluster overview in Storage Scale GUI:

Storage Scale Filesystem

Storage Scale Vagrant configures the filesystem fs1 and adds some example data
to illustrate selected Storage Scale features.

Filesystems

The filesystem fs1 is mounted on all cluster nodes at /ibm/fs1:

  [vagrant@m1 ~]$ mmlsmount all
  File system fs1 is mounted on 1 nodes.
  [vagrant@m1 ~]$ mmlsfs fs1 -T
  flag                value                    description
  ------------------- ------------------------ -----------------------------------
   -T                 /ibm/fs1                 Default mount point
  [vagrant@m1 ~]$

On Linux, a Storage Scale filesystem can be used like any other filesystem:

  [vagrant@m1 ~]$ mount | grep /ibm/
  fs1 on /ibm/fs1 type gpfs (rw,relatime,seclabel)
  [vagrant@m1 ~]$ find /ibm/
  /ibm/
  /ibm/fs1
  /ibm/fs1/.snapshots
  [vagrant@m1 ~]$

REST API call to show all filesystems:

  [vagrant@m1 ~]$ curl -k -s -S -X GET --header 'Accept: application/json' -u admin:admin001 'https://localhost/scalemgmt/v2/filesystems/'
  {
    "filesystems" : [ {
      "name" : "fs1"
    } ],
    "status" : {
      "code" : 200,
      "message" : "The request finished successfully."
    }
  }[vagrant@m1 ~]$

Storage Pools

Storage pools allow to integrate different media types such es NVMe, SSD and
NL-SAS into a single filesystem. Each Storage Scale filesystem has at list the
system pool which stores metadata (inodes) and optionally data (content of
files).

  [vagrant@m1 ~]$ mmlspool fs1
  Storage pools in file system at '/ibm/fs1':
  Name       Id   BlkSize  Data  Meta  Total Data in (KB)  Free Data in (KB)  Total Meta in (KB)  Free Meta in (KB)
  system      0   4 MB     yes   yes   5242880             1114112 ( 21%)     5242880             1167360 ( 22%)
  [vagrant@m1 ~]$ mmdf fs1
  disk                disk size  failure  holds     holds          free in KB          free in KB
  name                    in KB    group  metadata  data       in full blocks        in fragments
  --------------- ------------- -------- --------- ----- -------------------- -------------------
  Disks in storage pool: system (Maximum disk size allowed is 15.87 GB)
  nsd3                  1048576        1  Yes       Yes         229376 ( 22%)         11384 ( 1%)
  nsd4                  1048576        1  Yes       Yes         204800 ( 20%)         11128 ( 1%)
  nsd5                  1048576        1  Yes       Yes         217088 ( 21%)         11128 ( 1%)
  nsd2                  1048576        1  Yes       Yes         225280 ( 21%)         11640 ( 1%)
  nsd1                  1048576        1  Yes       Yes         237568 ( 23%)         11640 ( 1%)
                  -------------                           -------------------- -------------------
  (pool total)          5242880                                1114112 ( 21%)         56920 ( 1%)
                  =============                           ==================== ===================
  (total)               5242880                                1114112 ( 21%)         56920 ( 1%)

  Inode Information
  -----------------
  Number of used inodes:            4108
  Number of free inodes:          103412
  Number of allocated inodes:     107520
  Maximum number of inodes:       107520
  [vagrant@m1 ~]$

A typical configuration is to use NVMe or SSD for the system pool for metadata
and hot files, and to add a second storage pool with NL-SAS for colder data.

  [vagrant@m1 ~]$ cat /vagrant/files/spectrumscale/stanza-fs1-capacity
  %nsd: device=/dev/sdg
    nsd=nsd6
    servers=m1
    usage=dataOnly
    failureGroup=1
    pool=capacity
  %nsd: device=/dev/sdh
    nsd=nsd7
    servers=m1
    usage=dataOnly
    failureGroup=1
    pool=capacity
  [vagrant@m1 ~]$ sudo mmadddisk fs1 -F /vagrant/files/spectrumscale/stanza-fs1-capacity
  The following disks of fs1 will be formatted on node m1:
      nsd6: size 10240 MB
      nsd7: size 10240 MB
  Extending Allocation Map
  Creating Allocation Map for storage pool capacity
  Flushing Allocation Map for storage pool capacity
  Disks up to size 322.37 GB can be added to storage pool capacity.
  Checking Allocation Map for storage pool capacity
  Completed adding disks to file system fs1.
  mmadddisk: mmsdrfs propagation completed.
  [vagrant@m1 ~]$

Now the filesystem has two storage pools.

  [vagrant@m1 ~]$ mmlspool fs1
  Storage pools in file system at '/ibm/fs1':
  Name       Id      BlkSize  Data  Meta  Total Data in (KB)  Free Data in (KB)   Total Meta in (KB)  Free Meta in (KB)
  system      0      4 MB     yes   yes   5242880             1101824 ( 21%)      5242880             1155072 ( 22%)
  capacity    65537  4 MB     yes   no    20971520            20824064 ( 99%)     0                   0 ( 0%)
  [vagrant@m1 ~]$ mmdf fs1
  disk                disk size  failure  holds     holds          free in KB          free in KB
  name                    in KB    group  metadata  data       in full blocks        in fragments
  --------------- ------------- -------- --------- ----- -------------------- -------------------
  Disks in storage pool: system (Maximum disk size allowed is 15.87 GB)
  nsd1                  1048576        1  Yes       Yes         233472 ( 22%)         11640 ( 1%)
  nsd2                  1048576        1  Yes       Yes         221184 ( 21%)         11640 ( 1%)
  nsd3                  1048576        1  Yes       Yes         229376 ( 22%)         11384 ( 1%)
  nsd4                  1048576        1  Yes       Yes         204800 ( 20%)         11128 ( 1%)
  nsd5                  1048576        1  Yes       Yes         212992 ( 20%)         11128 ( 1%)
                  -------------                           -------------------- -------------------
  (pool total)          5242880                                1101824 ( 21%)         56920 ( 1%)
  Disks in storage pool: capacity (Maximum disk size allowed is 322.37 GB)
  nsd6                 10485760        1  No        Yes       10412032 ( 99%)          8056 ( 0%)
  nsd7                 10485760        1  No        Yes       10412032 ( 99%)          8056 ( 0%)
                  -------------                           -------------------- -------------------
  (pool total)         20971520                               20824064 ( 99%)         16112 ( 0%)
                  =============                           ==================== ===================
  (data)               26214400                               21925888 ( 84%)         73032 ( 0%)
  (metadata)            5242880                                1101824 ( 21%)         56920 ( 1%)
                  =============                           ==================== ===================
  (total)              26214400                               21925888 ( 84%)         73032 ( 0%)

  Inode Information
  -----------------
  Number of used inodes:            4108
  Number of free inodes:          103412
  Number of allocated inodes:     107520
  Maximum number of inodes:       107520
  [vagrant@m1 ~]$

Disclaimer

Please note: This project is released for use “AS IS” without any warranties of any kind, including, but not limited to, installation, use, or performance of the resources in this repository.
We are not responsible for any damage, data loss or charges incurred with their use.
This project is outside the scope of the IBM PMR process. If you have any issues, questions, or suggestions, you can create a new issue here.
Issues will be addressed as team availability permits.