Project author: galexrt

Project description:
A Ceph Vagrant multi-node environment.
Language: Shell
Repository: git://github.com/galexrt/ceph-vagrant-multi-node.git
Created: 2018-06-24T17:20:56Z
Project homepage: https://github.com/galexrt/ceph-vagrant-multi-node

License: Apache License 2.0


ceph-vagrant-multi-node

Prerequisites

  • make
  • Vagrant (tested with 2.1.1)
  • VirtualBox
  • rsync
  • ssh-keygen

Hardware Requirements

  • Per Node (default 3 are started):
    • CPU: 1 Core
    • Memory: 1GB

Quickstart

To start with the defaults (3 nodes), run the following:

  1. Edit the Makefile and set the CEPH_RELEASE variable (for example, nautilus).
  2. $ make up -j3

The -j3 flag starts the three VMs in parallel, which speeds up creation of the Ceph cluster.

  1. $ make ssh-node-1
  2. $ ceph -s
  3. TODO
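After `make up` finishes, the cluster may need a moment to settle before `ceph -s` reports HEALTH_OK. A minimal sketch (not part of the repo) that polls a status command until the cluster is healthy; in practice you would pass the `ceph-status` target listed under `make help`:

```shell
#!/bin/sh
# Sketch only: poll a status command until it reports HEALTH_OK.
# `cmd` and `tries` are illustrative parameter names, not repo conventions.
wait_healthy() {
    cmd="$1"; tries="$2"
    i=0
    while [ "$i" -lt "$tries" ]; do
        if $cmd | grep -q HEALTH_OK; then
            echo "cluster healthy"
            return 0
        fi
        sleep 1
        i=$((i + 1))
    done
    return 1
}

# Stub invocation for illustration; against a real cluster you would run:
#   wait_healthy "make ceph-status" 30
wait_healthy "echo HEALTH_OK" 1
```

The polling loop avoids racing the Ceph monitors right after the VMs boot.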

Usage

Starting the environment

To start up the Vagrant Ceph multi-node environment with the default of three nodes (not in parallel), run:

  1. $ make up

Faster (parallel) environment start

To start up 3 VMs in parallel, run the following (note that the -j flag does not control how many (worker) VMs are started; the NODE_COUNT variable is used for that):

  1. $ NODE_COUNT=3 make up -j3

The -j CORES/THREADS flag sets how many VMs (Makefile targets) are run at the same time.
You can also use -j $(nproc) to start as many VMs as your machine has cores/threads.
So to start all VMs (three nodes) in parallel, you would add one to the chosen NODE_COUNT.
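Putting the two knobs together, a hedged sketch (variable choices illustrative) that picks a NODE_COUNT and derives a -j value from the machine's core count, falling back to the node count where nproc is unavailable:

```shell
#!/bin/sh
# Illustrative only: compose the parallel start command without running it.
NODE_COUNT=3
JOBS=$(nproc 2>/dev/null || echo "$NODE_COUNT")
echo "NODE_COUNT=$NODE_COUNT make up -j $JOBS"
```

Remove the echo to actually run the composed command.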

Show status of VMs

  1. $ make status
     node1 not created (virtualbox)
     node2 not created (virtualbox)
     node3 not created (virtualbox)

Shutting down the environment

To destroy the Vagrant environment run:

  1. $ make clean

Data inside VM

See the data/VM_NAME/ directories, where VM_NAME is for example node1.
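With the default three-node quickstart, the shared-folder paths would look like the following (a sketch; the directories themselves are created by the Makefile, not by this snippet):

```shell
#!/bin/sh
# Illustrative: print the per-VM shared-folder paths for three nodes.
for i in 1 2 3; do
    echo "data/node$i/"
done
```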

Show make targets

  1. $ make help
     ceph-status         Runs `ceph -s` inside the first node to return the Ceph cluster status.
     clean-data          Remove data (shared folders) and disks of all VMs (nodes).
     clean-node-%        Remove a node VM, where `%` is the number of the node.
     clean-nodes         Remove all node VMs.
     clean               Destroy node VMs, and delete data.
     help                Show this help menu.
     init-ceph-cluster   Run the init-ceph-cluster.sh script to deploy the Ceph cluster (automatically done by the `up` target).
     reset-ceph-cluster  Run "Starting Over" commands to "reset" the Ceph cluster.
     ssh-keygen          Generate the SSH key for the `ceph-deploy` command used for the actual Ceph cluster deployment.
     ssh-node-%          SSH into a node VM, where `%` is the number of the node.
     start-nodes         Create and start all node VMs by utilizing the `node-X` target (automatically done by the `up` target).
     start-node-%        Start a node VM, where `%` is the number of the node.
     status-node-%       Show status of a node VM, where `%` is the number of the node.
     status-nodes        Show status of all node VMs.
     stop-nodes          Stop/halt all node VMs.
     stop-node-%         Stop/halt a node VM, where `%` is the number of the node.
     stop                Stop/halt all node VMs.
     up                  Start the Ceph Vagrant multi-node cluster. Creates, starts, and boots up the node VMs.

Variables

Variable Name        Default Value  Description
BOX_IMAGE            centos/7       Set the VM box image to use.
DISK_COUNT           1              How many additional disks are added to each VM.
DISK_SIZE_GB         10             Size in GB of the additional disks added to the VMs.
NODE_COUNT           2              How many worker nodes should be spawned.
NODE_CPUS            1              How many CPU cores to use for each node VM.
NODE_MEMORY_SIZE_GB  1              Size of memory in GB allocated for each node VM.
CEPH_RBD_CREATE      true           Whether a pool named `rbd` (for rbd) should be created.
CEPH_RBD_POOL_PG     64             Number of PGs to set for the `rbd` pool.
CEPH_RBD_POOL_SIZE   3              The size of the `rbd` pool (min_size is 1).
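These variables can be set on the make command line or in the environment. A sketch (values purely illustrative) that composes an `up` invocation with several overrides, adding one to NODE_COUNT for the -j value as described above:

```shell
#!/bin/sh
# Illustrative: compose an `up` invocation with overridden variables
# (remove the echo to actually run it).
BOX_IMAGE="centos/7"
NODE_COUNT=4
DISK_COUNT=2
DISK_SIZE_GB=20
echo "BOX_IMAGE=$BOX_IMAGE NODE_COUNT=$NODE_COUNT DISK_COUNT=$DISK_COUNT DISK_SIZE_GB=$DISK_SIZE_GB make up -j $((NODE_COUNT + 1))"
```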