CentOS 7

Ceph : Configure Ceph Cluster
2015/12/10
 
Install the distributed storage system "Ceph" to configure a storage cluster.
In this example, the cluster consists of 1 admin Node and 3 Storage Nodes, as follows.
                                         |
        +--------------------+           |           +-------------------+
        |   [dlp.srv.world]  |10.0.0.30  |   10.0.0.x|   [   Client  ]   |
        |    Ceph-Deploy     +-----------+-----------+                   |
        |                    |           |           |                   |
        +--------------------+           |           +-------------------+
            +----------------------------+----------------------------+
            |                            |                            |
            |10.0.0.51                   |10.0.0.52                   |10.0.0.53 
+-----------+-----------+    +-----------+-----------+    +-----------+-----------+
|   [node01.srv.world]  |    |   [node02.srv.world]  |    |   [node03.srv.world]  |
|     Object Storage    +----+     Object Storage    +----+     Object Storage    |
|     Monitor Daemon    |    |                       |    |                       |
|                       |    |                       |    |                       |
+-----------------------+    +-----------------------+    +-----------------------+
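This walkthrough refers to all nodes by hostname, so name resolution must work before anything else. If there is no DNS for the srv.world zone, one way is to add entries like the following (addresses taken from the diagram above) to /etc/hosts on every node:

```
10.0.0.30   dlp.srv.world     dlp
10.0.0.51   node01.srv.world  node01
10.0.0.52   node02.srv.world  node02
10.0.0.53   node03.srv.world  node03
```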

[1]
Add a user for Ceph administration on all Nodes.
This example adds the "cent" user.
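The user-creation commands themselves are not shown above; a minimal sketch, run as root on every node ("password" is a placeholder to replace with a real one):

```shell
# run as root on each node: create the "cent" admin user with a home directory
useradd -m cent
# set the password non-interactively; "password" is a placeholder
echo 'cent:password' | chpasswd
```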
[2] Grant root privileges via sudo to the Ceph admin user just added above.
And also install the required packages.
Furthermore, if Firewalld is running, allow the SSH service.
Apply all of the settings below on all Nodes.
[root@dlp ~]#
echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph

[root@dlp ~]#
chmod 440 /etc/sudoers.d/ceph

[root@dlp ~]#
yum -y install epel-release yum-plugin-priorities \
https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-1.el7.noarch.rpm

[root@dlp ~]#
sed -i -e "s/enabled=1/enabled=1\npriority=1/g" /etc/yum.repos.d/ceph.repo
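To see what that sed one-liner does before touching the real repo file, here is a dry run on a throwaway copy (the stanza below is a made-up sample, not the actual ceph.repo contents):

```shell
# build a sample repo stanza and apply the same substitution to it
cat > /tmp/ceph-demo.repo <<'EOF'
[ceph]
name=Ceph packages
baseurl=https://download.ceph.com/rpm-jewel/el7/x86_64/
enabled=1
gpgcheck=1
EOF

# GNU sed expands \n in the replacement, so every "enabled=1"
# line gains a "priority=1" line directly after it
sed -i -e "s/enabled=1/enabled=1\npriority=1/g" /tmp/ceph-demo.repo
cat /tmp/ceph-demo.repo
```

The priority=1 setting makes yum-plugin-priorities prefer packages from the Ceph repository over repositories with weaker (higher-numbered) priorities.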
[root@dlp ~]#
firewall-cmd --add-service=ssh --permanent

[root@dlp ~]#
firewall-cmd --reload

[3] On the Monitor Node (Monitor Daemon), if Firewalld is running, allow the required port.
[root@node01 ~]#
firewall-cmd --add-port=6789/tcp --permanent

[root@node01 ~]#
firewall-cmd --reload

[4] On all Storage Nodes (Object Storage), if Firewalld is running, allow the required ports.
[root@node01 ~]#
firewall-cmd --add-port=6800-7100/tcp --permanent

[root@node01 ~]#
firewall-cmd --reload

[5] Log in as the Ceph admin user and configure Ceph.
Set up an SSH key-pair from the Ceph Admin Node ("dlp.srv.world" in this example) to all Storage Nodes.
[cent@dlp ~]$
ssh-keygen

Generating public/private rsa key pair.
Enter file in which to save the key (/home/cent/.ssh/id_rsa):
Created directory '/home/cent/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cent/.ssh/id_rsa.
Your public key has been saved in /home/cent/.ssh/id_rsa.pub.
The key fingerprint is:
54:c3:12:0e:d3:65:11:49:11:73:35:1b:e3:e8:63:5a cent@dlp.srv.world
The key's randomart image is:

[cent@dlp ~]$
vi ~/.ssh/config
# create new ( define all nodes and users )

Host dlp
    Hostname dlp.srv.world
    User cent
Host node01
    Hostname node01.srv.world
    User cent
Host node02
    Hostname node02.srv.world
    User cent
Host node03
    Hostname node03.srv.world
    User cent

[cent@dlp ~]$
chmod 600 ~/.ssh/config
# transfer key file

[cent@dlp ~]$
ssh-copy-id node01

cent@node01.srv.world's password:
Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node01'"
and check to make sure that only the key(s) you wanted were added.

[cent@dlp ~]$
ssh-copy-id node02

[cent@dlp ~]$
ssh-copy-id node03

[6] Install Ceph on all Nodes from the Admin Node.
[cent@dlp ~]$
sudo yum -y install ceph-deploy
# create a working directory for the cluster config files
[cent@dlp ~]$
mkdir ceph; cd ceph
# generate the initial config with node01 as the Monitor
[cent@dlp ceph]$
ceph-deploy new node01

[cent@dlp ceph]$
vi ./ceph.conf
# add to the end

osd pool default size = 2
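After ceph-deploy new and this edit, ceph.conf looks roughly like the sketch below; the fsid and monitor address are generated per cluster, so yours will differ:

```
[global]
fsid = <generated by ceph-deploy>
mon_initial_members = node01
mon_host = 10.0.0.51
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
# keep 2 replicas of each object instead of the default 3;
# reasonable here because this example cluster has only 3 OSDs
osd pool default size = 2
```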
# install Ceph on each Node

[cent@dlp ceph]$
ceph-deploy install dlp node01 node02 node03
# settings for monitoring and keys

[cent@dlp ceph]$
ceph-deploy mon create-initial

[7] Configure the Ceph Cluster from the Admin Node.
# prepare Object Storage Daemon

[cent@dlp ceph]$
ceph-deploy osd prepare node01:/var/lib/ceph/osd node02:/var/lib/ceph/osd node03:/var/lib/ceph/osd
# activate Object Storage Daemon

[cent@dlp ceph]$
ceph-deploy osd activate node01:/var/lib/ceph/osd node02:/var/lib/ceph/osd node03:/var/lib/ceph/osd
# transfer config files

[cent@dlp ceph]$
ceph-deploy admin dlp node01 node02 node03

[cent@dlp ceph]$
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
# show status (displays as follows if there is no problem)

[cent@dlp ceph]$
ceph health

HEALTH_OK
[8] By the way, if you'd like to clean up the settings and configure the cluster again, do it as follows.
# remove packages

[cent@dlp ceph]$
ceph-deploy purge dlp node01 node02 node03
# remove settings

[cent@dlp ceph]$
ceph-deploy purgedata dlp node01 node02 node03

[cent@dlp ceph]$
ceph-deploy forgetkeys