HOWTO supervise a Ceph cluster

Why do that

To validate the correct installation and operation of a Ceph Mimic cluster.

Prerequisites

This guide assumes a 3-node Ceph Mimic cluster (total Ceph storage: 3 × 40 GB).

What to do

Check status

Connect to the Ceph interpreter to run the supervision commands:

ceph --cluster main
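The interpreter is handy for interactive exploration, but every command can also be run directly, which is what a monitoring script wants. As a minimal sketch (the `check_health` helper name is ours; `main` is the cluster name used throughout this guide), a health probe could look like this:

```shell
#!/bin/sh
# Return 0 only when the cluster reports HEALTH_OK.
# The health string is passed as an argument so the logic can be
# exercised offline; on a live cluster feed it the real output:
#   check_health "$(ceph --cluster main health)"
check_health() {
    case "$1" in
        HEALTH_OK*) return 0 ;;
        *)          return 1 ;;
    esac
}
```

Anything other than `HEALTH_OK` (e.g. `HEALTH_WARN`, `HEALTH_ERR`) is treated as a failure.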

Check the disk space of the cluster

ceph> df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED 
    120 GiB     116 GiB      3.6 GiB          2.98 
POOLS:
    NAME                ID     USED        %USED     MAX AVAIL     OBJECTS 
    mytenant-data       1      577 KiB         0        73 GiB           4 
    mytenant-fsdata     2          6 B         0        73 GiB           1 
    mytenant-fsmeta     3       25 KiB         0        55 GiB          23 
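`%RAW USED` is the figure to watch: as raw usage climbs toward the full ratio, Ceph first warns and eventually blocks writes. The snippet below (our own helper, not a Ceph command) pulls that percentage out of captured `df` output so it can be compared against a threshold:

```shell
#!/bin/sh
# Extract the GLOBAL %RAW USED value from `ceph df` output read on stdin.
# Live usage: ceph --cluster main df | raw_used_pct
raw_used_pct() {
    # The data row follows the "%RAW USED" header line; the percentage
    # is its last field.
    awk '/%RAW USED/ { getline; print $NF; exit }'
}
```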

Check the OSD status:

ceph> osd status
+----+---------------+-------+-------+--------+---------+--------+---------+-----------+
| id |      host     |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+---------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | ppfsdaceph31i | 1030M | 38.9G |    0   |     0   |    0   |     0   | exists,up |
| 1  | ppfsdaceph32i | 1317M | 38.7G |    0   |     0   |    0   |     0   | exists,up |
| 2  | ppfsdaceph33i | 1317M | 38.7G |    0   |     0   |    0   |     0   | exists,up |
+----+---------------+-------+-------+--------+---------+--------+---------+-----------+

ceph> osd stat
3 osds: 3 up, 3 in; epoch: e45
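`osd stat` gives a one-line summary: every OSD should be both `up` (running) and `in` (holding data). Scripted, that means asserting the up and in counts match the total. The helper below is our own sketch and assumes the Mimic output format shown above:

```shell
#!/bin/sh
# Verify every OSD is up and in, given a `ceph osd stat` summary line
# such as: "3 osds: 3 up, 3 in; epoch: e45"
all_osds_healthy() {
    echo "$1" | awk '{
        total = $1            # "3 osds:" -> 3
        up    = $3            # "3 up,"   -> 3
        nin   = $5            # "3 in;"   -> 3
        exit (total == up && total == nin) ? 0 : 1
    }'
}
```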

Check the MDS (metadata server) status:

ceph> mds stat
myfs-1/1/1 up  {0=my0=up:active}
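A usable filesystem needs its MDS rank in `up:active`; states such as `up:replay` or `up:rejoin` mean clients may hang. A simple string check (again a local helper, not a Ceph command) is enough to alert on anything else:

```shell
#!/bin/sh
# Alert unless the `ceph mds stat` summary shows an active MDS,
# e.g. "myfs-1/1/1 up  {0=my0=up:active}"
mds_active() {
    case "$1" in
        *"up:active"*) return 0 ;;
        *)             return 1 ;;
    esac
}
```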

List the authentication entries

ceph> auth ls
installed auth entries:

mds.0
    key: AQAGseZcOmbPOBAAchZ4KPNdq2Z0VAeEAKrcpA==
    caps: [mds] allow
    caps: [mon] allow profile mds
    caps: [osd] allow rwx
mds.1
    key: AQAHseZc4N6WHxAA+9Ukie1BJWArPiu+9mdSCg==
    caps: [mds] allow
    caps: [mon] allow profile mds
    caps: [osd] allow rwx
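To fetch the key of a single entity directly, Ceph provides `ceph auth get-key <entity>`. If you only have a captured `auth ls` listing, the same value can be pulled out with awk; the `auth_key` helper below is our own sketch and assumes the indented `key:` layout shown above:

```shell
#!/bin/sh
# Print the "key:" value of one entity from `ceph auth ls` output on stdin.
# Usage: ceph --cluster main auth ls | auth_key mds.0
auth_key() {
    awk -v ent="$1" '
        $1 == ent            { grab = 1; next }  # entity header, e.g. "mds.0"
        grab && $1 == "key:" { print $2; exit }
        grab && /^[^ ]/      { grab = 0 }        # a new entity starts
    '
}
```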

Mounting and testing Ceph

In this example we will mount the cluster on the mount point /myceph, then write a dummy file.
We will then unmount the point and check that the file is no longer visible locally.
Finally, we will remount the point and validate that the file still exists.

# create mount directory
mkdir /myceph

# mount ceph cluster to mount point
ceph-fuse --cluster main /myceph

# check disk and put dummy file
df -h /myceph
echo hello > /myceph/world

# unmount and check the file disappears locally
umount /myceph
ls /myceph

# re-mount the disk and validate the dummy file still exists
ceph-fuse --cluster main /myceph
cat /myceph/world
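The write/remount steps above can be wrapped in a small round-trip helper. The function below is a sketch that works on any directory, so it can be tried locally first and then pointed at /myceph once ceph-fuse has mounted it:

```shell
#!/bin/sh
# Round-trip check: write a marker file under a mount point and
# verify it reads back intact.
# Live usage, after `ceph-fuse --cluster main /myceph`:
#   write_and_verify /myceph
write_and_verify() {
    dir="$1"
    echo hello > "$dir/world" || return 1
    [ "$(cat "$dir/world")" = "hello" ]
}
```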

Get pool file

If you want to retrieve a file written to the cluster (from an archive topology, for example), you can list the objects present in the cluster and fetch one to your local filesystem to read its content.

List the objects in the cluster:

rados -p mytenant-data --cluster main ls

should output something like this:

httpd/0/2019.05.24/1558710506/httpd-0-1558710506555
httpd/0/2019.05.24/1558711687/httpd-0-1558711687924
httpd/0/2019.05.24/1558710506/httpd-0-1558710506554
httpd/0/2019.05.24/1558711687/httpd-0-1558711687925
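The object names in this archive layout encode the source, the date, and a timestamp (`httpd/0/2019.05.24/…`). That makes it easy to count objects per day straight from the `ls` output, which is handy for checking that archiving is progressing. The `objects_per_day` helper is our own sketch, assuming that naming scheme:

```shell
#!/bin/sh
# Count objects per day from `rados ls` output on stdin, assuming names
# of the form <source>/<id>/<YYYY.MM.DD>/<epoch>/<object>.
# Live usage: rados -p mytenant-data --cluster main ls | objects_per_day
objects_per_day() {
    awk -F/ '{ count[$3]++ } END { for (d in count) print d, count[d] }'
}
```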

Extract one object to your local filesystem:

rados -p mytenant-data --cluster main get httpd/0/2019.05.24/1558711687/httpd-0-1558711687924 myfile

# read file
cat myfile

You can then read the file content:

1
2
# earliest="2019-05-24 17:28:08:157" latest="2019-05-24 17:28:08:159" fields=_ppf_id;log separator="__|__"
EYdb6moBUkLOEOSFCQMi__|__{"col":{"host":{"port":9901,"ip":"20.20.5.33","name":"ppfsdaceph33i"}},"obs":{"usr":{"loc":{"country":"United States","country_short":"US","cty_short":"Falls Church","geo_point":[-77.1922,38.864]}},"host":{"ip":"20.20.5.33","name":"host0"},"ts":"2012-12-31T01:00:00.000+01:00"},"init":{"usr":{"loc":{"country":"United States","country_short":"US","geo_point":[-80.8431,35.2271]},"name":"frank"},"host":{"ip":"128.109.154.99"}},"lmc":{"input":{"ts":"2019-05-24T16:59:55.810+02:00"},"parse":{"host":{"ip":"20.20.5.33","name":"ppfsdaceph33i"},"ts":"2019-05-24T16:59:55.818+02:00"}},"session":{"out":{"byte":5619}},"channel":"apache_httpd","type":"web","message":"May 24 16:59:55 host0 128.109.154.99 - frank [31/Dec/2012:01:00:00 +0100] \"GET /images/KSC-94EC-412-small.gif HTTP/1.0\" 200 5619 \"http://www.example.com/start.html\" \"Mozilla/5.0 (iPad; U; CPU OS 4_3_5 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8L1 Safari/6533.18.5\"","target":{"uri":{"urn":"/images/KSC-94EC-412-small.gif"}},"size":306,"parser":{"name":"apache_httpd","version":"1.2.0"},"web":{"request":{"rc":"200","method":"GET"},"header":{"referer":"http://www.example.com/start.html","version":"1.0","user_agent":"Mozilla/5.0 (iPad; U; CPU OS 4_3_5 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8L1 Safari/6533.18.5"}},"vendor":"apache_httpd","alarm":{"sev":"2","id":"160018"},"action":"OK","rep":{"host":{"ip":"20.20.5.33","name":"host0"},"ts":"2019-05-24T16:59:55.000+02:00"},"tenant":"mytenant"}
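The retrieved record is a `__|__`-separated line whose second part is a JSON document. For a quick look at one field without a full JSON parser, a grep pattern is often enough (for anything deeper, pipe the JSON part into `jq` or Python). The `json_field` helper below is our own sketch and only handles plain, unnested `"key":"value"` pairs:

```shell
#!/bin/sh
# Extract a simple string field (e.g. "tenant") from a JSON log line
# read on stdin. Only works for flat "key":"value" pairs.
# Usage: cat myfile | json_field tenant
json_field() {
    grep -o "\"$1\":\"[^\"]*\"" | head -n 1 | cut -d'"' -f4
}
```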