Ceph distributed storage architecture (3)

Ceph's three storage types

1. Configure block storage

Before creating a block device, you need to create a storage pool. Storage-pool commands must be executed on the mon node, that is, the planned node1 node. Create a storage pool:

[ceph@node1 ~]$ sudo ceph osd pool create rbd 128 128
pool 'rbd' created

Note on choosing pg_num when creating a pool (the value actually applied can be checked as shown below):

# With fewer than 5 OSDs, set pg_num to 128.
# With 5 to 10 OSDs, set pg_num to 512.
# With 10 to 50 OSDs, set pg_num to 4096.
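
To double-check the value applied to the pool created above (a quick sanity check; the pool name rbd comes from the create command):

[ceph@node1 ~]$ sudo ceph osd pool get rbd pg_num
pg_num: 128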

Initialize the storage pool:

[ceph@node1 ~]$ sudo rbd pool init rbd

Prepare the client (operations performed on the client node):

Upgrade the client kernel to the latest version. Before the update, the kernel version was:

[root@client ~]# uname -r
3.10.0-1160.el7.x86_64

Upgrade method: import the ELRepo GPG key

[root@client ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

Install the ELRepo yum repository

[root@client ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

View available system kernel packages

[root@client ~]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

Install kernel

[root@client ~]# yum --enablerepo=elrepo-kernel install kernel-ml-devel kernel-ml -y

View the kernel default boot sequence

[root@client ~]# awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg

CentOS Linux (6.5.7-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux (3.10.0-1160.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-c2103ce40fd641b2bb0b9118ead8f4ff) 7 (Core)

Select the new kernel in position 0 as the default boot kernel

[root@client ~]# grub2-set-default 0
[root@client ~]# reboot

Kernel version after reboot:

[root@client ~]# uname -r
6.5.7-1.el7.elrepo.x86_64

Remove the old kernel

[root@client ~]# yum remove kernel -y

Install ceph on the client. For environment preparation, refer to steps 2-5 of the environment-preparation section at the beginning of the ceph deployment document: install the dependency packages and epel, and configure the ceph yum sources.

[root@client ~]# yum install -y python-setuptools epel-release
[root@client ~]# vim /etc/yum.repos.d/ceph.repo

[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
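
After saving the repo file, it can help to refresh the yum metadata cache before installing (optional, but avoids stale-cache errors):

[root@client ~]# yum clean all && yum makecache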

[root@client ~]# yum install ceph ceph-radosgw -y
[root@client ~]# ceph --version
ceph version 12.2.13 (584a20eb0237c657dc0567da126be145106aa47e) luminous (stable)

Make the admin keyring readable on the client:

[root@client ~]# chmod +r /etc/ceph/ceph.client.admin.keyring

Modify the ceph configuration file on the client.
This step avoids errors when mapping images; a workaround for images created without it is sketched after the setting below.

[root@client ~]# vi /etc/ceph/ceph.conf

# Add under the [global] section:

rbd_default_features = 1
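
If an image was created before this setting took effect and mapping still fails with an unsupported-feature error, a common workaround (a sketch; the exact feature list depends on your kernel and Ceph release, so check rbd info first; foo is the image name created in the next step) is to disable the extra features on that image:

[root@client ~]# rbd info foo    # shows the image's currently enabled features
[root@client ~]# rbd feature disable foo object-map fast-diff deep-flatten exclusive-lock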

Create a block device image on the client node (the size unit is MB, so 4096 is 4 GB):

[root@client ~]# rbd create foo --size 4096

The client node maps the image to the host:

[root@client ~]# rbd map foo --name client.admin
/dev/rbd0
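
To confirm the mapping, rbd showmapped lists the images currently mapped on this host:

[root@client ~]# rbd showmapped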

The client node formats the block device:

[root@client ~]# mkfs.ext4 -m 0 /dev/rbd/rbd/foo

The client node mounts the block device:

[root@client ~]# mkdir /mnt/ceph-block-device
[root@client ~]# mount /dev/rbd/rbd/foo /mnt/ceph-block-device

After the client restarts, the device must be remapped, otherwise the mount may hang. If you need it mounted at boot, the map and mount commands can go into /etc/rc.local (a sketch follows below). Write some data in /mnt/ceph-block-device to test.

[root@client ceph-block-device]# vim /etc/rc.local
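
A minimal sketch of the /etc/rc.local additions, assuming the image foo and the mount point used above (on CentOS 7, /etc/rc.d/rc.local must also be made executable with chmod +x for it to run at boot):

rbd map foo --name client.admin
mount /dev/rbd/rbd/foo /mnt/ceph-block-device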

2. Configure object storage

Since the client kernel has already been upgraded and the ceph packages installed, no additional client preparation is needed for object storage.

# Create the rgw object gateway on node1, node2 and node3; it listens on port 7480 by default

[ceph@admin my-cluster]$ ceph-deploy rgw create node1
[ceph@admin my-cluster]$ ceph-deploy rgw create node2
[ceph@admin my-cluster]$ ceph-deploy rgw create node3

Verify on node1:
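
One simple check (a sketch, assuming node1 is reachable by that hostname from wherever you run it) is to confirm the gateway answers on port 7480; an anonymous request returns a short XML bucket-listing response:

[root@client ~]# curl http://node1:7480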

# Test the connection to the object gateway from the client. Connecting to object storage requires a user account and key, so create a user and key on the client, as shown below. (The radosgw-admin command itself comes from the ceph-common package, installed with yum -y install ceph-common.)

[root@client ceph-block-device]# radosgw-admin user create --uid="Yebati" --display-name="Yebati"
{
    "user_id": "Yebati",
    "display_name": "Yebati",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "Yebati",
            "access_key": "F9OL8KWEEB3HM811Z3UW",
            "secret_key": "zKzFupLsI5LiU4NEuUEQGHu4hadN7H9uVF7MNgDd"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}
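
If you need to look at this user's keys again later, they can be re-displayed at any time (assuming the uid Yebati created above):

[root@client ~]# radosgw-admin user info --uid=Yebati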
[root@client ~]# yum install s3cmd -y

# Install the s3cmd tool for connecting to object storage. After installation, the s3cmd command is used to talk to the object store. For convenience, the connection parameters can be written into a .s3cfg file, which s3cmd reads automatically.

[root@client ~]# vim /root/.s3cfg #Create the file and write the content

[default]
access_key = F9OL8KWEEB3HM811Z3UW
secret_key = zKzFupLsI5LiU4NEuUEQGHu4hadN7H9uVF7MNgDd
host_base = 192.168.1.8:7480
host_bucket = 192.168.1.8:7480/%(bucket)
cloudfront_host = 192.168.1.8:7480
use_https = False
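
Alternatively, s3cmd can generate a ~/.s3cfg interactively; you would enter the same access key, secret key, and gateway endpoint shown above when prompted:

[root@client ~]# s3cmd --configure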

# Test the connection to object storage with the s3cmd command

[root@client ceph]# s3cmd mb s3://Yebati

# Create a bucket named Yebati. A bucket can be thought of as a directory.

[root@client ~]# s3cmd put /var/log/yum.log s3://Yebati

upload: '/var/log/yum.log' -> 's3://Yebati/yum.log' [1 of 1]
 6094 of 6094 100% in 1s 4.95 KB/s done

[root@client ~]# s3cmd get s3://Yebati/yum.log
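
To list what is now in the bucket:

[root@client ~]# s3cmd ls s3://Yebati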

# Generate signed url
[root@client ~]# s3cmd signurl s3://Yebati/yum.log $(date -d 'now + 1 year' +%s)

http://192.168.1.8:7480/Yebati/yum.log?AWSAccessKeyId=F9OL8KWEEB3HM811Z3UW&Expires=1729409249&Signature=ZSF+kng7zQ7f+NjXY2LHfHtNTk8=

3. Configure the file system

# Create Ceph file storage. To run a Ceph file system, the storage cluster must have at least one mds (Ceph block devices and Ceph object storage do not use MDS). Ceph MDS is the metadata server used by Ceph file storage; it stores and manages the file system's metadata.

# Step 1: create the mds service on node1, node2 and node3 (at least one mds is required; multiple mds can be created for HA)

[ceph@admin my-cluster]$ ceph-deploy mds create node1 node2 node3
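
To check that the mds daemons are running (run on a node with the admin keyring, e.g. node1; before a file system is created they will show as standby):

[root@node1 ~]# ceph mds stat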

# Step 2: a Ceph file system requires at least two RADOS storage pools, one for data and one for metadata. Create these two pools:

[root@node1 ceph]# ceph osd pool create ceph_data 16 #Create a ceph_data pool to store data
[root@node1 ceph]# ceph osd pool create ceph_metadata 8 #Create a ceph_metadata pool to store metadata

# Step 3: create the ceph file system and confirm the node the client will access

[root@node1 ceph]# ceph fs new cephfs ceph_metadata ceph_data

new fs with metadata pool 9 and data pool 8

# cephfs is the name of the ceph file system and is what the client mounts. ceph_metadata is the metadata pool created in the previous step, and ceph_data is the data pool created in the previous step. The order of the two pools must not be swapped: metadata pool first, then data pool.

[root@node1 ~]# ceph fs ls
name: cephfs, metadata pool: ceph_metadata, data pools: [ceph_data]

#Client mount

[root@node1 ~]# cd /etc/ceph
[root@node1 ceph]# cat ceph.client.admin.keyring
[client.admin]
        key = AQC4FjFlBv+aMhAAg3H+Dq3xGxbQcA8/f2IUTg==

# View the contents of the client key file on node1. The file is ceph.client.admin.keyring; in the [client.admin] section, admin is the user name, and the key value AQC4FjFlBv+aMhAAg3H+Dq3xGxbQcA8/f2IUTg== is the client's secret key.

[root@client ~]# mkdir /etc/ceph && cd /etc/ceph

# Create a /etc/ceph directory on the client

[root@client ceph]# echo 'AQC4FjFlBv+aMhAAg3H+Dq3xGxbQcA8/f2IUTg==' >> admin.key

#Create a new key file and copy the client key seen from node1 to this file

[root@client ceph]# mkdir /cephfs_data

First create a directory as a mount point

[root@client ceph]# mount.ceph node1:6789:/ /cephfs_data/ -o name=admin,secretfile=/etc/ceph/admin.key

# Interpretation: in node1:6789:/, 6789 is the mon port; the client connects to a mon in order to mount the file system. Because three mons were created (node1, node2 and node3), node2 or node3 could be used here instead. The trailing slash means the root of the cephfs file system created above. /cephfs_data/ is the local mount point. -o specifies options: name=admin is the user name, and secretfile=/etc/ceph/admin.key tells the client to read the secret key from that file.
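
If the mount should persist across reboots, an /etc/fstab entry is a common alternative to mounting by hand (a sketch, using the same mon address, mount point, and key file as above; _netdev delays the mount until the network is up):

node1:6789:/    /cephfs_data    ceph    name=admin,secretfile=/etc/ceph/admin.key,_netdev,noatime    0 0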

[root@client ceph]# df -h
