Ceph Persistent Storage (Deployment Process)

Environment Overview

Node Name   Node IP        Roles
node03      192.168.0.41   admin, mon, osd
node04      192.168.0.43   mon, osd
node05      192.168.0.45   mon, osd
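
ceph-deploy and the monitor map refer to the nodes by hostname, so every node must be able to resolve the three names above. A minimal sketch, assuming no internal DNS (skip this if DNS already resolves these names):

    # Run on every node: map the hostnames to the cluster IPs from the table above
    cat >> /etc/hosts <<'EOF'
    192.168.0.41 node03
    192.168.0.43 node04
    192.168.0.45 node05
    EOF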

Operating System

CentOS 7 (the package repositories below target el7).

Cluster Deployment

Environment Preparation

  1. Add the Ceph yum repository (run on all nodes)
    cat > /etc/yum.repos.d/ceph.repo <<'eof'
    [ceph]
    name=Ceph packages for $basearch
    baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/$basearch
    enabled=1
    gpgcheck=1
    priority=1
    type=rpm-md
    gpgkey=http://mirrors.163.com/ceph/keys/release.asc
    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
    enabled=1
    gpgcheck=1
    priority=1
    type=rpm-md
    gpgkey=http://mirrors.163.com/ceph/keys/release.asc
    [ceph-source]
    name=Ceph source packages
    baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
    enabled=0
    gpgcheck=1
    type=rpm-md
    gpgkey=http://mirrors.163.com/ceph/keys/release.asc
    priority=1
    eof
  2. Add a ceph user and set up passwordless SSH trust between the nodes (a passwordless-sudo sketch follows this list)
    useradd ceph
    ssh-keygen -t rsa
    ssh-copy-id ceph@node03 # repeat for node04 and node05
  3. Add the Ceph environment dependencies (EPEL repository)
    yum install -y yum-utils
    yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/
    yum install --nogpgcheck -y epel-release
    rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
    rm -f /etc/yum.repos.d/dl.fedoraproject.org*
    mkdir /usr/local/src/ceph_install
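
ceph-deploy also expects its SSH user to run commands as root without a password prompt on every node. A minimal sketch, assuming the ceph user created above is the one ceph-deploy logs in as:

    # Run on every node: grant passwordless sudo to the deploy user
    echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
    chmod 0440 /etc/sudoers.d/ceph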

CEPH Installation

  1. Install the monitor nodes
    # Install ceph-deploy
    yum -y install http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm
    # Confirm the ceph-deploy version
    ceph-deploy --version
    1.5.38
    # Create a new cluster configuration listing the three monitor nodes
    ceph-deploy new node05 node04 node03
  2. Install the Ceph packages on all nodes
    ceph-deploy install --release jewel --repo-url http://mirrors.163.com/ceph/rpm-jewel/el7 --gpg-url http://mirrors.163.com/ceph/keys/release.asc node05 node04 node03
    # Work from the deployment directory created earlier
    cd /usr/local/src/ceph_install/
    # Deploy the initial monitors and gather the keys
    ceph-deploy mon create-initial
  3. Grant permissions on the data-disk mount point on all OSD nodes (a disk-preparation sketch follows after this list)
    1. Change the ownership of the mount-point directory
      chown -R ceph:ceph /opt/ceph-data
    2. prepare # preparation before running osd prepare
      # Ceph recommends xfs; when the OSD data directory is on ext4, add the following to ceph.conf
      osd max object name len = 256
      osd max object namespace len = 64
      # Relax the monitor clock-drift check and limit RBD images to the layering feature
      mon clock drift allowed = 2
      mon clock drift warn backoff = 30
      rbd_default_features = 1
    3. Because these settings changed ceph.conf on the deploy node, the command must be run with the --overwrite-conf option, as follows:
      ceph-deploy --overwrite-conf osd prepare node03:/opt/ceph-data node04:/opt/ceph-data node05:/opt/ceph-data
    4. activate # activate the OSD disks
      ceph-deploy osd activate node03:/opt/ceph-data node04:/opt/ceph-data node05:/opt/ceph-data
  4. Push the configuration file and key files

    Run the following on the deploy node

    1. Sync the configuration file and admin keyring to all nodes
      ceph-deploy --overwrite-conf admin node03 node04 node05
      sudo chmod +r /etc/ceph/ceph.client.admin.keyring
  5. Check the OSD tree
    [root@node05 ~]# ceph osd tree
    ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1 0.81418 root default
    -2 0.27139     host node04
     0 0.27139         osd.0        up  1.00000          1.00000
    -3 0.27139     host node03
     1 0.27139         osd.1        up  1.00000          1.00000
    -4 0.27139     host node05
     2 0.27139         osd.2        up  1.00000          1.00000
  6. Deployment complete; check the cluster status
    [root@node05 ~]# ceph -s
    cluster 201cfd95-300b-4ba8-ac1e-654943978e76
    health HEALTH_OK
    monmap e1: 3 mons at {node03=192.168.0.41:6789/0,node04=192.168.0.43:6789/0,node05=192.168.0.45:6789/0}
    election epoch 16, quorum 0,1,2 node03,node04,node05
    osdmap e16: 3 osds: 3 up, 3 in
    flags sortbitwise,require_jewel_osds
    pgmap v1185: 64 pgs, 1 pools, 275 MB data, 203 objects
    79543 MB used, 756 GB / 833 GB avail
    64 active+clean
    client io 109 B/s rd, 0 op/s rd, 0 op/s wr
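
The OSD steps above assume a data disk is already formatted and mounted at /opt/ceph-data on each OSD node; the ext4-related ceph.conf settings in step 3 suggest ext4 was used. A minimal sketch of that prerequisite, where /dev/sdb is a hypothetical device name to be replaced with the real data disk:

    # Run on every OSD node before "osd prepare"; /dev/sdb is a hypothetical device
    mkfs.ext4 /dev/sdb
    mkdir -p /opt/ceph-data
    mount /dev/sdb /opt/ceph-data
    echo "/dev/sdb /opt/ceph-data ext4 defaults 0 0" >> /etc/fstab
    chown -R ceph:ceph /opt/ceph-data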

Using CEPH

Using CEPH FS

  1. Create the MDS service
    ceph-deploy mds create node03 node04 node05
  2. Create the CephFS pools and the filesystem
    ceph osd pool create cephfs_data 10 # data pool for the FS, with 10 placement groups
    ceph osd pool create cephfs_metadata 10 # metadata pool (inodes and directory metadata)
    ceph fs new cephfs cephfs_metadata cephfs_data # create the FS from the metadata and data pools
  3. Check the newly created FS (a client mount sketch follows after the status output)
    [root@node03 cephinstall]# ceph -s
    cluster 201cfd95-300b-4ba8-ac1e-654943978e76
    health HEALTH_OK
    monmap e1: 3 mons at {node03=192.168.0.41:6789/0,node04=192.168.0.43:6789/0,node05=192.168.0.45:6789/0}
    election epoch 16, quorum 0,1,2 node03,node04,node05
    fsmap e6: 1/1/1 up {0=node04=up:active}, 2 up:standby
    osdmap e21: 3 osds: 3 up, 3 in
    flags sortbitwise,require_jewel_osds
    pgmap v1224: 84 pgs, 3 pools, 275 MB data, 223 objects
    79544 MB used, 756 GB / 833 GB avail
    84 active+clean
    client io 1004 B/s wr, 0 op/s rd, 7 op/s wr
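
Once the fsmap shows the MDS as up:active, the filesystem can be mounted from a client with the kernel CephFS driver. A minimal sketch, assuming the client can reach the monitors, ceph-common is installed, and the admin key from /etc/ceph/ceph.client.admin.keyring is extracted into a plain secret file (the /etc/ceph/admin.secret path is an example):

    # Extract the bare key into a secret file (path is an example)
    awk '/key =/ {print $3}' /etc/ceph/ceph.client.admin.keyring > /etc/ceph/admin.secret
    # Mount the new CephFS via one of the monitors
    mkdir -p /mnt/cephfs
    mount -t ceph 192.168.0.41:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret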

Using Block Devices

  1. Create an image; without the -p option it is created in the default rbd pool (-s takes the size in MB, so 102400 = 100 GB)

    [root@ceph-node1 ~]# rbd create k8s_storage -s 102400
  2. List the images

    [root@ceph-node1 ~]# rbd list
    k8s_storage
  3. Map the image into the kernel (this step must be done on every k8s node)

    [root@ceph-node1 ~]# rbd map k8s_storage
    /dev/rbd0
  4. Format the device (in actual use the mount succeeded even when this step was skipped)

    [root@ceph-node1 ~]# mkfs.ext4 /dev/rbd0
  5. Show the mappings (a local mount sketch follows below)

    [root@ceph-node1 ~]# rbd showmapped
    id pool image       snap device
    0  rbd  k8s_storage -    /dev/rbd0
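
For a quick local check, the mapped device can be mounted directly. A minimal sketch, assuming /dev/rbd0 from the mapping above and the ext4 filesystem created in step 4 (the /mnt/rbd-test mount point is an example):

    mkdir -p /mnt/rbd-test
    mount /dev/rbd0 /mnt/rbd-test
    df -h /mnt/rbd-test
    # Clean up when finished
    umount /mnt/rbd-test
    rbd unmap /dev/rbd0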