Kubernetes + Ceph for Persistent Storage

Creating a Ceph RBD

  1. Create a 1024 MB Ceph image

    rbd create ceph-rbd-pv-test --size 1024
  2. Disable the image features the kernel client does not support

    rbd feature disable ceph-rbd-pv-test exclusive-lock object-map fast-diff deep-flatten
    rbd info ceph-rbd-pv-test
    [root@node03 cephinstall]# rbd info ceph-rbd-pv-test
    rbd image 'ceph-rbd-pv-test':
    size 1024 MB in 256 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.39696b8b4567
    format: 2
    features: layering
    flags:
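
    Alternatively, the disable step can be skipped entirely by creating the image with only the layering feature from the start (ceph-rbd-pv-test2 is a hypothetical name):

    # Create the image with only the kernel-supported layering feature enabled
    rbd create ceph-rbd-pv-test2 --size 1024 --image-feature layering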
  3. Map the ceph-rbd-pv-test image into the kernel

    [root@node03 cephinstall]# rbd map ceph-rbd-pv-test
    /dev/rbd2
    [root@node03 cephinstall]# rbd showmapped
    id pool image snap device
    0 rbd k8s_storage - /dev/rbd0
    1 rbd k8s_busybox - /dev/rbd1
    2 rbd ceph-rbd-pv-test - /dev/rbd2
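
    This manual mapping is only a sanity check; kubelet maps the image on the Kubernetes node itself when the Pod is scheduled. Unmapping the test mapping avoids the stale-watcher problem shown in step 17:

    # Remove the test mapping; kubelet will handle mapping from here on
    rbd unmap /dev/rbd2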
  4. Generate the Ceph auth key for the Secret (base64-encoded)

    [root@node03 cephinstall]# grep key /etc/ceph/ceph.client.admin.keyring |awk '{printf "%s", $NF}'|base64
    QVFCbllFQmNlbUpER3hBQXFCSUdpLzVUOHk1aVJSQnRDVEh3aXc9PQ==
    [root@node03 cephinstall]#
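
    The same value can be obtained straight from Ceph, without parsing the keyring file:

    ceph auth get-key client.admin | base64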
  5. Add the Secret to the Kubernetes cluster

    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-secret
    data:
      key: QVFCbllFQmNlbUpER3hBQXFCSUdpLzVUOHk1aVJSQnRDVEh3aXc9PQ==
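
    Equivalently, kubectl can create the Secret in one step; note that --from-literal takes the raw key (kubectl performs the base64 encoding itself):

    kubectl create secret generic ceph-secret \
      --from-literal=key='AQBnYEBcemJDGxAAqBIGi/5T8y5iRRBtCTHwiw=='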
  6. Install ceph-common on all Kubernetes nodes

    yum install ceph-common -y
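
    On Debian/Ubuntu nodes the equivalent would be:

    apt-get install -y ceph-common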
  7. Create the PV

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: ceph-rbd-pv
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteOnce
      rbd:
        monitors:
          - 192.168.0.41:6789 # ceph mon ip
        pool: rbd
        image: ceph-rbd-pv-test
        user: admin
        secretRef:
          name: ceph-secret
        fsType: ext4
        readOnly: false
      persistentVolumeReclaimPolicy: Recycle

    Note that Kubernetes only implements the Recycle reclaim policy for NFS and HostPath volumes; for RBD, Retain is the safer choice.
  8. Check the PV just created

    root@vm157:/usr/local/src/k8s-yaml/ceph# kubectl create -f ceph-pv.yaml
    persistentvolume "ceph-rbd-pv" created
    root@vm157:/usr/local/src/k8s-yaml/ceph# kubectl get pv
    NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
    ceph-rbd-pv 1Gi RWO Recycle Available 28s
  9. Create the PVC

    The PVC is simple: specify the accessModes (ReadWriteOnce here; for RBD, Kubernetes only supports ReadWriteOnce and ReadOnlyMany) and the requested storage size, then create it.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ceph-rbd-pv-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
  10. Check the PVC just created

    root@vm157:/usr/local/src/k8s-yaml/ceph# kubectl create -f ceph-pvc.yaml
    persistentvolumeclaim "ceph-rbd-pv-claim" created
    root@vm157:/usr/local/src/k8s-yaml/ceph# kubectl get pvc
    NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
    ceph-rbd-pv-claim Bound ceph-rbd-pv 1Gi RWO 5s
  11. Create a test Pod

    With the PV and PVC in place, the next step is a Pod that mounts the RBD volume. Here I use the busybox container from the official example. Create the Pod file rbd-pvc-pod1.yaml as follows:

    apiVersion: v1
    kind: Pod
    metadata:
      labels:
        test: rbd-pvc-pod
      name: ceph-rbd-pv-pod1
    spec:
      containers:
        - name: ceph-rbd-pv-busybox
          image: busybox
          command: ["sleep", "60000"]
          volumeMounts:
            - name: ceph-rbd-vol1
              mountPath: /mnt/ceph-rbd-pvc/busybox
              readOnly: false
      volumes:
        - name: ceph-rbd-vol1
          persistentVolumeClaim:
            claimName: ceph-rbd-pv-claim
  12. Check the Pod just created

    $ kubectl create -f rbd-pvc-pod1.yaml
    pod "ceph-rbd-pv-pod1" created
    $ kubectl get pod
    NAME READY STATUS RESTARTS AGE
    ceph-rbd-pv-pod1 1/1 Running 0 19s
    rbd1 1/1 Running 0 1h
  13. Write a file to the Ceph mount point

    root@vm157:/usr/local/src/k8s-yaml/ceph# kubectl exec -it ceph-rbd-pv-pod1 sh
    / # cd /mnt/ceph-rbd-pvc/busybox
    /mnt/ceph-rbd-pvc/busybox # touch 192.168.0.157
  14. Delete the Pod

    root@vm157:/usr/local/src/k8s-yaml/ceph# kubectl delete -f rbd-pvc-pod1.yaml
    pod "ceph-rbd-pv-pod1" deleted
  15. Check the PVC

    root@vm157:/usr/local/src/k8s-yaml/ceph# kubectl get pvc
    NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
    ceph-rbd-pv-claim Bound ceph-rbd-pv 1Gi RWO 13m

    The Pod is gone, but the PVC is still there.

  16. Recreate the Pod, mounting the same PVC, to check whether the file is still there

    root@vm157:/usr/local/src/k8s-yaml/ceph# kubectl create -f rbd-pvc-pod1.yaml
    pod "ceph-rbd-pv-pod1" created
  17. If the Pod fails to start with a message like the following:

    Warning FailedMount 1m kubelet, 192.168.0.157 Unable to mount volumes for pod "ceph-rbd-pv-pod1_the-right-prefix(644585d7-1ae8-11e9-a4db-5254003c14c4)": timeout expired waiting for volumes to attach/mount for pod "the-right-prefix"/"ceph-rbd-pv-pod1". list of unattached/unmounted volumes=[ceph-rbd-vol1]

    This means the RBD image is still held by a watcher on another machine and cannot be released. Check the image's status with:

    root@vm157:/usr/local/src/k8s-yaml/ceph# rbd status ceph-rbd-pv-test
    Watchers:
    watcher=192.168.0.159:0/3577624344 client.14639 cookie=1
    watcher=192.168.0.160:0/3368059204 client.14730 cookie=1
    watcher=192.168.0.41:0/1043427368 client.14120 cookie=3

    Blacklisting the stale watcher releases the mapping (sed -n '2p' grabs only the first watcher line; adjust it if several watchers must be evicted):

    root@vm157:/usr/local/src/k8s-yaml/ceph# watcher=$(rbd status ceph-rbd-pv-test | sed -n '2p' | awk -F '[ =]' '{print $2}')
    root@vm157:/usr/local/src/k8s-yaml/ceph# for x in $watcher;
    > do
    > ceph osd blacklist add $x
    > done
    blacklisting 192.168.0.160:0/3368059204 until 2019-01-18 15:24:58.058157 (3600 sec)

    Once the Pod mounts the volume again, remove the blacklist entries:

    ceph osd blacklist clear
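
    The current blacklist can be inspected before clearing it:

    ceph osd blacklist ls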
  18. Enter the container and check that the file is still there

    root@vm157:/usr/local/src/k8s-yaml/ceph# kubectl exec -it ceph-rbd-pv-pod1 sh
    / # cd /mnt/ceph-rbd-pvc/busybox
    /mnt/ceph-rbd-pvc/busybox # ls
    192.168.0.157 lost+found
    /mnt/ceph-rbd-pvc/busybox #

Binding PVCs through a StorageClass

  1. Create the Secret used by the StorageClass

    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-secret
    type: "kubernetes.io/rbd"
    data:
      key: QVFCbllFQmNlbUpER3hBQXFCSUdpLzVUOHk1aVJSQnRDVEh3aXc9PQ==
  2. Create the StorageClass

    apiVersion: storage.k8s.io/v1beta1
    kind: StorageClass
    metadata:
      name: rbd
    provisioner: kubernetes.io/rbd
    parameters:
      monitors: 192.168.0.41:6789,192.168.0.43:6789,192.168.0.45:6789
      adminId: admin # ceph admin user
      adminSecretName: ceph-secret # name of the admin secret
      adminSecretNamespace: kube-system # namespace holding the admin secret
      pool: rbd
      userId: admin
      userSecretName: ceph-secret
      fsType: ext4
      imageFormat: "1"
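
    The StatefulSet in step 4 exercises the class through volumeClaimTemplates, but provisioning can also be tested with a plain PVC that references the class directly (rbd-test-claim is a hypothetical name):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: rbd-test-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: rbd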
  3. Check the StorageClass just created

    root@vm157:/usr/local/src/k8s-yaml/ceph# kubectl get sc
    NAME PROVISIONER AGE
    rbd kubernetes.io/rbd 5s
  4. Use a StatefulSet with the new StorageClass to dynamically provision and bind PVCs

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mongodb-ingageapp-com
      labels:
        name: mongo
    spec:
      ports:
        - port: 27017
          targetPort: 27017
      clusterIP: None
      selector:
        role: mongo
    ---
    apiVersion: apps/v1beta1
    kind: StatefulSet
    metadata:
      name: mongo
    spec:
      serviceName: "mongodb-ingageapp-com"
      replicas: 1
      template:
        metadata:
          labels:
            role: mongo
            environment: test
        spec:
          terminationGracePeriodSeconds: 10
          imagePullSecrets:
            - name: aws
          containers:
            - name: mongo
              image: 279437341690.dkr.ecr.cn-north-1.amazonaws.com.cn/mongodb:3.4.2
              command:
                - mongod
                - "--replSet"
                - rs0
                - "--smallfiles"
                - "--noprealloc"
              ports:
                - containerPort: 27017
              volumeMounts:
                - mountPath: /data
                  name: db
            - name: mongo-sidecar
              image: cvallance/mongo-k8s-sidecar
              env:
                - name: MONGO_SIDECAR_POD_LABELS
                  value: "role=mongo,environment=test"
      volumeClaimTemplates:
        - metadata:
            name: db
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 100Gi
            storageClassName: rbd
  5. Check the mongodb PVC just created

    root@vm157:/usr/local/src/k8s-yaml/ceph# kubectl get pvc
    NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
    ceph-rbd-pv-claim Bound ceph-rbd-pv 1Gi RWO 52m
    db-mongo-0 Bound pvc-5df48aa6-1aec-11e9-a4db-5254003c14c4 100Gi RWO rbd 7m
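
    On the Ceph side, the provisioned image appears in the pool; the in-tree provisioner names images kubernetes-dynamic-pvc-<uuid>:

    rbd ls rbd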

Using CephFS

  1. On every client that needs the filesystem, mount CephFS with:

    mount -t ceph 192.168.0.41:6789,192.168.0.43:6789,192.168.0.45:6789:/ /ceph-data -o name=admin,secret=AQDdegdcx6/9GhAAy/CpG+QozuqrMrIuj7Xg8A==
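
    Passing the key on the command line leaves it in shell history and ps output; the kernel client also accepts a secretfile option (the path here is an assumption):

    mount -t ceph 192.168.0.41:6789,192.168.0.43:6789,192.168.0.45:6789:/ /ceph-data -o name=admin,secretfile=/etc/ceph/admin.secret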

FAQ

  1. If the PVC is deleted before the Pod, Ceph does not delete the image that backed it, so orphaned images accumulate and waste space in the Ceph cluster. They have to be cleaned up by hand, as sketched below.
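
    A minimal cleanup sketch, assuming the orphaned image's name has been identified from rbd ls (the name here is hypothetical):

    # Confirm nothing is watching the image, then remove it
    rbd status kubernetes-dynamic-pvc-xxxx
    rbd rm kubernetes-dynamic-pvc-xxxx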