Installing Ceph and adding it to Kubernetes

December 18, 2019 · kubernetes

Online installation

Base environment (configure on all nodes)

Disable the firewall and SELinux

Hostname resolution via /etc/hosts

Passwordless SSH login

Time synchronization service
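As an illustration of the hosts-file entries, assuming the four nodes used later in this post (node01–node04); the IP addresses below are placeholders and must be adjusted to your network:

```
# /etc/hosts — example entries; IPs are placeholders
192.168.1.11 node01
192.168.1.12 node02
192.168.1.13 node03
192.168.1.14 node04
```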

Install Ceph's dependencies

sudo yum install -y yum-utils \

&& sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ \

&& sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 \

&& sudo rm /etc/yum.repos.d/dl.fedoraproject.org*

Set the Ceph mirror via environment variables

echo "

export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/rpm-luminous/el7

export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc

" >> /etc/profile

source /etc/profile

Configure the yum repository

cat << EOF > /etc/yum.repos.d/ceph.repo

[ceph-noarch]

name=Ceph noarch packages

baseurl=http://download.ceph.com/rpm-luminous/el7/noarch

enabled=1

gpgcheck=1

type=rpm-md

gpgkey=https://download.ceph.com/keys/release.asc

EOF

Install the deployment tool

yum -y update && yum -y install ceph-deploy

Set up the SSH config file

cat << EOF > /root/.ssh/config

Host node01

   Hostname node01

   User root

Host node02

   Hostname node02

   User root

Host node03

   Hostname node03

   User root

Host node04

   Hostname node04

   User root

EOF
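Since the four stanzas above only differ in the hostname, they can also be generated with a loop. This is a sketch; the output file name `ssh_config_generated` is illustrative — review it, then move it to /root/.ssh/config:

```shell
# Generate the repetitive SSH config stanzas with a loop instead of
# typing each one by hand. Writes to a review file in the current
# directory (illustrative name), not directly to /root/.ssh/config.
OUT=ssh_config_generated
: > "$OUT"
for host in node01 node02 node03 node04; do
  printf 'Host %s\n   Hostname %s\n   User root\n' "$host" "$host" >> "$OUT"
done
cat "$OUT"
```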

Create the deployment directory

mkdir my-cluster; cd my-cluster

Install dependencies

yum -y install python-setuptools.noarch

Create the cluster and set the mon node

ceph-deploy new node02         

Install the Ceph packages

ceph-deploy install --release luminous node0{2..4}

Deploy the initial monitor and gather the keys

ceph-deploy --overwrite-conf mon create-initial

Use ceph-deploy to copy the configuration file and admin keyring to the admin node and the Ceph nodes

ceph-deploy admin node0{2..4}       

Deploy the manager daemon on the admin node

ceph-deploy mgr create node02 

Deploy the OSDs on the data devices

ceph-deploy osd create --data /dev/sdc --journal /dev/sdb node03

Check the cluster health from the admin node

ssh node02 ceph health                                                      

HEALTH_OK

View the detailed cluster status from the admin node

ssh node02  ceph -s         

  cluster:

    id:     1141d488-9db9-41f6-aefb-e938abca64d8

    health: HEALTH_OK

  services:

    mon: 1 daemons, quorum node02

    mgr: node02(active)

    osd: 4 osds: 4 up, 4 in

  data:

    pools:   0 pools, 0 pgs

    objects: 0 objects, 0 bytes

    usage:   4105 MB used, 77799 MB / 81904 MB avail

    pgs:    

Offline installation

Build a local yum repository and upload the Ceph packages

yum -y install httpd createrepo

systemctl enable --now httpd

mkdir /var/www/html/ceph

# upload the Ceph RPMs into /var/www/html/ceph, then build the repo metadata
createrepo /var/www/html/ceph

Point the Ceph cluster machines at the local yum repository

cat << EOF >/etc/yum.repos.d/ceph.repo

[ceph_local]

name=ceph-local

baseurl=http://10.10.12.14/ceph

enabled=1

gpgcheck=0

gpgkey=http://10.10.12.14/ceph/release.asc

EOF

Install on all Ceph nodes

yum -y install ceph ceph-radosgw

Install on the deployment node, then deploy as in the online case

yum -y install ceph-deploy

ceph-deploy new master1

ceph-deploy  mon create-initial

ceph-deploy admin master{1..3}

ceph-deploy mgr create master1

ceph-deploy osd create --data /dev/sdc --journal /dev/sdb master1

ceph health

ceph -s

Using Ceph in k8s

Install on all k8s cluster nodes

yum -y install ceph-common

Load the kernel module

modprobe rbd
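To have the rbd module loaded again after a reboot, it can be persisted through systemd's modules-load mechanism (a sketch following the standard convention):

```
# /etc/modules-load.d/rbd.conf — load the rbd module at boot
rbd
```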

Create the storage pool and user for k8s in Ceph

ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube' -o ceph.client.kube.keyring

ceph osd pool create kube 128 128

Copy the Ceph configuration files to the k8s nodes

ceph.conf            

ceph.client.admin.keyring

ceph.client.kube.keyring 

Install rbd-provisioner in the Kubernetes cluster

git clone https://github.com/kubernetes-incubator/external-storage.git

cd external-storage/ceph/rbd/deploy

NAMESPACE=kube-system

sed -r -i "s/namespace: [^ ]+/namespace: $NAMESPACE/g" ./rbac/clusterrolebinding.yaml ./rbac/rolebinding.yaml

kubectl -n $NAMESPACE apply -f ./rbac
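As a self-contained illustration of what the sed substitution above does, the following writes a small sample manifest (file name and contents are for demonstration only), applies the same expression, and shows the result:

```shell
# Demonstrate the namespace substitution on a sample RBAC manifest.
# The file name and subject are illustrative, not the real ones from
# the external-storage repository.
NAMESPACE=kube-system
cat << 'EOF' > rolebinding-sample.yaml
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: default
EOF
# Replace whatever namespace is set with $NAMESPACE, as done above
sed -r -i "s/namespace: [^ ]+/namespace: $NAMESPACE/g" rolebinding-sample.yaml
cat rolebinding-sample.yaml
```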

Create the secrets for the relevant users

vi secrets.yaml

apiVersion: v1

kind: Secret

metadata:

  name: ceph-admin-secret

  namespace: kube-system

type: "kubernetes.io/rbd"

data:

  # ceph auth get-key client.admin | base64   ## run on the Ceph admin node to get the key

  key: QVFCdng4QmJKQkFsSFJBQWl1c1o0TGdOV250NlpKQ1BSMHFCa1E9PQ==

---

apiVersion: v1

kind: Secret

metadata:

  name: ceph-secret

  namespace: kube-system

type: "kubernetes.io/rbd"

data:

  # ceph auth get-key client.kube | base64  ## run on the Ceph admin node to get the key

  key: QVFCTHdNRmJueFZ4TUJBQTZjd1MybEJ2Q0JUcmZhRk4yL2tJQVE9PQ==

kubectl create -f secrets.yaml

vi secrets-default.yaml

apiVersion: v1

kind: Secret

metadata:

  name: ceph-secret

type: "kubernetes.io/rbd"

data:

  # ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'

  # ceph auth get-key client.kube | base64

  key: QVFCTHdNRmJueFZ4TUJBQTZjd1MybEJ2Q0JUcmZhRk4yL2tJQVE9PQ==

kubectl create -f secrets-default.yaml -n default

If other namespaces need Ceph RBD dynamic provisioning, create a secret holding the client.kube user's key in each of those namespaces.
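For example, to allow a hypothetical `demo` namespace to provision RBD volumes, the same client.kube key would be stored there (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: demo   # illustrative namespace
type: "kubernetes.io/rbd"
data:
  # ceph auth get-key client.kube | base64
  key: QVFCTHdNRmJueFZ4TUJBQTZjd1MybEJ2Q0JUcmZhRk4yL2tJQVE9PQ==
```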

Create the StorageClass

vi ceph-rbd-sc.yaml

apiVersion: storage.k8s.io/v1beta1

kind: StorageClass

metadata:

  name: ceph-rbd

  annotations:

     storageclass.beta.kubernetes.io/is-default-class: "true"

provisioner: ceph.com/rbd

parameters:

  monitors: 172.16.16.81,172.16.16.82,172.16.16.83

  adminId: admin

  adminSecretName: ceph-admin-secret

  adminSecretNamespace: kube-system

  pool: kube

  userId: kube

  userSecretName: ceph-secret

  fsType: ext4

  imageFormat: "2"

  imageFeatures: "layering"
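To check that the StorageClass works end to end, a small PVC can be created; once it reaches the Bound state, dynamic provisioning is functioning (the claim name and size below are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-test   # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi
```

Apply it and watch the status with `kubectl get pvc ceph-rbd-test`.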

Author: shijy