Ceph Quick Configuration
Reposted; original: http://hobbylinux.blog.51cto.com/2895352/1175932
Resources:
Two machines: one server and one client, both running Ubuntu 12.04.
When installing the server, set aside two extra partitions to use as storage for osd0 and osd1; if you have no spare partitions, you can simulate them with two loop devices after the system is installed.
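If you go the loop-device route, a minimal sketch looks like this (the image paths and the 10 GB size are illustrative assumptions, not values from the original post):
truncate -s 10G /srv/osd0.img
truncate -s 10G /srv/osd1.img
sudo losetup /dev/loop0 /srv/osd0.img
sudo losetup /dev/loop1 /srv/osd1.img
The loop devices can then be formatted and mounted in step 6 just like real partitions.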
Steps:
1. Install Ceph on the server (MON, MDS, OSD)
2. Add the release key to APT, update sources.list, and install ceph
#sudo wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
#sudo echo deb http://ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update && sudo apt-get install ceph
3. Check the version
ceph -v    // prints the ceph version and key information
If nothing is printed, run the following:
sudo apt-get update && sudo apt-get upgrade
4. Create the ceph.conf configuration file under /etc/ceph/ and copy it to the other server nodes (a copy sketch follows the file below).
[global]
    # For version 0.55 and beyond, you must explicitly enable
    # or disable authentication with "auth" entries in [global].
    auth cluster required = none
    auth service required = none
    auth client required = none

[osd]
    osd journal size = 1000
    # The following assumes ext4 filesystem.
    filestore xattr use omap = true
    # For Bobtail (v 0.56) and subsequent versions, you may
    # add settings for mkcephfs so that it will create and mount
    # the file system on a particular OSD for you. Remove the comment
    # character for the following settings and replace the values
    # in braces with appropriate values, or leave the following settings
    # commented out to accept the default values. You must specify the
    # --mkfs option with mkcephfs in order for the deployment script to
    # utilize the following settings, and you must define the 'devs'
    # option for each osd instance; see below.
    osd mkfs type = xfs
    osd mkfs options xfs = -f             # default for xfs is "-f"
    osd mount options xfs = rw,noatime    # default mount option is "rw,noatime"
    # For example, for ext4, the mount option might look like this:
    #osd mkfs options ext4 = user_xattr,rw,noatime

# Execute $ hostname to retrieve the name of your host,
# and replace {hostname} with the name of your host.
# For the monitor, replace {ip-address} with the IP
# address of your host.

[mon.a]
    host = compute-01
    mon addr = 192.168.4.165:6789

[osd.0]
    host = compute-02
    # For Bobtail (v 0.56) and subsequent versions, you may
    # add settings for mkcephfs so that it will create and mount
    # the file system on a particular OSD for you. Remove the comment
    # character for the following setting for each OSD and specify
    # a path to the device if you use mkcephfs with the --mkfs option.
    devs = /dev/sda7

[mds.a]
    host = compute-01
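To copy the file to the other server node, a minimal sketch with scp (compute-02 comes from the config above; the root user is an assumption):
scp /etc/ceph/ceph.conf root@compute-02:/etc/ceph/ceph.conf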
5. Create the directories
sudo mkdir -p /var/lib/ceph/osd/ceph-0
sudo mkdir -p /var/lib/ceph/osd/ceph-1
sudo mkdir -p /var/lib/ceph/mon/ceph-a
sudo mkdir -p /var/lib/ceph/mds/ceph-a
6. Create and mount the partition
fdisk /dev/sda    // create the OSD partition (the example below uses /dev/sda7)
mkfs.xfs -f /dev/sda7
mount /dev/sda7 /var/lib/ceph/osd/ceph-0    (the partition must be mounted before the first run so the initialization data can be written to it)
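The commands above only mount a device for osd.0 (ceph-0). If osd.1 is backed by its own partition or loop device, as suggested in the Resources section, a sketch for the second directory would be (the /dev/loop1 device name is an assumption):
mkfs.xfs -f /dev/loop1
mount /dev/loop1 /var/lib/ceph/osd/ceph-1
An [osd.1] section would also need to be added to ceph.conf for mkcephfs to initialize it.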
7. Run the initialization
sudo mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring
8. Start the services
sudo service ceph -a start
9. Run a health check
sudo ceph health
If it returns HEALTH_OK, the setup succeeded.
If instead you see something like HEALTH_WARN 576 pgs stuck inactive; 576 pgs stuck unclean; no osds, run:
#ceph pg dump_stuck stale
#ceph pg dump_stuck inactive
#ceph pg dump_stuck unclean
Run the health check again; it should now report OK.
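If the warning includes no osds, it generally means no OSD daemon has joined the cluster yet; two standard commands for checking this (not part of the original post) are:
sudo ceph osd tree    # lists the OSDs and whether they are up/in
sudo ceph -s          # overall cluster status (mon/osd/mds/pg summary)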
Note: before re-running #sudo mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring, stop the ceph service on all server nodes and empty the four directories that were created: /var/lib/ceph/osd/ceph-0, /var/lib/ceph/osd/ceph-1, /var/lib/ceph/mon/ceph-a and /var/lib/ceph/mds/ceph-a:
#/etc/init.d/ceph stop
rm -frv /var/lib/ceph/osd/ceph-0/*
rm -frv /var/lib/ceph/osd/ceph-1/*
rm -frv /var/lib/ceph/mon/ceph-a/*
rm -frv /var/lib/ceph/mds/ceph-a/*
III. Using CephFS
On the client:
sudo mkdir /mnt/mycephfs
sudo mount -t ceph {ip-address-of-monitor}:6789:/ /mnt/mycephfs
or
sudo mkdir /home/{username}/cephfs
sudo ceph-fuse -m {ip-address-of-monitor}:6789 /home/{username}/cephfs
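ceph-fuse may need to be installed separately on the client (on Ubuntu it ships in the ceph-fuse package). To confirm that either mount worked, a simple check such as the following is enough (illustrative only):
df -h /mnt/mycephfs
sudo touch /mnt/mycephfs/testfile && ls -l /mnt/mycephfs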