GlusterFS is an open-source distributed file system with strong scale-out capability: it can support several petabytes of storage and thousands of clients, aggregating machines over the network into a single parallel network file system. Its key traits are scalability, high performance, and high availability.
Common objects:
pool - the trusted storage pool (the cluster itself)
peer - a node in the pool
volume - a volume; it must be in the Started state before it can be used
brick - a storage unit (a disk or directory); bricks can be added and removed
gluster - the command-line management tool
When adding nodes, GlusterFS treats the local machine as localhost by default, so you only need to probe the other machines. Every node acts as a master; there is no dedicated master or metadata server.
By default, GlusterFS brick processes listen on ports starting at 49152 (the glusterd management daemon itself listens on 24007).
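A quick way to confirm which ports the daemons are actually listening on; a minimal check that assumes the iproute2 ss tool is installed:

# list listening TCP sockets owned by gluster processes (glusterd, glusterfsd)
[root@glusterfs01 ~]# ss -lntp | grep gluster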
Configure all nodes identically: format the disks and mount them.
1. Format the disks
[root@192 ~]# mkfs.xfs /dev/sdb
[root@192 ~]# mkfs.xfs /dev/sdc

2. Get the disk UUIDs
We write the UUIDs into fstab instead of the device names, so that a device-letter change on the next reboot cannot make the machine mount the wrong disk.
[root@192 ~]# blkid /dev/sdb /dev/sdc
/dev/sdb: UUID="8835164f-78ab-4f6f-a156-9d3afd0132eb" TYPE="xfs"
/dev/sdc: UUID="6f86c2be-56cc-4e98-8add-63eb43852d65" TYPE="xfs"

3. Edit the /etc/fstab file (create the mount points first with mkdir -p /gfs/test1 /gfs/test2)
[root@192 ~]# vim /etc/fstab
UUID="8835164f-78ab-4f6f-a156-9d3afd0132eb" /gfs/test1 xfs defaults 0 0
UUID="6f86c2be-56cc-4e98-8add-63eb43852d65" /gfs/test2 xfs defaults 0 0

4. Mount
[root@192 ~]# mount -a
[root@192 ~]# df -hT | grep gfs
/dev/sdb xfs 10G 33M 10G 1% /gfs/test1
/dev/sdc xfs 10G 33M 10G 1% /gfs/test2
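Rather than copy-pasting the UUIDs by hand, the fstab entries can be generated with blkid itself; a minimal sketch, assuming the same device names and mount points as above:

# append fstab entries using the UUID reported by blkid
echo "UUID=$(blkid -s UUID -o value /dev/sdb) /gfs/test1 xfs defaults 0 0" >> /etc/fstab
echo "UUID=$(blkid -s UUID -o value /dev/sdc) /gfs/test2 xfs defaults 0 0" >> /etc/fstab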
The storage pool is effectively the cluster: you add nodes into it, and by default it contains one node, localhost.

On the master node, view the current pool list:
[root@glusterfs01 ~]# gluster pool list
UUID					Hostname	State
a2585b8c-7928-4480-9376-25c0d6e88cc0	localhost	Connected

Add the glusterfs02 and glusterfs03 nodes:
[root@glusterfs01 ~]# gluster peer probe glusterfs02
peer probe: success.
[root@glusterfs01 ~]# gluster peer probe glusterfs03
peer probe: success.

Check the list again after adding them:
[root@glusterfs01 ~]# gluster pool list
UUID					Hostname	State
07502cd5-4c18-4bde-9bcf-7f29f2a68af7	glusterfs02	Connected
5c76e19c-6141-4e95-9446-b3a424cd5f6e	glusterfs03	Connected
a2585b8c-7928-4480-9376-25c0d6e88cc0	localhost	Connected
If peer probe reports an error, it is because hosts name resolution has not been set up or the firewall has not been stopped.
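A sketch of the name-resolution side of the fix; the IP addresses below are placeholders (this article never lists the nodes' IPs), so substitute your own. The firewall side is shown before volume creation below.

# on every node: make the peer hostnames resolvable
cat >> /etc/hosts <<'EOF'
<ip-of-glusterfs01> glusterfs01
<ip-of-glusterfs02> glusterfs02
<ip-of-glusterfs03> glusterfs03
EOF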
The volume type used most in production is the distributed replicated volume.
A distributed replicated volume lets you choose the number of replicas. With replica 2, every file you upload is stored as 2 copies, so uploading 10 files actually writes 20 files to the bricks, giving you a degree of redundancy; these 20 files end up spread across the nodes (placement is by filename hash, so it looks random).
Before creating the volume, stop the firewall on ALL nodes!!!
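On CentOS/RHEL 7+ this typically means firewalld (an assumption about the distribution; adapt if you manage iptables directly):

# run on every node: stop the firewall now and keep it off across reboots
systemctl stop firewalld
systemctl disable firewalld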
[root@glusterfs01 ~]# gluster volume create web_volume01 replica 2 glusterfs01:/gfs/test1 glusterfs01:/gfs/test2 glusterfs02:/gfs/test1 glusterfs02:/gfs/test2 force
volume create: web_volume01: success: please start the volume to access data
[root@glusterfs01 ~]# gluster volume list
web_volume01

Breaking the command down:
gluster - the command keyword
volume - operate on volumes
create - create a volume
web_volume01 - the volume name
replica 2 - the number of replicas
glusterfs01:/gfs/test1 - add the directory /gfs/test1 on node glusterfs01 to the volume as a brick
force - force the creation
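The create output reminds us that the volume must be started before it can be accessed. A minimal sketch of starting it, mounting it from a client, and checking the replication behaviour; the server address and the /data_gfs mount point are taken from the client df output later in this article, and exact console output may differ by version:

# start the volume (it is created in the Stopped state)
gluster volume start web_volume01

# on a client: mount the volume over FUSE and write some test files
mkdir -p /data_gfs
mount -t glusterfs 192.168.81.240:/web_volume01 /data_gfs
touch /data_gfs/file{1..10}

# each file should now appear on exactly 2 bricks (replica 2)
ls /gfs/test1 /gfs/test2   # run on each storage node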
Expansion syntax: gluster volume add-brick <volume> <node>:<brick> force

1) Expand the volume
[root@glusterfs01 ~]# gluster volume add-brick web_volume01 glusterfs03:/gfs/test1 glusterfs03:/gfs/test2 force
volume add-brick: success

2) Check the volume info
[root@glusterfs01 ~]# gluster volume info web_volume01
Volume Name: web_volume01
Type: Distributed-Replicate
Volume ID: 4327e3a1-c48d-4442-9230-f0f53b04b35c
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6    # 3 x 2 means 3 replica sets of 2 bricks each (here: 3 nodes with 2 disks apiece), 6 bricks in all
Transport-type: tcp
Bricks:
Brick1: glusterfs01:/gfs/test1
Brick2: glusterfs01:/gfs/test2
Brick3: glusterfs02:/gfs/test1
Brick4: glusterfs02:/gfs/test2
Brick5: glusterfs03:/gfs/test1
Brick6: glusterfs03:/gfs/test2
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

3) Refresh on the client
Just run df again:
[root@glusterfs03 ~]# df -hT | grep '/data_gfs'
192.168.81.240:/web_volume01 fuse.glusterfs 30G 404M 30G 2% /data_gfs

Rebalance syntax: gluster volume rebalance <volume> start
[root@glusterfs01 ~]# gluster volume rebalance web_volume01 start
volume rebalance: web_volume01: success: Rebalance on web_volume01 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: c8e0f0cf-e1d1-4da5-ae79-90ec6e9db72e
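As the output says, progress can be followed with the status subcommand (the exact columns vary by version):

# watch how many files each node has scanned and moved
gluster volume rebalance web_volume01 status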
Shrinking the volume migrates all files off the removed bricks.
Syntax: gluster volume remove-brick <volume> <node>:<brick> start
[root@glusterfs01 ~]# gluster volume remove-brick web_volume01 glusterfs03:/gfs/test1 glusterfs03:/gfs/test2 start
It is recommended that remove-brick be run with cluster.force-migration option disabled to prevent possible data corruption. Doing so will ensure that files that receive writes during migration will not be migrated and will need to be manually copied after the remove-brick commit operation. Please check the value of the option and update accordingly. Do you want to continue with your current cluster.force-migration settings? (y/n) y
volume remove-brick start: success
ID: b7ba1075-3bf0-40b3-adaf-9496beee2afc
[root@glusterfs01 ~]# ssh 192.168.81.136 "ls /gfs/*"
root@192.168.81.136's password:
/gfs/test1:
/gfs/test2:
The empty listings on glusterfs03 (192.168.81.136) confirm that the files have been migrated off the removed bricks.
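Note that remove-brick start only begins the migration; in the standard gluster workflow the bricks are detached with a final commit once migration finishes. A sketch, assuming the same bricks as above:

# check migration progress; wait until it reports completed
gluster volume remove-brick web_volume01 glusterfs03:/gfs/test1 glusterfs03:/gfs/test2 status

# then detach the bricks from the volume for good
gluster volume remove-brick web_volume01 glusterfs03:/gfs/test1 glusterfs03:/gfs/test2 commit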