Gluster notes

To expand a Gluster volume, first add new disks to the existing nodes or add new storage nodes.

Format the new disk and get its UUID.

mkfs.xfs -i size=512 /dev/sdc
blkid /dev/sdc
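
blkid prints the UUID needed for the fstab entry; the output looks something like this (example, matching the UUID used below):

/dev/sdc: UUID="73791765-44b9-4ad4-986a-d305825d3432" TYPE="xfs"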

Add to /etc/fstab

UUID=73791765-44b9-4ad4-986a-d305825d3432 /var/mnt/bricks/disk2 xfs defaults 0 0
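
Create the mount point referenced in fstab if it does not already exist:

mkdir -p /var/mnt/bricks/disk2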

Mount

mount -a
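
Optionally verify the brick filesystem is mounted where expected:

df -h /var/mnt/bricks/disk2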

Create the data directory for the new brick.

mkdir /var/mnt/bricks/disk2/data

Add bricks to the volume.

gluster volume add-brick gv0 gluster-srv1:/var/mnt/bricks/disk2/data gluster-srv2:/var/mnt/bricks/disk2/data
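
Optionally confirm the new bricks are listed and, for a distributed layout, rebalance so existing data spreads onto them (assuming gv0 is distributed):

gluster volume info gv0
gluster volume rebalance gv0 start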

Geo Replication

Create an SSH key.

ssh-keygen -f /var/lib/glusterd/geo-replication/secret.pem
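
The key must be passphrase-less so gsyncd can use it non-interactively; if unsure, pass an empty passphrase explicitly:

ssh-keygen -N '' -f /var/lib/glusterd/geo-replication/secret.pem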

Copy the keys to the slave nodes.

ssh-copy-id -i /var/lib/glusterd/geo-replication/secret.pem.pub root@slave-node
ssh-copy-id root@slave-node
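
Verify that key-based login to the slave works without a password prompt:

ssh -i /var/lib/glusterd/geo-replication/secret.pem root@slave-node hostname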

Create the slave volume on the slave node.

gluster volume create slavevol slave-node:/var/mnt/bricks/disk1 force
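
Start the slave volume so the geo-replication session can use it:

gluster volume start slavevol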

Set up the mountbroker for the geoaccount user:

mkdir /var/mountbroker-root
gluster system:: execute mountbroker opt mountbroker-root /var/mountbroker-root
gluster system:: execute mountbroker user geoaccount slavevol
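
The geoaccount user has to exist on the slave node, and glusterd there needs a restart to pick up the mountbroker settings (useradd/systemctl assumed; adjust for your distro):

useradd -m geoaccount
systemctl restart glusterd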

Set a bandwidth limit for the sync:

gluster volume geo-replication gv0 slave-node::slavevol config rsync-options '--bwlimit=128'
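
With no suffix, rsync treats --bwlimit as KBytes per second, so 128 caps the sync at roughly 128 KB/s.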

Update the remote gsyncd path to fix "nonexistent gsyncd" errors:

gluster volume geo-replication gv0 slave-node::slavevol config remote_gsyncd /usr/libexec/glusterfs/gsyncd
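
Both config commands above act on an existing session, so if they fail with a "session does not exist" error, run them after the create step below. It can also help to confirm the gsyncd path is valid on the slave:

ls -l /usr/libexec/glusterfs/gsyncd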

Create and start the replication session

gluster volume geo-replication gv0 slave-node::slavevol create push-pem
gluster volume geo-replication gv0 slave-node::slavevol start

Check status

gluster volume geo-replication gv0 slave-node::slavevol status detail
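
If needed, the session can later be paused or stopped:

gluster volume geo-replication gv0 slave-node::slavevol pause
gluster volume geo-replication gv0 slave-node::slavevol stop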

Links

http://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/