Can I use GlusterFS volume storage directly without mounting?

I have set up a small GlusterFS cluster of 3+1 nodes. They're all on the same LAN. There are 3 servers and 1 laptop (connected via Wi-Fi) that is also a GlusterFS node. The laptop often disconnects from the network. ;) The use case I want to achieve is this: I want my laptop to automatically synchronize with the GlusterFS filesystem when it reconnects. (That's easy and done.) But when the laptop is disconnected from the cluster, I still want to access the filesystem "offline": modify, add, and remove files. Obviously the only way I can access the GlusterFS filesystem when it's offline from ...Read more

GlusterFS - Why is it not recommended to use the root partition?

I am planning to set up a number of nodes to create a distributed-replicated volume using glusterfs. I created a gluster replicated volume on two nodes using a directory on the primary (and only) partition:

gluster volume create vol_dist-replica replica 2 transport tcp 10.99.0.3:/glusterfs/dist-replica 10.99.0.4:/glusterfs/dist-replica

This returned the following warning:

volume create: vol_dist-replica: failed: The brick 10.99.0.3:/glusterfs/dist-replica is being created in the root partition. It is recommended that you don't use the system's root p...Read more
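The usual answer to this warning is to put each brick on a dedicated filesystem rather than forcing creation on the root partition (a full brick would otherwise fill `/` and destabilize the OS). A minimal sketch, assuming a spare disk at /dev/sdb1 and the mount point below (both are illustrative, not from the question):

```shell
# Format a dedicated disk for the brick and mount it persistently
mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /data/glusterfs/dist-replica
mount /dev/sdb1 /data/glusterfs/dist-replica
echo '/dev/sdb1 /data/glusterfs/dist-replica xfs defaults 0 0' >> /etc/fstab

# Use a subdirectory of the mount as the brick, so Gluster can tell
# "disk not mounted" apart from "empty brick"
mkdir -p /data/glusterfs/dist-replica/brick
gluster volume create vol_dist-replica replica 2 transport tcp \
  10.99.0.3:/data/glusterfs/dist-replica/brick \
  10.99.0.4:/data/glusterfs/dist-replica/brick
```

Appending `force` to the original command bypasses the warning, but then brick growth competes with the operating system for root-partition space.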

glusterfs - Gluster remove-brick from volume failed, what to do to remove a brick?

I had a gluster volume named data in distributed mode. I added a brick server1:/vdata/bricks/data to the volume data. However, I found that /vdata/bricks/data is on the / disk of the Linux system, so I want to remove the brick from the volume. I used gluster volume remove-brick data server1:/vdata/bricks/data start, then checked the status using gluster volume remove-brick data server1:/vdata/bricks/data status, but found that the status is failed and the scanned-files count is always 0. What can I do to remove this brick without losing data?...Read more
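For reference, the documented remove-brick lifecycle is start, then poll status until the migration completes, then commit. A sketch using the volume and brick path from the question:

```shell
# Kick off data migration away from the brick
gluster volume remove-brick data server1:/vdata/bricks/data start

# Poll until the status column reports "completed"; a "failed" status
# here usually points at rebalance errors in the brick/rebalance logs
gluster volume remove-brick data server1:/vdata/bricks/data status

# Only finalize once migration has completed; committing earlier
# (or using "force") discards data still on the brick
gluster volume remove-brick data server1:/vdata/bricks/data commit
```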

Glusterfs denied mount

I'm using GlusterFS 3.3.2. Two servers, with a brick on each one. The volume is "ARCHIVE80". I can mount the volume on Server2; if I touch a new file, it appears inside the brick on Server1. However, if I try to mount the volume on Server1, I get an error:

Mount failed. Please check the log file for more details.

The log gives:

[2013-11-11 03:33:59.796431] I [rpc-clnt.c:1654:rpc_clnt_reconfig] 0-ARCHIVE80-client-0: changing port to 24011 (from 0)
[2013-11-11 03:33:59.796810] I [rpc-clnt.c:1654:rpc_clnt_reconfig] 0-ARCHIVE80-client-1: changing port to 2400...Read more

Heketi operations not reflecting in glusterfs server

Started the two glusterfs servers and am able to create and mount volumes on the two servers. I have built heketi from here, version 5.0.1. This is how I started the server from heketi after the build:

cd $GOPATH/src/github.com/heketi/heketi/
cp etc/heketi.json heketi.json
./heketi --config=heketi.json

The server started running on 8080. Now I am using the heketi client to interact with the heketi server as follows:

export HEKETI_CLI_SERVER=http://localhost:8080
cd $GOPATH/src/github.com/heketi/heketi/client/cli/go

Added the topology.json with data in glusterfs server and...Read more
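For context, the usual flow after pointing heketi-cli at the server is to load the topology and then create a volume through heketi, which should show up on the gluster side. A sketch, assuming the topology.json mentioned above and an illustrative 1 GB volume:

```shell
export HEKETI_CLI_SERVER=http://localhost:8080

# Register the cluster layout (nodes, devices) with heketi
heketi-cli topology load --json=topology.json

# Ask heketi to provision a volume on the registered nodes
heketi-cli volume create --size=1

# Verify on a gluster server that the volume actually appeared;
# if it doesn't, heketi is likely running in mock mode (check the
# "executor" setting in heketi.json: it must be "ssh", not "mock")
gluster volume list
```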

GlusterFS command for getting replication completeness status

I use GlusterFS in a high-availability cluster. I need some functionality for getting the replication status (replication completeness status). In other words, I need to know that the cluster is currently in a protected state (in terms of disk replication) and that, in the case of a master-node failover, no data will be lost. I have already tried gluster volume status and gluster peer status, but they only provide information about the connection.

P.S. For instance, in DRBD there was a command, drbdadm status, which provides the information peer-disk:UpToDate (which means that re...Read more
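For replicated volumes, the closest Gluster equivalent to DRBD's UpToDate is the self-heal status: entries pending heal mean the replicas have diverged. A sketch (the volume name myvol is an assumption):

```shell
# List entries still pending heal on each brick; an empty list on
# all bricks means the replicas are currently in sync
gluster volume heal myvol info

# Per-brick counters of entries needing heal
gluster volume heal myvol statistics heal-count

# Entries that need manual intervention
gluster volume heal myvol info split-brain
```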

GlusterFS Replication and Failover

I'm looking into using GlusterFS and am struggling with the GlusterFS_Concepts explanation here:

Disadvantages - If you lose a single server, you lose access to all the files that are hosted on that server. This is why distribute is typically graphed to the replicate translator.

GlusterFS is a replicated, distributed file system, so why would the loss of one server cause you to lose access to all files on that server? I feel I am missing something here. Surely one of the main points of replication is that I would be able to access the files ev...Read more
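The distinction the docs are drawing can be seen in the create commands: a pure distribute volume has no redundancy at all, which is why distribute is usually combined with replicate. A sketch (hostnames and brick paths are illustrative):

```shell
# Pure distribute: each file lives on exactly one brick, so losing
# server1 loses access to every file hashed onto it
gluster volume create distvol \
  server1:/bricks/b1 server2:/bricks/b1

# Distributed-replicate: consecutive bricks form replica pairs, so
# every file exists on two servers and one can fail
gluster volume create drvol replica 2 \
  server1:/bricks/b1 server2:/bricks/b1 \
  server3:/bricks/b2 server4:/bricks/b2
```

The quoted disadvantage applies only to the bare distribute translator, not to GlusterFS as a whole.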

Create glusterfs Distributed-Replicated

I'm new to glusterfs; it would be much appreciated if someone could explain a glusterfs Distributed-Replicated setup. If I have 2 nodes, each with 3 physical disks inside, and each physical disk is 1 TB in size, and I want to create a replica of 2, is the command below correct?

gluster volume create test-volume replica 2 node1:/exp1/brick1 node2:/exp2/brick2 node1:/exp1/brick3 node2:/exp2/brick4 node1:/exp1/brick5 node2:/exp2/brick6

Below is what I am expecting:

usable space is 3 TB
replica 3 TB
node1:/exp1/brick1 is replicated with node2:/exp2/brick2
node1:/exp1/...Read more
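With replica 2, brick order on the command line determines the replica sets: adjacent bricks pair up, so the pairing the question expects matches this ordering. That can be confirmed after creation:

```shell
gluster volume create test-volume replica 2 \
  node1:/exp1/brick1 node2:/exp2/brick2 \
  node1:/exp1/brick3 node2:/exp2/brick4 \
  node1:/exp1/brick5 node2:/exp2/brick6

# "Number of Bricks" should read "3 x 2 = 6": three distribute
# subvolumes of two replicated bricks each, listed in pair order
gluster volume info test-volume
```

With 6 x 1 TB of raw disk and two copies of every file, usable capacity is indeed about 3 TB.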

can we add a Geo replicated brick to the existing glusterfs volume which already has a normal replicated brick

I have a gluster volume in which I presently have one replicated brick already running. Now I want to set up a geo-replicated brick. For this, do I need to create a new glusterfs volume and then add a new brick that will be geo-replicated, or can I use the existing glusterfs volume and add a new brick to it with geo-replication?...Read more
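One point worth noting: Gluster geo-replication is configured between volumes, not individual bricks. The remote site needs its own volume, and the existing replicated volume can act as the master unchanged. A sketch (hostnames and volume names mastervol/geovol are assumptions):

```shell
# On the remote site, create and start a separate volume that will
# receive the geo-replicated data
gluster volume create geovol secondary1:/bricks/geovol
gluster volume start geovol

# On the primary site, link the existing volume to the remote one
gluster volume geo-replication mastervol secondary1::geovol create push-pem
gluster volume geo-replication mastervol secondary1::geovol start
gluster volume geo-replication mastervol secondary1::geovol status
```

So no new brick is added to the existing volume; geo-replication runs alongside the normal replication already configured on it.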

How to completely delete a GlusterFS volume

I have some GlusterFS (version 3.7.11) volumes created and started. After some tests, I stopped and deleted the volumes, but they still remain on the GlusterFS servers. For example, I have 3 servers, with bricks saved under /gfs:

[vagrant@gfs-server-2 ~]$ sudo gluster volume create test-vol gfs-server-1:/gfs/test-vol gfs-server-2:/gfs/test-vol gfs-server-3:/gfs/test-vol force
volume create: test-vol: success: please start the volume to access data
[vagrant@gfs-server-2 ~]$ sudo gluster volume start test-vol
volume start: test-vol: success
[vagrant@...Read more
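The reason deleted volumes appear to linger is that gluster volume delete only removes the volume definition; the brick directories and their Gluster extended attributes stay on disk and block reuse of the same path. A sketch of a full cleanup, run on every server (paths from the question):

```shell
# Remove the volume from Gluster's configuration
gluster volume stop test-vol
gluster volume delete test-vol

# The brick directory survives the delete; remove it entirely...
rm -rf /gfs/test-vol

# ...or, to keep the data but let Gluster reuse the path, clear the
# markers instead:
#   setfattr -x trusted.glusterfs.volume-id /gfs/test-vol
#   setfattr -x trusted.gfid /gfs/test-vol
#   rm -rf /gfs/test-vol/.glusterfs
```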