IRC log for #gluster, 2015-02-12

All times shown according to UTC.

Time Nick Message
00:16 coredump joined #gluster
00:23 Pupeno_ joined #gluster
00:27 T3 joined #gluster
00:33 wkf joined #gluster
00:35 awerner joined #gluster
00:51 JoeJulian alan^: Use the CLI for setting up and configuring volumes. Hand editing vol files hasn't been a thing for years.
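A minimal sketch of what JoeJulian means, assuming a hypothetical volume named myvol: options are changed through the gluster CLI, which regenerates the vol files for you rather than you editing them by hand.

    gluster volume set myvol performance.cache-size 256MB   # change an option via the CLI
    gluster volume info myvol                                # confirm the option took effect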
01:10 Pupeno joined #gluster
01:45 dgandhi joined #gluster
01:46 dgandhi joined #gluster
01:47 dgandhi joined #gluster
01:48 dgandhi joined #gluster
02:18 harish joined #gluster
02:36 nangthang joined #gluster
02:40 badone__ joined #gluster
02:40 jmarley joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:51 churnd- joined #gluster
02:56 fyxim joined #gluster
02:58 bjornar joined #gluster
03:02 rcampbel3 joined #gluster
03:26 Pupeno_ joined #gluster
03:29 bharata-rao joined #gluster
03:47 kanagaraj joined #gluster
03:58 RameshN joined #gluster
03:59 itisravi joined #gluster
03:59 shylesh__ joined #gluster
04:01 hagarth joined #gluster
04:01 nangthang joined #gluster
04:03 nangthang joined #gluster
04:05 shubhendu joined #gluster
04:07 atinmu joined #gluster
04:08 bala joined #gluster
04:17 nishanth joined #gluster
04:23 plarsen joined #gluster
04:28 spandit joined #gluster
04:33 gem joined #gluster
04:34 prasanth_ joined #gluster
04:36 anoopcs_ joined #gluster
04:36 jiffin joined #gluster
04:36 ndarshan joined #gluster
04:39 jiffin1 joined #gluster
04:40 rafi joined #gluster
04:40 schandra joined #gluster
04:49 kaushal_ joined #gluster
04:52 deepakcs joined #gluster
04:52 ppai joined #gluster
05:00 schandra joined #gluster
05:01 anil joined #gluster
05:10 anoopcs_ joined #gluster
05:20 Manikandan joined #gluster
05:20 Manikandan_ joined #gluster
05:23 bene_in_BLR joined #gluster
05:24 karnan joined #gluster
05:25 kdhananjay joined #gluster
05:25 overclk joined #gluster
05:34 hagarth joined #gluster
05:39 soumya joined #gluster
05:43 dusmant joined #gluster
05:54 meghanam joined #gluster
05:54 suman_d joined #gluster
06:00 rafi1 joined #gluster
06:12 dgandhi joined #gluster
06:13 anil joined #gluster
06:19 Pupeno joined #gluster
06:31 msciciel joined #gluster
06:31 hagarth joined #gluster
06:33 raghu` joined #gluster
06:33 shubhendu joined #gluster
06:51 atinmu joined #gluster
07:08 itisravi joined #gluster
07:10 rafi joined #gluster
07:12 mbukatov joined #gluster
07:17 hagarth joined #gluster
07:18 aulait joined #gluster
07:22 shubhendu joined #gluster
07:23 atinmu joined #gluster
07:30 huleboer joined #gluster
07:31 jtux joined #gluster
07:41 suman_d joined #gluster
07:41 anrao joined #gluster
07:41 nbalacha joined #gluster
07:48 bala joined #gluster
07:48 fyxim joined #gluster
07:48 kanagaraj joined #gluster
07:48 RameshN joined #gluster
07:48 shylesh__ joined #gluster
07:48 nishanth joined #gluster
07:48 spandit joined #gluster
07:48 gem joined #gluster
07:48 prasanth_ joined #gluster
07:48 ndarshan joined #gluster
07:48 jiffin1 joined #gluster
07:48 kshlm joined #gluster
07:48 deepakcs joined #gluster
07:48 ppai joined #gluster
07:48 schandra joined #gluster
07:48 anoopcs joined #gluster
07:48 Manikandan joined #gluster
07:48 bene_in_BLR joined #gluster
07:48 karnan joined #gluster
07:48 kdhananjay joined #gluster
07:48 overclk joined #gluster
07:48 soumya joined #gluster
07:48 dusmant joined #gluster
07:48 meghanam joined #gluster
07:48 anil joined #gluster
07:49 raghu` joined #gluster
07:49 rafi joined #gluster
07:49 mbukatov joined #gluster
07:49 hagarth joined #gluster
07:49 shubhendu joined #gluster
07:49 atinmu joined #gluster
07:49 bala joined #gluster
07:55 ricky-ticky joined #gluster
08:05 itisravi joined #gluster
08:15 rejy joined #gluster
08:25 [Enrico] joined #gluster
08:30 shubhendu joined #gluster
08:40 bala joined #gluster
08:45 fsimonce joined #gluster
08:47 nbalacha joined #gluster
08:48 jiffin joined #gluster
08:48 schandra joined #gluster
08:51 dusmant joined #gluster
08:54 T0aD joined #gluster
08:55 nishanth joined #gluster
08:59 Slashman joined #gluster
09:02 liquidat joined #gluster
09:09 raatti joined #gluster
09:11 ws2k3 joined #gluster
09:11 awerner joined #gluster
09:21 suman_d joined #gluster
09:21 glusterbot News from newglusterbugs: [Bug 1191919] Disperse volume: Input/output error when listing files/directories under nfs mount <https://bugzilla.redhat.com/show_bug.cgi?id=1191919>
09:26 deniszh joined #gluster
09:28 nshaikh joined #gluster
09:32 kovshenin joined #gluster
09:34 _polto_ joined #gluster
09:35 _polto_ hi all. I have a problem with my glusterfs. it's been in production for a week without any problems, and now in some directories I cannot write files and I cannot remove those directories.
09:36 _polto_ rmdir: failed to remove ‘matches_rig/’: Stale file handle
09:36 _polto_ any idea ?
09:43 anrao joined #gluster
09:48 nangthang joined #gluster
09:49 dusmant joined #gluster
09:50 bala joined #gluster
09:56 nbalacha joined #gluster
10:01 overclk joined #gluster
10:07 anrao joined #gluster
10:20 LebedevRI joined #gluster
10:24 schandra joined #gluster
10:36 meghanam joined #gluster
10:40 suman_d joined #gluster
10:48 veedub1955 joined #gluster
10:48 veedub1955 Hey guys, is anyone else having issues with setting any options in Gluster 3.6.2?
10:49 veedub1955 I cannot set any options at all, comes up with an error to do with 'One or more connected clients cannot support the feature being set'
10:51 badone__ joined #gluster
10:51 dusmant joined #gluster
11:01 ricky-ticky1 joined #gluster
11:08 harish joined #gluster
11:11 bala joined #gluster
11:13 peem joined #gluster
11:14 overclk joined #gluster
11:21 veedub1955 Hey guys, is anyone else having issues with setting any options in Gluster 3.6.2? - I cannot set any options at all, comes up with an error to do with 'One or more connected clients cannot support the feature being set'
11:24 samppah veedub1955: are all clients using same version?
11:25 veedub1955 Yes, all 6 of them are running glusterfs 3.6.2
11:25 samppah hmmh
11:29 RameshN joined #gluster
11:31 veedub1955 samppah are you running glusterfs 3.6.2 at all?
11:33 side_control joined #gluster
11:35 yosafbridge joined #gluster
11:41 kkeithley1 joined #gluster
11:48 side_control joined #gluster
11:51 telmich if I run a vm on top of gluster, what kind of cache setting is recommended?
11:51 telmich (cache as in qemu ... cache= setting)
11:52 ira joined #gluster
11:53 suman_d joined #gluster
11:54 nishanth joined #gluster
11:57 meghanam joined #gluster
11:58 andreask joined #gluster
11:58 andreask left #gluster
12:00 rwheeler joined #gluster
12:04 anoopcs_ joined #gluster
12:12 shubhendu joined #gluster
12:12 kshlm joined #gluster
12:16 mbukatov joined #gluster
12:16 bala joined #gluster
12:26 kanagaraj joined #gluster
12:26 elico joined #gluster
12:33 maveric_amitc_ joined #gluster
12:53 anoopcs_ joined #gluster
12:58 peem glusterfs on aws ec2 instances ? Any hints or advice, pros and cons maybe ?
13:01 bala joined #gluster
13:02 ndarshan joined #gluster
13:11 meghanam joined #gluster
13:18 mbukatov joined #gluster
13:18 maveric_amitc_ joined #gluster
13:29 free_amitc_ joined #gluster
13:34 anoopcs_ joined #gluster
13:34 anoopcs__ joined #gluster
13:38 wkf joined #gluster
13:38 ildefonso joined #gluster
13:41 nishanth joined #gluster
13:48 theron joined #gluster
13:48 theron joined #gluster
13:50 bennyturns joined #gluster
13:54 Pupeno joined #gluster
13:55 bene_in_BLR joined #gluster
13:56 prasanth_ joined #gluster
13:57 churnd joined #gluster
14:00 awerner joined #gluster
14:00 ildefonso joined #gluster
14:18 hagarth joined #gluster
14:18 prasanth_ joined #gluster
14:19 Dasher joined #gluster
14:19 side_control joined #gluster
14:19 ndarshan joined #gluster
14:22 dgandhi joined #gluster
14:24 semiosis peem: use ebs for your storage.  and use ,,(hostnames)
14:24 glusterbot peem: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
14:24 georgeh-LT2 joined #gluster
14:24 semiosis peem: use CreateImage to snapshot your gluster servers
14:24 semiosis peem: put replicas in separate AZs
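A rough sketch of the hostname-based probing glusterbot describes above, assuming two hypothetical EC2 servers named gluster1 and gluster2 whose bricks sit on EBS volumes:

    # from gluster1: add the peer by name, not IP
    gluster peer probe gluster2
    # from gluster2: probe back once, so gluster1 is also known by hostname
    gluster peer probe gluster1
    gluster peer status   # both peers should now show their hostnames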
14:25 dgandhi1 joined #gluster
14:28 Dasher Hello, I'm a new user of GlusterFS. My disk organisation is: 2 servers with, on each server, 2 available storage devices (= 2 bricks/server). I'd like to know how GlusterFS manages its replica count. In my case, with a replica count of 2, will a file be copied to two different servers, or can it stay on the same server?
14:29 mrEriksson Dasher: Gluster doesn't really care about servers, just bricks. So it is up to you to make sure that the "paired" bricks are located on different servers. Basically, via the order they are listed when creating a volume. The documentation is pretty good on this subject
14:33 Gill joined #gluster
14:34 calisto joined #gluster
14:37 Dasher Ok, so if I have understood correctly, the replica count is specific to the bricks you declare, and not to the entire volume. Is that correct?
14:40 mrEriksson No, replica count is per volume
14:41 semiosis https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md#creating-replicated-volumes
14:41 mrEriksson So if you create a volume with four bricks, b1, b2, b3, b4 (in that order), b1 and b3 will be one replica, and b2 and b4 the second replica. (If I remember the syntaxes correctly :))
14:41 semiosis nope
14:42 mrEriksson nope?
14:42 semiosis (b1, b2) and (b3, b4) will be your pairs
14:42 mrEriksson Oh,, I always get those mixed up
14:42 semiosis hehe
14:42 mrEriksson Which is why I ALWAYS visit the docs before creating volumes :-)
14:42 semiosis mrEriksson++
14:42 glusterbot semiosis: mrEriksson's karma is now 1
14:43 mrEriksson Haha
14:43 semiosis brb, coffee
14:45 Dasher Ok I see now, thx for your answer :)
14:45 Dasher (and the doc on github is much better than the wiki ;) )
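A minimal sketch of the ordering semiosis describes above, assuming hypothetical servers server1 and server2 each exporting two bricks; with replica 2, consecutive bricks in the list are paired, so each pair spans both servers:

    gluster volume create myvol replica 2 \
        server1:/export/brick1 server2:/export/brick1 \
        server1:/export/brick2 server2:/export/brick2
    # pairs: (server1:/export/brick1, server2:/export/brick1) and
    #        (server1:/export/brick2, server2:/export/brick2)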
14:46 nshaikh joined #gluster
14:48 neofob joined #gluster
14:48 [Enrico] joined #gluster
14:49 suman_d joined #gluster
14:59 _polto_ I have a problem with my glusterfs. it's been in production for a week without any problems, and now in some directories I cannot write files and I cannot remove those directories.
14:59 _polto_ rmdir: failed to remove ‘matches_rig/’: Stale file handle
14:59 _polto_ any idea ?
15:00 _Bryan_ joined #gluster
15:00 semiosis _polto_: using fuse or nfs client?
15:00 _polto_ fuse
15:00 _polto_ I mount with -t glusterfs
15:01 semiosis can you put your client log file on pastie.org or similar?  client log file is usually /var/log/glusterfs/the-mount-point.log
15:01 _polto_ 2s
15:04 jiperdepip joined #gluster
15:06 shubhendu joined #gluster
15:09 jiperdepip Hi folks, i hope i can ask my gluster question here, i'm following the 'getting started guide', when i 'create' the volume as instructed i get this error;  volume create: gv_global: failed: Host 10.50.1.251 is not in 'Peer in Cluster' state
15:09 jiperdepip can anyone point me in the right direction?
15:10 jiperdepip from the other node i get this; peer probe: failed: Error through RPC layer, retry again later
15:10 jiperdepip (while doing: gluster peer probe 10.50.2.251)
15:13 _polto_ semiosis: not that good : http://pastie.org/9942105
15:14 semiosis jiperdepip: not familiar with that specific error but are you sure the necessary ,,(ports) are open in both directions?  for the probe that's port tcp/24007
15:14 glusterbot jiperdepip: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
15:15 plarsen joined #gluster
15:17 jiperdepip everything is wide open now
15:17 jiperdepip but i'll do a double chack
15:17 jiperdepip *check
15:17 jiperdepip thnx
15:17 jobewan joined #gluster
15:19 suman_d joined #gluster
15:19 jiperdepip port 24008 is not open, 24007 is reachable from, and to both servers
15:19 jiperdepip even RPC is open on both
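For reference, a rough sketch of opening the ports glusterbot lists, assuming plain iptables and only a handful of bricks per server (widen the 49152 range to match your brick count):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (and rdma)
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT   # brick daemons, 3.4 and later
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT          # NFS, 3.4 and later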
15:22 _polto_ semiosis: have you seen the log? any idea how I can repair this?
15:22 frakt joined #gluster
15:23 glusterbot News from newglusterbugs: [Bug 1134050] Glfs_fini() not freeing the resources <https://bugzilla.redhat.com/show_bug.cgi?id=1134050>
15:23 glusterbot News from newglusterbugs: [Bug 1192075] libgfapi clients hang if glfs_fini is called before glfs_init <https://bugzilla.redhat.com/show_bug.cgi?id=1192075>
15:23 glusterbot News from resolvedglusterbugs: [Bug 1091335] libgfapi clients hang if glfs_fini is called before glfs_init <https://bugzilla.redhat.com/show_bug.cgi?id=1091335>
15:23 virusuy joined #gluster
15:23 virusuy joined #gluster
15:24 frakt we started getting lots of spam in our gluster logs recently, anyone know what it means?
15:24 frakt [2015-02-12 15:17:00.052324] I [dht-common.c:1000:dht_lookup_everywhere_done] 0-storage-dht: STATUS: hashed_subvol storage-replicate-0 cached_subvol null
15:25 frakt glusterfs 3.4.6
15:29 tru_tru joined #gluster
15:29 B21956 joined #gluster
15:34 soumya joined #gluster
15:36 brunoc joined #gluster
15:36 lmickh joined #gluster
15:38 meghanam joined #gluster
15:41 MacWinner joined #gluster
15:44 Pupeno joined #gluster
15:45 brunoc Hello, I would need some help
15:47 brunoc I'm trying to set up my first gluster installation, I followed the documentation, but gluster keeps saying "volume create: gv0: failed"
15:47 brunoc my volumes are cleaned
15:51 peem semiosis: thanks
15:51 semiosis _polto_: looks like maybe a network problem, or possibly a problem with the brick disk on a server
15:52 semiosis frakt: dont know what that means
15:52 bennyturns joined #gluster
15:52 semiosis brunoc: check your glusterd log file, usually /var/log/glusterfs/etc-glusterfs-glusterd.log.  put it on pastie.org or similar if you want
15:53 the-me joined #gluster
15:56 brunoc @semiosis: http://pastie.org/private/iolbzqhumzosz0afpzn7a
15:57 semiosis path or a prefix of it is already part of a volume
15:57 glusterbot semiosis: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
15:57 semiosis brunoc: ^^^
15:57 brunoc here it says that it's already part of a volume, but no command ever worked
15:57 semiosis you need to clear an xattr to use that brick in a new volume.  see joe's blog post linked above
15:57 brunoc I mean I've never been able to create a volume. I'm going to try that. thanks
15:58 semiosis gluster sets the xattr first, then if creation fails, it does not remove the xattr :(
15:58 semiosis that's annoying, but it can be fixed
15:58 semiosis once you get past this error, if you get another, let us know
16:01 brunoc ok but to use Joe Julian's commands "setfattr", I need to give a volume number, but I haven't been able to create any...
16:02 semiosis brunoc: the directory you're trying to use has an xattr on it that needs to be removed.  you can use the getfattr command to see what xattrs it has
16:02 semiosis ,,(extended attributes)
16:02 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
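A minimal sketch of the cleanup Joe's blog post describes, using the brick path brunoc mentions later (/export/sdb1/brick); run it on the server that holds the brick, and repeat on parent directories if they carry the xattrs too:

    getfattr -m . -d -e hex /export/sdb1/brick              # see which xattrs are set
    setfattr -x trusted.glusterfs.volume-id /export/sdb1/brick
    setfattr -x trusted.gfid /export/sdb1/brick
    rm -rf /export/sdb1/brick/.glusterfs                    # if present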
16:03 laudo joined #gluster
16:04 brunoc and what about formatting that brick, is tha enough?
16:04 brunoc that*
16:05 semiosis maybe
16:05 semiosis try it
16:05 laudo Is gluster instant snapshot incremental?
16:12 theron joined #gluster
16:12 _polto_ semiosis: yep, some bricks are unavailable : http://pastie.org/9942215 , but /export/sda/brick is mounted on 192.168.98.1
16:17 nshaikh joined #gluster
16:18 brunoc @semiosis : I keep having the same error pattern: right after I re-created a brick, I only get the "failed" error, and if I try again, the error is "volume create: gv0: failed: /export/sdb1/brick or a prefix of it is already part of a volume"
16:20 deniszh1 joined #gluster
16:24 brunoc even if no volumes are present in gluster....
16:24 glusterbot News from newglusterbugs: [Bug 1192114] NFS I/O error when copying a large amount of data <https://bugzilla.redhat.com/show_bug.cgi?id=1192114>
16:27 kshlm joined #gluster
16:28 tberch_nbriter joined #gluster
16:29 glusterbot joined #gluster
16:30 semiosis brunoc: check the log after you get the failed message
16:32 brunoc @semiosis: http://pastie.org/private/xwqwg1ca0quskbxloy7bpg
16:33 semiosis Brick may be containing or be contained by an existing brick
16:33 semiosis maybe a parent of the brick directory has an xattr
16:34 semiosis as joe's blog post says, you need to remove that xattr from the directory and all its parents
16:35 brunoc Yes but using getfattr, I can see that there aren't any xattrs
16:36 kshlm joined #gluster
16:36 jmarley joined #gluster
16:37 semiosis must be there somewhere, i dont know what else to suggest :/
16:37 brunoc ok
16:37 brunoc I'm tired of this, I'm just gonna re-install the OS, we'll see...
16:37 semiosis good idea
16:38 _polto_ semiosis: do you have any idea how to repair this ? some bricks are unavailable : http://pastie.org/9942215 , but /export/sda/brick is mounted on 192.168.98.1
16:38 brunoc I've been at it for two days trying everything I see, with no result, so I guess there is something wrong somewhere else
16:38 semiosis _polto_: gluster volume start $volname force
16:38 semiosis _polto_: that should try to start the missing brick ,,(processes)
16:38 glusterbot _polto_: The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal).
16:39 hagarth joined #gluster
16:39 semiosis then you should get errors in the brick logs if the bricks fail to start
16:39 semiosis brick logs are on the servers in /var/log/glusterfs/bricks iirc
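A rough sketch of the sequence semiosis suggests, using the volume name fasterdata that comes up below; the brick log filename is assumed to follow the usual brick-path-with-dashes convention:

    gluster volume start fasterdata force     # retry starting any bricks that are down
    gluster volume status fasterdata          # every brick should now show as online
    tail -n 50 /var/log/glusterfs/bricks/export-sda-brick.log   # on the affected server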
16:40 stickyboy I was running a find / stat on a FUSE directory to start a healing, and after 2 hours it crashed, out of memory. Eeek.
16:40 stickyboy I didn't see the node's memory spike, maybe it was just a crash?
16:40 semiosis maybe
16:41 semiosis if you're using glusterfs 3.4 or newer you can do 'gluster volume heal $volname full' iirc to heal everything
16:41 stickyboy semiosis: It's a big volume, I was just trying to get a few users' home dirs.
16:41 stickyboy Would a stack trace be interesting?
16:41 stickyboy (from fuse mount log?)
16:42 semiosis not to me but maybe someone else
16:42 stickyboy Ok. :)
16:42 _polto_ semiosis: thanks ! repaired !
16:42 stickyboy semiosis: Yah, we're on 3.5.3.
16:43 stickyboy Anyways, more worrying, the glusterfs process died and held the file handle for the mount point. I had to restart the machine to re-mount the volume.
16:43 stickyboy It doesn't always happen, usually I just kill glusterfs PID and remount.
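For reference, a sketch of the two healing approaches discussed above, assuming a volume named myvol mounted at /mnt/myvol: the find/stat walk is the manual trigger stickyboy was running, while the heal commands hand the crawl to glustershd instead (3.4 and later):

    find /mnt/myvol/home/someuser -exec stat {} \; > /dev/null   # touch a subtree to trigger self-heal
    gluster volume heal myvol full   # let glustershd crawl and heal the whole volume
    gluster volume heal myvol info   # list entries still pending heal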
16:46 rotbeard joined #gluster
16:47 _polto_ semiosis: hmmm... bricks are started, but I still can not remove the files...
16:47 _polto_ rmdir: failed to remove ‘test_1’: Stale file handle
16:47 semiosis _polto_: try making a new client mount and see if it works there.  if it does, then you can remount the existing client
16:47 _polto_ I did remount
16:47 _polto_ umount
16:48 _polto_ and mount
16:48 semiosis hrm, well then i dont know
16:48 semiosis maybe client log file has more info
16:50 _polto_ semiosis: interesting ... from another client I have another error : rmdir: failed to remove ‘test_1/’: Transport endpoint is not connected
16:50 semiosis bricks died?
16:50 semiosis client log will say exactly which endpoint (brick) is disconnected
16:51 _polto_ gluster volume status fasterdata
16:51 _polto_ say all are online
16:51 semiosis that too
16:51 _polto_ ok, 2s
16:51 CyrilPeponnet hey guys, is there a proper way to bump op-version on 3.5.x ? for an unknown reason my op version is 2 and should be at least 3
16:51 CyrilPeponnet (or a script somewhere
16:52 CyrilPeponnet )
16:52 semiosis ,,(op version)
16:52 glusterbot I do not know about 'op version', but I do know about these similar topics: 'op-version'
16:52 semiosis ,,(op-version)
16:52 glusterbot The operating version represents the RPC and translator capabilities required to accommodate the volume settings ( http://gluster.org/community/documentation/index.php/OperatingVersions ). To allow older version clients to connect to newer servers, reset any volume options that require the newer op-version.
16:52 semiosis yay glusterbot!
16:52 CyrilPeponnet I know what it is :)
16:52 semiosis well i dont know anything about it
16:53 ndevos CyrilPeponnet: maybe like this? (untested) gluster volume set $VOLUME cluster.op-version 3
16:53 CyrilPeponnet 3.6 only
16:54 CyrilPeponnet :(
16:54 ndevos ah :-/
16:55 CyrilPeponnet I know that an update could bump it
16:55 _polto_ semiosis: got it :    0 d--------- 2 root  root        6 fév 12 16:49 test_1
16:55 glusterbot _polto_: d-------'s karma is now -1
16:55 CyrilPeponnet the point is that 3.5.2 should be op-version 3
16:55 _polto_ LOL
16:57 ndevos CyrilPeponnet: I dont know much about the op-version changing, did you send an email about it to the list yet?
16:57 CyrilPeponnet long time ago but no responses so far
16:57 ndevos kshlm and kp would be amoung the ones that should be able to help out
16:58 * ndevos probably misspelled "amoung"?
16:58 meghanam joined #gluster
16:59 ndevos CyrilPeponnet: if you can find it on http://www.gluster.org/pipermail/gluster-users/ or in an other archive, I can ask the glusterd devs to send a response
17:00 CyrilPeponnet sure
17:00 CyrilPeponnet thanks
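Since cluster.op-version can only be set via the CLI from 3.6 onwards, one workaround sometimes suggested for 3.5 is editing the operating-version field glusterd keeps on disk; a rough, untested sketch (back the file up first and apply it on every server):

    service glusterd stop
    sed -i 's/^operating-version=2$/operating-version=3/' /var/lib/glusterd/glusterd.info
    service glusterd start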
17:04 CyrilPeponnet is upgrading from 3.5.2 to 3.6 worth it for a production setup ?
17:06 tdasilva joined #gluster
17:20 gem joined #gluster
17:20 calisto joined #gluster
17:21 johnbot joined #gluster
17:23 rcampbel3 joined #gluster
17:25 PeterA joined #gluster
17:32 ron-slc joined #gluster
17:35 ekman- joined #gluster
17:48 Gill Hi JoeJulian you got a second?
17:56 semiosis Gill: just ask, when joe gets in he'll see it
17:57 Gill ah he helped me with something the other night. My reads to GlusterFS are really really slow
17:57 Gill I was wondering if there's a performance option I left out to make it read async or something
17:57 Gill my blocks are going across the internet though (VPN)
17:58 Gill think it makes more sense to create 2 nodes in each data center and then rsync across glusterFS nodes?
17:58 Gill will that even work?
18:03 jmarley joined #gluster
18:03 jmarley joined #gluster
18:16 SOLDIERz joined #gluster
18:17 dbruhn joined #gluster
18:17 Rapture joined #gluster
18:17 SOLDIERz joined #gluster
18:21 SOLDIERz joined #gluster
18:22 SOLDIERz joined #gluster
18:22 SOLDIERz joined #gluster
18:24 SOLDIERz joined #gluster
18:29 MacWinner joined #gluster
18:36 jiffin joined #gluster
18:40 ghenry joined #gluster
19:13 CyrilPeponnet Is there a way to allow writes only for certain IPs? and RO for others?
19:13 CyrilPeponnet either in nfs or glusterfs
19:14 CyrilPeponnet If not, can I set nfs RO and fuse gluster RW (and restrict allowed clients by subnet on glusterfs fuse)
19:16 ndevos CyrilPeponnet: in 3.7 this would make it possible for nfs: http://www.gluster.org/community/documentation/index.php/Features/Exports_Netgroups_Authentication
19:17 CyrilPeponnet Ok, in the meantime can I set a vol as nfs RO and mount using fuse as RW on a dedicated host?
19:18 ndevos yeah, I think that should work
19:18 ndevos or, you could use NFS-Ganesha and configure the export there, or well, Samba could do it too, I guess
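A minimal sketch of the interim setup discussed above, assuming a volume named myvol and a trusted subnet 10.0.0.* for the FUSE clients:

    gluster volume set myvol nfs.volume-access read-only   # all NFS clients become read-only
    gluster volume set myvol auth.allow 10.0.0.*           # only this subnet may mount via FUSE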
19:24 glusterbot News from resolvedglusterbugs: [Bug 1095596] doc: Stick to IANA standard while allocating brick ports <https://bugzilla.redhat.com/show_bug.cgi?id=1095596>
19:24 glusterbot News from resolvedglusterbugs: [Bug 1095330] Disabling NFS causes E level errors in nfs.log. <https://bugzilla.redhat.com/show_bug.cgi?id=1095330>
19:24 glusterbot News from resolvedglusterbugs: [Bug 1117256] [3.4.4] mounting a volume over NFS (TCP) with MOUNT over UDP fails <https://bugzilla.redhat.com/show_bug.cgi?id=1117256>
19:31 n-st joined #gluster
19:32 jmarley joined #gluster
19:39 badone__ joined #gluster
19:54 glusterbot News from newglusterbugs: [Bug 1022535] Default context for GlusterFS /run sockets is wrong <https://bugzilla.redhat.com/show_bug.cgi?id=1022535>
19:54 glusterbot News from newglusterbugs: [Bug 1036009] Client connection to Gluster daemon stalls <https://bugzilla.redhat.com/show_bug.cgi?id=1036009>
19:54 glusterbot News from newglusterbugs: [Bug 880241] Basic security for glusterd <https://bugzilla.redhat.com/show_bug.cgi?id=880241>
19:54 glusterbot News from newglusterbugs: [Bug 892808] [FEAT] Bring subdirectory mount option with native client <https://bugzilla.redhat.com/show_bug.cgi?id=892808>
19:54 glusterbot News from resolvedglusterbugs: [Bug 1184658] Debian client package not depending on attr <https://bugzilla.redhat.com/show_bug.cgi?id=1184658>
19:54 glusterbot News from resolvedglusterbugs: [Bug 1086460] Ubuntu code audit results (blocking inclusion in Ubuntu Main repo) <https://bugzilla.redhat.com/show_bug.cgi?id=1086460>
20:02 abyss^ I have a question about fix layout. I'd like to do a full fix layout because I added new bricks and I have to balance all data across all bricks.... But when doing a full fix layout gluster takes a lot of resources and the servers work really slowly, so clients get data very slowly as well... Is this the right behavior? Can I do something to mitigate the whole process?
20:09 abyss^ JoeJulian: semiosis ?;)
20:20 partner fix layout shouldn't take much resources
20:20 partner not that i've experienced rebalance to take much either, its not that fast
20:21 partner if things are saturated already that might make a difference
20:22 partner but these are two completely separate things, a rebalance and (rebalance) fix-layout
20:23 partner unless "full fix layout" is something new i don't know what that means, i'm stuck with older version
20:24 glusterbot News from newglusterbugs: [Bug 1044648] glfs_set_volfile_server() should accept NULL as transport <https://bugzilla.redhat.com/show_bug.cgi?id=1044648>
20:24 glusterbot News from newglusterbugs: [Bug 1057295] glusterfs doesn't include firewalld rules <https://bugzilla.redhat.com/show_bug.cgi?id=1057295>
20:24 glusterbot News from newglusterbugs: [Bug 1098786] glfs_init() returns 1 if brick host cannot be resolved and doesn't set errno <https://bugzilla.redhat.com/show_bug.cgi?id=1098786>
20:24 glusterbot News from newglusterbugs: [Bug 1098787] when compiled with --disable-xml-output "gluster" cli tool returns human readable output instead of error <https://bugzilla.redhat.com/show_bug.cgi?id=1098787>
20:24 glusterbot News from newglusterbugs: [Bug 1133073] High memory usage by glusterfs processes <https://bugzilla.redhat.com/show_bug.cgi?id=1133073>
20:24 glusterbot News from newglusterbugs: [Bug 1093217] [RFE] Gluster module (purpleidea) to support HA installations using Pacemaker <https://bugzilla.redhat.com/show_bug.cgi?id=1093217>
20:24 glusterbot News from resolvedglusterbugs: [Bug 1081018] glusterd needs xfsprogs and e2fsprogs packages <https://bugzilla.redhat.com/show_bug.cgi?id=1081018>
20:24 glusterbot News from resolvedglusterbugs: [Bug 1109917] prefer gf_time_fmt() instead of strftime() <https://bugzilla.redhat.com/show_bug.cgi?id=1109917>
20:24 glusterbot News from resolvedglusterbugs: [Bug 1125245] GlusterFS 3.4.6 Tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1125245>
20:24 glusterbot News from resolvedglusterbugs: [Bug 1123289] crash on fsync <https://bugzilla.redhat.com/show_bug.cgi?id=1123289>
20:24 glusterbot News from resolvedglusterbugs: [Bug 1112844] OOM: observed for fuse client process (glusterfs) when one brick from replica pairs were offlined and high IO was in progress from client <https://bugzilla.redhat.com/show_bug.cgi?id=1112844>
20:24 glusterbot News from resolvedglusterbugs: [Bug 1133266] remove unused parameter and correctly handle mem alloc failure <https://bugzilla.redhat.com/show_bug.cgi?id=1133266>
20:27 B21956 joined #gluster
20:36 coredump joined #gluster
20:39 abyss^ partner: I meant rebalance without fix layout - my bad
20:39 abyss^ so fix layout and migrate data
20:40 abyss^ when I do gluster volume rebalance myvolume start on 4 TB disks the glusterfs is very slow
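For clarity, a sketch of the two separate operations partner distinguishes above, using the volume name abyss^ mentions:

    gluster volume rebalance myvolume fix-layout start   # only recalculates the directory hash layout
    gluster volume rebalance myvolume start              # fix-layout plus migration of existing data
    gluster volume rebalance myvolume status             # watch progress and file counts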
20:42 Gill Hey guys… is it possible to rsync gluster volumes?
20:48 noddingham joined #gluster
20:49 pstallworth left #gluster
20:50 dockbram joined #gluster
21:05 madebymarkca joined #gluster
21:10 abyss^ Gill: two different volumes?
21:11 Gill yes
21:11 Gill 2 local volumes basically
21:11 Gill then i need a way to syn between the DCs
21:13 andreask1 joined #gluster
21:14 andreask joined #gluster
21:14 SOLDIERz joined #gluster
21:17 abyss^ Gill: only between clients, so, the answer is yes. You have to mount the first volume, then the second, and rsync
21:18 Gill ah ok cool thanks abyss^
21:20 abyss^ np;)
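A rough sketch of what abyss^ describes, assuming hypothetical servers dc1-gluster and dc2-gluster, one volume per datacenter, both mounted on the same client:

    mount -t glusterfs dc1-gluster:/vol1 /mnt/vol1
    mount -t glusterfs dc2-gluster:/vol2 /mnt/vol2
    rsync -av --delete /mnt/vol1/ /mnt/vol2/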
21:24 madebymarkca missed the start of this convo, but rsnapshot is also good
21:25 dbruhn Gill is there any reason you aren't using the built in replication in gluster
21:25 Gill because there is crazy latency between datacenters
21:25 dbruhn The asynchronous one, not the synchronous one.
21:26 Gill didnt know theres an async one
21:26 dbruhn Yep Geo-replication
21:26 dbruhn it's for exactly what you are talking about
21:26 Gill ah but thats master-slave
21:31 dbruhn I've never used the Geo-Rep personally, but might be worth testing and seeing if you can make it work how you want. It's based on rsync but with the newer version they have broken it out to be multi-threaded instead of single-threaded, which improves performance a ton.
21:31 dbruhn At least last time I read anything about Geo-Rep
21:34 badone__ joined #gluster
21:57 tinymouse joined #gluster
21:59 Pupeno joined #gluster
21:59 Pupeno joined #gluster
22:07 Gill dbruhn: thanks man ill give it a read
22:11 Pupeno_ joined #gluster
22:17 partner geo-rep is kind of an "intelligent rsync" that syncs only changed files and does not attempt to do the whole volume on a regular basis as "normal" rsync would
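A minimal sketch of setting up the geo-replication dbruhn and partner mention, assuming a master volume mastervol and a slave volume slavevol on a hypothetical host dc2-host in the other datacenter (syntax as of the 3.5 series):

    gluster volume geo-replication mastervol dc2-host::slavevol create push-pem
    gluster volume geo-replication mastervol dc2-host::slavevol start
    gluster volume geo-replication mastervol dc2-host::slavevol status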
22:43 MugginsM joined #gluster
22:44 Pupeno joined #gluster
22:44 Pupeno joined #gluster
22:58 Rapture joined #gluster
23:18 siel joined #gluster
23:20 gildub_ joined #gluster
23:20 social joined #gluster
23:27 gildub joined #gluster
23:32 theron joined #gluster
23:43 NeVR-C joined #gluster
23:51 alexosaurus joined #gluster
23:51 alexosaurus hi, what are the guidelines for interoperability of gluster releases? any problems with using 3.5 client with 3.6 server?
