
IRC log for #gluster, 2014-10-17


All times shown according to UTC.

Time Nick Message
00:32 justinmburrous joined #gluster
00:43 Gorian say, any reason both bricks in a redundant volume would suddenly go offline? Especially when the other bricks for other volumes on the same disks on the same nodes are online?
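
A hedged starting point for a question like this (not an answer given in the channel, just common first steps; the volume name redvol is a placeholder): check which bricks glusterd thinks are offline, read the affected brick logs, and restart only the missing brick processes.

    # show per-brick online status and PIDs for the affected volume
    gluster volume status redvol
    # brick logs are named after the brick path; look for why the process exited
    less /var/log/glusterfs/bricks/*.log
    # "start ... force" only spawns brick processes that are not running; healthy bricks are untouched
    gluster volume start redvol force
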
00:53 justinmburrous joined #gluster
00:54 RioS2 joined #gluster
01:13 jiffe joined #gluster
01:13 harish joined #gluster
01:17 topshare joined #gluster
01:23 RicardoSSP joined #gluster
01:24 rjoseph joined #gluster
01:48 haomaiwa_ joined #gluster
01:51 verdurin joined #gluster
01:51 lyang0 joined #gluster
01:53 Kins joined #gluster
01:56 justinmburrous joined #gluster
02:02 msmith_ joined #gluster
02:04 haomai___ joined #gluster
02:16 haomaiwang joined #gluster
02:28 justinmburrous joined #gluster
02:30 bharata-rao joined #gluster
02:35 justinmburrous joined #gluster
03:24 julim joined #gluster
03:24 kanagaraj joined #gluster
03:26 RameshN joined #gluster
03:35 SOLDIERz joined #gluster
03:48 itisravi joined #gluster
03:54 rjoseph joined #gluster
03:59 nbalachandran joined #gluster
04:06 calisto joined #gluster
04:10 nbalachandran joined #gluster
04:21 bigred15 joined #gluster
04:33 anoopcs joined #gluster
04:34 ndarshan joined #gluster
04:35 bigred15 joined #gluster
04:36 bigred15 Any ideas on how I can troubleshoot this error: Launching heal operation to perform index self heal on volume has been unsuccessful?
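
No answer follows in the log; as a hedged sketch, the usual places to look are the self-heal daemon and glusterd logs, plus the heal status itself (VOLNAME is a placeholder; log paths may vary by packaging):

    # self-heal daemon log usually shows why the heal launch failed
    less /var/log/glusterfs/glustershd.log
    # glusterd log on the node where the command was run
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    # list the entries currently pending heal
    gluster volume heal VOLNAME info
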
04:38 Gorian joined #gluster
04:40 jiffin joined #gluster
04:42 brettnem joined #gluster
04:45 lalatenduM joined #gluster
04:52 ppai joined #gluster
04:54 spandit joined #gluster
04:57 saurabh joined #gluster
05:07 shubhendu_ joined #gluster
05:08 Gorian joined #gluster
05:10 nishanth joined #gluster
05:14 lmickh joined #gluster
05:15 kdhananjay joined #gluster
05:18 atinmu joined #gluster
05:18 overclk joined #gluster
05:22 tryggvil joined #gluster
05:24 SOLDIERz joined #gluster
05:25 SOLDIERz joined #gluster
05:28 tryggvil joined #gluster
05:30 karnan joined #gluster
05:30 ramteid joined #gluster
05:30 justinmburrous joined #gluster
05:33 cristov_mac joined #gluster
05:33 cristov_mac joined #gluster
05:34 cristov_mac joined #gluster
05:35 atinmu joined #gluster
05:39 aravindavk joined #gluster
05:50 pkoro_ joined #gluster
05:50 RaSTar joined #gluster
05:51 karnan joined #gluster
06:03 dusmant joined #gluster
06:04 sputnik13 joined #gluster
06:04 soumya joined #gluster
06:08 kshlm joined #gluster
06:13 atalur joined #gluster
06:17 shubhendu_ joined #gluster
06:19 ndarshan joined #gluster
06:19 nishanth joined #gluster
06:23 Slydder joined #gluster
06:26 Philambdo joined #gluster
06:28 rgustafs joined #gluster
06:34 raghu joined #gluster
06:35 ricky-ticky joined #gluster
06:35 anoopcs joined #gluster
06:43 kumar joined #gluster
06:55 atinmu joined #gluster
06:58 rjoseph joined #gluster
06:58 Gorian joined #gluster
07:00 spandit joined #gluster
07:17 Fen2 joined #gluster
07:20 LebedevRI joined #gluster
07:24 nishanth joined #gluster
07:24 shubhendu_ joined #gluster
07:27 ndarshan joined #gluster
07:29 nshaikh joined #gluster
07:31 anoopcs joined #gluster
07:36 zerick joined #gluster
07:42 rjoseph joined #gluster
07:42 SOLDIERz joined #gluster
07:54 atinmu joined #gluster
08:07 liquidat joined #gluster
08:08 kumar joined #gluster
08:12 Slashman joined #gluster
08:31 deepakcs joined #gluster
08:31 mageru joined #gluster
08:33 shubhendu joined #gluster
08:40 glusterbot` joined #gluster
08:43 glusterbot joined #gluster
08:47 vimal joined #gluster
08:49 aravindavk joined #gluster
08:49 Alpinist joined #gluster
08:49 atinmu joined #gluster
08:51 dusmant joined #gluster
08:52 rjoseph joined #gluster
08:54 dusmant joined #gluster
08:55 Gorian joined #gluster
08:55 spandit joined #gluster
08:59 Gorian joined #gluster
09:10 bharata_ joined #gluster
09:23 rgustafs joined #gluster
09:24 aravindavk joined #gluster
09:24 DV joined #gluster
09:33 glusterbot New news from newglusterbugs: [Bug 1154017] Fix race between rdma_disconnect and other dependant operations <https://bugzilla.redhat.com/show_bug.cgi?id=1154017>
09:36 atinmu joined #gluster
09:38 Gorian joined #gluster
09:39 DV joined #gluster
09:40 anands joined #gluster
09:40 SOLDIERz joined #gluster
09:43 xandrea joined #gluster
09:43 Gorian joined #gluster
09:43 xandrea hi all
09:44 xandrea just a question… but with replica 2… aren't the two bricks synchronized bi-directionally?
09:50 Champi joined #gluster
09:50 Gorian joined #gluster
09:58 Gorian joined #gluster
09:59 calum_ joined #gluster
10:00 haomaiwang joined #gluster
10:03 glusterbot New news from newglusterbugs: [Bug 1151384] Rebalance fails to complete - stale file handles after 202,908 files <https://bugzilla.redhat.com/show_bug.cgi?id=1151384>
10:08 lalatenduM joined #gluster
10:09 calisto joined #gluster
10:16 haomaiw__ joined #gluster
10:19 ndarshan joined #gluster
10:20 Gorian joined #gluster
10:21 haomaiwa_ joined #gluster
10:36 charta joined #gluster
10:37 haomaiw__ joined #gluster
10:44 rjoseph joined #gluster
10:47 sijis joined #gluster
11:05 lalatenduM joined #gluster
11:06 spandit joined #gluster
11:08 virusuy joined #gluster
11:09 dusmant joined #gluster
11:09 nishanth joined #gluster
11:15 mojibake joined #gluster
11:15 harish joined #gluster
11:16 Gorian joined #gluster
11:17 ppai joined #gluster
11:21 ndarshan joined #gluster
11:22 Alpinist joined #gluster
11:24 fattaneh1 joined #gluster
11:28 ctria joined #gluster
11:28 Gorian joined #gluster
11:30 bala joined #gluster
11:39 XpineX joined #gluster
11:40 nishanth joined #gluster
11:41 fattaneh1 left #gluster
11:49 ppai joined #gluster
11:50 klaxa joined #gluster
11:56 klaxa left #gluster
11:59 davemc joined #gluster
12:01 bala joined #gluster
12:02 Gorian joined #gluster
12:03 kanagaraj joined #gluster
12:04 calum_ joined #gluster
12:04 soumya joined #gluster
12:10 ppai joined #gluster
12:13 itisravi joined #gluster
12:16 inodb joined #gluster
12:17 plarsen joined #gluster
12:18 theron joined #gluster
12:20 dguettes joined #gluster
12:22 plarsen joined #gluster
12:25 Gorian joined #gluster
12:28 siel joined #gluster
12:30 B21956 joined #gluster
12:31 kanagaraj joined #gluster
12:38 Gorian joined #gluster
12:44 bennyturns joined #gluster
12:55 charta joined #gluster
13:01 capri joined #gluster
13:03 mojibake1 joined #gluster
13:05 diegows joined #gluster
13:06 Gorian joined #gluster
13:08 SOLDIERz joined #gluster
13:10 haomaiwang joined #gluster
13:10 ira joined #gluster
13:11 cristov_mac joined #gluster
13:12 cristov_mac joined #gluster
13:13 cristov_mac joined #gluster
13:14 cristov_mac joined #gluster
13:17 Gorian joined #gluster
13:21 Fen1 joined #gluster
13:25 rolfb joined #gluster
13:27 cjanbanan joined #gluster
13:27 Gorian joined #gluster
13:30 edong23 joined #gluster
13:37 cjanbanan How will I know that my filter script is in use? The path given in http://www.gluster.org/community/documentation/index.php/Glusterfs-filter seems a bit strange, as I had to create some of the directories as well as the script.
13:37 mat1010 joined #gluster
13:39 doubt joined #gluster
13:39 _Bryan_ joined #gluster
13:41 msmith_ joined #gluster
13:42 Gorian joined #gluster
13:49 mojibake joined #gluster
13:50 davidhadas joined #gluster
13:50 soumya__ joined #gluster
13:52 Alpinist joined #gluster
13:53 sauce joined #gluster
13:53 sauce joined #gluster
13:55 mojibake joined #gluster
13:58 cristov_mac joined #gluster
13:59 kkeithley cjanbanan: why do you think the path looks strange?
13:59 cristov_mac joined #gluster
14:00 kkeithley If you build from source and run `make install` then it might want to be /usr/_local_/....
14:00 cristov_mac joined #gluster
14:00 BlackPanx joined #gluster
14:00 BlackPanx hello guys
14:00 BlackPanx does gluster report through inotify when a file gets successfully written to disk?
14:01 BlackPanx or something similar ?
14:01 cristov_mac joined #gluster
14:01 BlackPanx so that all nodes have the file
14:01 BlackPanx and there will be no problem if i access any of the nodes for this file immediately after it was written.
14:01 kkeithley but if you install RPMs or DPKGs from download.gluster.org or from semiosis' PPA then /usr/lib(64)/glusterfs/... is correct
14:01 cristov_mac joined #gluster
14:02 cristov_mac joined #gluster
14:03 cjanbanan Well, I had to create the directories $VERSION (3.4.2) and filter. I expected them to exist already.
14:05 cjanbanan I can't see any sign of my script being used, that's why I suspect that my path is wrong.
14:06 kkeithley @later tell semiosis IIRC I only removed usr/sbin/glfsheal from glusterfs-server.install
14:06 glusterbot kkeithley: The operation succeeded.
14:06 bennyturns joined #gluster
14:12 bene joined #gluster
14:22 ekuric joined #gluster
14:23 kkeithley no, nothing creates the /usr/lib64/glusterfs/3.x.y/filter directory. glusterd will opendir to see if it exists though, and then readdir to find any filters. It calls access(path, X_OK); so if it doesn't have the exec bit set it won't even try to run it.
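
A minimal sketch of the install step kkeithley describes, assuming a packaged 3.4.2 build under /usr/lib64 (use /usr/lib on a 32-bit box, as comes up just below); myfilter.sh is a placeholder name:

    # nothing creates the filter directory; make it next to the installed version directory
    mkdir -p /usr/lib64/glusterfs/3.4.2/filter
    cp myfilter.sh /usr/lib64/glusterfs/3.4.2/filter/
    # glusterd calls access(path, X_OK), so without the exec bit the filter is silently skipped
    chmod +x /usr/lib64/glusterfs/3.4.2/filter/myfilter.sh
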
14:25 jobewan joined #gluster
14:26 davemc joined #gluster
14:28 cjanbanan Well in my case it's /usr/lib.
14:29 cjanbanan ...and all required bits are set.
14:31 kkeithley 32-bit machine?
14:32 cjanbanan VirtualBox. Guess I configured it that way some time ago.
14:34 glusterbot New news from newglusterbugs: [Bug 1154098] Bad debian sources.list configuration in multiarch context <https://bugzilla.redhat.com/show_bug.cgi?id=1154098>
14:36 AndChat-81776 joined #gluster
14:41 sputnik13 joined #gluster
14:47 cjanbanan Any log file which will tell me if the script was executed or not?
14:47 failshell joined #gluster
14:48 kkeithley cjanbanan: seems to be working for me. Fedora 20 VM, 64-bit
14:49 cjanbanan How can you tell?
14:50 cjanbanan Do I have to restart the volume?
14:51 kkeithley I created my brick in /var/tmp/bricks. the sed cmd is `sed -i -e s/tmp/tmpfoo/ "$1"`
14:52 kkeithley I did a `gluster volume set $volname nfs.disable true`
14:52 kkeithley after that all the lines /var/tmp/bricks/... were changed to /var/tmpfoo/bricks/...
14:52 kkeithley in the volfile
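
A hedged way to confirm the filter actually ran, following kkeithley's test ($volname stands for the real volume name): a volume-set regenerates the volfiles, which is when glusterd runs any filters over them.

    # trigger a volfile regeneration (this particular option is just the one from the example)
    gluster volume set $volname nfs.disable true
    # the filter's edit should now be visible in the regenerated volfiles
    grep tmpfoo /var/lib/glusterd/vols/$volname/*.vol
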
14:53 cjanbanan Where is the volfile?
14:53 shubhendu joined #gluster
14:53 kkeithley /var/lib/glusterd/vols/$volname/$volname.$hostname.var-tmp-bricks*.vol
14:55 sputnik13 joined #gluster
14:57 cjanbanan In that dir I have two vol files describing the bricks and one which probably describes the glusterfs volume.
14:59 cjanbanan The latter is the only one which contains the word replicate, but no sign of my filtering inside. I've verified the script manually though.
15:01 semiosis :O
15:01 cjanbanan My vol file is called gvol-fuse.vol and then there's even a trusted-gvol-fuse.vol.
15:01 cjanbanan All these vol files make me confused.
15:02 theron joined #gluster
15:03 doctorwedgeworth joined #gluster
15:04 doctorwedgeworth is there any way to get gluster client 3.4 to talk to gluster server 3.2?
15:09 kkeithley *-fuse.vol (and trusted-*-fuse.vol) is the volfile that's sent to the clients for gluster native (fuse) mounts
15:11 bala joined #gluster
15:11 jobewan joined #gluster
15:12 cjanbanan Guess I get this (kind of a) mess because I run the server and client on the same VM. Two VMs though to implement a replicated volume.
15:12 anands joined #gluster
15:19 cjanbanan In that case I guess my vol files are the ones called gvol.mpa.data.vol and gvol.mpb.data.vol. But I expect nothing to be changed in those as the subvolume which includes the keyword replicated is not part of those.
15:20 cjanbanan (Hosts are called mpa and mpb and the /data are the bricks.)
15:21 kkeithley yes, those are the server-side vol files used by the glusterfsd servers
15:22 lpabon joined #gluster
15:22 cjanbanan But I would expect the favorite-child option to exist on the server side(?).
15:23 kkeithley if you're using gluster native mounts for the client (versus NFS) then you'll see glusterfs fuse bridge daemons, and they use the gvol-fuse.vol files
15:23 cjanbanan My script won't affect those server-side vol files.
15:24 cjanbanan That makes sense. I don't use NFS so far.
15:29 SOLDIERz joined #gluster
15:30 cjanbanan I thought that the favorite-child option should be placed within the volume gvol-replicate-0 ... endvolume part of the volume file.
15:31 kkeithley I don't know, I'm not very familiar with that option
15:32 cjanbanan OK. Thanks anyway for helping me out. Time to quit for today...
15:33 JoeJulian Good news kkeithley. cjanbanan says your day is over already. ;)
15:33 kkeithley woohoo
15:33 kkeithley I'll just mention that to my boss
15:33 vimal joined #gluster
15:33 cjanbanan It's 'already' getting dark here. :-)
15:33 kkeithley on my way out the door
15:34 kkeithley "here" is where?
15:34 cjanbanan Sweden.
15:34 kkeithley east and north, double whammy
15:35 cjanbanan Take care guys! :-)
15:35 _dist joined #gluster
15:36 jskinner joined #gluster
15:40 msmith_ joined #gluster
15:43 daMaestro joined #gluster
15:50 MacWinner joined #gluster
15:54 aravindavk joined #gluster
15:56 doctorwedgeworth kkeithley, sorry I got a little waylaid and forgot I had a window open here. So mounting from a newer client should work if I replace "mount server:files /files" with "mount server:files-fuse.vol /files"?
16:02 zerick joined #gluster
16:04 semiosis doctorwedgeworth: kkeithley was speaking to cjanbanan.  unfortunately for you 3.2 and 3.4 are incompatible :(
16:06 doctorwedgeworth :( what about 3.2 and 3.3? or can I serve the same gluster mount with two versions of gluster at the same time? I can't update all of the servers at once but I'm not sure how else I can approach this. Possibly do the fileservers first and backport gluster
16:07 dtrainor joined #gluster
16:10 semiosis doctorwedgeworth: ,,(3.3 upgrade notes)
16:10 glusterbot doctorwedgeworth: http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/
16:10 semiosis 1) GlusterFS 3.3.0 is not compatible with any earlier released versions. Please make sure that you schedule a downtime before you upgrade.
16:10 doctorwedgeworth oh ick. Thanks though
16:10 semiosis yw
16:11 semiosis also, fwiw, ,,(3.4 upgrade notes)
16:11 glusterbot http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
16:25 sputnik13 joined #gluster
16:26 soumya__ joined #gluster
16:34 kumar joined #gluster
16:43 cmtime joined #gluster
16:50 msmith_ joined #gluster
16:51 lmickh joined #gluster
16:58 lalatenduM joined #gluster
16:59 PeterA joined #gluster
17:11 calisto joined #gluster
17:18 coredump joined #gluster
17:21 doctorwedgeworth semiosis (or anyone): if I export the gluster mount as NFS, is that likely to cause problems? it's complaining so far. I've seen a couple of workarounds but I wanted to check in case those limitations are there for a reason
17:21 semiosis gluster has a built in nfs server
17:21 semiosis ,,(nfs)
17:21 glusterbot To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
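
A hedged client-side example of what glusterbot's factoid describes (server name, volume name, and mount point are placeholders; the service commands are EL-style and distro-specific):

    # on the server: an rpc port mapper must be running, the kernel NFS server must not be
    service rpcbind start
    service nfs stop
    # on the client: Gluster's built-in NFS server speaks NFSv3 over TCP
    mount -t nfs -o vers=3,tcp server1:/VOLNAME /mnt/VOLNAME
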
17:24 doctorwedgeworth says requested NFS version or transport protocol is not supported with nfs-kernel-server stopped
17:24 glusterbot doctorwedgeworth: make sure your volume is started. If you changed nfs.disable, restarting your volume is known to work.
17:24 doctorwedgeworth this is a good bot
17:25 semiosis wow, i've never seen glusterbot do that before!
17:26 doctorwedgeworth gluster> volume set files nfs.disable off      Set volume unsuccessful
17:27 kumar doctorwedgeworth: can you check the glusterd logs and see why the command is unsuccessful?
17:29 doctorwedgeworth heh
17:29 doctorwedgeworth so I tailed all the log files and ran it again looking through all the logs to see why it failed, nothing there
17:29 doctorwedgeworth because it hadn't failed the second time, and now mounting works
17:29 doctorwedgeworth thanks a lot!
17:29 semiosis yw
17:31 sputnik13 joined #gluster
17:46 sputnik13 joined #gluster
17:54 eromero joined #gluster
17:54 eromero Hi folks is it possible to rename a volume on 3.5.2 ?
17:59 cjanbanan joined #gluster
18:05 sputnik13 joined #gluster
18:06 davidhadas_ joined #gluster
18:14 fattaneh joined #gluster
18:16 jobewan joined #gluster
18:21 _dist joined #gluster
18:22 _dist I was wondering if there is any way to stop a single brick, even if it's a hack
18:23 msvbhat eromero: renaming a volume is not possible.
18:24 msvbhat _dist: You can kill the glusterfsd process exporting that brick.
18:25 eromero thanks msvbhat, looks like I would have to do it by hand on the fs.
18:25 _dist msvbhat: the problem is I want to stop a brick on a server that runs other bricks I don't want to stop
18:26 msvbhat eromero: You will have to rename all the volfiles and the volume's related config files.
18:26 msvbhat eromero: But just curious, why do you want to rename a volume?
18:27 msvbhat _dist: Yeah, find the pid of the brick and kill it using 'kill' command
18:28 sputnik13 joined #gluster
18:28 msvbhat _dist: Use ps to list the pid of glusterfsd process
18:28 msvbhat _dist: And then use 'kill $PID' to kill the process
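
A hedged sketch of msvbhat's suggestion: gluster volume status prints one glusterfsd PID per brick, which avoids grepping ps output for the right brick path (VOLNAME and the PID are placeholders):

    # the Pid column identifies the one brick process to stop
    gluster volume status VOLNAME
    # stop only that brick; other bricks on the same node keep running
    kill <PID-of-the-brick>
    # later, "start ... force" respawns only the bricks that are down
    gluster volume start VOLNAME force
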
18:28 eromero msvbhat: well, first time using glusterfs, so I was testing and got everything configured and working, but used a dumb (non-descriptive) volume name… now that everything is working I was wondering if I could just rename it to something useful. I saw there was a volume rename in previous versions of gluster (the man page still shows it)
18:30 eromero now, into a real problem: I'm using auth.allow: 10.132.210.91,10.132.210.92 to limit the access to the volume to those two clients (so far)
18:30 eromero if I add it  10.132.210.91 can no longer connect
18:30 msvbhat eromero: I'm sure that is outdated. That feature is not there.
18:31 eromero if I set it to * with gluster> volume set datapoint auth.allow *, then 10.132.210.91 can connect again
18:32 eromero I see in the logs that the connection is comming from that ip: 0-datapoint-client-1: Connected to 10.132.210.91
18:32 eromero so not sure what's going on
18:32 msvbhat eromero: When you say you added the new IP, did you set it using all three IPs or just one?
18:33 chirino joined #gluster
18:33 msvbhat eromero: Because AFAIK, If you run gluster> volume set datapoint auth.allow 10.132.210.91
18:33 eromero I set it with: gluster> volume set datapoint auth.allow 10.132.210.91,10.132.210.92
18:33 msvbhat It will just allow that IP.
18:34 eromero and verify that shows on info: http://fpaste.org/142970/57085414/
18:34 glusterbot Title: #142970 Fedora Project Pastebin (at fpaste.org)
18:34 msvbhat eromero: And then you can't connect from that client?
18:34 eromero yeah, just as soon as I add that
18:35 eromero (umount/mount again)
18:35 eromero I get: 0-datapoint-client-1: SETVOLUME on remote-host failed: Authentication failed after trying to mount again on the logs
18:39 msvbhat eromero: Hmm... Can you share last few lines of client mount logs?
18:40 msvbhat eromero: And probably send a mail to gluster-users-list as well?
18:41 eromero sure thing, just was wondering if the auth.allow was in the right format
18:41 eromero (using 3.5.2)
18:42 msvbhat eromero: Well, I'm not 100% sure. Someone would know.
18:43 * msvbhat tries to search the right format for auth.allow
18:43 eromero I also tried with: auth.allow: 10.132.210.* but same behavior
18:43 eromero what I'm going to try is setting up another client, see if it's client-specific
18:46 chirino joined #gluster
18:46 sputnik13 joined #gluster
18:47 msvbhat eromero: client-specific meaning?
18:47 eromero something I f*ckd up setting up on that client xD
18:48 msvbhat eromero: Oh :D
18:48 eromero hehe
18:49 eromero i'll get back with the info
18:49 eromero man, I missed IRC ^_^
18:50 _dist left #gluster
18:50 msvbhat eromero: Sure. Send a mail as well. If not resolved.
18:50 eromero will do
18:50 _dist joined #gluster
18:50 msvbhat to gluster users list
18:51 eromero does the bot knows the list email?
18:51 eromero @glusterbot help
18:52 eromero LoL guess I'll google it, I already know google's interface
18:52 msvbhat eromero: gluster-users@gluster.org
18:52 eromero thanks again msvbhat!
18:53 semiosis @mailing list
18:53 glusterbot semiosis: the gluster general discussion mailing list is gluster-users, here: http://www.gluster.org/mailman/listinfo/gluster-users
18:53 JoeJulian You're doing it right. I'm not sure why it's not working. Did you check the brick log?
18:53 semiosis glusterbot: thx
18:53 glusterbot semiosis: you're welcome
18:53 JoeJulian Or... no... probably the glusterd log.
18:54 msvbhat Yeah. Check glusterd logs of the node from which you are trying to mount
18:54 msvbhat eromero: ^^
18:56 eromero @pastebin
18:56 glusterbot eromero: I do not know about 'pastebin', but I do know about these similar topics: 'paste', 'pasteinfo'
18:56 eromero @pasteinfo
18:56 glusterbot eromero: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
18:57 firemanxbr joined #gluster
18:59 eromero JoeJulian: https://dpaste.de/QBuM
18:59 glusterbot Title: dpaste.de: Snippet #287551 (at dpaste.de)
19:00 anands joined #gluster
19:02 sputnik13 joined #gluster
19:03 msmith__ joined #gluster
19:04 msvbhat eromero: The logs seems to be from client mount logs. Aren't they?
19:04 eromero yeah
19:04 msvbhat eromero: Can you  check glusterd logs from the node which you specify in mount command?
19:04 msvbhat it might have a clue why the authentication failed
19:07 dtrainor joined #gluster
19:08 eromero https://dpaste.de/Mfoo
19:08 glusterbot Title: dpaste.de: Snippet #287553 (at dpaste.de)
19:08 eromero i see this: 0-auth: no authentication module is interested in accepting remote-client (null)
19:13 msvbhat eromero: 10.132.210.92 Is a server or client? Or both?
19:13 sputnik13 joined #gluster
19:13 eromero both
19:13 eromero both boxes are client and server so far
19:14 msvbhat eromero: Oh, I'm not entirely sure how auth.allow works.
19:15 eromero I guess I'll leave it open and just block it via firewall
19:15 eromero while I get it figured out
19:17 sputnik13 joined #gluster
19:18 nshaikh joined #gluster
19:18 msvbhat eromero: AFAIK, mounting from within the trusted storage pool doesn't trigger authentication.
19:19 msvbhat eromero: but from an external IP, the client does get authenticated.
19:19 msvbhat eromero: This might be a bug as well.
19:21 eromero there's a bug for this, but it's marked as closed/applied already: https://bugzilla.redhat.com/show_bug.cgi?id=810179
19:21 glusterbot Bug 810179: high, unspecified, 3.3.0beta, kaushal, CLOSED CURRENTRELEASE, auth.allow/reject is not working as expected when list of ip_address is specified
19:21 eromero I'll dig into it, thanks again folks!
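
For reference, the auth.allow forms tried in this exchange, as a hedged sketch (the value is a comma-separated address list or a wildcard pattern; whether already-mounted clients pick up a change without a remount has varied across releases, see the bug linked just above):

    # comma-separated list of client addresses
    gluster volume set datapoint auth.allow 10.132.210.91,10.132.210.92
    # wildcard pattern
    gluster volume set datapoint auth.allow '10.132.210.*'
    # drop the option back to its default (allow all)
    gluster volume reset datapoint auth.allow
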
19:23 sputnik13 joined #gluster
19:32 DV joined #gluster
19:33 nshaikh joined #gluster
19:37 DV joined #gluster
19:42 pkoro_ joined #gluster
19:46 dtrainor joined #gluster
19:50 ctria joined #gluster
19:54 harish joined #gluster
19:54 sputnik13 joined #gluster
20:01 theron joined #gluster
20:01 calisto joined #gluster
20:02 dtrainor joined #gluster
20:10 a2 joined #gluster
20:10 cjanbanan joined #gluster
20:14 davemc joined #gluster
20:16 dtrainor joined #gluster
20:17 chirino joined #gluster
20:31 theron joined #gluster
20:32 theron joined #gluster
20:38 theron joined #gluster
20:55 chirino joined #gluster
20:57 theron joined #gluster
20:59 davidhadas__ joined #gluster
21:14 anands joined #gluster
21:22 theron joined #gluster
21:25 theron joined #gluster
21:45 theron joined #gluster
21:46 _dist left #gluster
21:53 semiosis not really a gluster issue, but seeing a weird behavior from an XFS filesystem where I can read any file by name but listing any directory just hangs
21:54 semiosis this filesystem is from a snapshot taken of a live & writable mounted filesystem
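
Not an answer given in the channel, only a hedged guess at a first thing to try: a snapshot of a live XFS filesystem typically has a dirty log and carries the same UUID as its origin, and XFS has mount options for exactly that case (device and mount point are placeholders):

    # read-only, skip log replay, and ignore the UUID inherited from the origin filesystem
    mount -t xfs -o ro,norecovery,nouuid /dev/vg0/brick-snap /mnt/snap
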
22:02 theron joined #gluster
22:05 calisto joined #gluster
22:34 calisto1 joined #gluster
22:49 cjanbanan joined #gluster
22:51 sputnik13 joined #gluster
23:21 dtrainor joined #gluster
23:29 badone joined #gluster
23:36 DV joined #gluster
23:44 cjanbanan joined #gluster
23:46 bala joined #gluster
23:48 dtrainor joined #gluster
