
IRC log for #gluster, 2015-02-28


All times shown according to UTC.

Time Nick Message
00:08 JoeJulian kminooie: 3.2 to 3.6 shouldn't be too much of a jump. I would wait for 3.6.3, though, before making that jump.
00:13 ckannan joined #gluster
00:21 kminooie jackdpeterson: mount your fs rw ? maybe? are you getting this when you are trying to write in your mount point on the client?
00:24 nitro3v joined #gluster
00:34 JoeJulian kminooie: Once splitbrain successfully retrieved the vol file and created the mount directories and tmp files, it should have been able to mount. If it failed, logs should be in /var/log/glusterfs/. I'm suspecting you have a network problem.
00:34 JoeJulian Thanks for helping with this, btw. I want it to just work.
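A quick way to separate a network problem from a client-side problem is to tail the client log for the mount and verify that the server's management port is reachable. A minimal sketch, assuming a hypothetical server gluster1 and mount point /mnt/myvol (the client log name mirrors the mount path with slashes turned into dashes); glusterd listens on TCP port 24007 by default:

    # client log for the mount attempt
    tail -n 50 /var/log/glusterfs/mnt-myvol.log

    # basic reachability of the management daemon on the server
    ping -c 3 gluster1
    telnet gluster1 24007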
00:36 ckannan left #gluster
00:59 nitro3v joined #gluster
00:59 dgandhi joined #gluster
01:04 kminooie joined #gluster
01:08 badone_ joined #gluster
01:13 diegows joined #gluster
01:20 chirino joined #gluster
01:36 kminooie JoeJulian: so after installing the glusterfs-fuse package, mounting works. But ...
01:38 kminooie it only works on volumes when I set the diagnostics.brick-log-level to DEBUG (it didn't work when it was on warning; I didn't test the in-between values)
01:40 harish_ joined #gluster
01:41 kminooie I can't imagine what difference that could possibly make, but I have tested it a couple of times now: when it is on warning it can't get the vol file, but when it is on DEBUG, it does. Go figure :D
01:44 kminooie for reference, this is the output when I tried it on the volume that currently has a split-brain issue: http://ur1.ca/jtfcj (diagnostics.client-log-level is on warning)
01:45 kminooie sorry, before I said diagnostics.brick-log-level. I meant diagnostics.client-log-level.
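For reference, the option kminooie is toggling is a per-volume setting applied from any server node. A minimal sketch, assuming a hypothetical volume named myvol:

    # raise the client-side log level for the volume
    gluster volume set myvol diagnostics.client-log-level DEBUG

    # show the volume's current settings
    gluster volume info myvol

    # inspect the split-brain state mentioned above
    gluster volume heal myvol info split-brain

    # revert the log level to its default when done
    gluster volume reset myvol diagnostics.client-log-level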
02:08 kminooie anyway have a good weekend everyone
02:10 nangthang joined #gluster
02:19 Pupeno joined #gluster
02:21 haomaiwa_ joined #gluster
02:43 harish_ joined #gluster
02:49 ilbot3 joined #gluster
02:49 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:03 kevein joined #gluster
03:11 nitro3v joined #gluster
03:16 shubhendu joined #gluster
03:19 nitro3v_ joined #gluster
04:03 bala joined #gluster
04:14 Bhaskarakiran joined #gluster
04:21 hagarth joined #gluster
04:40 kdhananjay joined #gluster
04:44 hagarth joined #gluster
04:45 shubhendu joined #gluster
05:51 harish_ joined #gluster
06:30 cfeller joined #gluster
06:30 _Bryan_ joined #gluster
07:01 hchiramm__ joined #gluster
07:41 LebedevRI joined #gluster
07:49 AnxiousGarlic joined #gluster
07:49 AnxiousGarlic left #gluster
07:49 shubhendu joined #gluster
08:18 Philambdo joined #gluster
08:25 kovshenin joined #gluster
08:28 soumya joined #gluster
08:29 ekuric joined #gluster
08:36 ghenry joined #gluster
08:42 glusterbot News from newglusterbugs: [Bug 1155181] Lots of compilation warnings on OSX. We should probably fix them. <https://bugzilla.redhat.com/show_bug.cgi?id=1155181>
08:48 lyang0 joined #gluster
09:08 javi404 joined #gluster
09:13 glusterbot News from newglusterbugs: [Bug 1197308] do not depend on "killall", use "pkill" instead <https://bugzilla.redhat.com/show_bug.cgi?id=1197308>
09:21 soumya joined #gluster
10:10 badone__ joined #gluster
11:52 Pupeno joined #gluster
12:14 Folken_ joined #gluster
12:38 sprachgenerator joined #gluster
13:03 misc joined #gluster
13:27 pelox joined #gluster
13:33 nitro3v joined #gluster
14:07 plarsen joined #gluster
14:33 hchiramm__ joined #gluster
14:37 sprachgenerator joined #gluster
14:41 pelox .0.3
14:55 ekuric left #gluster
15:14 B21956 joined #gluster
15:14 B21956 left #gluster
15:14 kovshenin joined #gluster
15:57 bala joined #gluster
16:18 anoopcs joined #gluster
16:19 anoopcs joined #gluster
16:23 anoopcs joined #gluster
16:23 jiffin joined #gluster
16:27 anoopcs joined #gluster
16:30 anoopcs joined #gluster
16:56 jiffin joined #gluster
17:16 jiffin joined #gluster
17:24 anoopcs1 joined #gluster
17:28 nitro3v joined #gluster
18:02 h4rry joined #gluster
18:05 h4rry_ joined #gluster
18:06 h4rry__ joined #gluster
18:06 T3 joined #gluster
18:16 jiffin joined #gluster
18:19 hani joined #gluster
18:19 hani hello ?
18:19 hani having an issue with mounting a gluster 3.5.3 volume on OEL 6.6
18:28 jiffin1 joined #gluster
18:31 jiffin joined #gluster
18:48 Philambdo joined #gluster
18:50 jiffin joined #gluster
19:16 jiffin joined #gluster
19:38 Philambdo joined #gluster
20:06 JoeJulian hani: Check firewall, selinux, network connectivity, hostname lookup, and client logs. Good luck, I'll be out 'till Monday.
20:07 hani checked all those (no firewall, selinux is disabled, gluster peer status is good, NFS mounts work without issue … thanks )
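The checklist JoeJulian gives maps onto a handful of commands on the client; a minimal sketch, again assuming the hypothetical gluster1, myvol, and /mnt/myvol names:

    getenforce                    # SELinux mode
    iptables -L -n                # firewall rules
    getent hosts gluster1         # hostname lookup
    ping -c 3 gluster1            # network connectivity
    mount -t glusterfs gluster1:/myvol /mnt/myvol
    # then check /var/log/glusterfs/mnt-myvol.log as above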
20:28 DV joined #gluster
21:07 Philambdo joined #gluster
21:35 T3 joined #gluster
21:44 badone__ joined #gluster
21:49 sputnik13 joined #gluster
22:04 kovshenin joined #gluster
22:07 hani fyi: looked like the 'mount.glusterfs' wrapper needed a 'sleep 5' just before the 'stat' check to prevent a false negative
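The workaround hani describes amounts to giving the fuse client a moment to finish mounting before the wrapper verifies the mount point. The exact contents of /sbin/mount.glusterfs vary by version, so the following is only an illustration of the idea, not the actual script:

    # ...after mount.glusterfs launches the glusterfs client process...
    sleep 5                                      # let the fuse mount settle
    inode=$(stat -c %i "$mount_point" 2>/dev/null)
    if [ "$inode" != "1" ]; then
        echo "Mount failed. Please check the log file for more details."
        exit 1
    fi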
22:09 wkf joined #gluster
22:51 theron joined #gluster
23:14 T0aD joined #gluster
23:23 mbelaninja joined #gluster
23:47 partner joined #gluster
23:52 partner joined #gluster
