IRC log for #gluster, 2015-05-20

All times shown according to UTC.

Time Nick Message
00:07 ppai joined #gluster
00:08 ws2k3 joined #gluster
00:20 wushudoin joined #gluster
00:40 julim joined #gluster
00:44 itisravi joined #gluster
00:45 DV joined #gluster
00:56 badone_ joined #gluster
01:12 julim joined #gluster
01:15 plarsen joined #gluster
01:23 gnudna left #gluster
01:46 pdrakeweb joined #gluster
01:55 nangthang joined #gluster
02:12 kdhananjay joined #gluster
02:18 itisravi joined #gluster
02:18 harish joined #gluster
02:36 pdrakeweb joined #gluster
02:45 itisravi joined #gluster
02:54 DV joined #gluster
02:55 Guest72040 joined #gluster
03:18 kumar joined #gluster
03:29 pdrakeweb joined #gluster
03:34 kanagaraj joined #gluster
03:39 [7] joined #gluster
03:43 autoditac joined #gluster
03:44 sripathi joined #gluster
03:47 overclk joined #gluster
03:48 itisravi joined #gluster
03:54 pdrakeweb joined #gluster
03:57 suliba joined #gluster
03:57 shubhendu joined #gluster
04:08 sakshi joined #gluster
04:11 nbalacha joined #gluster
04:13 yazhini joined #gluster
04:19 glusterbot News from newglusterbugs: [Bug 1223185] [SELinux] [BVT]: Selinux throws AVC errors while running DHT automation on Rhel6.6 <https://bugzilla.redhat.com/show_bug.cgi?id=1223185>
04:20 spandit joined #gluster
04:35 prabu joined #gluster
04:37 pppp joined #gluster
04:39 Bhaskarakiran joined #gluster
04:42 nishanth joined #gluster
04:43 rejy joined #gluster
04:47 ramteid joined #gluster
04:48 ashiq joined #gluster
04:48 sakshi joined #gluster
04:49 glusterbot News from newglusterbugs: [Bug 1222869] [SELinux] [BVT]: Selinux throws AVC errors while running DHT automation on Rhel6.6 <https://bugzilla.redhat.com/show_bug.cgi?id=1222869>
04:51 meghanam joined #gluster
04:52 sakshi joined #gluster
04:53 rafi joined #gluster
05:16 ndarshan joined #gluster
05:20 harish_ joined #gluster
05:24 atinmu joined #gluster
05:24 rafi1 joined #gluster
05:26 rjoseph joined #gluster
05:27 Manikandan joined #gluster
05:28 anrao joined #gluster
05:29 rafi joined #gluster
05:29 hgowtham joined #gluster
05:31 rafi joined #gluster
05:35 kdhananjay joined #gluster
05:35 poornimag joined #gluster
05:38 raghu joined #gluster
05:39 kdhananjay1 joined #gluster
05:39 Anjana joined #gluster
05:42 DV joined #gluster
05:43 hagarth joined #gluster
05:46 gem joined #gluster
05:50 wtracz2 left #gluster
05:59 glusterbot News from resolvedglusterbugs: [Bug 1212084] Data Tiering: Detaching tier from a normal disperse volume seems to destroy/lose EC(disperse) properties and shows volume type as dist-disperse <https://bugzilla.redhat.com/show_bug.cgi?id=1212084>
06:00 pdrakeweb joined #gluster
06:02 pdrakeweb joined #gluster
06:04 pdrakeweb joined #gluster
06:04 dusmant joined #gluster
06:06 pdrakeweb joined #gluster
06:08 pdrakeweb joined #gluster
06:08 nangthang joined #gluster
06:10 pdrakeweb joined #gluster
06:10 overclk joined #gluster
06:12 pdrakeweb joined #gluster
06:13 pdrakeweb joined #gluster
06:15 jtux joined #gluster
06:15 pdrakewe_ joined #gluster
06:17 pdrakeweb joined #gluster
06:19 pdrakeweb joined #gluster
06:21 pdrakeweb joined #gluster
06:22 prabu joined #gluster
06:22 Philambdo joined #gluster
06:23 pdrakewe_ joined #gluster
06:25 pdrakeweb joined #gluster
06:26 rafi1 joined #gluster
06:26 sakshi joined #gluster
06:27 pdrakeweb joined #gluster
06:29 pdrakewe_ joined #gluster
06:29 haomaiwa_ joined #gluster
06:31 pdrakeweb joined #gluster
06:33 pdrakeweb joined #gluster
06:34 liquidat joined #gluster
06:35 pdrakeweb joined #gluster
06:37 pdrakewe_ joined #gluster
06:39 pdrakeweb joined #gluster
06:41 pdrakeweb joined #gluster
06:42 pdrakeweb joined #gluster
06:44 pdrakewe_ joined #gluster
06:46 pdrakeweb joined #gluster
06:47 aravindavk joined #gluster
06:48 pdrakeweb joined #gluster
06:48 atinm joined #gluster
06:49 glusterbot News from newglusterbugs: [Bug 1223213] gluster volume status fails with locking failed error message <https://bugzilla.redhat.com/show_bug.cgi?id=1223213>
06:49 glusterbot News from newglusterbugs: [Bug 1223215] gluster volume status fails with locking failed error message <https://bugzilla.redhat.com/show_bug.cgi?id=1223215>
06:50 pdrakeweb joined #gluster
06:50 ira joined #gluster
06:52 pdrakeweb joined #gluster
06:54 pdrakeweb joined #gluster
06:56 pdrakewe_ joined #gluster
06:57 rafi joined #gluster
06:57 jcastill1 joined #gluster
06:58 pdrakeweb joined #gluster
07:00 pdrakeweb joined #gluster
07:00 kshlm joined #gluster
07:02 jcastillo joined #gluster
07:04 pdrakeweb joined #gluster
07:06 pdrakeweb joined #gluster
07:08 pdrakeweb joined #gluster
07:10 pdrakewe_ joined #gluster
07:12 pdrakewe_ joined #gluster
07:13 pdrakeweb joined #gluster
07:14 deniszh joined #gluster
07:14 nsoffer joined #gluster
07:15 pdrakeweb joined #gluster
07:17 pdrakewe_ joined #gluster
07:19 pdrakeweb joined #gluster
07:19 Slashman joined #gluster
07:21 pdrakeweb joined #gluster
07:23 pdrakewe_ joined #gluster
07:23 poornimag joined #gluster
07:25 pdrakeweb joined #gluster
07:27 pdrakeweb joined #gluster
07:29 pdrakeweb joined #gluster
07:31 pdrakeweb joined #gluster
07:31 fsimonce joined #gluster
07:35 pdrakeweb joined #gluster
07:35 jiffin joined #gluster
07:36 pdrakeweb joined #gluster
07:38 pdrakeweb joined #gluster
07:40 pdrakeweb joined #gluster
07:42 pdrakeweb joined #gluster
07:44 pdrakeweb joined #gluster
07:45 schandra joined #gluster
07:46 pdrakewe_ joined #gluster
07:48 pdrakeweb joined #gluster
07:50 pdrakeweb joined #gluster
07:50 atinm joined #gluster
07:52 pdrakeweb joined #gluster
07:52 ctria joined #gluster
07:54 pdrakeweb joined #gluster
07:55 xiu joined #gluster
07:56 pdrakewe_ joined #gluster
07:56 DV joined #gluster
07:58 pdrakeweb joined #gluster
07:59 pdrakeweb joined #gluster
08:01 pdrakewe_ joined #gluster
08:03 pdrakeweb joined #gluster
08:06 pdrakeweb joined #gluster
08:06 autoditac joined #gluster
08:08 pdrakeweb joined #gluster
08:09 pdrakeweb joined #gluster
08:11 pdrakeweb joined #gluster
08:12 dastar joined #gluster
08:13 pdrakeweb joined #gluster
08:15 pdrakeweb joined #gluster
08:17 pdrakeweb joined #gluster
08:18 mbukatov joined #gluster
08:19 pdrakeweb joined #gluster
08:21 pdrakeweb joined #gluster
08:23 Norky joined #gluster
08:23 pdrakewe_ joined #gluster
08:25 pdrakeweb joined #gluster
08:27 pdrakeweb joined #gluster
08:29 pdrakeweb joined #gluster
08:30 karnan joined #gluster
08:31 pdrakeweb joined #gluster
08:34 atalur joined #gluster
08:46 anrao joined #gluster
08:46 dusmant joined #gluster
08:47 hgowtham joined #gluster
08:51 shubhendu joined #gluster
08:52 ndarshan joined #gluster
08:52 LebedevRI joined #gluster
08:58 Bhaskarakiran joined #gluster
09:12 ndarshan joined #gluster
09:14 dusmant joined #gluster
09:16 kevein joined #gluster
09:17 nbalacha joined #gluster
09:18 spalai joined #gluster
09:19 [Enrico] joined #gluster
09:21 shubhendu joined #gluster
09:34 ghenry joined #gluster
09:36 ctria joined #gluster
09:39 harish_ joined #gluster
09:42 anti[Enrico] joined #gluster
09:45 nishanth joined #gluster
09:46 dusmant joined #gluster
09:52 s19n joined #gluster
09:57 s19n Hi all. The message "[2015-05-20 09:53:06.928187] W [server-resolve.c:419:resolve_anonfd_simple] 0-server: inode for the gfid (889e3169-d0f4-4c95-91af-2052cd97c196) is not found. anonymous fd creation failed" is flooding my brick logs. "gluster volume heal myvol info split-brain" says there are no split-brains. In the bricks (4x2 distributed replicated) I can see the files under the ".glusterfs" directory, and they are identical between the two replica bricks
09:59 ashiq ndevos++
09:59 glusterbot ashiq: ndevos's karma is now 15
10:00 ndevos ashiq++ too :)
10:00 glusterbot ndevos: ashiq's karma is now 1
10:15 dusmant joined #gluster
10:15 s19n I've searched the archives and found the following: https://botbot.me/freenode/gluster/2014-08-10/?msg=19468175&page=1 but it all insists on a split-brain
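
For readers hitting the same "inode for the gfid ... is not found" warning: on a brick you can resolve a gfid to its on-disk object by hand, because the .glusterfs tree is keyed by the first two byte-pairs of the gfid. A minimal bash sketch; the brick root /brick/myvol is an assumption, substitute your own:

    GFID=889e3169-d0f4-4c95-91af-2052cd97c196   # gfid from the log message above
    BRICK=/brick/myvol                          # assumed brick root
    # .glusterfs stores the object under <first 2 hex chars>/<next 2>/<gfid>
    OBJ="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
    ls -l "$OBJ"
    # for a regular file the object is a hard link; locate its real path:
    find "$BRICK" -path "$BRICK/.glusterfs" -prune -o -samefile "$OBJ" -print
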
10:20 glusterbot News from newglusterbugs: [Bug 1117888] Problem when enabling quota : Could not start quota auxiliary mount <https://bugzilla.redhat.com/show_bug.cgi?id=1117888>
10:37 haomaiwang joined #gluster
10:39 DV joined #gluster
10:40 harish_ joined #gluster
11:04 nishanth joined #gluster
11:06 spandit joined #gluster
11:06 harish_ joined #gluster
11:08 overclk joined #gluster
11:11 Prilly joined #gluster
11:13 spandit joined #gluster
11:29 DV joined #gluster
11:40 liquidat joined #gluster
11:43 nishanth joined #gluster
11:48 anrao joined #gluster
11:49 ctria joined #gluster
11:49 hgowtham joined #gluster
11:57 rgustafs joined #gluster
12:02 hagarth1 joined #gluster
12:02 badone_ joined #gluster
12:05 ndevos REMINDER: Gluster Community Meeting *now* happening in #gluster-meeting
12:08 ashiq joined #gluster
12:10 autoditac joined #gluster
12:21 prabu left #gluster
12:21 prabu joined #gluster
12:24 prabu show
12:25 prabu \
12:31 ctria joined #gluster
12:31 itisravi joined #gluster
12:32 aaronott joined #gluster
12:45 pdrakeweb joined #gluster
12:51 glusterbot News from newglusterbugs: [Bug 1223378] gfid-access: Remove dead increment (dead store) <https://bugzilla.redhat.com/show_bug.cgi?id=1223378>
13:07 ctria joined #gluster
13:12 B21956 joined #gluster
13:15 firemanxbr joined #gluster
13:18 Twistedgrim joined #gluster
13:22 rgustafs joined #gluster
13:23 nsoffer joined #gluster
13:25 badone_ joined #gluster
13:27 vimal joined #gluster
13:33 wushudoin joined #gluster
13:34 raghu` joined #gluster
13:34 hagarth @channelstats
13:34 glusterbot hagarth: On #gluster there have been 417411 messages, containing 15829506 characters, 2584522 words, 9106 smileys, and 1285 frowns; 1819 of those messages were ACTIONs.  There have been 194791 joins, 4783 parts, 190398 quits, 29 kicks, 2585 mode changes, and 8 topic changes.  There are currently 243 users and the channel has peaked at 276 users.
13:38 pdrakeweb joined #gluster
13:43 plarsen joined #gluster
13:44 klaxa|work joined #gluster
13:46 aaronott joined #gluster
13:49 neofob joined #gluster
13:51 glusterbot News from newglusterbugs: [Bug 1168897] Attempt remove-brick after node has terminated in cluster gives error: volume remove-brick commit force: failed: One or more nodes do not support the required op-version. Cluster op-version must atleast be 30600. <https://bugzilla.redhat.com/show_bug.cgi?id=1168897>
13:51 jcastill1 joined #gluster
13:56 jcastillo joined #gluster
14:01 [Enrico] joined #gluster
14:06 jcastill1 joined #gluster
14:08 tuxle joined #gluster
14:08 tuxle hi all
14:08 tuxle i have a strange cluster
14:09 anrao ndevos++
14:09 glusterbot anrao: ndevos's karma is now 16
14:09 tuxle after it crashed, the second machine kills the whole cluster
14:09 tuxle i can reignite the gluster volume by shutting down the second node
14:09 tuxle any help?
14:16 purpleidea joined #gluster
14:19 s19n https://bugzilla.redhat.com/show_bug.cgi?id=1168897 are you aware of any workaround?
14:19 glusterbot Bug 1168897: medium, medium, ---, bugs, NEW , Attempt remove-brick after node has terminated in cluster gives error: volume remove-brick commit force: failed: One or more nodes do not support the required op-version. Cluster op-version must atleast be 30600.
14:20 julim joined #gluster
14:23 jcastillo joined #gluster
14:37 coredump joined #gluster
14:41 jmarley joined #gluster
14:42 kumar joined #gluster
14:42 tuxle is there a way to diagnose an I/O error after opening a file in a gluster volume?
14:42 tuxle it is perfectly fine when i access it through the brick
14:47 rwheeler joined #gluster
14:48 ndevos tuxle: there probably is a message in the logs for the mountpoint (in case of fuse) or the nfs.log (in case of nfs)
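
To make ndevos's pointer concrete: the fuse client names its log after the mount point, with slashes replaced by dashes. A mount at /mnt/gvol is assumed here:

    tail -f /var/log/glusterfs/mnt-gvol.log   # fuse client log for /mnt/gvol
    tail -f /var/log/glusterfs/nfs.log        # gluster NFS server log
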
15:01 Anjana joined #gluster
15:07 ctria joined #gluster
15:19 kkeithley left #gluster
15:20 kkeithley joined #gluster
15:27 ctria joined #gluster
15:27 plarsen joined #gluster
15:36 JoeJulian tuxle: "gluster volume heal $volname info"
15:38 dgandhi joined #gluster
15:43 mringwal joined #gluster
15:44 tuxle Brick gampen.rdmsys.ch:/brick/virtStorage/
15:44 tuxle Number of entries: 5
15:44 tuxle Brick umbrail.rdmsys.ch:/brick/virtStorage/
15:44 tuxle Number of entries: 3
15:44 tuxle i think i have some split brain files
15:45 tuxle can i resolve them by simply overwriting the attr of the file on the failing node with 0x000...000?
15:46 gnudna joined #gluster
15:48 pdrakewe_ joined #gluster
15:57 JoeJulian That's one way of doing it, yes.
15:57 JoeJulian There's also ,,(splitmount)
15:57 glusterbot https://github.com/joejulian/glusterfs-splitbrain
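
For reference, the "overwrite the attr" approach confirmed above operates on the replica changelog xattrs, and is run on the brick itself rather than the mount. A hedged sketch; the volume name myvol, the client index 0, and the file path are assumptions:

    # inspect the pending-operation counters on each brick's copy
    getfattr -d -m trusted.afr -e hex /brick/virtStorage/path/to/file
    # zero the changelog on the failing copy so it no longer accuses the
    # good one; self-heal then syncs from the good copy. The value is
    # three 32-bit counters (data/metadata/entry): twelve zero bytes.
    # Repeat for each trusted.afr key that getfattr showed.
    setfattr -n trusted.afr.myvol-client-0 \
             -v 0x000000000000000000000000 /brick/virtStorage/path/to/file
    # then trigger a heal and re-check
    gluster volume heal myvol
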
15:59 haomaiwa_ joined #gluster
16:06 pdrakeweb joined #gluster
16:11 pdrakewe_ joined #gluster
16:12 hagarth joined #gluster
16:13 tuxle thanks
16:16 tuxle i hope it regenerates till tomorrow
16:16 tuxle thanks all :)
16:23 kanagaraj joined #gluster
16:23 Gill joined #gluster
16:38 DV joined #gluster
17:08 s19n left #gluster
17:13 Rapture joined #gluster
17:22 kshlm joined #gluster
17:25 plarsen joined #gluster
17:32 glusterbot News from resolvedglusterbugs: [Bug 1146523] glusterfs.spec.in - sync minor diffs with fedora dist-git glusterfs.spec <https://bugzilla.redhat.com/show_bug.cgi?id=1146523>
17:32 glusterbot News from resolvedglusterbugs: [Bug 1159970] glusterfs.spec.in: deprecate *.logrotate files in dist-git in favor of the upstream logrotate files <https://bugzilla.redhat.com/show_bug.cgi?id=1159970>
17:42 Gill_ joined #gluster
17:42 coredump joined #gluster
17:43 Gill__ joined #gluster
17:54 julim joined #gluster
18:09 hpekdemir joined #gluster
18:09 hpekdemir hi
18:09 glusterbot hpekdemir: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
18:10 hpekdemir after I created a volume I cannot start it. I get the following error in cli.log: "[input.c:36:cli_batch] 0-: Exiting with: -1"
18:10 hpekdemir any hints?
18:10 hpekdemir I can force it to start, though. but then I can't mount the endpoint on any client machine.
18:11 hpekdemir so I guess there is something wrong with the start in the first place (even when forcing). I guess I need to be able to start it the normal (non-forced) way.
18:12 JoeJulian check /var/log/etc-glusterfs-glusterd.vol.log for why it's failing.
18:13 hpekdemir well I don't have that logfile
18:13 JoeJulian oops
18:13 JoeJulian check /var/log/glusterfs/etc-glusterfs-glusterd.vol.log for why it's failing.
18:13 hpekdemir but a tail -F /var/log/glusterfs/* just outputs the same error message as above.
18:14 hpekdemir I see.
18:14 hpekdemir removing brick (null) on port 49169. Unable to start brick myhost:/data/volume. Commit for operation "Volume Start" failed on localhost.
18:14 hagarth joined #gluster
18:14 hpekdemir that's a hell of a detailed error message.
18:15 JoeJulian heh
18:15 JoeJulian So if the brick failed, that should be in /var/log/glusterfs/bricks/
18:16 hpekdemir ok more detailed version. let me nopaste
18:17 hpekdemir http://pastebin.com/iqfn42jw
18:17 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
18:18 JoeJulian Oh, interesting. That looks like a packaging bug.
18:18 hpekdemir http://fpaste.org/223934/32145870/
18:18 hpekdemir this is freebsd 10.1 RELEASE.
18:19 JoeJulian The way I read it, the 'features/changelog' library cannot be found.
18:20 JoeJulian I could be wrong though. I'd have to check the source for how that error message can happen.
18:20 hagarth looks like it is a porting bug .. symbol changelog_select_event is not found?
18:21 lolo joined #gluster
18:21 JoeJulian Oh, right. And that's why it thinks it didn't load the translator.
18:21 hpekdemir ok. I am trying to port glusterfs to freebsd. would be nice if you could help me out
18:21 hpekdemir where to search and what to look for.
18:22 hpekdemir in this specific case.
18:22 lolo Hello all, is this an appropriate place for asking for help? I keep getting status failed when I try to rebalance and I'm looking for some help trying to diagnose why.
18:23 haomaiwa_ joined #gluster
18:23 JoeJulian The best place to get feedback on porting issues would be the gluster-devel mailing list. The #gluster-dev channel to a lesser degree.
18:23 hpekdemir ok I will check that out.
18:23 hagarth hpekdemir: I don't see any obvious reason why it should not find the symbol .. besides Emmanuel has been successful with a NetBSD port of 3.7.0.
18:23 hpekdemir any other hints or something you know we could do now?
18:24 hagarth hpekdemir: Emmanuel usually responds to queries on gluster-devel ML .. so maybe worth a shot there.
18:24 hpekdemir hagarth: ok. I'm using 3.7.0 too
18:24 hpekdemir alright.
18:24 hagarth hpekdemir: we in fact have continuous integration tests running with NetBSD. So it should not be so broken on FreeBSD.
18:25 hpekdemir is it available for public?
18:27 kshlm hagarth, this seems to be something that's not a port specific thing.
18:28 hagarth hpekdemir: indeed, http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/
18:28 kshlm I hit the same when I was trying to test mt-shd port.
18:28 kshlm I thought it was an issue with my system and just removed changelog from the volfiles to get the volume to start.
18:29 kshlm I also had a lot of warnings for changelog while compiling
18:30 hagarth kshlm: I wonder if it is due to the inline void definition
18:30 kshlm Build log is https://gist.github.com/kshlm/ef0db7337be871478805
18:31 kshlm The warnings are for inline function declarations.
18:31 hagarth hpekdemir: maybe you can nuke all inline keywords in changelog-helpers.c and try?
18:32 julim joined #gluster
18:33 hpekdemir I will try.
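
The missing symbol is consistent with C99/C11 inline semantics, which clang (FreeBSD's system compiler) follows strictly: a function defined with plain "inline" in a .c file contributes no external definition, so any call the compiler does not inline is left unresolved at link time. A minimal reproduction, using an illustrative function name rather than the actual changelog code:

    cat > inline-demo.c <<'EOF'
    /* plain 'inline' under C99 is an inline definition only; this
       translation unit emits no external symbol for select_event */
    inline int select_event(int e) { return e & 0xff; }
    int main(void) { return select_event(256); }
    EOF
    cc -std=c99 -O0 inline-demo.c
    # => linker error: undefined reference to 'select_event'
    # declaring it 'static inline' (or dropping 'inline', as suggested
    # above) emits a local definition and links cleanly
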
18:37 pdrakeweb joined #gluster
18:38 hpekdemir kshlm: I remember getting those warnings while compiling, too.
18:38 hpekdemir I removed the inline statements. building right now.
18:38 kkeithley at one point 3.6.x compiled on FreeBSD. We can't be that far ff
18:38 kkeithley far off
18:39 hpekdemir kkeithley: and also working?
18:39 hpekdemir because I can successfully compile.
18:39 kkeithley yes, and worked
18:40 hpekdemir which version was it?
18:40 lolo http://pastebin.com/hRYwdU83 is my log file from the bad node, in case anyone wouldn't mind checking it out and pointing me in a helpful direction =)
18:40 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
18:42 JoeJulian lolo: Sorry, that does explain what happened, but no idea why.
18:42 JoeJulian [2015-05-20 18:38:37.017452] I [dht-common.c:3539:dht_setxattr] 0-vm_root-dht: fixing the layout of /38c0ea87-715f-4e32-82f9-88bf1682be1a/dom_md
18:42 JoeJulian [2015-05-20 18:38:37.017707] E [dht-rebalance.c:2368:gf_defrag_settle_hash] 0-vm_root-dht: fix layout on /38c0ea87-715f-4e32-82f9-88bf1682be1a/dom_md failed
18:45 lolo JoeJulian: I see the first message on the other nodes, but then they get fixed.
18:46 hpekdemir argh. note to self: remove inline also from the header files.
18:46 kkeithley Dunno at this point. We have a fix for a broken build on FreeBSD as recent as May. Another in December 2014, so 3.6.1 or 3.6.2.
18:47 lolo here's the log from the same time/action on a node that was successful; http://fpaste.org/223946/14761214/
19:00 hpekdemir kkeithley: can you give me the URL to the sources?
19:00 hpekdemir if it's not http://bits.gluster.org/pub/gluster/glusterfs/src/
19:01 kshlm lolo, the bricks could have more information. gf_defrag_settle_hash prints that log when it fails to perform a required setxattr.
19:02 kshlm The local brick logs might have some more clue.
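
A quick way to scan a brick log for the failed setxattr around the rebalance error; the log name follows lolo's brick and the timestamp window comes from the messages above, so adjust both to your setup:

    grep -E 'setxattr| E \[' /var/log/glusterfs/bricks/glusterfs-vm_root.log \
        | grep '2015-05-20 18:3'
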
19:04 rotbeard joined #gluster
19:06 lolo kshlm: thanks, /var/log/glusterfs/bricks/glusterfs-vm_root.log really doesn't have much of interest, though
19:06 lolo http://fpaste.org/223951/14869014/ in case you wanna see it
19:08 kshlm lolo, could you look at the brick logs around 18:38, when the rebalance failure happened?
19:08 kshlm the logs you provided are not of the same period.
19:10 lolo sorry about that, here are the ones for the same period; http://fpaste.org/223953/49014143/
19:12 kshlm well, you were right. Nothing interesting there either. I was just hoping we could find something there.
19:13 lolo I'm just not sure whats wrong or where to look at this point. Everything was working fine before I upgraded to 3.7
19:17 kshlm lolo, Sorry I can't help you any further. I'm not really knowledgeable w.r.t DHT and rebalance.
19:18 lolo kshlm: that is ok, thank you for trying though =)
19:27 autoditac joined #gluster
19:28 ShaunR I'm reading a doc about performance tuning gluster; it talks about editing /etc/glusterfs/glusterfs.vol, however i don't have that file. the closest i have is /etc/glusterfs/glusterfsd.vol; i assume this is the correct file?
19:30 ShaunR sorry /etc/glusterfs/glusterd.vol is the file name
19:31 CyrilPeponnet @ShaunR everything is set using gluster vol set vol_name param value now
19:34 ShaunR ya, that's what it looks like. How can i pull those values? there is no get from what i can see
19:39 sadbox joined #gluster
19:43 neofob left #gluster
19:43 Gill_ joined #gluster
19:47 CyrilPeponnet gluster vol set help
19:47 CyrilPeponnet will show you the defaults values and descriptions
19:48 nsoffer joined #gluster
19:54 ShaunR I'm looking for the current set value, not the default.
20:06 JoeJulian volume info will show the value if it's changed from the default.
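
Putting the two answers together: in releases of this vintage there is no "volume get" subcommand (as ShaunR observed), so current non-default settings are read from volume info. The volume name here is assumed:

    gluster volume info myvol    # changed options appear under "Options Reconfigured"
    gluster volume set help      # default values and descriptions for every tunable
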
20:15 ShaunR What's the ideal fs type to use for a brick these days?
20:16 gnudna i have been using xfs with no issue
20:23 PaulCuzner joined #gluster
20:31 ndevos CyrilPeponnet: are you comfortable building your own glusterfs fuse client? I have a patch for your readdirp bug
20:42 plarsen joined #gluster
20:44 ndevos CyrilPeponnet: well, http://paste.fedoraproject.org/223987/54620143 is the patch for 3.5, it's completely compile tested ;-)
20:45 nsoffer joined #gluster
21:00 gnudna left #gluster
21:01 ppai joined #gluster
21:08 CyrilPeponnet @ndevos never tried :p if you can provide a vagrant / docker image with required dependencies, this could be great :)
21:09 ndevos CyrilPeponnet: I never tried that either :P
21:10 ndevos CyrilPeponnet: do you take RPMs?
21:10 CyrilPeponnet Sure ! for centos-7
21:13 ShaunR Any recommended mount options for mounting a XFS brick?
21:13 badone_ joined #gluster
21:13 CyrilPeponnet align stripe with your raid, if any?
21:14 CyrilPeponnet I mean for mkfs.xfs
21:14 CyrilPeponnet for mounting I don't specify anything
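
A sketch of what "align stripe with your raid" means at mkfs time, with assumed numbers: a 64 KiB stripe unit across 10 data spindles, plus the 512-byte inode size commonly recommended for gluster bricks at the time. Device and mount point are placeholders:

    mkfs.xfs -i size=512 -d su=64k,sw=10 /dev/sdX    # su/sw must match the RAID layout
    mount -o noatime,inode64 /dev/sdX /bricks/brick1
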
21:15 ndevos CyrilPeponnet: http://koji.fedoraproject.org/koji/taskinfo?taskID=9810286 is building now
21:15 CyrilPeponnet cool
21:15 ndevos CyrilPeponnet: when it's ready, you can download the packages from http://kojipkgs.fedoraproject.org/scratch/devos/task_9810286
21:15 CyrilPeponnet Awesome
21:16 CyrilPeponnet I will report back to the Bugzilla
21:16 ndevos yes please, I'll be afk now :)
21:17 CyrilPeponnet Thanks to you !
21:20 Rapture joined #gluster
21:35 Prilly joined #gluster
21:44 Gill_ joined #gluster
21:59 gildub joined #gluster
22:32 ppai joined #gluster
22:50 maZtah joined #gluster
22:53 doctorray joined #gluster
23:16 spot joined #gluster
