
IRC log for #gluster, 2014-04-30


All times shown according to UTC.

Time Nick Message
00:05 jag3773 joined #gluster
00:12 churnd joined #gluster
00:28 bala joined #gluster
00:32 sprachgenerator joined #gluster
00:36 jmarley joined #gluster
00:36 jmarley joined #gluster
00:39 Yennelin joined #gluster
00:39 Yennelin left #gluster
00:39 jag3773 joined #gluster
00:43 B21956 joined #gluster
00:54 VeggieMeat_ joined #gluster
00:56 DV joined #gluster
00:59 vpshastry joined #gluster
01:01 rypervenche joined #gluster
01:06 jmarley joined #gluster
01:06 jmarley joined #gluster
01:17 gdubreui joined #gluster
01:26 sprachgenerator joined #gluster
01:33 Honghui joined #gluster
01:45 JoseBravoHome joined #gluster
01:48 harish joined #gluster
01:52 DV joined #gluster
01:57 haomaiwang joined #gluster
02:13 haomaiwa_ joined #gluster
02:20 harish joined #gluster
02:32 hagarth joined #gluster
02:35 haomai___ joined #gluster
02:53 bharata-rao joined #gluster
02:58 [o__o] joined #gluster
03:15 kanagaraj joined #gluster
03:16 RameshN joined #gluster
03:17 jag3773 joined #gluster
03:22 davinder joined #gluster
03:24 saurabh joined #gluster
03:37 shubhendu joined #gluster
03:39 pingitypong joined #gluster
03:47 ravindran1 joined #gluster
03:48 itisravi joined #gluster
03:48 theron joined #gluster
03:52 haomaiwa_ joined #gluster
03:59 pingitypong joined #gluster
04:13 ndarshan joined #gluster
04:22 rastar joined #gluster
04:23 sprachgenerator joined #gluster
04:35 bala joined #gluster
04:36 jvandewege_ joined #gluster
04:42 cyber_si_ joined #gluster
04:45 gdubreui joined #gluster
04:45 ThatGraemeGuy joined #gluster
04:45 kdhananjay joined #gluster
04:46 meghanam joined #gluster
04:46 meghanam_ joined #gluster
04:46 atinmu joined #gluster
04:48 deepakcs joined #gluster
04:52 prasanthp joined #gluster
04:54 haomaiwang joined #gluster
04:57 kasturi joined #gluster
04:58 kumar joined #gluster
05:03 gdubreui joined #gluster
05:07 Honghui joined #gluster
05:11 benjamin_____ joined #gluster
05:12 rahulcs joined #gluster
05:17 theron joined #gluster
05:19 rjoseph joined #gluster
05:19 vpshastry joined #gluster
05:25 AaronGreen joined #gluster
05:32 nshaikh joined #gluster
05:32 hagarth joined #gluster
05:36 glusterbot New news from newglusterbugs: [Bug 1092840] Glusterd crashes and core-dumps when starting a volume in FIPS mode. <https://bugzilla.redhat.com/show_bug.cgi?id=1092840> || [Bug 1092841] barrier enable/disable returns success even if barrier is already enabled/disabled <https://bugzilla.redhat.com/show_bug.cgi?id=1092841>
05:41 d3vz3r0 joined #gluster
05:42 dusmant joined #gluster
05:46 Honghui joined #gluster
05:50 aravindavk joined #gluster
05:51 rahulcs joined #gluster
05:55 rjoseph joined #gluster
05:56 pingitypong left #gluster
06:02 a2_ joined #gluster
06:03 wgao joined #gluster
06:03 lalatenduM joined #gluster
06:06 a2 joined #gluster
06:07 haomaiwa_ joined #gluster
06:09 DV joined #gluster
06:11 ricky-ticky joined #gluster
06:13 surabhi joined #gluster
06:16 pk joined #gluster
06:24 gdubreui joined #gluster
06:30 aravindavk joined #gluster
06:31 rahulcs joined #gluster
06:31 rjoseph joined #gluster
06:32 verdurin joined #gluster
06:34 dusmant joined #gluster
06:40 kasturi deepakcs, ping
06:40 glusterbot kasturi: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
06:40 pk left #gluster
06:41 deepakcs kasturi, yes
06:41 kasturi deepakcs, regarding the BZ 1086782
06:41 deepakcs kasturi, yes
06:41 kasturi deepakcs, this is for the gluster cluster
06:41 kasturi deepakcs, confirmed with lala
06:42 deepakcs kasturi, u mean gluster volume mgmt, rite ?
06:42 kasturi deepakcs, i mean to manage gluster nodes
06:42 deepakcs kasturi, in which case u shud own it :)
06:42 kasturi deepakcs, :)
06:42 kasturi deepakcs, will take it up
06:42 deepakcs kasturi, great, thanks
06:43 kasturi deepakcs, np
06:43 shubhendu joined #gluster
06:44 lalatenduM kasturi, thanks :)
06:44 kasturi lalatenduM, :-)
06:49 ekuric joined #gluster
06:51 hagarth joined #gluster
06:53 rastar joined #gluster
06:58 Andy5 joined #gluster
07:00 ctria joined #gluster
07:06 psharma joined #gluster
07:17 bharata-rao joined #gluster
07:19 keytab joined #gluster
07:20 eseyman joined #gluster
07:22 ktosiek joined #gluster
07:27 rjoseph joined #gluster
07:28 atinmu joined #gluster
07:33 fsimonce joined #gluster
07:41 prasanth|offline joined #gluster
07:44 haomaiw__ joined #gluster
07:44 andreask joined #gluster
07:49 fsimonce joined #gluster
07:51 GabrieleV joined #gluster
07:55 haomaiwang joined #gluster
08:04 rastar joined #gluster
08:09 liquidat joined #gluster
08:13 ngoswami joined #gluster
08:43 Chewi joined #gluster
08:44 edward2 joined #gluster
08:47 Philambdo joined #gluster
08:48 saravanakumar joined #gluster
08:49 ninkotech joined #gluster
08:52 dusmant joined #gluster
08:54 RameshN joined #gluster
08:58 haomaiwang joined #gluster
09:01 MrAbaddon joined #gluster
09:04 rahulcs joined #gluster
09:21 RameshN joined #gluster
09:26 dusmant joined #gluster
09:49 haomaiwa_ joined #gluster
10:04 rahulcs joined #gluster
10:07 glusterbot New news from newglusterbugs: [Bug 1092841] [barrier] barrier enable/disable returns success even if barrier is already enabled/disabled <https://bugzilla.redhat.com/show_bug.cgi?id=1092841>
10:08 smithyuk1 joined #gluster
10:20 atinmu joined #gluster
10:22 Debolaz joined #gluster
10:28 haomaiw__ joined #gluster
10:29 DV joined #gluster
10:36 rwheeler joined #gluster
10:37 atinmu joined #gluster
10:40 ira joined #gluster
10:42 hchiramm_ joined #gluster
11:00 Debolaz It's been a while since I tried out GlusterFS. Last time, I had the issue where stat() calls would require communicating with all servers in the network before returning, significantly breaking performance for PHP (and similar applications). I see several new versions of GlusterFS have been released; has any workaround for this problem been released?
11:01 Debolaz A PHP-side workaround is of course using something like an opcode cache, but there are several drawbacks to doing this.
11:03 andreask joined #gluster
11:04 rjoseph joined #gluster
11:07 glusterbot New news from newglusterbugs: [Bug 1090363] Add support in libgfapi to fetch volume info from glusterd. <https://bugzilla.redhat.com/show_bug.cgi?id=1090363>
11:18 rahulcs joined #gluster
11:23 aravindavk joined #gluster
11:25 lpabon joined #gluster
11:29 rahulcs joined #gluster
11:30 haomaiwa_ joined #gluster
11:33 Norky joined #gluster
11:34 rahulcs joined #gluster
11:36 mohan joined #gluster
11:36 harish joined #gluster
11:42 Slasheri joined #gluster
11:42 Slasheri joined #gluster
11:42 gdubreui joined #gluster
11:46 edward2 joined #gluster
11:59 haomaiwang joined #gluster
12:02 haomaiwa_ joined #gluster
12:03 bennyturns joined #gluster
12:12 tdasilva left #gluster
12:13 rahulcs joined #gluster
12:13 itisravi joined #gluster
12:21 mkzero joined #gluster
12:27 hagarth joined #gluster
12:30 sroy_ joined #gluster
12:33 rahulcs joined #gluster
12:36 churnd joined #gluster
12:37 rahulcs joined #gluster
12:40 rahulcs joined #gluster
12:50 Slashman joined #gluster
12:53 eightyeight woah. Red Hat is acquiring Inktank: http://ceph.com/community/red-hat-to-acquire-inktank/
12:53 eightyeight curious how that is going to affect glusterfs, if anything?
12:57 rahulcs joined #gluster
12:59 samppah eightyeight: now that's interesting indeed
13:07 japuzzo joined #gluster
13:07 glusterbot New news from newglusterbugs: [Bug 1075611] [FEAT] log: enhance gluster log format with message ID and standardize errno reporting <https://bugzilla.redhat.com/show_bug.cgi?id=1075611>
13:11 kkeithley doesn't affect glusterfs at all, AFAIK
13:13 rwheeler joined #gluster
13:15 rahulcs joined #gluster
13:16 failshell joined #gluster
13:21 B21956 joined #gluster
13:24 aravindavk joined #gluster
13:27 lalatenduM eightyeight, samppah kkeithley , jdarcy published his thoughts in this blog http://pl.atyp.us/2014-04-inktank-acquisition.html
13:27 glusterbot Title: Inktank Acquisition (at pl.atyp.us)
13:30 ngoswami joined #gluster
13:33 itisravi joined #gluster
13:33 hagarth joined #gluster
13:35 failshell joined #gluster
13:41 Durzo guys im seeing a lot of strange errors in my brick logs, anyone know what this means? "Base index is not createdunder index/base_indices_holder" and more - http://paste.ubuntu.com/7366341/
13:41 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
13:43 zaitcev joined #gluster
13:44 lalatenduM Durzo, ur yesterday issues got resolved?
13:45 Durzo lalatenduM, no.. ditched libgfapi
13:45 Durzo using fuse mounts
13:45 Durzo and its horrible
13:45 lalatenduM Durzo, ohh...:(
13:46 Durzo if i create a new vm disk image using qemu-img create, the kvm server locks up, processes hang and i get stack traces in the dmesg log about qemu-system-x86 blocked on write
13:46 Durzo its a total godam mess
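For context, a rough sketch of the two ways qemu-img can write an image onto a Gluster volume (server, volume, and file names here are placeholders, not taken from the log):

    # via a FUSE mount (the path Durzo fell back to)
    qemu-img create -f qcow2 /mnt/myvol/vm01.qcow2 20G
    # via libgfapi, bypassing FUSE (qemu's gluster:// URI support, available since qemu 1.3)
    qemu-img create -f qcow2 gluster://server1/myvol/vm01.qcow2 20G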
13:46 lalatenduM Humble, hchiramm_ any idea on Durzo's issues http://paste.ubuntu.com/7366341/
13:46 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
13:46 Durzo regretting every second of it
13:47 rwheeler_ joined #gluster
13:47 lalatenduM Durzo, this should be reported as a bug, it will help others when they search the issue
13:48 Durzo yeah
13:49 dbruhn joined #gluster
13:50 nishanth joined #gluster
13:52 lalatenduM Durzo, I have no clue abt ur issue, hope somebody will look in to it
13:52 nthomas joined #gluster
13:57 itisravi joined #gluster
13:59 aravindavk joined #gluster
14:00 ira joined #gluster
14:01 LoudNoises joined #gluster
14:01 shubhendu joined #gluster
14:02 coredump joined #gluster
14:03 shubhendu joined #gluster
14:03 gkleiman joined #gluster
14:08 kmai007 joined #gluster
14:08 kmai007 guys, when i see this in my logs, should i be concerned?  0-mem-pool: Mem pool is full. Callocing mem
14:12 diegows joined #gluster
14:13 jmarley joined #gluster
14:13 jmarley joined #gluster
14:13 kmai007 in the gluster statedump, what would be sections of the report that i should pay close attention to?
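A minimal sketch of how a statedump is usually generated and located (volume name is a placeholder); the mem-pool counters kmai007 is asking about show up in the "pool" sections of the dump:

    gluster volume statedump myvol
    # dumps land in /var/run/gluster/ by default, named <brick-path>.<pid>.dump.<timestamp>
    ls /var/run/gluster/
    # the target directory can be changed with:
    gluster volume set myvol server.statedump-path /var/tmp/gluster-dumps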
14:16 surabhi joined #gluster
14:18 dusmant joined #gluster
14:19 rjoseph joined #gluster
14:19 jag3773 joined #gluster
14:19 RameshN joined #gluster
14:19 kanagaraj joined #gluster
14:22 andreask joined #gluster
14:22 kaptk2 joined #gluster
14:25 MrAbaddon joined #gluster
14:27 jobewan joined #gluster
14:31 andreask joined #gluster
14:32 dewey joined #gluster
14:41 gmcwhistler joined #gluster
14:42 gmcwhistler joined #gluster
14:46 prasanthp joined #gluster
14:57 ppai joined #gluster
14:59 kanagaraj joined #gluster
14:59 kkeithley Gluster Community meeting in #gluster-meeting starting now
15:01 JustinClift Gluster Community Meeting time (in #gluster-meeting)
15:01 JustinClift Heh, snap
15:02 primechuck joined #gluster
15:02 scuttle_ joined #gluster
15:02 jskinner joined #gluster
15:07 rjoseph joined #gluster
15:07 RameshN joined #gluster
15:09 melkore joined #gluster
15:15 gkleiman joined #gluster
15:20 sprachgenerator joined #gluster
15:24 xymox joined #gluster
15:25 tdasilva joined #gluster
15:27 sadbox joined #gluster
15:27 sks joined #gluster
15:29 mjsmith2 joined #gluster
15:30 theron joined #gluster
15:33 xymox joined #gluster
15:33 siel joined #gluster
15:35 lalatenduM joined #gluster
15:37 sadbox joined #gluster
15:38 daMaestro joined #gluster
15:39 lalatenduM_ joined #gluster
15:41 ira joined #gluster
15:42 Guest81953 joined #gluster
15:45 aravindavk joined #gluster
15:46 MrAbaddon joined #gluster
16:00 ppai joined #gluster
16:04 theron_ joined #gluster
16:12 purpleidea @firewall
16:12 purpleidea @ports
16:12 glusterbot purpleidea: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
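A hedged iptables sketch of the port list glusterbot just quoted (the upper brick port 49200 is an assumption; one port is consumed per brick, counting up from 49152 on 3.4):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT            # glusterd management (24008 only if rdma)
    iptables -A INPUT -p tcp --dport 49152:49200 -j ACCEPT            # bricks, glusterfs >= 3.4
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT            # gluster NFS + NLM
    iptables -A INPUT -p tcp -m multiport --dports 111,2049 -j ACCEPT # rpcbind/portmap and NFS
    iptables -A INPUT -p udp --dport 111 -j ACCEPT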
16:26 Mo__ joined #gluster
16:29 zerick joined #gluster
16:36 systemonkey joined #gluster
16:37 vpshastry joined #gluster
16:45 vpshastry left #gluster
16:46 qdk joined #gluster
16:49 semiosis i wonder if the inktank acquisition will generate the same "will it remain open source" questions as the gluster acq did
16:49 theron_ joined #gluster
16:49 Matthaeus joined #gluster
16:56 MrAbaddon joined #gluster
17:03 kmai007 inktank is ceph enterprise ?
17:13 vpshastry joined #gluster
17:14 vpshastry left #gluster
17:18 kkeithley Ceph will absolutely remain open source.
17:18 kkeithley @learn firewall as  glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
17:18 glusterbot kkeithley: The operation succeeded.
17:19 gmcwhistler joined #gluster
17:21 chirino joined #gluster
17:25 theron joined #gluster
17:40 dbruhn joined #gluster
17:40 davinder joined #gluster
17:44 P0w3r3d joined #gluster
17:48 chirino joined #gluster
17:51 chirino_m joined #gluster
17:56 LessSeen joined #gluster
17:57 vpshastry joined #gluster
17:59 MrAbaddon joined #gluster
17:59 systemonkey Joejulian: hey Joe, I'm having some issues with duplicated files in directories. Do you know a way to clean up dupes? Any pointer is much appreciated.
18:04 MrAbaddon joined #gluster
18:14 ricky-ticky1 joined #gluster
18:17 systemonkey Anyone experience this duplicate files issue before or currently going through it?
18:26 jag3773 joined #gluster
18:26 Andy5 joined #gluster
18:27 JoeJulian systemonkey: need to see client log
18:33 semiosis kkeithley: of course it will.  seemed to me when redhat acquired gluster there was some FUD going around
18:36 systemonkey joejulian: which log is client log? I see /var/log/cli.log, mnt.log, etc-glusterfs-glusterd.vol.log.1, satus.log, status.log etc.
18:37 JoeJulian /var/log/glusterfs/{mountpoint with / replaced by -}.log
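As a concrete (hypothetical) example of that naming rule, a volume mounted at /mnt/myvol logs to:

    less /var/log/glusterfs/mnt-myvol.log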
18:43 systemonkey joejulian: http://hastebin.com/yuwizogoci.coffee
18:43 glusterbot Title: hastebin (at hastebin.com)
18:44 ndevos systemonkey: you could check with 'ls -li $PATH' if those duplicate files do not have different inodes
18:45 ndevos systemonkey: if they have, it is likely that they also have different gfid xattrs on the bricks on the storage servers
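A short sketch of the check ndevos is describing (paths are placeholders; the getfattr call is run on the servers against the brick path, not against the client mount):

    ls -li /mnt/myvol/some/dir                                   # on the client: duplicate names with different inode numbers
    getfattr -n trusted.gfid -e hex /bricks/tank/some/dir/file   # on each server: compare the gfid values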
18:45 JoeJulian Is /var/log full? Odd that there's nothing in the logs since it was last unmounted on 4/3.
18:46 JoeJulian ... I guess I know one thing I'll probably be doing tomorrow afternoon... :D
18:46 systemonkey ndevos: thanks. I'll check it in a moment.
18:47 systemonkey joejulian: it's not full. not sure why that's all I see.
18:48 JoeJulian Are there duplicate entries on all clients, or just this one?
18:48 systemonkey JoeJulian: there are other compressed log files. is there something I should grep for?
18:48 JoeJulian gfid mismatch
18:48 JoeJulian ' E '
18:53 jag3773 joined #gluster
18:57 systemonkey I have three bricks but only tank-1 has that log file name. the other two don't have it. strange. I don't see any mismatch in the logs
18:57 zerick joined #gluster
19:04 JoeJulian systemonkey: Oh! On the servers, stat $brickdir/.glusterfs/00/00/00*0001 and make sure it's a symlink instead of a directory
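Roughly, the check being asked for (brick path is a placeholder; 00...0001 is the fixed gfid of the volume root):

    stat /bricks/tank/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
    # healthy: a symbolic link pointing back to ../../..
    # broken:  a real directory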
19:09 Paul-C joined #gluster
19:09 daMaestro joined #gluster
19:09 kkeithley semiosis: indeed, fud will abound. Thus my response
19:10 systemonkey http://hastebin.com/ezaqutezun.coffee
19:10 glusterbot Title: hastebin (at hastebin.com)
19:10 systemonkey http://hastebin.com/etivurixis.vhdl
19:10 glusterbot Title: hastebin (at hastebin.com)
19:12 systemonkey is this split-brain issue? I read your blog about it but wasn't sure if I was having one or not...
19:13 semiosis @link
19:13 semiosis @link files
19:13 JoeJulian @whatis link
19:13 glusterbot JoeJulian: Error: No factoid matches that key.
19:13 JoeJulian @whatis dht
19:13 glusterbot JoeJulian: I do not know about 'dht', but I do know about these similar topics: 'dd'
19:13 * JoeJulian throws up his hands...
19:13 semiosis the ---T are link files, they point to another brick
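A rough way to spot those link files on a brick (brick path is a placeholder): they are zero-byte files with only the sticky bit set, carrying a trusted.glusterfs.dht.linkto xattr that names the brick holding the real data:

    find /bricks/tank -type f -perm -1000 -size 0
    getfattr -n trusted.glusterfs.dht.linkto -e text /bricks/tank/path/to/file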
19:14 JoeJulian ... and they shouldn't be showing up on the client.
19:14 JoeJulian Or are those on the brick?
19:14 semiosis JoeJulian: s/shouldn't/cant/
19:14 systemonkey semiosis: thanks.
19:14 JoeJulian semiosis: it's happened...
19:14 JoeJulian semiosis: usually some afr breakage
19:15 systemonkey joejulian: I went into each $brickdir/path which had dupes.
19:15 semiosis JoeJulian: why cant you just let me ignore facts that don't fit?
19:15 JoeJulian Ah, ok.
19:15 semiosis :)
19:15 JoeJulian lol
19:16 JoeJulian systemonkey: getfattr -m . -d -e hex $path_to/dev_mp_head_k_helmet_l_tr.ffd on each server.
19:18 systemonkey i don't have getfattr command installed.
19:19 semiosis you should install it
19:19 systemonkey working on it :0
19:20 ira joined #gluster
19:27 rwheeler joined #gluster
19:29 systemonkey http://hastebin.com/amajurocar.avrasm
19:29 glusterbot Title: hastebin (at hastebin.com)
19:29 systemonkey tank-2 can't find the file. I used factory_computer_screensaver.bik instead since this file is a duplicate.
19:30 _BryanHM_ joined #gluster
19:31 semiosis systemonkey: please also ,,(pasteinfo)
19:31 glusterbot systemonkey: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
19:32 semiosis what version of glusterfs are you using?  what distro?
19:32 systemonkey http://fpaste.org/98249/13988863/
19:32 glusterbot Title: #98249 Fedora Project Pastebin (at fpaste.org)
19:33 ctria joined #gluster
19:33 systemonkey glusterfs 3.4.1
19:34 systemonkey using ubuntu 13.04
19:34 Debolaz Bah... I saw stat-prefetch and for a moment, I had hoped the stat issue had been solved. :(
19:35 semiosis systemonkey: could you please hastebin the output of stat on both of those factory_computer_screensaver.bik files (from tank-1 & tank-3)
19:36 systemonkey http://fpaste.org/98252/86563139/
19:36 glusterbot Title: #98252 Fedora Project Pastebin (at fpaste.org)
19:37 systemonkey gid is unknown on tank-3
19:38 semiosis JoeJulian: are you aware of a gfid mismatch bug in 3.4?
19:39 semiosis systemonkey: notice the very different times also
19:39 semiosis mode is different
19:40 systemonkey it is different.
19:40 jmarley joined #gluster
19:40 jmarley joined #gluster
19:41 systemonkey I recently had to change the gid on the files over the weekend since we implemented AD integration with samba.
19:41 systemonkey gid was changed through gluster mount
19:42 JoeJulian No, I thought all the gfid mismatch bugs were fixed back in 3.2
19:44 nhm joined #gluster
19:44 semiosis me too
19:45 semiosis so, how could the same file arrive on two different bricks with different gfids?
19:45 JoeJulian split-brain
19:45 semiosis this is not a replicated volume
19:45 semiosis so idk what you mean by split-brain
19:46 JoeJulian systemonkey: Were bricks ever added or removed from this volume with or without a rebalance?
19:47 JoeJulian Let's also do that getfattr command for that build directory.
19:47 kmai007 so if is distrip. and you have 3 bricks, i don't know how you'd have the same file anywhere
19:47 Debolaz Has there been added some way of disabling automatic healing on stat() since version 3.2?
19:47 JoeJulian (apparently semiosis expects me to READ everything in scrollback... <eyeroll>)
19:48 semiosis at least the hastebins
19:48 JoeJulian Debolaz: there is a way... I think it's in ,,(undocumented options)
19:48 glusterbot Debolaz: Undocumented options for 3.4: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
19:48 semiosis i just click the links to catch up on a conv
19:48 ghenry joined #gluster
19:48 ghenry joined #gluster
19:48 systemonkey JoeJulian: all bricks got added on same day. but one time, one of the servers went down and I just turned it back on.
19:48 semiosis a ha!
19:49 JoeJulian But it should have errored if the brick it was trying to create the file on wasn't there...
19:49 JoeJulian unless....
19:49 semiosis UNLESS....
19:49 JoeJulian Unless it created the file on a brick that WAS there, then renamed it.
19:49 JoeJulian file a bug
19:49 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
19:49 semiosis this should be easy enough to test/reproduce
19:49 JoeJulian Actually, I should probably see if I can repro that before I file the bug report.
19:49 semiosis +1
19:50 kmai007 systemonkey: so of the 3 servers, how long was it down before you turned it back on?
19:51 systemonkey possibly few hours but not whole day.
19:51 JoeJulian systemonkey: So... that said... I think the thing to do would be to see if splitmount works without replication because I didn't test that. If it does, use that to remove the version of the file you think is irrelevant.
19:52 systemonkey Joejulian: that sounds pretty good but I don't know what splitmount means.
19:53 JoeJulian @lucky glusterfs splitmount
19:53 glusterbot JoeJulian: https://www.gluster.org/category/howtos/
19:54 Debolaz JoeJulian: iam-self-heal-daemon...?
19:54 semiosis true
19:55 semiosis actually no
19:55 systemonkey cool. Thanks. I saw that page earlier but I wasn't too confident about doing it since I had no idea. I'll give that a try and get back to you guys. Thanks JoeJulian and semiosis and other peeps for the support ;)
19:56 semiosis Debolaz: back in 3.1 there was an undoc option for that, see http://gluster.org/community/documentation/index.php/Gluster_3.1:_Undocumented_Volume_Options - cluster.*-self-heal
19:56 glusterbot Title: Gluster 3.1: Undocumented Volume Options - GlusterDocumentation (at gluster.org)
19:56 semiosis but idk if those are still available
19:57 JoeJulian I wonder if that's the page jbrooks was looking at when he tweeted "spit-braaaiins!" https://twitter.com/jasonbrooks/status/460795743293304832
19:57 glusterbot Title: Twitter / jasonbrooks: split-braaaiins! (at twitter.com)
19:58 jbrooks JoeJulian: I was looking at one of my volumes :|
19:58 jbrooks And one of your blog posts :)
19:58 JoeJulian semiosis, Debolaz: oh, actually it's no longer undocumented. Look at cluster.*-self-heal in "gluster volume set help"
19:58 JoeJulian jbrooks: Hehe, I thought so. :D
19:59 dbruhn_ joined #gluster
19:59 edward2 joined #gluster
20:01 Debolaz :o
20:02 kmai007 looks like cluster.self-heal-daemon: default = off
20:03 kmai007 on 3.4.2
20:04 JoeJulian kmai007: That's to disable the daemon. To disable it at the client, you have three things that you'd have to disable: cluster.{metadata,data,entry}-self-heal
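A sketch of what that looks like (volume name is a placeholder); note this only stops client-side healing — the self-heal daemon is governed separately by cluster.self-heal-daemon:

    gluster volume set myvol cluster.data-self-heal off
    gluster volume set myvol cluster.metadata-self-heal off
    gluster volume set myvol cluster.entry-self-heal off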
20:04 kmai007 Thanks JoeJulian i didn't know that
20:05 JoeJulian It's one of those things we kind-of hope you find on your own. Once you're that level of expert we figure you're not as likely to eff it all up.
20:06 kmai007 always learning
20:08 Debolaz Use case being to avoid the performance issue of PHP statting every bloody file every request. Xcache is of course another possibility, but that means files can't be updated realtime at all.
20:09 MikeLeman joined #gluster
20:09 JoeJulian Do you update php scripts realtime in production?
20:09 Debolaz JoeJulian: WordPress selfupdate is an example of something that might go screwy. Which is largely in the hands of the customers.
20:10 JoeJulian Ah, I see.
20:14 Debolaz I have to continue this tomorrow or friday, head is borderline bbq now. Anyway, thanks a lot JoeJulian and semiosis, those options were just what I was looking for.... And will probably cause a minor explosion somewhere when I get around to trying them. :)
20:16 semiosis good luck
20:16 kmai007 ok this sounds pretty junior of me
20:18 kmai007 but if i'm reading the description of the cluster.self-heal-daemon: default = off, and the description reads : Description: This option applies to only self-heal-daemon. Index directory crawl and automatic healing of files will not be performed if this option is turned off.
20:18 kmai007 should the default value  = on
20:19 kmai007 b/c i know its running now b/c of the glustershd.log
20:19 kmai007 and PID
20:19 SFLimey joined #gluster
20:28 chirino joined #gluster
20:43 tryggvil joined #gluster
20:48 tryggvil joined #gluster
20:51 ccha joined #gluster
21:05 mjsmith2 joined #gluster
21:06 chirino joined #gluster
21:11 ira joined #gluster
21:22 chirino joined #gluster
21:24 gdubreui joined #gluster
21:30 daMaestro joined #gluster
22:05 mjrosenb does gluster support any client-side cacheing?
22:05 mjrosenb like, I want to try to buffer a full file that is larger than memory on the client, someplace on disk.
22:14 jobewan joined #gluster
22:15 lmickh joined #gluster
22:22 jskinner_ joined #gluster
22:24 tryggvil joined #gluster
22:24 systemonkey JoeJulian: no luck with splitmount. I guess the splitmount only mounts replication volumes... mine is distributed volume.
22:26 badone joined #gluster
22:26 JoeJulian systemonkey: Thanks for uncovering that bug. :/
22:27 JoeJulian systemonkey: You can still do it the old way: http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
22:27 glusterbot Title: Fixing split-brain with GlusterFS 3.3 (at joejulian.name)
22:29 systemonkey joejulian: thanks! I appreciate it.
22:29 semiosis JoeJulian: did you reproduce it?
22:30 JoeJulian semiosis: not yet, I was referring to my own bug with splitmount where I was focused on replicated volumes.
22:30 semiosis ah
22:32 mjsmith2 joined #gluster
22:37 systemonkey I guess a lot of people are using glusterfs as a replication service... since my environment is a distributed environment the command "gluster volume heal $VOLUME info split-brain" doesn't work. It outputs "Volume dist-vol is not of type replicate"
22:39 cogsu joined #gluster
22:39 JoeJulian Right, there is no self-heal for dht. Theoretically none should be needed.
22:41 systemonkey yah. I guess i need to come up with a way to clean the mess up. stay tuned...
22:44 Mneumonik For any windows guys using the windows nfs client out there, I think I identified the issue i was having previously. Anything server 2008/win7 seems to report "incorrect parameter" sporadically where the windows 2012 server/win8 does not, and works as expected. The workaround I had to do for windows 2008 server was set the AnonymousGid and AnonymousUid to whatever value i wanted, chown the
22:44 Mneumonik nfs share to that in linux, restart the glusterd services and restart the server.
22:45 Mneumonik those settings are in the nfs client registry, you need to add the 2 DWORD values
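For reference, a hedged sketch of the workaround Mneumonik describes; the registry path below is the commonly reported location for the Windows NFS client values, and the uid/gid value 1000 is just a placeholder:

    reg add "HKLM\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default" /v AnonymousUid /t REG_DWORD /d 1000
    reg add "HKLM\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default" /v AnonymousGid /t REG_DWORD /d 1000
    # on the Linux side, own the exported path by the same uid:gid, then restart glusterd and reboot the client
    chown 1000:1000 /path/to/export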
22:46 JoeJulian Mneumonik: Could you add something to the wiki for that please?
22:46 Mneumonik on gluster.org? sure i can do that
22:49 jbd1 mjrosenb: if you're talking about FSCache in Linux, no.  Support would need to be present in FUSE for glusterfs FUSE mount to take advantage of it.  If you're talking about NFS client caching, you can just mount your GlusterFS volume via NFS to take advantage of that
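A minimal sketch of the NFS-mount route jbd1 mentions (server and volume names are placeholders; Gluster's built-in NFS server only speaks NFSv3 over TCP):

    mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/myvol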
22:54 chirino joined #gluster
23:06 premera joined #gluster
23:06 systemonkey Oh! I got another situation. my glusterfs is running on top of zfs with xattr enabled to write to the hidden directory and it is painfully slow. I read on countless blogs about setting xattr=sa on zfs volumes to speed up the read time by placing the xattr in the inode. I created a sub-pool on zfs to test this setting, and I was impressed with the read speed. However, I couldn't open, copy or delete the files in this
23:06 systemonkey test pool since glusterfs had problems when an action was performed on the file. I'm guessing the gluster volume is getting confused by the zfs xattr setting enabled on the top volume while trying to access the subvolume with a different xattr location. I'm tempted to recreate the volume by dropping it but I'm scared that this action may lose the data. Any thoughts on this matter would be very helpful.
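For reference, a sketch of the xattr=sa setting being discussed (dataset name is a placeholder; it only affects files created after the change, and the thread below points at a zfsonlinux symlink-corruption bug tied to this mode):

    zfs create tank/gluster-bricks
    zfs set xattr=sa tank/gluster-bricks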
23:09 JoeJulian My first thought is...
23:09 JoeJulian Congratulations on being one of the 1% that use lose instead of loose. :D
23:10 mjrosenb jbd1: I was thinking of just mmapping a file, and shoving part of the cache in there.
23:11 JoeJulian systemonkey: I would check for errors on the brick logs. I suspect a zfs bug. There was a time when zfs didn't support xattrs on symlinks. Perhaps that bug still exists when you use that setting.
23:11 systemonkey :D
23:12 systemonkey yah. My search found this post made by Todd. http://www.gluster.org/pipermail/gluster-users/2013-August/037093.html
23:12 glusterbot Title: [Gluster-users] ZFS (zfsonlinux) xattr=sa mode will corrupt symlinks (at www.gluster.org)
23:13 Durzo would it be possible to downgrade from 3.5.0 to the latest 3.4 on an existing gluster brick?
23:13 Durzo like would 3.4 just start and see the volume or ??
23:13 JoeJulian Durzo: yes.
23:14 systemonkey but I'm thinking my situation is dealing with an actual file and not a symlink. Interesting thing is I can create a file in this volume but any manipulation results in "cannot find the file"
23:14 Durzo gettin brick crashes on 3.5.0 when high i/o
23:14 Durzo cant even use it
23:14 JoeJulian systemonkey: directories are symlinks in the .glusterfs tree, so that's probably why.
23:15 JoeJulian Durzo: Yuck. Did you file a bug report?
23:15 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
23:15 Durzo JoeJulian, no
23:15 JoeJulian Can you please? Include the crash log and if you happen to have any core dumps, that would be helpful as well.
23:15 systemonkey ah ok. thanks JoeJulian! I'll give that a try.
23:17 Durzo where would gluster save its core files to?
23:19 JoeJulian Durzo: I think / iirc.
23:19 Durzo JoeJulian, is it possible to have a member of a volume 3.4 and one 3.5 ?
23:20 JoeJulian Moving upwards, yes. I don't think so for downgrades.
23:20 JoeJulian I wouldn't.
23:20 Durzo so id have to shutdown the whole volume.. damn
23:29 badone joined #gluster
23:36 diegows joined #gluster
23:45 cyberbootje joined #gluster
23:54 sprachgenerator joined #gluster
