IRC log for #gluster, 2015-05-27

All times shown according to UTC.

Time Nick Message
00:02 Rapture joined #gluster
00:20 mkzero joined #gluster
01:04 jvandewege_ joined #gluster
01:26 plarsen joined #gluster
01:31 kkeithley_ joined #gluster
01:32 stickyboy joined #gluster
01:38 harish joined #gluster
01:41 corretico joined #gluster
02:07 jvandewege joined #gluster
02:17 DV joined #gluster
02:26 bharata-rao joined #gluster
02:28 DV_ joined #gluster
02:30 nangthang joined #gluster
02:43 CyrilPeponnet Is there a thread limit with ssh transfer for geo-rep in changelog mode?
02:44 CyrilPeponnet It seems I can't have more than 3 ssh connections at the same time to transfer data
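(The three-connection ceiling matches geo-replication's default number of sync workers. As a hedged sketch — the option name has varied across releases (sync_jobs in older gsyncd configs, sync-jobs in newer CLIs) — the per-session worker count is tuned with something like:)

    # raise the number of parallel rsync/ssh sync workers for one geo-rep session
    gluster volume geo-replication MASTERVOL slavehost::SLAVEVOL config sync_jobs 6
    # show the session's effective configuration
    gluster volume geo-replication MASTERVOL slavehost::SLAVEVOL config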
02:48 DV joined #gluster
02:50 dgbaley Is there any support for uid/gid mapping? Or is the only option to make sure users are the same on all hosts?
02:59 DV_ joined #gluster
03:01 kdhananjay joined #gluster
03:11 overclk joined #gluster
03:15 [7] joined #gluster
03:23 glusterbot News from newglusterbugs: [Bug 1225279] Different client can not execute "for((i=0;i<1000;i++));do ls -al;done" in a same directory at the sametime <https://bugzilla.redhat.com/show_bug.cgi?id=1225279>
03:23 glusterbot News from newglusterbugs: [Bug 1225283] Disperse volume: Input/output  errors on nfs and fuse mounts during delete operation <https://bugzilla.redhat.com/show_bug.cgi?id=1225283>
03:23 JoeJulian dgbaley: no user mapping, no.
03:23 glusterbot News from newglusterbugs: [Bug 1225284] Disperse volume: I/O error on client when USS is turned on <https://bugzilla.redhat.com/show_bug.cgi?id=1225284>
03:23 gildub joined #gluster
03:36 itisravi joined #gluster
03:46 hagarth joined #gluster
03:50 shubhendu joined #gluster
03:57 pppp joined #gluster
04:00 ashiq joined #gluster
04:06 pppp joined #gluster
04:08 atinmu joined #gluster
04:17 rjoseph joined #gluster
04:21 vishvendra joined #gluster
04:26 RameshN joined #gluster
04:40 ndarshan joined #gluster
04:41 stickyboy joined #gluster
04:43 RameshN joined #gluster
04:49 ramteid joined #gluster
04:50 rafi joined #gluster
04:50 jiffin1 joined #gluster
04:52 jiffin1 joined #gluster
04:53 jiffin joined #gluster
04:55 maveric_amitc_ joined #gluster
04:55 lexi2 joined #gluster
05:07 schandra joined #gluster
05:07 hgowtham joined #gluster
05:13 Manikandan joined #gluster
05:17 sakshi joined #gluster
05:19 hagarth joined #gluster
05:21 Bhaskarakiran joined #gluster
05:22 ppai joined #gluster
05:23 poornimag joined #gluster
05:24 aravindavk joined #gluster
05:27 dusmant joined #gluster
05:28 deepakcs joined #gluster
05:32 karnan joined #gluster
05:32 gem joined #gluster
05:34 Manikandan joined #gluster
05:34 itisravi joined #gluster
05:35 surabhi joined #gluster
05:35 soumya joined #gluster
05:37 meghanam joined #gluster
05:40 dusmant joined #gluster
05:41 vimal joined #gluster
05:48 kdhananjay joined #gluster
05:48 arao joined #gluster
05:54 Anjana joined #gluster
05:56 Apeksha joined #gluster
05:58 atalur joined #gluster
05:59 CyrilPeponnet Hey guys :) any clue why geo-rep in changelog mode doesn't react when I touch an existing file ?
05:59 CyrilPeponnet works fine when creating a new file
05:59 kanagaraj joined #gluster
06:01 nangthang joined #gluster
06:06 raghu joined #gluster
06:08 kdhananjay joined #gluster
06:08 CyrilPeponnet and by the way: how do I enforce a full sync of the data by erasing the index?
06:15 nsoffer joined #gluster
06:17 CyrilPeponnet I deleted the geo-rep and reset geo-replication.indexing using force (setting it to off does not work). Then I recreated the geo-rep and it's crawling in hybrid mode now
06:17 CyrilPeponnet we will see
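(Roughly the sequence being described, as a sketch for a session MASTERVOL -> slavehost::SLAVEVOL; exact flags such as push-pem depend on the release and on how the session was first created:)

    # stop and delete the existing geo-rep session
    gluster volume geo-replication MASTERVOL slavehost::SLAVEVOL stop
    gluster volume geo-replication MASTERVOL slavehost::SLAVEVOL delete
    # plain "volume set ... geo-replication.indexing off" is refused while indexing is in use,
    # so clear it with a forced reset
    gluster volume reset MASTERVOL geo-replication.indexing force
    # recreate and restart the session; with no index it falls back to a hybrid (xsync) crawl
    gluster volume geo-replication MASTERVOL slavehost::SLAVEVOL create push-pem
    gluster volume geo-replication MASTERVOL slavehost::SLAVEVOL start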
06:30 jtux joined #gluster
06:36 Saravana joined #gluster
06:39 meghanam joined #gluster
06:42 soumya joined #gluster
06:51 lalatenduM joined #gluster
06:53 glusterbot News from newglusterbugs: [Bug 1225320] ls command failed with features.read-only on while mounting ec volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1225320>
06:53 glusterbot News from newglusterbugs: [Bug 1225323] Glusterfs client crash during fd migration after graph switch <https://bugzilla.redhat.com/show_bug.cgi?id=1225323>
06:53 glusterbot News from newglusterbugs: [Bug 1225328] afr: unrecognized option in re-balance volfile <https://bugzilla.redhat.com/show_bug.cgi?id=1225328>
06:57 anrao joined #gluster
07:03 kotreshhr joined #gluster
07:15 LebedevRI joined #gluster
07:23 kshlm joined #gluster
07:24 hchiramm_ joined #gluster
07:26 nangthang joined #gluster
07:30 fsimonce joined #gluster
07:34 Slashman joined #gluster
07:37 meghanam joined #gluster
07:37 soumya joined #gluster
07:45 [Enrico] joined #gluster
07:46 ninkotech joined #gluster
07:46 ninkotech_ joined #gluster
07:50 arao joined #gluster
07:57 al joined #gluster
08:08 Norky joined #gluster
08:11 lalatenduM joined #gluster
08:12 ctria joined #gluster
08:14 Philambdo joined #gluster
08:19 nsoffer joined #gluster
08:28 [Enrico] joined #gluster
08:34 glusterbot News from resolvedglusterbugs: [Bug 1201284] tools/glusterfind: Use Changelogs more effectively for GFID to Path conversion <https://bugzilla.redhat.com/show_bug.cgi?id=1201284>
08:34 glusterbot News from resolvedglusterbugs: [Bug 1201289] tools/glusterfind: Support Partial Find feature <https://bugzilla.redhat.com/show_bug.cgi?id=1201289>
08:34 glusterbot News from resolvedglusterbugs: [Bug 1201294] tools/glusterfind: Output format flexibility <https://bugzilla.redhat.com/show_bug.cgi?id=1201294>
08:34 glusterbot News from resolvedglusterbugs: [Bug 1208520] [Backup]: Glusterfind not working with change-detector as 'changelog' <https://bugzilla.redhat.com/show_bug.cgi?id=1208520>
08:34 glusterbot News from resolvedglusterbugs: [Bug 1209138] [Backup]: Packages to be installed for glusterfind api to work <https://bugzilla.redhat.com/show_bug.cgi?id=1209138>
08:34 glusterbot News from resolvedglusterbugs: [Bug 1209843] [Backup]: Crash observed when multiple sessions were created for the same volume <https://bugzilla.redhat.com/show_bug.cgi?id=1209843>
08:34 glusterbot News from resolvedglusterbugs: [Bug 1207028] [Backup]: User must be warned while running the 'glusterfind pre' command twice without running the post command <https://bugzilla.redhat.com/show_bug.cgi?id=1207028>
08:34 glusterbot News from resolvedglusterbugs: [Bug 1208404] [Backup]: Behaviour of backup api in the event of snap restore - unknown <https://bugzilla.redhat.com/show_bug.cgi?id=1208404>
08:48 nishanth joined #gluster
08:50 Saravana joined #gluster
08:52 s19n joined #gluster
08:53 lexi2 joined #gluster
08:53 arao joined #gluster
08:56 jcastill1 joined #gluster
08:59 meghanam joined #gluster
08:59 arao joined #gluster
09:01 jcastillo joined #gluster
09:06 meghanam_ joined #gluster
09:07 meghanam joined #gluster
09:08 soumya joined #gluster
09:24 glusterbot News from newglusterbugs: [Bug 1225383] Quota: with NFS brick process hangs with too many parallel IOs <https://bugzilla.redhat.com/show_bug.cgi?id=1225383>
09:24 hgowtham joined #gluster
09:28 jcastill1 joined #gluster
09:33 jcastillo joined #gluster
09:35 nangthang joined #gluster
09:36 dave___ joined #gluster
09:49 hagarth joined #gluster
09:54 glusterbot News from newglusterbugs: [Bug 1220173] SEEK_HOLE support (optimization) <https://bugzilla.redhat.com/show_bug.cgi?id=1220173>
10:15 Lee- joined #gluster
10:24 glusterbot News from newglusterbugs: [Bug 1206539] Tracker bug for GlusterFS documentation Improvement. <https://bugzilla.redhat.com/show_bug.cgi?id=1206539>
10:29 meghanam joined #gluster
10:31 Manikandan joined #gluster
10:42 autoditac joined #gluster
10:42 ira joined #gluster
10:44 gildub joined #gluster
10:44 harish joined #gluster
10:45 ira joined #gluster
10:51 Manikandan joined #gluster
10:52 rafi1 joined #gluster
10:52 monotek1 joined #gluster
10:54 glusterbot News from newglusterbugs: [Bug 1225424] [Backup]: Misleading error message when glusterfind delete is given with non-existent volume <https://bugzilla.redhat.com/show_bug.cgi?id=1225424>
10:54 rafi joined #gluster
11:03 stickyboy joined #gluster
11:04 tw_ joined #gluster
11:04 glusterbot News from resolvedglusterbugs: [Bug 1225383] Quota: with NFS brick process hangs with too many parallel IOs <https://bugzilla.redhat.com/show_bug.cgi?id=1225383>
11:15 autoditac joined #gluster
11:18 soumya joined #gluster
11:23 firemanxbr joined #gluster
11:28 T0aD- joined #gluster
11:30 ndevos REMINDER: Gluster Community meeting starts in 30 minutes from now in #gluster-meeting
11:30 karnan_ joined #gluster
11:31 xrsanet_ joined #gluster
11:33 Kins_ joined #gluster
11:36 karnan_ joined #gluster
11:37 harish joined #gluster
11:37 bjornar joined #gluster
11:39 lyang0 joined #gluster
11:39 poornimag joined #gluster
11:42 [Enrico] joined #gluster
11:48 spalai joined #gluster
11:49 [Enrico] joined #gluster
11:50 ndarshan joined #gluster
11:58 ndevos REMINDER: Gluster Community meeting starts in 2 minutes from now in #gluster-meeting
12:03 jdarcy joined #gluster
12:04 rafi1 joined #gluster
12:05 Trefex joined #gluster
12:07 surabhi joined #gluster
12:08 kaushal_ joined #gluster
12:08 ppai joined #gluster
12:12 Trefex joined #gluster
12:13 hagarth joined #gluster
12:14 rafi joined #gluster
12:17 sankarshan joined #gluster
12:21 Philambdo joined #gluster
12:24 TvL2386 joined #gluster
12:25 glusterbot News from newglusterbugs: [Bug 1206429] Maintainin local transaction peer list in op-sm framework <https://bugzilla.redhat.com/show_bug.cgi?id=1206429>
12:27 Trefex joined #gluster
12:36 rafi1 joined #gluster
12:38 vishvendra joined #gluster
12:39 kotreshhr left #gluster
12:40 ashiq joined #gluster
12:40 vishvendra joined #gluster
12:48 julim joined #gluster
12:49 bene2 joined #gluster
13:02 ndevos @ppa
13:02 glusterbot ndevos: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
13:10 aaronott joined #gluster
13:10 B21956 joined #gluster
13:14 jcastill1 joined #gluster
13:17 Trefex joined #gluster
13:19 jcastillo joined #gluster
13:20 kaushal_ joined #gluster
13:24 squizzi joined #gluster
13:25 glusterbot News from newglusterbugs: [Bug 1113460] after enabling quota, peer probing fails on glusterfs-3.5.1 <https://bugzilla.redhat.com/show_bug.cgi?id=1113460>
13:25 rafi joined #gluster
13:26 squizzi joined #gluster
13:31 dgandhi joined #gluster
13:34 Philambdo joined #gluster
13:35 arcolife joined #gluster
13:35 glusterbot News from resolvedglusterbugs: [Bug 988943] ACL doesn't work with FUSE mounted GlusterFS <https://bugzilla.redhat.com/show_bug.cgi?id=988943>
13:36 sankarshan joined #gluster
13:45 blaorg joined #gluster
13:45 blaorg left #gluster
13:46 plarsen joined #gluster
13:46 arao joined #gluster
13:48 hamiller joined #gluster
13:48 Twistedgrim joined #gluster
13:49 sankarshan joined #gluster
14:04 klaxa|work joined #gluster
14:06 plarsen joined #gluster
14:07 atinmu joined #gluster
14:11 nsoffer joined #gluster
14:24 georgeh-LT2 joined #gluster
14:27 wushudoin joined #gluster
14:35 ashiq joined #gluster
14:36 jmarley joined #gluster
14:36 plarsen joined #gluster
14:38 gem joined #gluster
14:43 rtalur56 joined #gluster
14:49 nsoffer joined #gluster
14:52 coredump joined #gluster
14:55 papamoose joined #gluster
14:57 Slashman joined #gluster
15:01 arao joined #gluster
15:05 kshlm joined #gluster
15:09 kshlm joined #gluster
15:09 vovcia hi :) do you know of any good reason why overlayfs doesn't like glusterfs? I'm trying to use overlayfs on top of glusterfs and it says "overlayfs: filesystem of upperdir is not supported"
15:12 hagarth vovcia: haven't checked that. does overlayfs work with nfs as the upperdir?
15:13 neofob joined #gluster
15:14 vovcia hagarth: I don't think so - in the overlayfs source there is a comment: "* We don't support:
15:14 vovcia *  - filesystems with revalidate (FIXME for lower layer)
15:14 vovcia i believe glusterfs is filesystem with revalidate, right?
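(For context, a sketch of what such an attempt typically looks like, with made-up paths; the kernel refuses a FUSE-backed upperdir with the message quoted above:)

    # mount the gluster volume, then try to use it as the writable overlay layer
    mount -t glusterfs server:/myvol /mnt/gluster
    mkdir -p /mnt/gluster/upper /mnt/gluster/work /mnt/merged
    mount -t overlay overlay \
        -o lowerdir=/srv/base,upperdir=/mnt/gluster/upper,workdir=/mnt/gluster/work \
        /mnt/merged
    # dmesg: overlayfs: filesystem of upperdir is not supported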
15:20 hagarth vovcia: checking overlayfs src now :)
15:22 arao joined #gluster
15:22 RameshN joined #gluster
15:28 Vortac joined #gluster
15:37 Pupeno joined #gluster
15:37 Pupeno joined #gluster
15:39 jiffin joined #gluster
15:43 lexi2 joined #gluster
15:46 coredump joined #gluster
15:46 Prilly joined #gluster
15:47 kdhananjay joined #gluster
15:48 Norky joined #gluster
15:53 corretico joined #gluster
15:55 spalai joined #gluster
15:59 gem joined #gluster
16:00 deepakcs joined #gluster
16:02 CyrilPeponnet Hey guys :)
16:02 CyrilPeponnet @hagarth For the record: So exposing the slave volume to consumers as RO is always a good idea. It doesn't affect geo-rep as it internally mounts in RW.
16:02 atinmu joined #gluster
16:03 hagarth CyrilPeponnet: saw the note from kotresh but I was thinking of 2 ways to do RO:
16:03 CyrilPeponnet I ended up erasing the indexing and doing a new hybrid crawl as too many files were missing
16:03 CyrilPeponnet Oh ok :)
16:03 hagarth 1. enforcement on the server side (volume set <volname> read-only enable)
16:03 julim joined #gluster
16:03 CyrilPeponnet yep
16:03 hagarth 2. enforcement on the clients (mount with -o ro)
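(Concretely, with placeholder names, the two enforcement points look something like:)

    # 1. server side: mark the slave volume read-only for every client
    gluster volume set SLAVEVOL read-only on
    # 2. client side: leave the volume writable, but hand consumers a read-only mount
    mount -t glusterfs -o ro slavehost:/SLAVEVOL /mnt/slavevol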
16:04 CyrilPeponnet I can do both
16:04 CyrilPeponnet on master we are using client side RO
16:04 hagarth I think 2. will work fine with geo-rep. I suspect that geo-rep will fail syncing with 1.
16:04 CyrilPeponnet but it doesn't prevent people from mounting it by hand
16:04 CyrilPeponnet well I asked him a few minutes ago
16:05 CyrilPeponnet " It doesn't affect geo-rep as it internally mounts in RW."
16:05 hagarth CyrilPeponnet: right.. been intending to respond to that thread but been busy with a bunch of other stuff.
16:05 CyrilPeponnet @hagarth I can imagine :) it's fine, you are kind of to answer to me !
16:05 CyrilPeponnet s/of/enough
16:06 hagarth neither do I see any special code to enable geo-rep to sync if server side enforcement is turned on.
16:06 CyrilPeponnet and doing geo-rep between two continents doesn't help! (due to latency and bandwidth)
16:07 CyrilPeponnet @hagarth hmmm
16:07 hagarth CyrilPeponnet: I have done that before across AWS regions and it was quite helpful for my use case :)
16:07 CyrilPeponnet @hagarth I will wait for his answer. But I can't prevent users from mounting the vol as rw and touching files...
16:08 CyrilPeponnet @hagarth I hope this will work for us too!
16:08 hagarth CyrilPeponnet: this seems like a nice enhancement request (allowing geo-rep to work with server side ro enforcement)
16:08 CyrilPeponnet but having you guys helps a LOT :p
16:09 hagarth CyrilPeponnet: happy to help whenever we can :)
16:10 Vortac I have two data centers close together... ~1-2ms latency. Is regular repl volume okay or geo-repl required?
16:11 hagarth Vortac: synchronous replication should work fine at that latency
16:11 msvbhat CyrilPeponnet: If you enable the volume to be read-only geo-rep will fail for sure. I remember trying that a while ago
16:11 meghanam joined #gluster
16:11 Vortac hagarth: Okay thanks.. What's the upper limit for synchronous?
16:11 CyrilPeponnet @msvbhat too bad, with which release?
16:12 hagarth CyrilPeponnet: maybe you should file a enhancement request for that?
16:12 CyrilPeponnet I can
16:12 CyrilPeponnet I will
16:12 hagarth Vortac: depends on use cases. I have seen deployments where latency is of the order of 20ms or so.
16:13 Vortac hagarth: Okay thanks.
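(The synchronous option being recommended here is an ordinary replicated volume with one brick per site, roughly as in this sketch with placeholder names:)

    # every write is acknowledged by both data centres before returning to the client
    gluster volume create dcvol replica 2 dc1-server:/bricks/dcvol/brick dc2-server:/bricks/dcvol/brick
    gluster volume start dcvol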
16:13 msvbhat CyrilPeponnet: About a year ago, I think in the 3.6 release or maybe a bit older
16:14 CyrilPeponnet ok thanks
16:14 msvbhat CyrilPeponnet: But yeah, as hagarth pointed out, you should file an enhancement request
16:14 Norky joined #gluster
16:19 rstalur joined #gluster
16:19 MontyCarleau joined #gluster
16:20 CyrilPeponnet Done: https://bugzilla.redhat.com/show_bug.cgi?id=1225546
16:20 glusterbot Bug 1225546: high, unspecified, ---, bugs, NEW , Pass slave volume in geo-rep as read-only
16:26 glusterbot News from newglusterbugs: [Bug 1225546] Pass slave volume in geo-rep as read-only <https://bugzilla.redhat.com/show_bug.cgi?id=1225546>
16:27 s19n left #gluster
16:34 maveric_amitc_ joined #gluster
16:38 MontyCarleau joined #gluster
16:43 julim joined #gluster
16:44 soumya joined #gluster
16:55 neofob joined #gluster
16:56 glusterbot News from newglusterbugs: [Bug 1225572] nfs-ganesha: Getting issues for nfs-ganesha on new nodes of glusterfs,error is /etc/ganesha/ganesha-ha.conf: line 11: VIP_<hostname with fqdn>=<ip>: command not found <https://bugzilla.redhat.com/show_bug.cgi?id=1225572>
17:14 julim joined #gluster
17:22 Prilly joined #gluster
17:23 Rapture joined #gluster
17:28 squizzi joined #gluster
17:28 marcoswk joined #gluster
17:32 karnan joined #gluster
17:35 marcoswk Hi folks. I've got a bit of a puzzler with our Gluster 3.6.1 install
17:36 marcoswk `gluster volume heal haikuvol info` is giving us a list of gfids that are never healing
17:37 marcoswk The gfids that won't heal seem to be referencing directories
17:37 marcoswk For our clients, trying to list files in these directories results in "Stale file handle" errors
17:38 marcoswk And digging into the .glusterfs directory, a symlink for the directory exists on one brick in the pair but does *not* exist on the other brick in the pair
17:38 marcoswk So far we've tried the following
17:39 julim joined #gluster
17:39 marcoswk 1) stopping, waiting, and then starting glusterd and glusterfsd on each server where our brick pairs live
17:39 marcoswk 2) manually creating the symlink in the .glusterfs directory on the server where it wasn't
17:40 marcoswk Neither of those worked
17:40 marcoswk Any suggestions? Any other details I could provide here that might help with troubleshooting?
17:48 JoeJulian marcoswk: check the ,,(extended attributes) on those directories.
17:48 glusterbot marcoswk: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
17:48 squizzi joined #gluster
17:48 marcoswk looking...
17:48 JoeJulian 3.6 has some new split-brain resolving methods, too, that I'm not up to speed on.
17:51 marcoswk Looking at the extended attributes on the actual file in the brick, the attributes are identical on both bricks
17:51 marcoswk [root@web26 ~]# getfattr -m .  -d -e hex /mnt/data00/brick1/data/allenstevenson_data/import_files/
17:51 marcoswk getfattr: Removing leading '/' from absolute path names
17:51 marcoswk # file: mnt/data00/brick1/data/allenstevenson_data/import_files/
17:51 marcoswk trusted.afr.dirty=0x000000000000000000000000
17:51 marcoswk trusted.afr.haikuvol-client-2=0x000000000000000000000000
17:51 marcoswk trusted.afr.haikuvol-client-3=0x000000000000000000000000
17:51 marcoswk trusted.gfid=0x0fe5e9c4422c46e19bb1db02ad4f444e
17:51 marcoswk trusted.glusterfs.dht=0x00000001000000007fff1149ffffffff
17:51 marcoswk [root@web27 ~]# getfattr -m .  -d -e hex /mnt/data00/brick1/data/allenstevenson_data/import_files/
17:51 marcoswk getfattr: Removing leading '/' from absolute path names
17:51 marcoswk # file: mnt/data00/brick1/data/allenstevenson_data/import_files/
17:51 JoeJulian gaj!
17:51 marcoswk trusted.afr.dirty=0x000000000000000000000000
17:51 marcoswk trusted.afr.haikuvol-client-2=0x000000000000000000000000
17:51 marcoswk trusted.afr.haikuvol-client-3=0x000000000000000000000000
17:51 marcoswk trusted.gfid=0x0fe5e9c4422c46e19bb1db02ad4f444e
17:51 marcoswk trusted.glusterfs.dht=0x00000001000000007fff1149ffffffff
17:52 marcoswk sorry! I'm newish to IRC, is there a better way to paste that in?
17:52 JoeJulian @paste
17:52 glusterbot JoeJulian: For RPM based distros you can yum install fpaste, for debian, ubuntu, and arch it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
17:52 JoeJulian Or just go to fpaste.org
17:52 marcoswk got it
17:53 JoeJulian Well, those are not split-brain.
17:53 JoeJulian Looks perfectly healthy from here.
17:54 partner hey Joe, whassup?
17:54 JoeJulian The cost of living.
17:56 marcoswk Here's the extended attr info for the symlinks in .glusterfs dir
17:56 marcoswk https://gist.github.com/metavida/dbeecf69d49300b4ee3c
17:56 partner i hear you..
17:56 marcoswk ah... a co-worker manually created the symlinks in the .gusterfs dir
17:56 marcoswk Everything looking identical
17:56 marcoswk and yet...
17:57 marcoswk https://gist.github.com/metavida/3d87b96a583b2ee8b0f6
17:57 marcoswk still shows up on the `gluster volume heal haikuvol info` list as unhealthy
17:59 marcoswk and client still gets "stale file handle"
17:59 marcoswk https://gist.github.com/metavida/3d87b96a583b2ee8b0f6#file-client-access
18:00 marcoswk So the missing .glusterfs symlink must only be part of the problem? because clearly `gluster volume heal haikuvol info` is looking at some other info to know that things are still wrong
18:03 JoeJulian Check the nfs.log (you're mounting via nfs, right?) or glustershd.log (the latter would need checked on every server) for that directory. See if there's a hint as to what's failing.
18:03 JoeJulian Also check gluster volume status to make sure everything is up.
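(i.e., on each server, something along the lines of:)

    gluster volume status haikuvol
    grep import_files /var/log/glusterfs/glustershd.log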
18:05 marcoswk We're using gluster client to mount, so I'm not seeing an nfs.log on clients
18:06 JoeJulian Whoah... you're getting an ESTALE from the fuse client? That's odd.
18:07 JoeJulian Now that I've said that... I seem to remember some bug related to that...
18:08 marcoswk https://gist.github.com/metavida/665a0f77423512d75db0
18:08 marcoswk Interesting results in /var/log/glusterfs/glhaiku.log
18:08 marcoswk "gfid different on haikuvol-replicate-0."
18:09 JoeJulian Aha
18:10 JoeJulian You know how to fix that with setfattr, I assume.
18:11 marcoswk So you're saying that the extended attributes are pointing at a different gfid on the 2 bricks?
18:11 marcoswk (or rather, that's what the error is implying?)
18:13 JoeJulian It's saying that there's a gfid mismatch on that between bricks. The two you pasted were the same, but maybe not on the other two.
18:14 JoeJulian ... and I'm not sure about the local vs subvol thing.
18:14 gem joined #gluster
18:15 marcoswk ah! yeah so I just confirmed that the 2 bricks in our other pair do indeed have different gfid values in extended attributes
18:15 marcoswk So as far as fixing with setfattr. Does it matter which gfid we pick?
18:15 JoeJulian Pick one, make them all the same, delete the errant symlinks.
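(A sketch of that repair, using the gfid and path pasted earlier in this session; run it on the brick(s) whose gfid differs, then let self-heal rebuild the link:)

    # on the out-of-sync brick, overwrite the divergent gfid with the chosen one
    setfattr -n trusted.gfid -v 0x0fe5e9c4422c46e19bb1db02ad4f444e \
        /mnt/data00/brick1/data/allenstevenson_data/import_files
    # remove the .glusterfs symlink that still points at the old gfid (placeholder path)
    rm /mnt/data00/brick1/.glusterfs/XX/YY/<old-gfid>
    # kick off a heal and re-check
    gluster volume heal haikuvol
    gluster volume heal haikuvol info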
18:16 marcoswk I also see that the "trusted.glusterfs.dht" value is different between the replicas
18:16 marcoswk Does that value matter?
18:16 JoeJulian That should be.
18:16 marcoswk ok
18:16 JoeJulian They're hash maps.
18:16 JoeJulian @lucky dht misses are expensive
18:16 glusterbot JoeJulian: https://joejulian.name/blog/dht-misses-are-expensive/
18:16 JoeJulian See that article for details.
18:17 marcoswk awesome! Going to give setfattr a try, and thanks for the follow-up reading on dhts
18:17 JoeJulian You're welcome. :)
18:23 Gill joined #gluster
18:28 spalai joined #gluster
18:36 Pupeno joined #gluster
18:43 karnan joined #gluster
18:44 Philambdo1 joined #gluster
18:53 jmarley joined #gluster
19:21 hagarth joined #gluster
19:37 Gill joined #gluster
19:39 Gill_ joined #gluster
19:40 nishanth joined #gluster
19:41 lpabon joined #gluster
20:09 dgbaley joined #gluster
20:11 dgbaley Hey. Does mounting over NFS have any limitations on the fs side? e.g., ACLs, hardlinks, extended attributes, etc. I'd like to try out using gluster for the NFS roots of my pxe-booted computer lab.
20:20 Prilly joined #gluster
20:30 Vortac joined #gluster
20:36 badone_ joined #gluster
21:04 gildub joined #gluster
21:16 mbukatov joined #gluster
21:24 plarsen joined #gluster
21:40 marbu joined #gluster
21:41 lkoranda joined #gluster
21:57 lkoranda joined #gluster
21:58 marbu joined #gluster
21:58 tom[] left #gluster
22:03 mbukatov joined #gluster
22:03 Prilly joined #gluster
22:09 lkoranda joined #gluster
22:46 ShaunR JoeJulian: you around? i'm still mucking with this php cluster.  I've implemented php-fpm, apc and nfs and the speed has improved but is still slow.  Have you done much testing with SSDs instead?
23:04 JoeJulian ShaunR: have you moved sessions into memcached? Disk based sessions suck.
23:11 ShaunR JoeJulian: joomla defaults to database stored sessions
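(If sessions ever do land on the shared filesystem, the usual php-side fix is to point the session handler at memcached; a sketch of php.ini settings assuming the php-memcached extension and a local memcached instance — the older "memcache" extension uses a different handler name and a tcp:// save_path:)

    ; store PHP sessions in memcached instead of files on disk/glusterfs
    session.save_handler = memcached
    session.save_path    = "localhost:11211"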
23:12 cholcombe joined #gluster
23:12 ShaunR as of now the db queries only accounts for 50 ms of load time
23:14 ShaunR i created another volume with 2 ssd bricks and going to test that.  copying the data from the first volume to the second right now and the network is only running at 10-15 mbit
23:22 ShaunR hmm, looks to have cut the load time down by a 30-40
23:22 ShaunR %
23:23 ShaunR still sucks though compared to a native sata drive
23:30 JoeJulian It's an entirely different job.
23:30 JoeJulian Don't compare apples to orchards.
23:35 plarsen joined #gluster
23:37 ShaunR JoeJulian: I'm just having a hard time with this... The idea of this cluster is to add redundancy and performance, so that the site can handle a ton of traffic and expand as needed easily.  The problem is that the storage right now is so slow, that a bench on the app can barely handle anything at all
23:38 ShaunR i mean, i could probably use a 512 MB ram vps with a single sata disk and it would handle more traffic
23:38 ShaunR Maybe gluster isn't meant for this, if that's the case any idea what is?
23:39 JoeJulian rsync?
23:39 JoeJulian You're right. Two machines could handle less traffic with a clustered filesystem.
23:40 JoeJulian You add redundancy at the cost of latency.
23:41 JoeJulian Compound that latency with tools that constantly touch the disk and you're up the proverbial creek without the appropriate propulsion device.
23:42 JoeJulian Take that same workload, expand it to 10 servers or 100. Now you can take many multiples of the same traffic.
23:43 JoeJulian The apples and orchards analogy: If you have a basket of apples, you can feed 50 people very quickly. Now if you have 2000 people, it's much more efficient to send them in to the orchard to pick their own. Yes, each one is going to pay a performance penalty, but the entire workload will be done in a fraction of the time over you going out and picking each apple yourself.
23:44 JoeJulian Perhaps you're not at the scale where you'll see a benefit from clustering.
23:45 JoeJulian But if you plan to be, use the tools available to you and design a system that satisfies your needs and the needs of your expected growth.
23:45 JoeJulian That's engineering.
23:46 ShaunR the only benefit i see is redundancy.  even if this cluster was 10 servers large, my gut tells me that 2 servers (1 web, 1 mysql) would outperform it
23:46 JoeJulian And I don't know if gluster is the right tool for you. I've used it with php sites effectively. My colleagues have used it with java, python, ruby, and even php effectively.
23:48 JoeJulian ... so... with apc and apc.stat=0, what do you have that's still hitting storage?
23:48 ShaunR JoeJulian: i notice you in ceph too, cephfs looks to be in no state for production use... but have you played with block mounts with gfs2 or anything like that at all?
23:50 ShaunR JoeJulian: unsure, does gluster have a tool to show me hits?
23:51 JoeJulian gfs2 is nearly as bad as cephfs
23:52 JoeJulian And even as a block device, ceph has slower throughput than gluster.
23:55 ShaunR i was thinking it may handle small files better
23:55 JoeJulian It's not so much the size of the file as the excessive steps that php uses to get to a file.
23:57 JoeJulian Depending on your pathing, php can do (frequently does) 100 lookups per file.
23:58 ShaunR ya, that's a lot considering most php apps these days use a ton of files
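(The usual mitigations for that lookup storm on a network filesystem, as an illustrative php.ini sketch; values are examples, not recommendations:)

    ; stop APC from stat()ing every cached file on each request
    apc.stat = 0
    ; cache resolved include paths so repeated lookups avoid the filesystem
    realpath_cache_size = 4096k
    realpath_cache_ttl  = 600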
23:59 julim joined #gluster
