
IRC log for #gluster, 2014-05-31


All times are shown in UTC.

Time Nick Message
00:08 onair joined #gluster
00:16 onair Hello! Is there a way to sync replicated volumes other than self-heal? I have a 3-node replicated gluster. Some time in the past, nodes #2 and #3 had an unnoticed problem and disconnected from the cluster. All the clients were connected to node #1. Now, node #1 has ~1.2M files more than the other two. If I trigger self-healing with find ... | xargs stat, clients start to hang for minutes when reading folder contents. As this is a production environment...
00:25 JoeJulian onair: What version?
00:25 onair JoeJulian: 3.3.1
00:37 bala joined #gluster
00:44 JoeJulian onair: Sorry, got called away.... 3.3.2 is current and the self-heal daemon will heal in the background. No need to do a "find".
00:44 JoeJulian If you wanted to do the find, mount the volume again in some other directory and do your find there.
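A minimal sketch of that approach, assuming a hypothetical volume "myvol", server "server1", and a spare mount point:

    # second, dedicated mount used only for healing
    mount -t glusterfs server1:/myvol /mnt/heal-client
    # stat every file; each stat's lookup() queues a self-heal via this mount,
    # leaving the users' mount alone
    find /mnt/heal-client -noleaf -print0 | xargs -0 stat > /dev/null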
00:47 onair JoeJulian: if I run the find, things get worse. Even if I don't run the find, if I run an ls on the client, it hangs for a few minutes (on a folder which has ~5k files... and all the folders tend to have around 5k files)
00:47 JoeJulian Right, because that client is trying to heal 5k files and can only background-heal 16.
00:48 JoeJulian If you mount another client, that independent client can block on heals and not affect your user client.
00:49 onair Is there a way to be sure that only that client will block on heals?
00:51 JoeJulian Turn off self-heal
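"Turn off self-heal" here presumably means the client-side self-heal volume options; a sketch, with "myvol" again a placeholder:

    gluster volume set myvol cluster.data-self-heal off
    gluster volume set myvol cluster.metadata-self-heal off
    gluster volume set myvol cluster.entry-self-heal off

These settings apply volume-wide, which is why the dedicated healing client needs its own edited vol file, as discussed below.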
00:53 JoeJulian That's something else I don't like but I'm not sure how to solve. Almost every tool that lists directories also does a stat of each file it finds. That stat causes a lookup() which triggers a self-heal.
00:53 JoeJulian If you were in one of those unhealed directories and just did an "echo *" it would return all the filenames instantly with no blocking.
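To illustrate the difference (directory and volume names are made up):

    cd /mnt/myvol/some-unhealed-dir
    echo *    # plain readdir: the shell expands the glob with no per-file stat
    ls -l     # stats every entry; each stat's lookup() can block on a self-heal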
00:54 onair I see
00:56 onair so, if I disable self-heal on the volume and mount an additional client, how will I trigger the sync?
00:56 onair sorry if I'm asking stupid questions
00:59 JoeJulian I was pondering the same thing. You would have to mount the volume using the gluster command (instead of mount) setting translator options. The only thing is, I think the server's vol settings will override the command line.
01:00 JoeJulian I guess the only way to do it would be to copy the $volname-fuse.vol file from the server to the client you want to use for healing, edit it to remove the options that disable self-heal, and mount using that vol file.
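A sketch of that workflow; the volfile path is where glusterd normally keeps its generated client volfiles, but verify it on your server, and the names are placeholders:

    scp server1:/var/lib/glusterd/vols/myvol/myvol-fuse.vol /root/heal.vol
    # edit /root/heal.vol: in the cluster/replicate subvolume, delete the
    # "option data-self-heal off" (and metadata-/entry-) lines set earlier
    mount -t glusterfs /root/heal.vol /mnt/heal-client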
01:00 onair I'll try that in a test environment
01:01 onair if I want to accelerate the healing process, is it OK to increase cluster.background-self-heal-count?
01:01 onair now it's set to 32
01:02 onair and I don't see any excessive I/O or network traffic
01:07 JoeJulian Sure. Watch memory usage and load but as long as it works for you, go for it.
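For example, keeping the caveat about memory and load in mind (volume name hypothetical, and 64 is just an illustration):

    gluster volume set myvol cluster.background-self-heal-count 64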
01:08 onair is there anything else that I can tune to accelerate the healing process?
01:10 onair and one last question... how can I mount the volume using the gluster command? I haven't done this before and I can't find this info in the docs right now.
01:11 JoeJulian mount -t glusterfs filename.vol $path
01:11 onair thank you!
01:16 onair one more thing... If I change cluster.background-self-heal-count, do I have to remount the volume on the clients or restart gluster's daemons?
01:50 sjm joined #gluster
02:01 harish joined #gluster
02:16 rturk joined #gluster
02:19 jcsp1 joined #gluster
02:20 haomaiwang joined #gluster
02:20 rturk joined #gluster
02:26 MugginsM joined #gluster
02:34 rturk joined #gluster
02:40 davinder6 joined #gluster
02:42 vpshastry joined #gluster
02:48 harish joined #gluster
03:02 jag3773 joined #gluster
03:11 aurigus joined #gluster
03:41 haomaiwa_ joined #gluster
03:46 xymox joined #gluster
03:46 hchiramm_ joined #gluster
03:46 ThatGraemeGuy joined #gluster
03:47 aurigus joined #gluster
03:53 jwww_ joined #gluster
03:56 primusinterpares joined #gluster
03:58 vpshastry left #gluster
04:00 mjsmith2 joined #gluster
04:01 haomaiwa_ joined #gluster
04:46 sputnik13 joined #gluster
04:53 vpshastry joined #gluster
05:16 mortuar joined #gluster
05:26 vpshastry joined #gluster
05:42 sputnik13 joined #gluster
06:18 ctria joined #gluster
07:37 ProT-0-TypE joined #gluster
07:41 edward1 joined #gluster
07:56 StarBeas_ joined #gluster
07:58 ThatGraemeGuy joined #gluster
08:17 StarBeast joined #gluster
08:21 ramteid joined #gluster
08:44 vpshastry joined #gluster
08:56 vpshastry left #gluster
09:03 abhi joined #gluster
09:06 abhi I have a gluster cluster setup. This is exposed to Windows as an NFS mount which is used by our web application. Now, the issue is the web application is reporting a 0-byte file if it tries to read the file it just wrote to. gluster is at 3.4.2. All settings are at their defaults on both the NFS client and Gluster server. Any ideas?
09:39 overclk joined #gluster
09:39 borreman_dk joined #gluster
09:39 neoice_ joined #gluster
09:41 suliba_ joined #gluster
09:41 k3rmat joined #gluster
09:47 harish joined #gluster
09:49 zerick joined #gluster
10:23 abhi joined #gluster
10:25 abhi left #gluster
10:28 abhi joined #gluster
10:28 abhi left #gluster
10:51 markd_ joined #gluster
11:11 markd_ joined #gluster
11:24 mdavidson joined #gluster
11:43 silky joined #gluster
11:45 decimoe joined #gluster
12:13 recidive joined #gluster
12:38 dusmant joined #gluster
12:41 ProT-O-TypE joined #gluster
12:48 vpshastry joined #gluster
12:57 diegows joined #gluster
13:09 vpshastry joined #gluster
13:17 edward1 joined #gluster
13:18 haomaiwang joined #gluster
13:25 haomaiw__ joined #gluster
13:44 hagarth joined #gluster
14:01 mjsmith2 joined #gluster
14:06 aravindavk joined #gluster
14:11 chirino joined #gluster
14:36 sputnik13 joined #gluster
14:37 jrcresawn joined #gluster
14:57 sjm joined #gluster
15:01 qdk_ joined #gluster
15:05 mjsmith2 joined #gluster
15:06 hagarth joined #gluster
15:21 _abhi joined #gluster
15:22 _abhi I am having a problem with files less than 100MB appearing as 0-byte files from my application running on Windows. Running 3.4.3.
15:23 _abhi this happens when the application tries to read the file immediately after it is written to gluster.
15:26 _abhi gluster is mounted as NFS using the NFS Client for Windows feature
15:34 in_ joined #gluster
15:39 foobar probably an NFS caching issue... try it against a regular NFS server (not gluster) ... you will probably see the same issues
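One way to test that theory from a Linux NFS client is to rule out attribute caching and asynchronous writes (these are standard Linux NFS mount options; server and volume names are placeholders):

    # gluster's built-in NFS server speaks NFSv3
    mount -t nfs -o vers=3,noac,sync gluster1:/myvol /mnt/nfs-test
    # write, then immediately read the size back
    dd if=/dev/zero of=/mnt/nfs-test/probe bs=1M count=10
    stat -c %s /mnt/nfs-test/probe   # expect 10485760, not 0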
15:53 vpshastry joined #gluster
15:54 foobar i/win 14
15:54 davinder6 joined #gluster
16:15 jag3773 joined #gluster
16:53 rotbeard joined #gluster
16:54 vpshastry left #gluster
16:59 Pavid7 joined #gluster
17:06 hagarth joined #gluster
17:08 _abhi http://paste.ubuntu.com/7560041/
17:08 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
17:08 _abhi I get this error in nfs.log when a Windows 2008 R2 machine tries to access it
17:08 _abhi Win 2012 can access it fine
17:09 _abhi running 3.5.0
17:16 _abhi left #gluster
17:16 _abhi joined #gluster
17:16 haomaiwa_ joined #gluster
17:27 haomai___ joined #gluster
17:28 haomaiw__ joined #gluster
17:29 haomai___ joined #gluster
17:36 Wizzup joined #gluster
17:37 hagarth joined #gluster
17:37 _abhi I get a "The file <%1 NULL:NameDest> is too large for the destination file system" error when I try to create a folder in a gluster volume from Windows 2008 R2
17:38 _abhi Windows 2012 deals with it fine
17:46 davinder6 joined #gluster
18:09 jrcresawn joined #gluster
18:17 Pavid7 joined #gluster
18:21 [o__o] joined #gluster
18:37 bennyturns joined #gluster
18:37 hagarth joined #gluster
19:04 ctria joined #gluster
19:08 jrcresawn joined #gluster
19:37 jcsp joined #gluster
19:57 hchiramm_ joined #gluster
20:16 XpineX_ joined #gluster
20:50 Pupeno joined #gluster
20:57 XpineX joined #gluster
21:25 _pol joined #gluster
21:51 crashmag joined #gluster
22:11 maduser joined #gluster
22:22 _pol joined #gluster
22:38 ceddybu1 joined #gluster
22:52 fidevo joined #gluster
23:22 diegows joined #gluster
