
IRC log for #gluster, 2014-04-14


All times shown according to UTC.

Time Nick Message
00:01 tdasilva joined #gluster
00:16 RobertLa- joined #gluster
00:21 bala joined #gluster
00:30 hagarth joined #gluster
00:44 gdubreui joined #gluster
00:51 gmcwhistler joined #gluster
00:55 davinder joined #gluster
01:00 hagarth joined #gluster
01:11 gmcwhistler joined #gluster
01:26 RobertLaptop_ joined #gluster
01:27 sputnik13 joined #gluster
01:33 tdasilva left #gluster
01:41 RobertLaptop_ joined #gluster
01:53 RobertLaptop_ joined #gluster
02:12 RobertLaptop joined #gluster
02:16 RobertLaptop_ joined #gluster
02:18 haomaiwa_ joined #gluster
02:26 RobertLaptop joined #gluster
02:29 baojg joined #gluster
02:31 Amanda joined #gluster
03:01 nightwalk joined #gluster
03:06 hchiramm__ joined #gluster
03:09 bharata-rao joined #gluster
03:27 kdhananjay joined #gluster
03:30 shubhendu joined #gluster
03:35 kanagaraj joined #gluster
03:49 itisravi joined #gluster
03:58 shubhendu joined #gluster
04:10 ndarshan joined #gluster
04:13 haomaiw__ joined #gluster
04:13 atinm joined #gluster
04:14 deepakcs joined #gluster
04:23 rastar joined #gluster
04:30 ndk joined #gluster
04:38 spandit joined #gluster
04:51 ppai joined #gluster
04:54 dusmant joined #gluster
05:06 prasanth_ joined #gluster
05:07 benjamin_ joined #gluster
05:22 haomaiwang joined #gluster
05:25 baojg joined #gluster
05:27 Philambdo joined #gluster
05:29 lalatenduM joined #gluster
05:39 TvL2386 joined #gluster
05:39 raghu joined #gluster
05:42 TvL2386 hi guys! I'm having a replicated gluster filesystem running on a 600GB ext4 filesystem. No tuning to either glusterfs or ext4. According to `df -h` the size is 591G, used 568G and Avail 0. It's full but I'm missing 23G. I'm trying to find out how it is used and why Size != Used.
05:44 Alex df -i doesn't indicate 100% inode usage does it?
05:45 TvL2386 used: 1613984 free: 37707616 (5% use)
05:45 TvL2386 it just popped in my mind that sometimes 5% of space is kept for root usage or something
05:45 TvL2386 I've seen that value in installers.... Searching for it
05:47 ravindran1 joined #gluster
05:47 ravindran1 left #gluster
05:49 Alex Ah, er, yeah. You can alter that with tune2fs -m 0 yourdevice
05:49 Alex Obviously don't make that change without being sure you want to, etc.
05:49 TvL2386 Thanks Alex!
05:51 TvL2386 just confirmed reserved blocks for root to be at 5%
05:52 TvL2386 I changed it and got 24G free now :)
05:53 TvL2386 glusterfs is running from a dedicated partition. I don't mind if it has no reserved blocks
05:54 Alex *nod* - we do the same here
05:55 TvL2386 cool :)
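The missing 23G above is consistent with ext4's default 5% root reservation; the arithmetic and the tune2fs steps Alex describes can be sketched as follows (the device path is a made-up placeholder):

```shell
# df reported Size 591G, Used 568G, Avail 0 -- the gap is the
# root-reserved block count. Reconcile the numbers:
size=591; used=568; avail=0
echo "missing: $((size - used - avail))G"    # -> missing: 23G

# On a real brick you would inspect and clear the reservation
# (device path is a placeholder; as noted above, only do this on
# a dedicated data partition, and only if you're sure):
#   tune2fs -l /dev/sdb1 | grep -i 'reserved block'
#   tune2fs -m 0 /dev/sdb1
```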
05:56 TvL2386 what would you use as fs underneath glusterfs?
05:56 Alex I'm currently using XFS, as we had issues with ext4 and Unicode
05:57 Alex I think most people recommend XFS, but it's probably immaterial
05:57 Alex (for most workloads)
05:57 TvL2386 I see
05:57 Alex https://bugzilla.redhat.com/show_bug.cgi?id=1024181 for background
05:57 glusterbot Bug 1024181: unspecified, unspecified, ---, csaba, NEW , Unicode filenames cause directory listing interactions to hang/loop
05:59 shubhendu joined #gluster
06:20 RameshN joined #gluster
06:21 psharma joined #gluster
06:21 ngoswami joined #gluster
06:28 vimal joined #gluster
06:31 Pavid7 joined #gluster
06:33 glusterbot New news from newglusterbugs: [Bug 1087173] Gluster module (purpleidea) blocks unless firewall stopped with no errors reported <https://bugzilla.redhat.com/show_bug.cgi?id=1087173> || [Bug 1087177] Gluster module (purpleidea) fails on mkfs exec command <https://bugzilla.redhat.com/show_bug.cgi?id=1087177>
06:36 rahulcs joined #gluster
06:47 rastar joined #gluster
06:50 williamj_ joined #gluster
06:50 shubhendu joined #gluster
06:51 ekuric joined #gluster
06:53 rgustafs joined #gluster
06:57 purpleidea thanks glusterbot!
07:03 eseyman joined #gluster
07:04 ctria joined #gluster
07:05 saurabh joined #gluster
07:07 rahulcs joined #gluster
07:15 vpshastry joined #gluster
07:18 rastar joined #gluster
07:20 nshaikh joined #gluster
07:22 keytab joined #gluster
07:32 pvh_sa joined #gluster
07:35 prasanth_ joined #gluster
07:35 haomaiwang joined #gluster
07:35 Pavid7 joined #gluster
07:36 fsimonce joined #gluster
07:37 hybrid512 joined #gluster
07:38 hybrid512 joined #gluster
07:41 ctria joined #gluster
07:47 haomai___ joined #gluster
07:56 monotek joined #gluster
08:08 X3NQ joined #gluster
08:30 saravanakumar1 joined #gluster
08:32 shubhendu|lunch joined #gluster
08:43 meghanam joined #gluster
08:43 rahulcs joined #gluster
08:43 calum_ joined #gluster
08:47 Pavid7 joined #gluster
08:47 Humble joined #gluster
08:49 pvh_sa joined #gluster
08:53 liquidat joined #gluster
09:03 haomaiwang joined #gluster
09:05 haomai___ joined #gluster
09:08 Humble joined #gluster
09:13 rahulcs joined #gluster
09:14 baojg_ joined #gluster
09:18 TvL2386 joined #gluster
09:19 nishanth joined #gluster
09:19 nthomas joined #gluster
09:26 baojg_ left #gluster
09:27 baojg_ joined #gluster
09:31 ekuric joined #gluster
09:32 rastar joined #gluster
09:33 dusmant joined #gluster
09:34 rahulcs joined #gluster
09:37 ctria joined #gluster
09:38 baojg joined #gluster
09:41 baojg joined #gluster
09:41 bazzles joined #gluster
09:42 vcauw joined #gluster
09:42 qdk joined #gluster
09:46 hflai joined #gluster
09:46 RobertLaptop joined #gluster
09:47 xavih joined #gluster
09:53 aravindavk joined #gluster
09:54 neoice joined #gluster
10:06 pvh_sa joined #gluster
10:14 rahulcs joined #gluster
10:18 harish_ joined #gluster
10:36 ira_ joined #gluster
10:40 XATRIX joined #gluster
10:40 XATRIX Hi guys, how can i initiate gluster self-heal ?
10:40 XATRIX [root@vox1-ua xatrix]# cat /mnt/mail/rf.ua/xatrix/dovecot.index.log
10:40 XATRIX cat: /mnt/mail/rf.ua/xatrix/dovecot.index.log: Input/output error
10:40 XATRIX The same on the second node
10:41 XATRIX [root@vox1-ua xatrix]# ll /mnt/mail/rf.ua/xatrix/dovecot.index.log
10:41 XATRIX -rw-rw---- 1 dovecot mail 11672 Apr 13 23:27 /mnt/mail/rf.ua/xatrix/dovecot.index.log
10:47 Andyy2 XATRIX: might be in split brain.
10:47 Pavid7 joined #gluster
10:47 XATRIX Andyy2: possibly, but how can i fix things up ?
10:48 Andyy2 http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
10:48 glusterbot Title: Fixing split-brain with GlusterFS 3.3 (at joejulian.name)
10:48 vpshastry1 joined #gluster
10:50 XATRIX Andyy2: seems like i don't have splitbrain: http://ur1.ca/h2rnn
10:50 glusterbot Title: #94020 Fedora Project Pastebin (at ur1.ca)
10:51 Andyy2 check the client logs for errors (grep ' E ' <client.log>)
10:52 Andyy2 also the bricks.
10:53 Andyy2 you could start a heal operation too. I have no other ideas. normally io errors are because gluster is blocking access to a file, because of split-brain.
10:55 XATRIX Andyy2: take a look please http://ur1.ca/h2rol
10:55 glusterbot Title: #94021 Fedora Project Pastebin (at ur1.ca)
10:55 XATRIX Seems like i have a broken connection? But the first lines say my daemons are in contact
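Andyy2's `grep ' E '` suggestion works because gluster logs tag severity with a single letter between spaces; a sketch against fabricated log lines (these are illustrative, not from XATRIX's paste):

```shell
# E = error, W = warning, I = info in gluster log lines.
# Filter only the error lines; the sample lines below are made up.
log='[2014-04-14 10:40:01] E [afr-common.c] 0-vol-replicate-0: failing readv due to pending heal
[2014-04-14 10:40:02] I [client.c] 0-vol-client-0: Connected to 10.0.0.2'
printf '%s\n' "$log" | grep ' E '
```

Against a real client log this would be `grep ' E ' /var/log/glusterfs/<mountpoint>.log`.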
10:58 GabrieleV_ joined #gluster
11:02 ndk joined #gluster
11:07 dusmant joined #gluster
11:12 vpshastry1 joined #gluster
11:13 ppai joined #gluster
11:18 gdubreui joined #gluster
11:24 aravindavk joined #gluster
11:29 kdhananjay1 joined #gluster
11:34 glusterbot New news from newglusterbugs: [Bug 921215] Cannot create volumes with a . in the name <https://bugzilla.redhat.com/show_bug.cgi?id=921215>
11:38 siel joined #gluster
11:40 XATRIX Damn, can't make self heal start
11:41 shubhendu|lunch joined #gluster
11:43 bazzles joined #gluster
11:44 XATRIX Can i use Server-quorum just for 2 nodes cluster ?
11:47 dusmant joined #gluster
11:50 Pavid7 joined #gluster
11:53 bfoster joined #gluster
11:54 gdubreui joined #gluster
12:01 XATRIX Ok, seems like i've fixed it a bit
12:02 itisravi_ joined #gluster
12:04 Ark joined #gluster
12:11 andreask joined #gluster
12:12 benjamin_ joined #gluster
12:21 dusmant joined #gluster
12:29 Andyy2 XATRIX: Sorry was away. guess you found the issue: split-brain for those dovecot logs. quorum will not work on two nodes.
12:31 Pavid7 joined #gluster
12:34 glusterbot New news from newglusterbugs: [Bug 1087173] [RFE] Gluster module (purpleidea) to support iptables <https://bugzilla.redhat.com/show_bug.cgi?id=1087173>
12:36 hagarth joined #gluster
12:42 B21956 joined #gluster
12:43 sroy_ joined #gluster
12:45 gothos joined #gluster
12:45 gothos left #gluster
12:54 ctria joined #gluster
12:55 XATRIX Andyy2: yea, i've fixed it. By deleting the file
12:55 XATRIX So it was deleted on the opposite node, and recreated by dovecot
12:56 XATRIX How does gluster check for file consistency? By checksum?
13:05 glusterbot New news from newglusterbugs: [Bug 1087487] DHT - rebalance - output of 'gluster volume rebalance start/start force/fix-layout start ' is ambiguous and poorly formatted <https://bugzilla.redhat.com/show_bug.cgi?id=1087487>
13:12 diegows joined #gluster
13:13 Staples84 joined #gluster
13:15 japuzzo joined #gluster
13:19 tdasilva joined #gluster
13:28 itisravi_ joined #gluster
13:29 Andyy2 XATRIX: see here: http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/ re: keeping stuff in sync.
13:29 glusterbot Title: What is this new .glusterfs directory in 3.3? (at joejulian.name)
13:30 Andyy2 XATRIX: If you want to dig deeper, read this: http://hekafs.org/index.php/2012/03/glusterfs-algorithms-replication-present/
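The .glusterfs layout Joe Julian's post describes can be sketched: every file on a brick gets a hardlink under .glusterfs, bucketed by the first two hex byte-pairs of its gfid (the gfid value below is made up for illustration):

```shell
# Backend path for a gfid under a brick's .glusterfs directory:
#   .glusterfs/<hex chars 1-2>/<hex chars 3-4>/<full gfid>
# (the gfid here is a fabricated example)
gfid=1a2b3c4d-0000-1111-2222-333344445555
p1=$(printf %s "$gfid" | cut -c1-2)
p2=$(printf %s "$gfid" | cut -c3-4)
echo ".glusterfs/$p1/$p2/$gfid"
# -> .glusterfs/1a/2b/1a2b3c4d-0000-1111-2222-333344445555
```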
13:34 XATRIX Alright, thanks for docs. AFK for reding
13:34 XATRIX *reading
13:38 japuzzo Seems we have a Y2K level event with HeartBleed, I'm personally waiting for the infomercial that will sell you the cure
13:40 Andyy2 japuzzo: shall be fixed by now.
13:47 siel joined #gluster
13:53 dbruhn joined #gluster
14:02 dbruhn_ joined #gluster
14:02 japuzzo Andyy2, Can you please fix the media as well? Thank you, please
14:03 Philambdo joined #gluster
14:06 japuzzo Andyy2, Sorry that comment was meant for another channel, I was not trying to be rude
14:08 jobewan joined #gluster
14:10 elico joined #gluster
14:21 LoudNois_ joined #gluster
14:30 rpowell joined #gluster
14:31 itisravi_ joined #gluster
14:45 lmickh joined #gluster
14:48 nullck joined #gluster
14:50 jobewan joined #gluster
14:52 catdevnull joined #gluster
14:52 tdasilva joined #gluster
14:54 rpowell joined #gluster
14:54 dbruhn joined #gluster
15:00 diegows joined #gluster
15:01 daMaestro joined #gluster
15:01 Pavid7 joined #gluster
15:02 Slasheri joined #gluster
15:02 Slasheri joined #gluster
15:03 rpowell joined #gluster
15:04 kaptk2 joined #gluster
15:05 jag3773 joined #gluster
15:13 catdevnull I have a volume that's supposed to be 25x2 distributed-replicate, yet running 'find -type f' on all bricks and piping this through 'sort | uniq -c' shows that there are several files that have only one copy and several that have more than 2 (5 in some cases (?!)). Two questions: 1) wtf? 2) how do I fix this? I've already rebalanced.
15:14 benjamin_____ joined #gluster
15:16 ndk joined #gluster
15:17 dbruhn catdevnull, are some of them showing the permissions T
15:17 catdevnull dbruhn: yes; rerunning with ! -size 0 ! -perm 1000 now to exclude them
15:17 dbruhn still seeing a bunch of them though?
15:18 catdevnull the stats are 3069633x1 (!!!), 7715487x2, 197529x3, 994618x4, 70678x5, 121511x6, 472x7, 400x8, 12x10 and 4x12
15:19 catdevnull (I'll repaste once  the new find is done, but that'll take another ~10 minutes)
15:19 dbruhn if you are still seeing it messy after that, you'll have to essentially go through and treat them like they are in split-brain.
15:20 catdevnull I'm more concerned about the ones with only one copy, really
15:20 dbruhn if you stat those files they should selfheal and end up on the other replicant
15:20 dbruhn the ones with multiple copies will probably block with an input output error
15:22 social hmm shouldn't the empty ones be the LINK files?
15:22 dbruhn the empty ones are link files
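catdevnull's copy-count check can be reproduced on fabricated brick listings; on a healthy Nx2 distributed-replicate volume every non-linkfile path should appear exactly twice (the file names below are made up):

```shell
# Concatenate per-brick file lists and count how many bricks hold
# each path. Linkfiles (the zero-size, mode-1000 "T" files) must be
# excluded first, e.g. with: find . -type f ! -size 0 ! -perm 1000
# The listings below are fabricated: dir/a is properly replicated,
# dir/b and dir/c each have only one copy.
brick1='dir/a
dir/b'
brick2='dir/a
dir/c'
printf '%s\n%s\n' "$brick1" "$brick2" | sort | uniq -c | sort -rn
```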
15:27 jrcresawn joined #gluster
15:29 kkeithley joined #gluster
15:39 gmcwhistler joined #gluster
15:40 hagarth joined #gluster
15:42 rpowell left #gluster
15:47 micu joined #gluster
16:05 Mo_ joined #gluster
16:06 davinder joined #gluster
16:07 rwheeler joined #gluster
16:08 nthomas joined #gluster
16:10 nishanth joined #gluster
16:18 asku joined #gluster
16:19 glusterbot New news from resolvedglusterbugs: [Bug 1087173] Gluster module (purpleidea) blocks unless firewall stopped with no errors reported <https://bugzilla.redhat.com/show_bug.cgi?id=1087173>
16:25 asku left #gluster
16:33 systemonkey joined #gluster
16:34 systemonkey joined #gluster
16:52 vpshastry1 joined #gluster
16:54 rpowell joined #gluster
16:54 vpshastry1 left #gluster
16:55 sputnik13 joined #gluster
16:59 kkeithley joined #gluster
17:02 rpowell1 joined #gluster
17:04 rpowell joined #gluster
17:05 Pavid7 joined #gluster
17:06 baojg joined #gluster
17:07 rpowell2 joined #gluster
17:12 rpowell2 left #gluster
17:26 nishanth joined #gluster
17:27 nthomas joined #gluster
17:30 pvh_sa joined #gluster
17:44 calum_ joined #gluster
17:54 baojg joined #gluster
17:54 zerick joined #gluster
18:00 jag3773 joined #gluster
18:01 hagarth joined #gluster
18:08 monotek i use glusterfs 3.4.3. "gluster volume status --xml" just stopped working on one node (the others work).
18:08 monotek no output... "gluster volume status" works.
18:09 monotek how can i reenable it?
18:13 jclift That's wird.
18:13 * jclift hasn't heard of that happening before
18:13 jclift weird
18:13 rahulcs joined #gluster
18:14 jclift Anything showing up in the Gluster logs on that one node?
18:24 lpabon joined #gluster
18:29 nightwalk joined #gluster
18:32 kkeithley joined #gluster
18:40 jbd1 joined #gluster
18:43 baojg joined #gluster
19:01 andreask joined #gluster
19:15 baojg joined #gluster
19:15 rahulcs joined #gluster
19:17 baojg joined #gluster
19:18 RoyAndre joined #gluster
19:22 monotek no, everything else is working fine. restart didn't help either... it worked minutes before.... strange....
19:28 monotek now i found some "op_ctx modification failed" messages in the logs... seems to happen when nagios is triggering the command....
19:33 jiffe98 seems if I try to nfs export a gluster mount and reimport it, it works fine until apache tries to access it, after which everything hangs
19:34 jiffe98 not seeing any errors in the logs
19:37 dbruhn jiffe98, what happens if you take apache out of the picture and try and access the stuff yourself?
19:38 edward1 joined #gluster
19:38 DanF joined #gluster
19:39 DanF Hi, Newbie here, playing with gluster. If I have a volume of type distribute and a brick disappears for whatever reason, how I can get gluster to a) notice a brick is dead. b) tell me which files are unavailable due to that brick being dead
19:41 jiffe98 dbruhn: I can list, read and write fine up until apache accesses it
19:42 pvh_sa joined #gluster
19:46 dbruhn jiffe98, what logs are you checking?
19:46 dbruhn DanF, gluster is aware the brick is gone, it just keeps working without it.
19:47 dbruhn the files that are unavailable stop showing up in the filesystem
19:48 DanF ok, so there is no gluster command to report which files are unavailable? I would just have to traverse the gluster filesystem and see which files produce IO errors?
19:48 dbruhn I believe in a distributed volume if a brick is offline the files don't even show up in the file system
19:48 dbruhn I would have to test that though
19:49 DanF That seems to agree with what I see here
19:49 jiffe98 dbruhn: client and server logs
19:49 jiffe98 gluster logs that is
19:50 dbruhn Are you running your servers as clients?
19:51 jiffe98 no servers are on their own standalone machines
19:52 jiffe98 this used to work with 3.3.1 and ubuntu 12.04, I'm trying this with 3.4.2 on ubuntu 14.04 now
19:54 dbruhn hmm ok, weird. I don't have any 3.4.x systems to test anything on
19:57 dbruhn When you run into the issues from apache, do other clients suffer too, or only the one?
19:59 jiffe98 seems to be just that client
19:59 Matthaeus joined #gluster
20:00 dbruhn Does the behavior happen to all clients accessing it via apache
20:00 jiffe98 I only have the one vm setup so far for testing
20:00 hagarth joined #gluster
20:01 jiffe98 if I mount with fuse or nfs direct its fine
20:01 rahulcs joined #gluster
20:01 dbruhn wait, you're resharing an NFS mount via NFS again?
20:01 jiffe98 no
20:02 jiffe98 I'm mounting with fuse and then exporting that via nfs
20:02 dbruhn why?
20:03 jiffe98 for two reasons, one it gives me redundancy while using nfs, and it also seems to speed things up over mounting nfs using the gluster nfs server
20:04 dbruhn Interesting, I guess the only thing I can say to that is you are recreating how gluster already serves via NFS
20:05 dbruhn So I am not sure why it would be faster
20:05 dbruhn are you using NFSv.3 or 4?
20:05 jiffe98 my guess is it handles all the php negative lookups locally
20:06 jiffe98 I tried both
20:09 rahulcs joined #gluster
20:12 dbruhn So you have Apache -> NFS Client -> |Network| NFS Server-> GlusterFuse -> GlusterServer
20:15 Guest33936 Anyone have any tips on how to improve performance of a gluster mount under heavy read load from 11 client servers? The problem is cpu wait spikes and the network dies. I can provide mount options if that will help.
20:15 jiffe98 that sounds right
20:15 jiffe98 everything but the gluster server is on the same machine
20:15 dbruhn Ahh ok
20:16 dbruhn Apache -> NFS Client -> NFS Server -> Gluster Fuse Client -> |Network| -> Gluster Server
20:17 jiffe98 gotcha yeah that's how this is setup
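That chain can be sketched as two commands (hostname, volume name, mountpoint, and client subnet are made-up examples; note that exporting a FUSE mount through the kernel NFS server generally requires an explicit fsid=):

```shell
# Mount the volume with the gluster FUSE client, then re-export
# the mountpoint via the kernel NFS server. fsid= is needed
# because FUSE filesystems lack a stable device number for NFS
# to identify the export by.
# (all names and the subnet below are hypothetical)
mount -t glusterfs gluster1:/myvol /mnt/gluster
exportfs -o rw,fsid=10,no_subtree_check 10.0.0.0/24:/mnt/gluster
```

This is a configuration sketch only; it needs a live gluster server and root privileges to run.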
20:17 dbruhn Guest33936, go ahead? But it already sounds like you've defined a resource issue?
20:17 dbruhn jiffe98, distro?
20:17 jiffe98 ubuntu 14.04
20:18 dbruhn When it stops working does the apache log say anything?
20:19 Guest33936 dbruhn: i believe the client cannot read the information from gluster fast enough and the network hangs, does that sound reasonable? Default 3.4 brick settings on a 4 brick distributed-replicated volume. /etc/fstab my_shiny_ip:/vertica-load /opt/vertica/load  glusterfs noauto,rw,defaults 0 0
20:20 Guest33936 sorry, rather the gluster client cannot give the data fast enough to the OS requesting the data and then the network hangs.
20:22 dbruhn Guest33936, does the network stop working? or am I misreading that?
20:23 Guest33936 yes dbruhn, the network just stops and a service network restart brings it back up
20:24 jiffe98 dbruhn: nothing gets logged, no access or error logs
20:25 dbruhn Guest33936, that seems like you have a system issue or network issue going on and Gluster is having issues because of it.
20:25 Guest33936 ok I will do more investigation and report back later. Thank you.
20:26 dbruhn jiffe98, not super familiar with ubuntu logs, is there an NFS log or system log that shows all of NFS output?
20:30 calum_ joined #gluster
20:32 rahulcs joined #gluster
20:35 jiffe98 dbruhn yup kernel logs show nfs: server 127.0.0.1 not responding, still trying
20:41 jiffe98 verified that if I use the nfs server to export a local dir (not gluster), apache works
20:41 jiffe98 so it seems it's something between the nfs server and the fuse client
20:43 rahulcs joined #gluster
20:47 kkeithley joined #gluster
20:47 dbruhn weird
20:54 siel joined #gluster
20:56 semiosis :O
21:11 avati joined #gluster
21:12 ctria joined #gluster
21:17 rahulcs joined #gluster
21:29 Matthaeus joined #gluster
21:29 rahulcs_ joined #gluster
21:39 rahulcs joined #gluster
21:46 kkeithley joined #gluster
21:50 rahulcs joined #gluster
21:57 hagarth joined #gluster
21:59 jag3773 joined #gluster
22:02 rahulcs_ joined #gluster
22:07 criticalhammer does gluster support file checksums?
22:07 criticalhammer hopefully in a translator, while files are being written to a brick
22:07 criticalhammer and the checksum stored in a file extension
22:07 fidevo joined #gluster
22:11 criticalhammer I've found this http://gluster.org/community/documentation/index.php/Arch/BitRot_Detection
22:11 glusterbot Title: Arch/BitRot Detection - GlusterDocumentation (at gluster.org)
22:11 criticalhammer but thats about it
22:15 kkeithley joined #gluster
22:58 edong23 joined #gluster
23:03 vpshastry1 joined #gluster
23:04 jag3773 joined #gluster
23:17 jclift criticalhammer: Not yet.  If you need bit rot protection, then one way people have mentioned is using a filesystem on the bricks that supports it.  ZFS I think is what people have mentioned.
23:17 * jclift has no experience with ZFS though, and (at this stage) wouldn't know where to start
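Until such a translator exists, a manual sweep along the lines criticalhammer describes can be sketched (the xattr name is arbitrary, and storing checksums in xattrs assumes the brick filesystem supports them):

```shell
# Record a sha256 per file, then re-verify later to detect bit rot.
# Demonstrated on a temp file; on a real brick you would persist
# the sum in an xattr (user.sha256 is an arbitrary made-up name):
#   setfattr -n user.sha256 -v "$sum" "$f"
#   getfattr -n user.sha256 "$f"
f=$(mktemp)
printf 'payload\n' > "$f"
sum=$(sha256sum "$f" | cut -d' ' -f1)
[ "$(sha256sum "$f" | cut -d' ' -f1)" = "$sum" ] && echo "OK: no rot detected"
rm -f "$f"
```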
23:24 qdk joined #gluster
23:25 kkeithley joined #gluster
23:26 hagarth joined #gluster
23:55 Hydro joined #gluster
