
IRC log for #gluster, 2016-04-28


All times shown according to UTC.

Time Nick Message
00:00 atrius joined #gluster
00:02 post-factum for the very first time in my company history every fscking single node is running *the same* gluster version
00:02 post-factum it was a very long road
00:03 MugginsM I'm still a long way from that
00:03 MugginsM got sooo close, then discovered there are no 3.7.x packages for Ubuntu Precise
00:04 post-factum hah
00:05 post-factum JoeJulian: if i ever meet you, I'll buy you some beer
00:05 post-factum that's for sure
00:06 MugginsM yeh, likewise
00:07 russoisraeli joined #gluster
00:13 johnmilton joined #gluster
00:20 harish joined #gluster
00:21 theron joined #gluster
00:23 davpostpunk i need to know if it's possible to upgrade gluster from 3.7.8 to 3.7.11 without having to stop the volume
00:28 Intensity joined #gluster
00:30 post-factum davpostpunk: you will have to remount the volume in case you need to update the client as well
00:31 post-factum davpostpunk: also, to upgrade server side with no downtime for clients, you need replica
00:32 post-factum davpostpunk: and please, stop cross-posting the same question to all channels :(
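
A minimal sketch of the client-side part of post-factum's advice above: after upgrading the client package, a FUSE mount has to be re-established to pick up the new binaries. The server name, volume name and mount point here are placeholders, not taken from the log.

    umount /mnt/myvol                               # stops the old glusterfs client process
    mount -t glusterfs server1:/myvol /mnt/myvol    # remounts with the freshly installed client
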
00:33 johnmilton joined #gluster
01:06 post-factum JoeJulian: what if gfid is not resolved to filename?
01:08 gem joined #gluster
01:11 JoeJulian Check the number of links on the gfid file. If it's only 1 on every server, then it's a dead file.
01:12 theron joined #gluster
01:13 theron joined #gluster
01:14 EinstCrazy joined #gluster
01:16 post-factum and then it could be removed from under .glusterfs/?
01:20 julim joined #gluster
01:20 post-factum it is 1 everywhere except 1 file
01:22 post-factum and that file is broken dovecot index, that could be removed as well
01:22 post-factum so, i just remove those from .glusterfs/?
01:28 MugginsM joined #gluster
01:33 post-factum yep, removing fixed remaining entries
01:33 post-factum ok, done with that
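
A sketch of the check JoeJulian describes above: each gfid file under .glusterfs is normally a hard link to the real file on the brick, so a link count of 1 on every replica means the real file is already gone and only the stale bookkeeping entry remains. The brick path and gfid below are placeholders; verify the count on every brick before removing anything.

    stat -c '%h %n' /data/brick1/.glusterfs/aa/bb/<gfid>   # first field is the hard-link count
    # only if it prints 1 on every replica and the content is not needed:
    rm /data/brick1/.glusterfs/aa/bb/<gfid>
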
01:34 ghenry joined #gluster
01:39 Marbug joined #gluster
01:58 russoisraeli joined #gluster
01:59 Lee1092 joined #gluster
02:02 EinstCrazy joined #gluster
02:05 caitnop joined #gluster
02:07 davpostpunk thanks post-factum sorry
02:07 davpostpunk so.. can i upgrade the servers one by one without stopping the volume?
02:08 davpostpunk can i, i want to say sorry
02:12 harish joined #gluster
02:33 harish_ joined #gluster
02:43 kdhananjay joined #gluster
02:46 natarej joined #gluster
02:56 julim joined #gluster
02:56 overclk joined #gluster
02:57 poing joined #gluster
03:07 aravindavk joined #gluster
03:08 virusuy joined #gluster
03:16 syadnom joined #gluster
03:19 virusuy joined #gluster
03:32 nishanth joined #gluster
03:35 gem joined #gluster
03:37 MugginsM joined #gluster
03:39 ashiq joined #gluster
03:39 Gnomethrower joined #gluster
03:40 EinstCrazy joined #gluster
03:44 akay Does anyone know why using the gfid-resolver takes so long to process? or if there is any way to speed it up?
03:46 nehar joined #gluster
03:54 nehar joined #gluster
04:00 MugginsM joined #gluster
04:02 shubhendu joined #gluster
04:03 RameshN joined #gluster
04:05 atinm joined #gluster
04:07 nbalacha joined #gluster
04:17 ppai joined #gluster
04:22 itisravi joined #gluster
04:32 itisravi joined #gluster
04:38 ramteid joined #gluster
04:51 kdhananjay joined #gluster
04:51 MugginsM joined #gluster
04:58 nehar joined #gluster
04:58 prasanth joined #gluster
04:58 karthik___ joined #gluster
05:07 hgowtham joined #gluster
05:10 ndarshan joined #gluster
05:10 poornimag joined #gluster
05:15 skoduri joined #gluster
05:19 julim joined #gluster
05:19 gowtham joined #gluster
05:22 mhulsman joined #gluster
05:23 rastar joined #gluster
05:27 dgandhi joined #gluster
05:29 dgandhi joined #gluster
05:30 dgandhi joined #gluster
05:31 dgandhi joined #gluster
05:32 dgandhi joined #gluster
05:35 Apeksha joined #gluster
05:35 dorvan joined #gluster
05:42 aspandey joined #gluster
05:43 vmallika joined #gluster
05:51 nishanth joined #gluster
05:59 rafi joined #gluster
06:00 hchiramm joined #gluster
06:05 Manikandan joined #gluster
06:06 Bhaskarakiran joined #gluster
06:10 mhulsman joined #gluster
06:12 Pupeno joined #gluster
06:12 spalai joined #gluster
06:12 skoduri joined #gluster
06:13 jiffin joined #gluster
06:19 harish_ joined #gluster
06:22 jtux joined #gluster
06:26 atalur joined #gluster
06:27 ashiq joined #gluster
06:28 ashiq joined #gluster
06:31 Saravanakmr joined #gluster
06:33 karnan joined #gluster
06:38 armyriad joined #gluster
06:40 MikeLupe joined #gluster
06:41 level7_ joined #gluster
06:44 kdhananjay joined #gluster
06:50 kotreshhr joined #gluster
06:53 [Enrico] joined #gluster
06:53 hackman joined #gluster
06:55 nbalacha joined #gluster
07:03 skoduri joined #gluster
07:14 rastar joined #gluster
07:15 mhulsman joined #gluster
07:16 ramky joined #gluster
07:16 ivan_rossi joined #gluster
07:28 jri joined #gluster
07:37 rouven joined #gluster
07:42 nbalacha joined #gluster
07:45 level7 joined #gluster
07:53 fsimonce joined #gluster
08:03 Pupeno joined #gluster
08:08 post-factum davpostpunk: yes
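
A hedged sketch of the rolling, one-server-at-a-time upgrade being confirmed here, assuming a replicated volume so clients keep a live copy while each server is down. The package manager, package name and volume name are assumptions; the release notes for the target version take precedence over this outline.

    systemctl stop glusterd
    pkill glusterfs; pkill glusterfsd          # stop auxiliary daemons and brick processes
    yum install -y glusterfs-server-3.7.11     # or the equivalent apt-get/dnf command
    systemctl start glusterd
    gluster volume heal myvol info             # move to the next server only when no entries remain
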
08:08 TvL2386 joined #gluster
08:08 post-factum akay: because it traverses all the files on the brick using find
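
Roughly what a gfid-to-path resolver has to do, which is why it is slow: the gfid file under .glusterfs is a hard link to the real file, so resolving the name means scanning the whole brick for another path with the same inode number. The brick path and gfid below are placeholders.

    BRICK=/data/brick1
    GFID=<gfid>                                            # the id you want to resolve
    GFILE="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
    INODE=$(stat -c %i "$GFILE")                           # inode shared with the real file
    find "$BRICK" -path "$BRICK/.glusterfs" -prune -o -inum "$INODE" -print
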
08:14 level7 joined #gluster
08:15 akay thanks post-factum, I'm looking at one gfid file in .glusterfs folder and see the file is 0 bytes on one of the bricks and 600k on another... is the 0 byte file broken?
08:17 mhulsman joined #gluster
08:17 level7 joined #gluster
08:19 post-factum akay: if that is replica, then yes
08:20 post-factum akay: if that is distributed brick, then no
08:20 mhulsman1 joined #gluster
08:24 akay it's a distrib-replicated volume - those bricks are supposed to be replicas of one another... can I copy the gfid file from one brick to the other?
08:28 muneerse2 joined #gluster
08:28 post-factum you could try to launch heal instead
08:29 akay heal says the file is in split brain. could it be due to the 0 byte gfid file having a newer date than the 600k file?
08:30 skoduri joined #gluster
08:30 post-factum akay: yep, i guess so
08:30 akay could i delete the 0 byte then let heal do its thing?
08:30 post-factum Usage: volume heal <VOLNAME> [enable | disable | full | statistics [heal-count [replica <HOSTNAME:BRICKNAME>]] | info [healed | heal-failed | split-brain] | split-brain {bigger-file <FILE> | latest-mtime <FILE> | source-brick <HOSTNAME:BRICKNAME> [<FILE>]}]
08:31 post-factum i guess you need bigger-file option for heal
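
The bigger-file resolution being suggested, sketched with a placeholder volume name; <FILE> is the path as seen from the volume root, which is why the gfid still has to be resolved to a name first.

    gluster volume heal myvol split-brain bigger-file /dir/brokenfile
    # some 3.7 releases also accept the gfid form, e.g. "bigger-file gfid:<gfid-string>",
    # which avoids resolving the path first -- check the heal usage text on your version
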
08:32 akay but I also need the filename... and I can't get it because the gfid-resolver fails
08:32 post-factum does resolver fail on both bricks?
08:33 akay I'm just running it on the second one now, but it takes a while to run through
08:34 kshlm joined #gluster
08:34 post-factum just have a patience :)
08:34 post-factum it will traverse the whole brick to find matching file
08:35 rastar joined #gluster
08:35 aspandey joined #gluster
08:36 akay the problem is when I have over 500 to fix and the gfid-resolver takes hours for each one :)
08:38 muneerse joined #gluster
08:39 post-factum akay: write a script, if heal full does not work for you
08:39 arcolife joined #gluster
08:41 atalur joined #gluster
08:45 akay post-factum: I have, but with the amount of time it's taking I'm looking to see if there is anything else I can do. I've got another one where the gfid file is ------T 0 bytes on 2 bricks and doesn't exist on any others. I'm guessing the file is toast - but could this be why I see "stale file handle" errors?
08:45 glusterbot akay: ----'s karma is now -3
08:45 post-factum it could. you could also check the link count for that file
08:46 kovshenin joined #gluster
08:46 poing joined #gluster
08:46 akay how do I do that?
08:47 Slashman joined #gluster
08:47 mhulsman joined #gluster
08:50 post-factum ls -l will whow that right after file permissions
08:50 Pupeno joined #gluster
08:50 post-factum *show
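
For reference, a quick way to list the link counts post-factum mentions for a whole gfid subdirectory at once (GNU find; the brick path is a placeholder):

    find /data/brick1/.glusterfs/aa/bb -maxdepth 1 -type f -printf '%n %p\n'   # link count, then path
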
08:51 Pupeno joined #gluster
08:52 mhulsman1 joined #gluster
08:52 akay nothing there...
08:52 Pupeno joined #gluster
08:53 aravindavk joined #gluster
08:53 akay out of about 30 files in that particular .glusterfs folder only 4 have links
08:54 natarej joined #gluster
08:55 Hamburglr joined #gluster
08:59 nbalacha joined #gluster
08:59 Pupeno_ joined #gluster
08:59 anil_ joined #gluster
09:03 spalai joined #gluster
09:12 Pupeno joined #gluster
09:14 kotreshhr joined #gluster
09:20 gem joined #gluster
09:22 Pupeno joined #gluster
09:26 jiffin1 joined #gluster
09:29 nottc joined #gluster
09:30 poing joined #gluster
09:38 level7 joined #gluster
09:40 Pupeno_ joined #gluster
09:42 Pupeno__ joined #gluster
09:48 nbalacha joined #gluster
09:50 Manikandan joined #gluster
09:51 Wizek joined #gluster
09:55 kassav joined #gluster
09:56 EinstCrazy joined #gluster
10:06 jiffin1 joined #gluster
10:06 EinstCra_ joined #gluster
10:07 post-factum akay: those 4 could be healed, i guess
10:07 post-factum akay: others could be dropped, if no useful info inside
10:11 gem joined #gluster
10:14 poing joined #gluster
10:17 poing joined #gluster
10:24 rafi joined #gluster
10:25 itisravi joined #gluster
10:27 edong23 joined #gluster
10:27 kkeithley joined #gluster
10:28 aspandey joined #gluster
10:33 rafi1 joined #gluster
10:39 bfoster joined #gluster
10:39 gem joined #gluster
12:02 ilbot3 joined #gluster
12:02 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
12:03 kshlm joined #gluster
12:04 _nex_ Hi guys, do you know why i got better write throughput using iozone with a 64k record size than with 4k? thank you!
12:16 post-factum because aggregated i/o produces fewer IOPS
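
An illustration of the comparison _nex_ is making, with an assumed mount point: the same amount of data written in 4k records needs sixteen times as many write operations as in 64k records, so per-operation network latency dominates at the smaller record size.

    iozone -i 0 -r 4k  -s 1g -f /mnt/gvol/iozone.tmp    # write/rewrite test, 4k records
    iozone -i 0 -r 64k -s 1g -f /mnt/gvol/iozone.tmp    # same test, 64k records
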
12:17 rastar joined #gluster
12:17 russoisraeli joined #gluster
12:19 mhulsman joined #gluster
12:23 mhulsman1 joined #gluster
12:28 theron joined #gluster
12:31 skoduri_ joined #gluster
12:34 akay post-factum: so if that first file I'm trying to heal has no links to it does that mean the file has been deleted anyway so I could remove the gfid from inside .glusterfs?
12:35 post-factum akay: correct
12:35 itisravi joined #gluster
12:36 primehaxor joined #gluster
12:37 akay awesome, thanks mate
12:37 unclemarc joined #gluster
12:39 harish_ joined #gluster
12:39 julim joined #gluster
12:47 jiffin1 joined #gluster
12:49 poornimag joined #gluster
12:50 post-factum np
12:52 kotreshhr left #gluster
12:53 poing left #gluster
12:59 atalur joined #gluster
12:59 MikeLupe joined #gluster
13:02 EinstCrazy joined #gluster
13:04 EinstCrazy joined #gluster
13:07 RameshN joined #gluster
13:12 screeley44 joined #gluster
13:15 screeley44 Hi Gluster Community - does anyone know if there is a way to not log results from mount.glusterfs but rather return errors directly to caller?
13:16 screeley44 seems by default a log is generated or you can specify the log-file manually, but what if I just want the errors returned directly, the useful errors that are in the log, not the exit codes?
13:17 shyam joined #gluster
13:19 Gnomethrower joined #gluster
13:24 nbalacha joined #gluster
13:26 rastar joined #gluster
13:28 DV joined #gluster
13:30 EinstCrazy joined #gluster
13:30 Hamburglr joined #gluster
13:31 chirino_m joined #gluster
13:37 Manikandan joined #gluster
13:45 theron joined #gluster
13:47 EinstCrazy joined #gluster
13:47 overclk joined #gluster
13:48 EinstCrazy joined #gluster
14:05 aspandey joined #gluster
14:07 spalai left #gluster
14:16 jackdpeterson joined #gluster
14:18 skylar joined #gluster
14:18 rwheeler joined #gluster
14:19 jackdpeterson Hello gluster folks. I have 5 gluster clusters. one of them has been consuming excessive CPU relative to the others. We doubled the number of processors last night and added on enhanced networking (it's all on AWS). All of these clusters are replica-3 and all have very similar configurations.
14:19 jackdpeterson profiling seems to indicate that we have a lot of negative lookups
14:20 Drankis joined #gluster
14:21 bennyturns joined #gluster
14:22 Bhaskarakiran joined #gluster
14:22 jackdpeterson After doubling number of cores and going to a higher performance per-core we're still seeing load that's around 2x core count. Version of glusterfs is 3.7.6.
14:22 julim joined #gluster
14:26 post-factum jackdpeterson: consider upgrading to 3.7.11 first please
14:26 Manikandan joined #gluster
14:28 vh joined #gluster
14:29 jackdpeterson is there anything that can be done about negative lookups specifically? I'm hesitant to do an upgrade during business hours. Although we're already at an incident stage on our end since the gluster instances can't really serve traffic effectively.
14:30 shyam joined #gluster
14:30 vh Hello - I am hoping someone can help us with this. We've noticed that the .glusterfs directory is larger than the contents of the volume. Our application only has access through the client so I don't suspect anything was deleted on the brick.
14:30 drowe joined #gluster
14:30 post-factum jackdpeterson: that is why i perform upgrade at night :)
14:31 bowhunter joined #gluster
14:31 jackdpeterson our upgrade was done late last night :-P ... we were hoping after heals completed it would calm down
14:31 vh How could we have come into this state? Is there a way to find what is orphaned?
14:31 jackdpeterson well, heals completed ... and still not calm :-/
14:31 post-factum jackdpeterson: "heal full" could help
14:31 post-factum jackdpeterson: or could not
14:32 overclk joined #gluster
14:32 jackdpeterson eugh.... those take a long ass time
14:32 jackdpeterson I'm looking more for some settings tweaks or otherwise
14:32 post-factum jackdpeterson: yes, but it fixed for me lots of crap recently
14:32 jackdpeterson what would be different about that than say doing client-side driven find . -exec stat {} \;?
14:32 post-factum jackdpeterson: also, negative lookups could be the result of layout imbalance
14:33 jackdpeterson there isn't any distribution ... just replica 3 across 3 bricks
14:33 post-factum jackdpeterson: you do not want to do client-side heals if there is server-side heal
14:33 post-factum jackdpeterson: then, heal full
14:33 jackdpeterson okay, so possible order of events:
14:33 jackdpeterson 1. try heal full
14:33 jackdpeterson 2. try upgrade if things aren't better?
14:34 post-factum yep
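
A sketch of the sequence agreed on above, with an assumed volume name; the heal-count sub-command comes from the same "volume heal" usage pasted earlier in the log.

    gluster volume heal myvol full                     # trigger a full server-side heal
    gluster volume heal myvol statistics heal-count    # or "info": watch until pending entries drain
    # only if load stays high afterwards, schedule the package upgrade to 3.7.11
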
14:34 screeley44 anyone have any thoughts on my question above, regarding mount.glusterfs and a way to return errors directly to the caller ?
14:34 post-factum vh: same for you. heal info should show you smth interesting
14:35 post-factum screeley44: you would like to use glusterfs executable directly for that
14:37 screeley44 this is in the context of kubernetes - when it tries to mount a glusterfs volume - so we call mount.glusterfs  -t <options> basically
14:38 Wizek joined #gluster
14:38 screeley44 post-factum: but when it fails we get an exit code (which doesn't mean much for the user) and then we have to either send the user to the log file or pull the info out of the log file ourselves, so just wondering if it's possible to have the errors returned directly
14:39 post-factum screeley44: would you like to enable verbose + redirect stderr for that?
14:41 screeley44 maybe :-)  I can try it to see what it looks like, I didn't see an option in man-page for mount.glusterfs(8) for verbose, unless I missed it
14:41 screeley44 post-factum: ^^^
14:41 screeley44 post-factum: how would I do that?
14:42 robb_nl joined #gluster
14:47 mpietersen joined #gluster
14:47 post-factum screeley44: glusterfs --debug
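
A hedged sketch of what post-factum suggests: calling the glusterfs client directly instead of going through mount.glusterfs, so messages reach stderr where the caller can capture them. Server, volume and mount point are placeholders.

    # --debug keeps the process in the foreground, raises the log level to DEBUG
    # and sends log output to stderr instead of a file
    glusterfs --debug --volfile-server=server1 --volfile-id=myvol /mnt/myvol 2>mount-errors.txt
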
14:47 vh post-factum: we've already checked heal info and it looked normal :) When I added a second node and performed a full heal, the .glusterfs directory size on the second node is the same as the volume content size, which is what we expected.
14:48 post-factum vh: what is the margin?
14:49 vh post-factum: Double.
14:49 vh # du -sh .glusterfs => 31G .glusterfs/ ; # du -sh * => 13G dir1, 31M dir2
14:49 kpease joined #gluster
14:50 post-factum vh: hmm
14:50 post-factum JoeJulian: ^^ is that normal?
14:53 spalai joined #gluster
14:54 post-factum vh: no answer from me yet. i believe it should have equal sizes
14:55 vh post-factum: me too :) thanks. Hopefully JoeJulian can help chip in.
14:55 post-factum vh: you could also find all files in .glusterfs/ folder with link count == 1
14:55 post-factum vh: those are removed files
14:58 cholcombe joined #gluster
15:01 vh post-factum: I tried "find .glusterfs -links 1 -ls" but it lists only about 8 records. I don't think this would account for the extra 15G. I'll see if I can add up their sizes to be sure.
15:02 post-factum vh: yup, you should look at sum
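
A sketch of the sum being suggested, with an assumed brick path: regular gfid files whose link count has dropped to 1 are the candidates for orphaned space (directory entries under .glusterfs are symlinks and are skipped by -type f).

    find /data/brick1/.glusterfs -type f -links 1 -printf '%s %p\n' \
        | awk '{sum += $1} END {printf "%.1f MiB in link-count-1 files\n", sum/1048576}'
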
15:09 Wizek joined #gluster
15:12 andy-b joined #gluster
15:12 hgowtham joined #gluster
15:13 MikeLupe joined #gluster
15:13 Hamburglr joined #gluster
15:17 vh post-factum: Found it! Seems like it is a duplicate of the content we have...
15:18 vh post-factum: Would it be safe to simply delete the directory?
15:18 vh post-factum: 212341    0 lrwxrwxrwx   1 root     root           51 Mar  4 14:24 .glusterfs/e1/09/e1x084e-c5x-4c1f-9f-482x72106e -> ../../00/00/00000000-0000-0000-0000-000000000001/cw
15:20 vh post-factum: Or would it be the hardlink I'm deleting (e1/09/e1x084e-c5x-4c1f-9f-482x72106e)?
15:21 Hamburglr how do you turn down the log level of nfs.log?
15:33 rafi joined #gluster
15:38 shubhendu joined #gluster
15:40 bowhunter joined #gluster
15:41 russoisraeli joined #gluster
15:44 chirino joined #gluster
15:49 spalai joined #gluster
15:50 post-factum vh: not sure, but those zeros look strange
15:51 vh post-factum: Actually, I think I've misinterpreted the results. The "->" indicates it's a softlink. If I exclude these, there were only three other files (health_check, indices/xattrop/xattrop-xxxx, data.db). These don't seem to be the culprits...
15:51 post-factum vh: then no new idea yet
15:53 vh post-factum: ok, thanks anyways!
15:54 hagarth joined #gluster
15:57 primehaxor joined #gluster
16:03 julim joined #gluster
16:04 DV joined #gluster
16:04 ctria joined #gluster
16:07 F2Knight joined #gluster
16:13 spalai joined #gluster
16:18 luizcpg joined #gluster
16:22 nathwill joined #gluster
16:25 skoduri_ joined #gluster
16:27 jiffin joined #gluster
16:36 ivan_rossi left #gluster
16:38 DV__ joined #gluster
16:41 mhulsman joined #gluster
16:44 rouven joined #gluster
16:48 jiffin joined #gluster
16:48 jackdpeterson ongoing saga here on outage. After upgrading to newest gluster ... our Fuse clients are driving CPU to all kinds of crazy on the glusterFS servers. NFS clients; however, aren't causing load and things stabilize nicely.
16:48 jackdpeterson Any thoughts on what could be causing that?
16:49 kotreshhr joined #gluster
16:49 kotreshhr left #gluster
16:50 atinm joined #gluster
16:54 jiffin joined #gluster
17:00 plarsen joined #gluster
17:00 ctria joined #gluster
17:08 F2Knight joined #gluster
17:09 jiffin joined #gluster
17:42 hackman joined #gluster
18:02 theron joined #gluster
18:04 Hamburglr joined #gluster
18:11 Hamburglr joined #gluster
18:17 luizcpg joined #gluster
18:20 theron joined #gluster
18:24 theron_ joined #gluster
18:25 ira joined #gluster
18:32 bowhunter joined #gluster
18:41 Pupeno joined #gluster
18:58 jiffin joined #gluster
19:05 theron joined #gluster
19:06 nishanth joined #gluster
19:23 amye joined #gluster
19:26 theron joined #gluster
19:35 BitByteNybble110 joined #gluster
19:44 XpineX joined #gluster
19:46 hagarth joined #gluster
19:47 shyam joined #gluster
19:48 timotheus1_ joined #gluster
19:58 ctria joined #gluster
20:10 gnulnx left #gluster
20:55 russoisraeli joined #gluster
21:05 RobertTuples joined #gluster
21:06 dlambrig_ joined #gluster
21:27 bowhunter joined #gluster
21:40 luizcpg joined #gluster
21:56 kkeithley joined #gluster
22:00 armyriad joined #gluster
22:07 DV joined #gluster
22:07 uebera|| joined #gluster
22:16 shyam joined #gluster
22:22 bennyturns joined #gluster
22:29 davpostpunk joined #gluster
22:32 level7 joined #gluster
22:49 MugginsM joined #gluster
23:09 kenansulayman joined #gluster
23:19 Ramereth joined #gluster
23:22 MugginsM joined #gluster
23:27 MugginsM joined #gluster
23:28 johnmilton joined #gluster
23:28 julim joined #gluster
23:30 Ramereth joined #gluster
23:45 johnmilton joined #gluster
23:48 harish joined #gluster
23:55 chirino joined #gluster
