IRC log for #gluster, 2017-01-30


All times shown according to UTC.

Time Nick Message
00:37 Intensity joined #gluster
02:01 Gambit15 joined #gluster
02:12 derjohn_mobi joined #gluster
02:21 cacasmacas joined #gluster
02:28 Wizek joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:57 Wizek joined #gluster
03:22 armyriad joined #gluster
03:25 riyas joined #gluster
03:42 bbooth joined #gluster
03:44 armyriad joined #gluster
03:45 magrawal joined #gluster
03:49 BlackoutWNCT joined #gluster
03:49 BlackoutWNCT1 joined #gluster
03:50 atinm joined #gluster
03:51 buvanesh_kumar joined #gluster
03:58 gyadav joined #gluster
04:00 Wizek joined #gluster
04:05 itisravi joined #gluster
04:07 buvanesh joined #gluster
04:19 karthik_us joined #gluster
04:25 bbooth joined #gluster
04:30 Shu6h3ndu joined #gluster
04:32 Karan joined #gluster
04:33 Seth_Karlo joined #gluster
04:33 aravindavk joined #gluster
04:35 nbalacha joined #gluster
04:38 msvbhat joined #gluster
05:00 sbulage joined #gluster
05:01 rafi joined #gluster
05:07 BitByteNybble110 joined #gluster
05:08 RameshN joined #gluster
05:09 skumar joined #gluster
05:11 ndarshan joined #gluster
05:16 Prasad joined #gluster
05:17 rjoseph joined #gluster
05:21 ankit_ joined #gluster
05:30 victori joined #gluster
05:37 skumar joined #gluster
05:39 Shu6h3ndu joined #gluster
05:40 riyas joined #gluster
05:42 skoduri joined #gluster
05:45 gyadav joined #gluster
05:47 buvanesh_kumar joined #gluster
05:55 Jacob843 joined #gluster
06:04 kramdoss_ joined #gluster
06:13 sanoj joined #gluster
06:17 ppai joined #gluster
06:17 susant joined #gluster
06:18 buvanesh_kumar joined #gluster
06:19 Saravanakmr joined #gluster
06:24 Jacob8432 joined #gluster
06:26 buvanesh_kumar joined #gluster
06:26 mb_ joined #gluster
06:28 sona joined #gluster
06:30 rastar joined #gluster
06:30 ahino joined #gluster
06:31 prasanth joined #gluster
06:34 apandey joined #gluster
06:36 bbooth joined #gluster
06:39 skumar_ joined #gluster
06:40 nishanth joined #gluster
06:41 poornima joined #gluster
06:49 Jacob843 joined #gluster
06:53 RameshN joined #gluster
06:57 buvanesh joined #gluster
07:00 Jacob843 joined #gluster
07:07 sbulage joined #gluster
07:09 Lee1092 joined #gluster
07:19 jtux joined #gluster
07:36 Humble joined #gluster
07:37 bbooth joined #gluster
07:42 Philambdo joined #gluster
07:46 shutupsquare joined #gluster
07:49 derjohn_mob joined #gluster
07:50 Pupeno joined #gluster
07:50 Pupeno joined #gluster
07:50 [diablo] joined #gluster
07:51 ivan_rossi joined #gluster
07:52 RameshN joined #gluster
07:58 BlackoutWNCT Hey guys, I'm chasing some info regarding split-brains. I've got a couple of variables over a couple of different sites, which are experiencing similar but distinct issues. My main questions will be regarding 3.9 and how stable it is.
07:58 itisravi joined #gluster
07:58 BlackoutWNCT We're currently running 3.8 and are looking at upgrading to 3.9, but are hesitant due to features being flagged as "Experimental"
07:58 BlackoutWNCT I'm more just chasing some advice on how reliable these features actually are.
07:59 BlackoutWNCT More specifically those designed to resolve split brain issues.
07:59 BlackoutWNCT I do have one site which is currently running on ZFS with an arbiter, but it has recently fallen victim to split-brains.
08:00 BlackoutWNCT Now, I'm not sure whether this is due to ZFS or is a gluster issue. I have also noticed that the replicated bricks are not 100% in sync.
08:00 BlackoutWNCT I do know that there are known issues with ZFS around size reporting, however I would assume that both bricks should still be reporting the same size.
08:01 BlackoutWNCT If there are any logs or anything specific I should be looking at, please let me know.
08:01 DV joined #gluster
08:01 BlackoutWNCT My main questions are related to upgrading to 3.9, however if you guys can weigh in on the ZFS split-brains also, that'd be grand :)
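
For the "anything specific I should be looking at" question above: comparing per-brick usage and pending heals is easiest from the gluster CLI. A sketch, with "myvol" as a placeholder volume name:

    gluster volume status myvol detail            # per-brick disk space and inode counts
    gluster volume heal myvol info                # files pending heal, per brick
    gluster volume heal myvol info split-brain    # files actually in split-brain
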
08:06 cacasmacas joined #gluster
08:12 itisravi BlackoutWNCT: There are commands available to resolve split-brains. See https://github.com/gluster/glusterfs-specs/blob/master/done/Features/heal-info-and-split-brain-resolution.md
08:13 glusterbot Title: glusterfs-specs/heal-info-and-split-brain-resolution.md at master · gluster/glusterfs-specs · GitHub (at github.com)
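
The document linked above resolves split-brain from the gluster CLI; the commands take this general form (VOLNAME, FILE, and HOSTNAME:BRICKNAME are placeholders):

    gluster volume heal VOLNAME info split-brain                                  # list files in split-brain
    gluster volume heal VOLNAME split-brain bigger-file FILE                      # keep the larger copy
    gluster volume heal VOLNAME split-brain source-brick HOSTNAME:BRICKNAME FILE  # keep one chosen brick's copy
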
08:14 BlackoutWNCT itisravi: We have all self-heal daemons disabled; to the best of my knowledge this is due to advice that was provided to our old sysadmin via IRC.
08:14 BlackoutWNCT Is there any particular reason that these would be advised to be disabled? This is something that I have questioned myself a few times.
08:15 BlackoutWNCT AFAIK there was a particular point made about having them disabled with ZFS.
08:16 itisravi BlackoutWNCT: It shouldn't be disabled under normal circumstances. Some folks disable it temporarily when it consumes a lot of CPU cycles during heals, and let heals happen via the mounts.
08:16 itisravi BlackoutWNCT: you should keep them enabled unless you are facing specific issues.
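
In CLI terms, toggling the self-heal daemon per volume looks like this ("myvol" is a placeholder):

    gluster volume heal myvol enable     # turn the self-heal daemon on for this volume
    gluster volume heal myvol disable    # turn it off again, e.g. temporarily during heavy heals
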
08:16 BlackoutWNCT Sorry, could you elaborate a little further for me on "let heals happen via the mounts"?
08:17 itisravi BlackoutWNCT: oh when a mount accesses a file that needs heal, background heals are triggered even from the mount.
08:17 BlackoutWNCT Is this only if the heal daemon is enabled?
08:18 itisravi no, heals via the mount are independent of self-heal daemon heals.
08:18 BlackoutWNCT ok, so what type of actions would cause a mount heal to trigger?
08:19 itisravi BlackoutWNCT: any I/O on the file could trigger the heal, say read or write.
08:19 BlackoutWNCT I've noticed recently that an rsync has forced 2 of my replicated bricks to resync (one had significantly less free space than the other before the rsync)
08:19 BlackoutWNCT But this could also be a coincidence or something with ZFS
08:19 BlackoutWNCT Just trying to narrow it down.
08:20 BlackoutWNCT Furthermore, is there anything you could think of which would cause a mount self-heal not to trigger?
08:21 BlackoutWNCT And, would an I/O request via Samba also trigger the mount self-heal?
08:22 itisravi If the background self-heal queue is full, new heals would be dropped. Yes, Samba access should also trigger heals.
08:22 BlackoutWNCT Is there a way to see the current background self-heal queue?
08:22 itisravi Why don't you enable the self-heal daemon and let it heal proactively?
08:23 BlackoutWNCT I've been pushing for the daemon to be enabled, however the previous admin told the boss man that it needed to be disabled, so I'm more trying to build a case to enable it.
08:23 itisravi nope, but if heals happen you should see messages in the mount log: "Completed data self-heal on" etc.
08:23 itisravi BlackoutWNCT: ask the admin for a reason :)
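
The live contents of that background self-heal queue can't be listed, but its depth is a tunable volume option (option name from the replicate/AFR documentation; "myvol" is a placeholder):

    gluster volume get myvol cluster.background-self-heal-count       # current queue depth
    gluster volume set myvol cluster.background-self-heal-count 16   # raise it if heals are being dropped
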
08:23 BlackoutWNCT the mount log should be located at like "/var/log/glusterfs/"?
08:23 itisravi yeah on your client.
08:23 itisravi BlackoutWNCT: but split-brains cannot be healed without intervention.
08:24 BlackoutWNCT cool cool. the previous admin's reason is that he read it in the IRC logs.
08:24 BlackoutWNCT Could you define "Intervention" for me?
08:24 itisravi BlackoutWNCT: you'll need to read the link I shared earlier.
08:24 BlackoutWNCT ok cool.
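
A quick check for the mount-triggered heals mentioned above; the client log file name is derived from the mount point, so the exact path here is an assumption:

    grep "Completed data self-heal" /var/log/glusterfs/mnt-gluster.log
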
08:24 kkeithley AFAIK 3.9.1 is every bit as stable as, e.g., 3.8.8. Just be aware that 3.9 is an STM (short term maintenance) release. It will EOL when 3.10 is released, about a month from now. If it happens, 3.9.2 will be the last 3.9 update.
08:25 * kkeithley doesn't want anyone saying we didn't warn you.
08:25 BlackoutWNCT All good, thanks for the info. I'll go off and read this doc now, and come back with any further questions.
08:26 BlackoutWNCT One more quick one, are there any docs available for 3.10 which have planned updates or similar in them?
08:26 itisravi BlackoutWNCT: Actually use https://gluster.readthedocs.io/en/latest/Troubleshooting/heal-info-and-split-brain-resolution/ instead of the other link. This is the latest one.
08:26 glusterbot Title: Split Brain (Auto) - Gluster Docs (at gluster.readthedocs.io)
08:27 BlackoutWNCT Is there a downloadable version of this readthedocs link? Like, a big PDF with everything in it?
08:28 itisravi I don't think so.
08:28 BlackoutWNCT No worries. Thanks.
08:33 jkroon joined #gluster
08:33 karthik_us joined #gluster
08:36 itisravi joined #gluster
08:38 fsimonce joined #gluster
08:39 musa22 joined #gluster
08:39 bbooth joined #gluster
08:39 jiffin joined #gluster
08:39 Pupeno joined #gluster
08:40 BlackoutWNCT itisravi I do have a quick question regarding md-cache, which is marked as experimental. Is this stable?
08:41 itisravi BlackoutWNCT: poornima would be the right person to answer that.
08:41 itisravi poornima ^
08:41 BlackoutWNCT ping poornima
08:46 magrawal joined #gluster
08:47 Seth_Karlo joined #gluster
08:49 BlackoutWNCT joined #gluster
08:49 Karan joined #gluster
08:51 musa22 joined #gluster
08:51 Seth_Kar_ joined #gluster
08:58 Guest89004 joined #gluster
09:07 poornima BlackoutWNCT, 3.10 would be the right release to use it in a production setup; in 3.9 it is experimental
09:08 BlackoutWNCT Ok, thanks for that, do we have a rough ETA for 3.10? Just so that I can go to the boss with something more than "About a month"?
09:13 susant BlackoutWNCT: https://github.com/gluster/glusterfs/milestone/1 -> points to Feb 14
09:13 glusterbot Title: Release 3.10 (LTM) Milestone · GitHub (at github.com)
09:13 BlackoutWNCT Excellent, thanks :)
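
For reference, the md-cache invalidation feature discussed above is enabled through volume options; the upstream feature page suggests settings along these lines (a starting point, not tuned values; "myvol" is a placeholder):

    gluster volume set myvol features.cache-invalidation on
    gluster volume set myvol features.cache-invalidation-timeout 600
    gluster volume set myvol performance.stat-prefetch on
    gluster volume set myvol performance.cache-invalidation on
    gluster volume set myvol performance.md-cache-timeout 600
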
09:13 derjohn_mob joined #gluster
09:17 flying joined #gluster
09:18 flying joined #gluster
09:23 NuxRo joined #gluster
09:23 juhaj joined #gluster
09:25 rafi joined #gluster
09:27 jiffin joined #gluster
09:29 rjoseph joined #gluster
09:32 bluenemo joined #gluster
09:33 Seth_Karlo joined #gluster
09:39 Marbug_ joined #gluster
09:40 bbooth joined #gluster
09:41 Seth_Kar_ joined #gluster
09:50 atinm joined #gluster
09:50 nishanth joined #gluster
09:55 kotreshhr joined #gluster
09:57 riyas joined #gluster
10:02 Seth_Karlo joined #gluster
10:02 Ilya joined #gluster
10:03 Ilya hello. I have a problem with GlusterFS: a memory leak on the 3.9 version
10:17 jiffin joined #gluster
10:18 magrawal joined #gluster
10:18 sona joined #gluster
10:25 ashiq joined #gluster
10:40 Pupeno joined #gluster
10:40 Pupeno joined #gluster
10:40 bbooth joined #gluster
10:54 riyas joined #gluster
11:09 msvbhat joined #gluster
11:09 Seth_Karlo joined #gluster
11:14 atinm joined #gluster
11:16 Guest89004 joined #gluster
11:27 ankit__ joined #gluster
11:32 Philambdo joined #gluster
11:42 bbooth joined #gluster
11:53 cyberbootje joined #gluster
11:56 karthik_us joined #gluster
12:02 DV joined #gluster
12:20 craigw joined #gluster
12:22 kettlewell joined #gluster
12:23 kettlewell joined #gluster
12:23 craigw left #gluster
12:29 skumar__ joined #gluster
12:40 Pupeno joined #gluster
12:42 bbooth joined #gluster
12:43 craig_ joined #gluster
12:44 craig_ left #gluster
12:44 nishanth joined #gluster
12:45 jiffin joined #gluster
12:48 susant left #gluster
12:48 kotreshhr joined #gluster
12:51 shutupsquare joined #gluster
12:54 musa22 joined #gluster
12:54 shutupsq_ joined #gluster
12:56 kotreshhr left #gluster
13:18 rjoseph joined #gluster
13:18 rwheeler joined #gluster
13:21 unclemarc joined #gluster
13:42 federicoaguirre joined #gluster
13:43 bbooth joined #gluster
13:49 federicoaguirre Hi there!
13:49 federicoaguirre I have a cluster with 2 nodes (1 brick each node)
13:49 federicoaguirre node 2 is out of sync :( and I cannot sync it again!
13:51 federicoaguirre any help?
14:03 BuBU29 left #gluster
14:13 shyam joined #gluster
14:22 cyberbootje hi all, anyone an idea what the support status is for GlusterFS in combination with ZFS on illumos?
14:24 BuBU29 joined #gluster
14:25 BuBU29 left #gluster
14:28 al joined #gluster
14:30 susant joined #gluster
14:32 skoduri joined #gluster
14:34 jiffin joined #gluster
14:38 rjoseph joined #gluster
14:39 skylar joined #gluster
14:39 shyam joined #gluster
14:44 bbooth joined #gluster
14:44 skylar joined #gluster
14:48 al joined #gluster
14:49 ivan_rossi left #gluster
14:55 aravindavk joined #gluster
14:58 squizzi joined #gluster
14:58 Gambit15 federicoaguirre, what's the issue? With that setup, you'll probably need to do some manual split-brain resolution
14:59 kpease_ joined #gluster
15:00 federicoaguirre hi Gambit15, this is the scenario
15:02 federicoaguirre I have 2 nodes... One of them was replaced...
15:03 federicoaguirre the folder structure was created successfully and some files also...
15:03 federicoaguirre on the main node (which has the complete information) I ran: sudo gluster volume heal storage statistics
15:03 JoeJulian joined #gluster
15:04 federicoaguirre Crawl is in progress
15:04 federicoaguirre Type of crawl: INDEX
15:04 federicoaguirre No. of entries healed: 2
15:04 federicoaguirre No. of entries in split-brain: 0
15:04 federicoaguirre No. of heal failed entries: 30616
15:04 federicoaguirre Crawl is in progress
15:04 federicoaguirre Type of crawl: INDEX
15:04 federicoaguirre No. of entries healed: 2
15:04 federicoaguirre No. of entries in split-brain: 0
15:04 federicoaguirre No. of heal failed entries: 30616
15:05 federicoaguirre sorry for the paste here!
15:09 Gambit15 Well that's a big chunk of failed heals!
15:09 Gambit15 Have you checked your logs?
15:09 farhorizon joined #gluster
15:10 Gambit15 IIRC, you want glustershd.log
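
The self-heal daemon log sits with the other gluster logs on each server, and the heal-count subcommand gives a quick summary of what is still pending ("storage" is the volume name from the paste above):

    less /var/log/glusterfs/glustershd.log               # per-file heal attempts, successes and failures
    gluster volume heal storage statistics heal-count    # number of entries pending heal, per brick
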
15:13 susant left #gluster
15:17 Shu6h3ndu joined #gluster
15:18 federicoaguirre let's see...
15:21 federicoaguirre what is the URL to paste logs?
15:24 JoeJulian @paste
15:24 JoeJulian Oh? Where are you glusterbot?
15:24 JoeJulian fpaste.org is an easy one to use.
15:26 Gambit15 joined #gluster
15:27 shaunm joined #gluster
15:44 bbooth joined #gluster
15:45 plarsen joined #gluster
15:48 nbalacha joined #gluster
15:53 bowhunter joined #gluster
15:54 ankit__ joined #gluster
16:03 raghu joined #gluster
16:05 wushudoin joined #gluster
16:11 aravindavk joined #gluster
16:11 farhorizon joined #gluster
16:14 Prasad joined #gluster
16:16 DV joined #gluster
16:20 farhorizon joined #gluster
16:45 bbooth joined #gluster
16:55 bbooth joined #gluster
16:58 ankit__ joined #gluster
17:00 victori joined #gluster
17:00 jdossey joined #gluster
17:11 glusterbot joined #gluster
17:41 ic0n joined #gluster
17:46 farhorizon joined #gluster
17:49 cloph_away joined #gluster
17:54 bbooth joined #gluster
17:55 farhorizon joined #gluster
18:02 ankit__ joined #gluster
18:14 arpu joined #gluster
18:14 vbellur joined #gluster
18:16 Karan joined #gluster
18:16 snehring joined #gluster
18:17 jiffin joined #gluster
18:19 musa22 joined #gluster
18:22 musa22 joined #gluster
18:23 Humble joined #gluster
18:25 shutupsquare joined #gluster
18:29 ashiq joined #gluster
18:40 ankit__ joined #gluster
18:46 vbellur joined #gluster
18:58 stomith joined #gluster
18:59 farhorizon joined #gluster
19:03 bowhunter joined #gluster
19:10 Humble joined #gluster
19:12 jdossey joined #gluster
19:12 shutupsquare joined #gluster
19:21 hamburml joined #gluster
19:23 cliluw joined #gluster
19:23 Pupeno joined #gluster
19:23 Pupeno joined #gluster
19:24 hamburml Hello :) I am using glusterfs 3.9.1 and ganesha 2.4.1 on Debian 8.7 with a 4.8.0-amd64 kernel, on two vservers with 1 GB RAM and 25 GB of SSD storage each. Sadly I can't get it stable. To test the performance I used dd to create a 1.4 GB file, but the vserver which runs dd doesn't respond anymore.
19:24 hamburml I saw in dmesg that ganesha.nfsd was killed because of OOM
19:24 hamburml Does anyone here use ganesha and glusterfs and could give some advice?
19:25 hamburml When using the gluster NFS mode, everything worked
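
A sketch of the kind of test described above, assuming the ganesha export is mounted at /mnt/gluster (an assumed path); conv=fdatasync forces the data to disk so dd reports an honest rate:

    dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=1400 conv=fdatasync
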
19:30 Seth_Karlo joined #gluster
19:35 Humble joined #gluster
19:36 bbooth joined #gluster
19:38 fcoelho joined #gluster
19:39 vbellur joined #gluster
19:41 Seth_Karlo joined #gluster
19:42 Seth_Karlo joined #gluster
20:02 Humble joined #gluster
20:02 musa22 joined #gluster
20:07 derjohn_mob joined #gluster
20:20 kramdoss_ joined #gluster
20:23 kpease joined #gluster
20:28 jkroon joined #gluster
20:48 shaunm joined #gluster
20:52 DV joined #gluster
20:52 bbooth joined #gluster
20:53 mlg9000 joined #gluster
20:58 stomith joined #gluster
21:00 bowhunter joined #gluster
21:06 raghu joined #gluster
21:07 emerson joined #gluster
21:18 mlg9000 left #gluster
21:22 stomith joined #gluster
21:27 johnnyNumber5 joined #gluster
21:28 musa22 joined #gluster
21:34 raghu joined #gluster
21:36 musa22_ joined #gluster
21:36 kpease joined #gluster
21:39 Marbug joined #gluster
21:41 kpease_ joined #gluster
21:51 kpease joined #gluster
22:06 plarsen joined #gluster
22:10 johnnyNumber5 joined #gluster
22:20 vbellur joined #gluster
22:24 Peppard joined #gluster
23:17 ashiq joined #gluster
23:28 Seth_Karlo joined #gluster
23:34 farhoriz_ joined #gluster
23:35 Pupeno joined #gluster
23:52 vbellur joined #gluster
23:52 stomith joined #gluster
