
IRC log for #gluster-dev, 2016-03-29


All times shown according to UTC.

Time Nick Message
00:13 hagarth joined #gluster-dev
01:07 EinstCrazy joined #gluster-dev
01:17 vmallika joined #gluster-dev
02:05 luizcpg joined #gluster-dev
02:29 baojg joined #gluster-dev
02:46 baojg joined #gluster-dev
02:58 EinstCra_ joined #gluster-dev
03:25 vmallika joined #gluster-dev
03:29 overclk joined #gluster-dev
03:30 vmallika joined #gluster-dev
03:38 atinm joined #gluster-dev
03:47 nishanth joined #gluster-dev
03:51 shubhendu joined #gluster-dev
03:55 atinm joined #gluster-dev
04:03 itisravi joined #gluster-dev
04:06 nbalacha joined #gluster-dev
04:11 Manikandan joined #gluster-dev
04:26 suliba joined #gluster-dev
04:35 pkalever joined #gluster-dev
04:53 jiffin joined #gluster-dev
04:56 ashiq joined #gluster-dev
05:02 sakshi joined #gluster-dev
05:05 prasanth joined #gluster-dev
05:08 ndarshan joined #gluster-dev
05:13 mchangir joined #gluster-dev
05:17 Bhaskarakiran joined #gluster-dev
05:20 rafi joined #gluster-dev
05:21 aravindavk joined #gluster-dev
05:22 aspandey joined #gluster-dev
05:23 karthik___ joined #gluster-dev
05:25 baojg joined #gluster-dev
05:28 Apeksha joined #gluster-dev
05:29 pranithk joined #gluster-dev
05:30 vmallika joined #gluster-dev
05:44 pkalever joined #gluster-dev
05:49 ashiq_ joined #gluster-dev
05:50 skoduri joined #gluster-dev
06:00 hgowtham joined #gluster-dev
06:02 vimal joined #gluster-dev
06:08 Saravanakmr joined #gluster-dev
06:10 ashiq_ joined #gluster-dev
06:11 asengupt joined #gluster-dev
06:14 nishanth joined #gluster-dev
06:14 spalai joined #gluster-dev
06:20 kanagaraj joined #gluster-dev
06:27 kdhananjay joined #gluster-dev
06:32 spalai joined #gluster-dev
06:46 poornimag joined #gluster-dev
06:53 pur joined #gluster-dev
07:01 asengupt joined #gluster-dev
07:20 kshlm joined #gluster-dev
07:22 scobanx joined #gluster-dev
07:23 scobanx Hi, can you tell me the details of disperse heal? Any documentation you can point me to would be useful.
07:23 pranithk aspandey: ^^
07:25 scobanx During tests of disperse volumes, when I shut down nodes I see some files in 'gluster v heal info' output. Self-heal sometimes takes care of them and sometimes it does not.
07:26 scobanx How can I fix it when self-heal does not heal the files? Is it the same as with distributed volumes? Should I just delete the problematic file?
07:32 aspandey scobanx, Hi
07:33 scobanx aspandey: Hi
07:38 aspandey scobanx, About disperse heal - suppose you have created a volume with a 4+2 config (2 being the redundancy count). You kill a brick (or shut down a server) and then write to a file. The write will succeed, and at the same time a dirty flag will be set on that file on all the bricks which were up..
07:39 scobanx Will the write produce 6 chunks and put them on 3 servers while one is down?
07:39 scobanx Sorry, put them on 5 servers
07:39 aspandey scobanx, yes.
07:39 aspandey on 5
07:40 scobanx OK, then when the down server comes online, heal will just put the chunk back, yes?
07:40 aspandey scobanx, yes.
07:41 scobanx OK, then when I see entries in 'gluster v heal info' output and they never go away, what are the steps to fix it?
07:42 aspandey scobanx, there could be different reasons behind this. Maybe heal never completed successfully... Are you sure the chunks were written to the 6th brick which had been down?
07:44 aspandey scobanx, sometimes it also takes time to heal a complete file; it depends on the size of the file and the number of files that need healing...
07:44 scobanx During tests I just shut down the server, and when it came up there were gfid entries in the heal info output. I waited for some time but they never went away. I started a full heal but that did not fix the issue either..
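For reference, a minimal sketch of the heal-status and full-heal commands being discussed (the volume name myvol is a placeholder):

    # list files/gfids that still need healing
    gluster volume heal myvol info
    # trigger a full self-heal crawl of the volume
    gluster volume heal myvol full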
07:45 scobanx Normally, is it OK to see gfid entries in heal info output? I thought I would only see file names there; why are gfid entries shown in heal info?
07:46 aspandey scobanx, I would suggest checking the getxattr output of that file on all the bricks and seeing if the version and size match on all of them. Only then can you be certain that the heal was successful.
07:47 aspandey scobanx, yeah, gfid entries shown in heal info are fine..
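A minimal sketch of the per-brick xattr check aspandey describes; the brick path and file name are placeholders, and trusted.ec.* are the xattrs used by the disperse translator:

    # run on each server, against the same file path inside each brick
    getfattr -d -m. -e hex /bricks/brick1/myvol/path/to/file
    # compare trusted.ec.version and trusted.ec.size across all bricks;
    # a non-zero trusted.ec.dirty means the file still has pending heal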
07:47 scobanx If they never go away from heal info, is deleting them the way to fix it?
07:49 aspandey scobanx, If you are sure the file has been healed and your data is consistent, you can remove the index entries from .glusterfs/indices/xattrop/
07:50 aspandey scobanx, We had this kind of issue in the past, where index entries were sometimes not purged even after heal, but that has been fixed..
07:51 scobanx Actually I don't know if the data is in a consistent state; fio was writing to the volume... So deleting the entries under .glusterfs/indices/xattrop/ should fix the output of heal info?
07:52 scobanx My volume is 78x(16+4) distributed-disperse; any best practices for a volume of that size?
07:52 aspandey scobanx, If I were you I would not delete the index entries until I had confirmed the data was consistent. Fixing heal info should not be the main focus.
07:53 scobanx Ok, with real data I will avoid deleting index entries :)
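To see the pending-heal index entries being discussed, you can list the xattrop directory on a brick. This is only an illustrative sketch with placeholder paths; the entries are named by gfid:

    # pending-heal index entries live under each brick's .glusterfs directory
    ls -l /bricks/brick1/myvol/.glusterfs/indices/xattrop/
    # for regular files, a gfid can be resolved to a path via the
    # .glusterfs/<aa>/<bb>/<gfid> hard link on the same brick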
07:58 aspandey scobanx, Best practices depend on many factors. As per my understanding, 4+2, 8+4 or 8+3 could be good configurations. If all the bricks of one subvolume are on different servers, that is better..
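For illustration, an 8+4 dispersed subvolume of the kind mentioned here would be created roughly as follows; the server and brick names are made up:

    # 12 bricks per subvolume: 8 data + 4 redundancy ('disperse 12' is the total brick count)
    gluster volume create myvol disperse 12 redundancy 4 \
        server{1..12}:/bricks/brick1/myvol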
07:59 scobanx Yes, I read that those are the supported configs. So are you saying that instead of 16+4 I should use 8+2?
08:00 aspandey scobanx, I never said 8+2 :)
08:01 scobanx I know, but I don't want to lose too much disk space using 8+4 or 8+3
08:01 scobanx Disperse supports 16+4, right? It's just that Red Hat does not support it?
08:02 aspandey scobanx, yes.
08:03 scobanx OK, does rebalance work on a disperse volume?
08:04 aspandey scobanx, See, it again depends. 16+4 = 20 bricks. If you are sure that at any point in time no more than 4 of these 20 bricks would go down, then it is OK..
08:04 aspandey scobanx, yes it works..
08:04 scobanx Why do we need rebalance in a disperse volume? Any scenario?
08:05 scobanx I distributed bricks among servers, so I'm pretty sure the probability of losing more than 4 servers at the same time is very low..
08:06 aspandey scobanx, If you are adding bricks (a multiple of the config) to an already existing disperse volume, you would want to run rebalance so that files get distributed across all the subvolumes..
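As a sketch of that scenario, with placeholder names: bricks are added in multiples of the disperse width, and a rebalance then spreads existing files across the new subvolume:

    # add one more full subvolume (12 bricks for an 8+4 volume)
    gluster volume add-brick myvol server{13..24}:/bricks/brick1/myvol
    # redistribute existing files across all subvolumes
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status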
08:08 scobanx OK, I will never add any bricks/servers to this configuration, so I will not need rebalance..
08:09 scobanx On the mailing list I asked if there was a tool to merge disperse chunks back into the original file, and I think pranithk said they could write such a tool
08:09 scobanx Do you know anything about this?
08:10 scobanx Or should I ask him on the mailing list?
08:10 aspandey scobanx, No, I am not aware of that..
08:11 scobanx ok thanks for your time aspandey :)
08:11 aspandey scobanx, wc..
08:13 pranithk scobanx: yes we can write one, but we didn't get to it....
08:15 rastar joined #gluster-dev
08:15 scobanx pranithk: ok thanks for the answer
08:24 jiffin joined #gluster-dev
08:27 rraja joined #gluster-dev
08:51 kotreshhr joined #gluster-dev
09:09 kshlm joined #gluster-dev
09:18 Bhaskarakiran joined #gluster-dev
09:21 jiffin joined #gluster-dev
09:22 nishanth joined #gluster-dev
09:22 ggarg joined #gluster-dev
09:27 pur joined #gluster-dev
09:32 ashiq_ joined #gluster-dev
09:37 cholcombe joined #gluster-dev
09:42 baojg joined #gluster-dev
09:45 baojg joined #gluster-dev
10:44 ndevos Manikandan, vmallika: tests/basic/inode-quota-enforcing.t failed in http://build.gluster.org/job/rackspace-regression-2GB-triggered/19353/consoleFull - is that a known issue?
10:45 Manikandan ndevos, nope, we will have a look
10:45 ndevos thanks!
10:46 shubhendu joined #gluster-dev
10:46 luizcpg joined #gluster-dev
10:47 nishanth joined #gluster-dev
11:01 overclk joined #gluster-dev
11:12 lpabon joined #gluster-dev
11:12 penguinRaider_ joined #gluster-dev
11:16 Debloper joined #gluster-dev
11:24 atinm jiffin, pm
11:32 gem joined #gluster-dev
11:33 vimal joined #gluster-dev
11:41 vmallika joined #gluster-dev
11:45 luizcpg joined #gluster-dev
11:50 bfoster joined #gluster-dev
11:52 Manikandan REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 10 minutes) in #gluster-meeting
11:53 nishanth joined #gluster-dev
11:54 jiffin atinm, pong
11:57 shaunm joined #gluster-dev
11:59 ira joined #gluster-dev
12:06 kanagaraj joined #gluster-dev
12:08 kanagaraj joined #gluster-dev
12:09 shubhendu joined #gluster-dev
12:25 vmallika joined #gluster-dev
12:27 EinstCrazy joined #gluster-dev
12:30 Saravanakmr joined #gluster-dev
12:32 Manikandan jiffin++, thanks a lot :-)
12:32 glusterbot Manikandan: jiffin's karma is now 33
12:36 baojg joined #gluster-dev
12:40 mchangir joined #gluster-dev
12:46 vmallika joined #gluster-dev
12:47 nishanth joined #gluster-dev
12:49 atinm joined #gluster-dev
13:05 spalai left #gluster-dev
13:06 EinstCrazy joined #gluster-dev
13:11 Munter55 joined #gluster-dev
13:15 rraja joined #gluster-dev
13:15 pkalever left #gluster-dev
13:19 rafi joined #gluster-dev
13:39 Manikandan joined #gluster-dev
13:44 josferna joined #gluster-dev
13:44 nbalacha joined #gluster-dev
14:07 spalai joined #gluster-dev
14:33 Manikandan joined #gluster-dev
14:43 kotreshhr left #gluster-dev
14:44 ggarg joined #gluster-dev
14:47 pkalever joined #gluster-dev
14:49 josferna joined #gluster-dev
15:11 Saravanakmr joined #gluster-dev
15:13 skoduri joined #gluster-dev
15:24 wushudoin joined #gluster-dev
15:32 vmallika joined #gluster-dev
15:47 kshlm joined #gluster-dev
16:46 shubhendu joined #gluster-dev
16:50 pkalever joined #gluster-dev
17:03 penguinRaider joined #gluster-dev
17:07 shyam joined #gluster-dev
17:07 sankarshan_away joined #gluster-dev
17:07 cholcombe joined #gluster-dev
17:07 ggarg joined #gluster-dev
17:08 hagarth joined #gluster-dev
17:22 vimal joined #gluster-dev
17:23 vmallika joined #gluster-dev
17:26 pkalever left #gluster-dev
17:43 nishanth joined #gluster-dev
18:11 jiffin joined #gluster-dev
18:39 kanagaraj joined #gluster-dev
18:46 jiffin joined #gluster-dev
18:54 jiffin joined #gluster-dev
18:57 penguinRaider joined #gluster-dev
19:34 penguinRaider joined #gluster-dev
20:16 lpabon joined #gluster-dev
20:31 luizcpg joined #gluster-dev
21:24 ira joined #gluster-dev
23:09 dlambrig_ joined #gluster-dev
