
IRC log for #gluster-dev, 2014-10-21


All times shown according to UTC.

Time Nick Message
03:19 kdhananjay joined #gluster-dev
03:31 bala joined #gluster-dev
03:52 itisravi joined #gluster-dev
04:06 kshlm joined #gluster-dev
04:22 bharata-rao joined #gluster-dev
04:36 kanagaraj joined #gluster-dev
04:38 rafi1 joined #gluster-dev
04:38 Rafi_kc joined #gluster-dev
04:41 kdhananjay joined #gluster-dev
04:48 spandit joined #gluster-dev
04:52 lalatenduM joined #gluster-dev
04:54 deepakcs joined #gluster-dev
04:58 spandit joined #gluster-dev
05:04 ndarshan joined #gluster-dev
05:05 topshare joined #gluster-dev
05:13 anoopcs joined #gluster-dev
05:13 anoopcs joined #gluster-dev
05:18 bala joined #gluster-dev
05:23 jiffin joined #gluster-dev
05:25 kdhananjay joined #gluster-dev
05:25 kshlm joined #gluster-dev
05:25 hagarth joined #gluster-dev
05:26 raghu joined #gluster-dev
05:29 vimal joined #gluster-dev
05:31 soumya joined #gluster-dev
05:47 aravindavk joined #gluster-dev
05:50 nishanth joined #gluster-dev
05:53 atalur joined #gluster-dev
06:03 pranithk joined #gluster-dev
06:08 rgustafs joined #gluster-dev
06:13 hagarth pranithk: ping, what patches do you need in the next beta for 3.6.0?
06:17 pranithk hagarth: Give me a minute. I will respond. I have to clear 85 mails in inbox... gimme a sec
06:17 RaSTar joined #gluster-dev
06:18 hagarth pranithk: np, take your time
06:22 atinmu joined #gluster-dev
06:32 ppai joined #gluster-dev
06:36 topshare joined #gluster-dev
06:39 kshlm joined #gluster-dev
06:40 kshlm joined #gluster-dev
06:43 kaushal_ joined #gluster-dev
06:59 kshlm joined #gluster-dev
07:10 ndevos hagarth: I just posted 2 backports for 3.6, which I intend to include in the next 3.5 beta too - for bug 1151745
07:10 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1151745 urgent, unspecified, ---, ndevos, ASSIGNED , Option transport.socket.bind-address ignored
07:10 hagarth ndevos: cool, will merge them for release-3.6
07:11 ndevos hagarth: thanks!
07:15 soumya joined #gluster-dev
07:20 soumya_ joined #gluster-dev
07:21 kanagaraj joined #gluster-dev
07:25 pranithk hagarth: following 3 patches went in master after the last 3.6 patch merge
07:25 pranithk hagarth: http://review.gluster.org/8536, http://review.gluster.org/6529, http://review.gluster.org/8918
07:25 pranithk hagarth: http://review.gluster.org/8537 is one more patch I need to make a call on. I will update you by EOD today
07:26 hagarth pranithk: are the backports of those 3 available on release-3.6 for review?
07:26 hagarth pranithk: ok
07:26 pranithk hagarth: I will send them in a bit
07:27 hagarth Humble: basic smoke test successful on drone.io - https://drone.io/github.com/gluster/glusterfs/29
07:27 pranithk hagarth: I am working with ndevos to fix the makefile issue I created. I need to sort that one out.
07:27 Humble hagarth++ awesome!
07:27 hagarth pranithk: ok, thanks!
07:27 glusterbot Humble: hagarth's karma is now 17
07:29 Humble maybe we could share the experience with gluster-devel so that everyone can make use of it.
07:29 Humble hagarth,
07:30 hagarth Humble: yes, I will try to get some of our tests working with it .. we still have some problems with lvm installation and hence I think our snapshot tests will fail
07:31 Humble oh.. ok..
07:50 ndevos JustinClift: something seems up with slave21 and slave22 in Jenkins?
07:51 ndevos pranithk: uh, you need my input again?
07:51 ndevos oh, never mind, I remember - thats probably what went over email!
07:53 shubhendu joined #gluster-dev
07:53 hagarth @channelstats
07:53 glusterbot hagarth: On #gluster-dev there have been 73108 messages, containing 1836141 characters, 309786 words, 2579 smileys, and 298 frowns; 804 of those messages were ACTIONs. There have been 41072 joins, 721 parts, 40324 quits, 0 kicks, 726 mode changes, and 1 topic change. There are currently 59 users and the channel has peaked at 65 users.
08:03 kdhananjay joined #gluster-dev
08:08 atinmu hagarth, u there?
08:08 hagarth atinmu: yes
08:10 atinmu hagarth, http://build.gluster.org/job/freebsd-smoke/1283/console   - I've been hitting a FreeBSD smoke failure for one of my patches, the log says fop_log_level is the culprit and it's not able to find ENODATA as an identifier, any clue?
08:11 hagarth atinmu: freebsd doesn't have ENODATA defined by default
08:11 ndevos atinmu: ENODATA could be ENOENT too, I think... some errno's do not map between Linux/FreeBSD
08:12 ndevos atinmu: in the master branch, some additions were made for these kind of errno's - you should be able to find them in the contrib/ directory
08:12 atinmu hagarth, ndevos : ok, but I am wondering why all the smoke tests were passing earlier
08:13 hagarth atinmu: I just merged a patch related to fop_log_level .. it just moved the definition from protocol/server to common-utils.c
08:13 hagarth atinmu: check compat-errno.h for ENODATA
08:13 hagarth atinmu: s/errno.h/errno.[ch]/
08:14 atinmu hagarth, ok, let me check it
08:14 hagarth atinmu: will be afk now, shall bbl
08:14 atinmu hagarth, sure, thanks for ur help
08:15 ndevos atinmu: maybe you need a compat/errno.h include, or something - other files would have a similar issue
08:16 ndevos atinmu: ah, yes, many other .c files have this: #include "compat-errno.h"
08:17 atinmu ndevos, but if u don't have a consumer of fop_log_level u need not include that header file, isn't it?
08:18 ndevos atinmu: I think that wherever fop_log_level is defined, that file should include compat-errno.h
08:18 atinmu :q
08:18 ndevos but, I have not looked at this particular change, so you'll have to judge for yourself :)
08:19 atinmu ndevos, sure
08:19 atinmu ndevos, I'll take a look
08:19 atinmu q
08:19 ndevos good luck :)
08:20 atinmu ndevos, I think after http://review.gluster.org/8918 is merged we r hitting this issue
08:21 ndevos atinmu: yes, indeed
08:21 atinmu ndevos, ahh!!!! if you look at the patch, it got a FreeBSD smoke failure
08:22 ndevos atinmu: libglusterfs/src/common-utils.h should include compat-errno.h, please post a fix for that as a follow-up to bug 1151303
08:22 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1151303 high, medium, ---, gluster-bugs, POST , Excessive logging in the self-heal daemon after a replace-brick
08:22 atinmu ndevos, I will send the patch right away
08:22 ndevos atinmu: yes, and netbsd too
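For illustration, a minimal sketch of the pattern being discussed here. The helper name and the ENODATA-to-ENOENT fallback below are assumptions made for the example, not the actual GlusterFS code; in the GlusterFS tree the compatibility definition is what '#include "compat-errno.h"' is expected to provide.

    /* Minimal sketch only -- not the actual libglusterfs code.
     * The chat suggests FreeBSD lacks ENODATA, so any file that
     * references it must pull in a compatibility definition. */
    #include <errno.h>
    #include <syslog.h>

    #ifndef ENODATA             /* stand-in for what the compat header would do */
    #define ENODATA ENOENT      /* assumption: map to a close errno on such platforms */
    #endif

    /* hypothetical helper, loosely modelled on the fop_log_level idea:
     * expected errors get a quieter log level than real failures */
    int
    fop_log_level_sketch (int op_errno)
    {
            if (op_errno == ENOENT || op_errno == ENODATA)
                    return LOG_DEBUG;
            return LOG_ERR;
    }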
08:23 pranithk atinmu: thanks for that.
08:23 atinmu pranithk, np
08:23 pranithk ndevos: Any idea why we don't mark the build as failure when there is smoke failure?
08:23 ndevos atinmu: maybe you can send an email to the devel list about making sure that the rackspace-regression-triggered test only runs after a successful smoke test?
08:24 ndevos pranithk: we do, but the triggered regression run overrides the smoke results :-/
08:24 pranithk ndevos: hah!
08:24 atinmu ndevos, thats bad then
08:24 pranithk ndevos: nice :-). Now we know how we got the patch merged :-)
08:24 ndevos Humble: maybe you know how to setup a jenkins job to depend on an other? ^
08:25 ndevos pranithk: unfortunately I have seen that happen before :-/
08:25 pranithk ndevos: hmm :-(
08:25 ndevos pranithk: the maintainers should at least check if the smoke tests did not error out, but not everyone seems to do that...
08:26 pranithk ndevos: I guess the best approach is to fix the dependency like you were saying. Humans are prone to errors ;-).
08:29 atinmu ndevos, pranithk : I've dropped a mail in devel, I think we should have this dependency
08:30 ndevos pranithk: yeah, and I think Jenkins should be able to do this, so lets use it!
08:30 ndevos atinmu++ thanks!
08:30 glusterbot ndevos: atinmu's karma is now 2
08:30 pranithk atinmu++ thanks!!
08:30 glusterbot pranithk: atinmu's karma is now 3
08:59 vimal joined #gluster-dev
08:59 pranithk xavih: I was looking at http://review.gluster.org/#/c/8916
08:59 pranithk xavih: how is directory versioning done?
09:05 xavih pranithk: it modifies the version each time an entry is added or removed, but it's only checked by opendir
09:05 xavih pranithk: this is basically done to avoid invalid (out of date) directory listings
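For illustration, a rough sketch of the directory-versioning idea described above. The xattr name, struct, and functions below are hypothetical and are not the ec translator's actual code; they only show a version counter bumped on entry changes and compared at opendir time.

    /* Illustrative sketch of per-directory versioning, not the ec code.
     * The xattr name and helpers below are invented for the example. */
    #include <stdint.h>
    #include <stdbool.h>

    #define DIR_VERSION_XATTR "trusted.example.dir-version"   /* hypothetical name */

    struct brick_dir_info {
            uint64_t version;   /* version reported by one brick for this directory */
            bool     up;        /* brick currently reachable */
    };

    /* On entry create/unlink, the brick-side version is incremented. */
    void
    dir_version_bump (struct brick_dir_info *info)
    {
            info->version++;
    }

    /* At opendir time, only bricks agreeing on the highest version would be
     * trusted for readdir, so an out-of-date brick cannot serve a stale listing. */
    uint64_t
    dir_version_select (struct brick_dir_info *bricks, int count)
    {
            uint64_t best = 0;
            for (int i = 0; i < count; i++)
                    if (bricks[i].up && bricks[i].version > best)
                            best = bricks[i].version;
            return best;
    }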
09:09 pranithk xavih: What will happen if the versions become same?
09:09 pranithk xavih: Example:
09:10 pranithk xavih: Only brick1 is up, create file 'a', only brick2 is up create file 'b', only brick3 is up create file 'c'
09:10 pranithk xavih: what will happen in this case?
09:11 xavih pranithk: that cannot happen because ec requires a minimum amount of bricks to do any operation
09:11 xavih pranithk: if more than the redundancy bricks are down, the volume is inaccessible
09:12 xavih pranithk: if less than the redundancy bricks are down, when they are brought back online, they will do a 'partial' self-heal when a lookup is made on them
09:12 pranithk xavih: but in practice, at least with afr, the fop sometimes fail just at the time of winding create/mkdir/link etc
09:12 xavih pranithk: a partial self heal means that the directory is created if missing (to allow future create's inside to succeed) but version is not synchronized
09:13 xavih pranithk: if that happens the version of that brick won't be updated
09:14 pranithk xavih: You will absolutely hate me for suggesting this, but all these problems are solved in afr already. Maybe we should find a way to merge the solutions for marking/self-healing? Imagine this: if we have such a thing, the self-heal daemon will not only heal afr subvolumes but also ec subvolumes, without any changes to most of the code. I am not sure if you will like it, I just want you to consider it.
09:15 pranithk xavih: probably I brought it up too late? :-(
09:16 xavih pranithk: I've already seen some possible issues if "things" happen at certain times. So I was already planning to change the versioning system. I looked at the AFR implementation and was thinking to do something similar, but I don't fully understand it yet
09:17 xavih pranithk: self-heal daemon is another of my "must do" tasks
09:17 pranithk xavih: Brilliant!
09:17 pranithk xavih: For a second I thought I got in too late
09:17 xavih pranithk: well, I think that it will be too late for 3.6.0 :(
09:18 pranithk xavih: I agree!
09:18 pranithk xavih: I am sorry I didn't pay much attention :-(
09:18 xavih pranithk: don't worry, I've also been very busy and haven't had time to look at other solutions/patches
09:18 pranithk xavih: Let me know what all you want to know about this part and I will try to write some documents.
09:20 xavih pranithk: When I found the directory problem, I did some tests with AFR, and in some cases I did get invalid directory contents. However I'm unable to reproduce it with self-heal daemon enabled, but I think it should be reproducible because self heal cannot heal all at the same time
09:20 pranithk xavih: on 3.6?
09:20 xavih pranithk: I think it was on master, but not sure right now
09:21 xavih pranithk: anyway I have been unable to reproduce it without direct access to the brick contents
09:21 pranithk xavih: Maybe there are some cases. But what I am trying to suggest is, it is better if both have similar code paths. It will be less work to maintain.
09:21 xavih pranithk: let me check it again before losing more time. Maybe I did something wrong...
09:21 xavih pranithk: I absolutely agree :)
09:22 pranithk xavih: okay tell me what do you need and I will provide you
09:22 xavih pranithk: maybe it would be interesting to abstract healing and locking into libraries and implement only the differences in afr and ec
09:22 pranithk xavih: exactly!
09:22 xavih pranithk: Ok, thanks :)
09:22 xavih pranithk: I'll tell you something as soon as possible
09:22 pranithk xavih: if we abstract out eager locking, then performance will automatically increase for both xlators
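For illustration, one possible shape such an abstraction could take: a purely hypothetical sketch, not an existing GlusterFS API. The struct and function names are invented; the idea is that a shared library drives the common heal flow and calls back into afr or ec only for the translator-specific parts.

    /* Hypothetical sketch of a shared heal interface -- not an existing
     * GlusterFS API.  All names below are invented for illustration. */
    typedef struct heal_ops {
            /* read the change/version metadata for one subvolume */
            int (*get_pending)   (void *xl, int subvol, void *out);
            /* copy data/entries from a good subvolume to the stale one */
            int (*sync_subvol)   (void *xl, int source, int sink);
            /* clear pending markers once the sink is consistent */
            int (*clear_pending) (void *xl, int sink);
    } heal_ops_t;

    /* The shared library would implement the common flow (locking, picking
     * sources and sinks, iterating entries) once, and call back into afr or
     * ec through the table above for the translator-specific parts. */
    int
    generic_heal (void *xl, heal_ops_t *ops, int nsubvols, int sink)
    {
            for (int src = 0; src < nsubvols; src++) {
                    if (src == sink)
                            continue;
                    if (ops->sync_subvol (xl, src, sink) == 0)
                            return ops->clear_pending (xl, sink);
            }
            return -1;
    }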
09:23 pranithk xavih: okay then, send me a mail with things you know, if you don't find me online.
09:23 pranithk xavih: things you want* to know
09:23 xavih pranithk: perfect :)
09:27 RaSTar joined #gluster-dev
09:38 pranithk hagarth: I sent two of the four patches to 3.6 http://review.gluster.org/8956, http://review.gluster.org/8957
09:38 pranithk hagarth: will work on the other two now
09:38 hagarth pranithk: ok
09:38 pranithk atinmu: as part of http://review.gluster.org/8957, I already backported the change you sent upstream, you don't need to backport it again...
09:45 ndarshan joined #gluster-dev
10:02 anoopcs1 joined #gluster-dev
10:02 anoopcs1 joined #gluster-dev
10:14 atinmu pranithk, thanks buddy...
10:14 atinmu pranithk++
10:14 glusterbot atinmu: pranithk's karma is now 4
10:18 ppai joined #gluster-dev
10:22 kshlm joined #gluster-dev
10:27 kshlm joined #gluster-dev
10:28 kshlm joined #gluster-dev
10:42 RaSTar joined #gluster-dev
10:55 kkeithley1 joined #gluster-dev
10:57 kshlm joined #gluster-dev
11:20 ppai joined #gluster-dev
11:29 ndevos REMINDER: Gluster Bug Triage meeting starts in 30 minutes, see https://public.pad.fsfe.org/p/gluster-bug-triage
11:31 pranithk joined #gluster-dev
11:34 pranithk ndevos: will it be possible for you to take http://review.gluster.org/8960 today after it passes regression? it is already merged upstream
11:37 kanagaraj joined #gluster-dev
11:37 ndevos joined #gluster-dev
11:37 ndevos joined #gluster-dev
11:41 edward1 joined #gluster-dev
11:42 ndevos pranithk: sure, I think so, but please find someone to review it :)
11:43 pranithk kdhananjay: ^^
11:43 anoopcs joined #gluster-dev
11:46 ndevos joined #gluster-dev
11:46 ndevos joined #gluster-dev
11:46 edward1 joined #gluster-dev
11:46 anoopcs joined #gluster-dev
11:49 anoopcs joined #gluster-dev
12:00 ndevos REMINDER: Gluster Bug Triage meeting starting now in #gluster-meeting
12:00 ira joined #gluster-dev
12:01 soumya_ joined #gluster-dev
12:02 ndevos pranithk, Humble, lalatenduM: ^
12:02 lalatenduM ndevos, yes I am present
12:02 * Humble in
12:03 pranithk ndevos: was in wrong IRC channel :-D
12:03 * ndevos face palms
12:03 pranithk ndevos: joined the correct one now, there was a type
12:03 pranithk ndevos: typo*
12:13 ppai joined #gluster-dev
12:24 hagarth joined #gluster-dev
12:54 rgustafs joined #gluster-dev
13:00 bala joined #gluster-dev
13:12 pranithk joined #gluster-dev
13:12 pranithk left #gluster-dev
13:15 JustinClift ndevos: Yeah, they got themselves into a weird state.  It happens occasionally. (I think there's a leak in something, and they run outta mem and OOM kill everything)
13:15 * JustinClift is rebooting them now
13:15 JustinClift slave20 seems same
13:16 JustinClift Hmmm, ssh keys for slave21 and 22 have changed
13:19 JustinClift Meh, they seem ok now
13:20 ndevos JustinClift: okay, thanks for looking into that
13:20 JustinClift ndevos: np
13:20 ndevos JustinClift: have you seen Atin's email to the devel list? we'd like to prevent running regression tests if the smoke test failed
13:20 ndevos can you add such a dependency?
13:21 ndevos if the smoke test failed, a subsequent regression test can overwrite the failure with a success...
13:21 JustinClift They're separate triggers.  It might be possible, but I'm not sure off-hand.
13:21 JustinClift And no, I'm only just catching up with email for today atm. :)
13:21 ndevos sure, no problem, but we'd appreciate it if you can look into that
13:22 ndevos it does not happen often, but it happens on occasion and then suddenly all smoke tests start failing :-/
13:22 JustinClift I'll have to read the details. :)
13:23 ndevos sure, no worries
13:23 JustinClift Ahhh, this looks interesting: http://www.rackspace.com/blog/boot-rackspace-cloud-servers-from-a-cloud-block-storage-volume/
13:23 JustinClift That should let us write any unsupported OS to a block storage volume, and then get a VM to boot from it.
13:24 JustinClift That's likely easier than using a CentOS VM, uploading an image, and dd-ing over it.
13:24 JustinClift aka our NetBSD approach ;)
13:24 ndevos hehe
13:31 dlambrig_ joined #gluster-dev
13:56 spandit joined #gluster-dev
14:11 tdasilva joined #gluster-dev
14:25 jbautista- joined #gluster-dev
14:33 jdarcy joined #gluster-dev
14:34 topshare joined #gluster-dev
14:35 topshare joined #gluster-dev
14:51 jobewan joined #gluster-dev
14:53 topshare joined #gluster-dev
15:04 bala joined #gluster-dev
15:13 kanagaraj joined #gluster-dev
15:24 bala joined #gluster-dev
15:51 _Bryan_ joined #gluster-dev
16:36 anoopcs joined #gluster-dev
16:39 msvbhat JustinClift: ping, You still there
16:45 hagarth joined #gluster-dev
17:27 JustinClift msvbhat: What's up>
17:27 JustinClift ?
17:27 JustinClift msvbhat: Email me.  I need to go down the street. ;)
17:53 soumya_ joined #gluster-dev
18:08 dlambrig_ left #gluster-dev
18:10 davemc joined #gluster-dev
18:41 lpabon joined #gluster-dev
20:36 badone joined #gluster-dev
22:09 badone joined #gluster-dev
22:09 [o__o] joined #gluster-dev
22:29 davemc reminder Gluster community meeting is 22-Oct at 0:00 UTC on #gluster-meeting
22:40 semiosis in 1h 20m
23:48 davemc did I screw up that badly?
