
IRC log for #gluster-dev, 2015-06-09


All times are shown in UTC.

Time Nick Message
01:09 xavih joined #gluster-dev
01:37 ira joined #gluster-dev
02:53 hagarth joined #gluster-dev
03:02 overclk joined #gluster-dev
03:18 xavih joined #gluster-dev
03:47 shubhendu__ joined #gluster-dev
03:47 spandit joined #gluster-dev
03:57 itisravi joined #gluster-dev
03:58 sakshi joined #gluster-dev
04:01 atinmu joined #gluster-dev
04:03 gem joined #gluster-dev
04:04 atinmu joined #gluster-dev
04:16 nbalacha joined #gluster-dev
04:23 poornimag joined #gluster-dev
04:27 vimal joined #gluster-dev
04:32 kanagaraj joined #gluster-dev
04:33 atinmu http://review.gluster.org/#/c/11115/ has passed regression on both Linux & NetBSD, this is a backport and fixes a spurious failure, any volunteer for a quick review?
04:39 kshlm joined #gluster-dev
04:45 saurabh_ joined #gluster-dev
04:51 ppai joined #gluster-dev
04:52 ashishpandey joined #gluster-dev
04:55 rafi joined #gluster-dev
05:03 schandra joined #gluster-dev
05:07 hagarth atinmu: merged, thanks
05:07 atinmu hagarth, thanks
05:07 atinmu hagarth++
05:07 glusterbot atinmu: hagarth's karma is now 65
05:18 kdhananjay joined #gluster-dev
05:18 pppp joined #gluster-dev
05:18 Manikandan joined #gluster-dev
05:19 hgowtham joined #gluster-dev
05:20 Manikandan_ joined #gluster-dev
05:20 raghu` joined #gluster-dev
05:21 rgustafs joined #gluster-dev
05:21 jiffin joined #gluster-dev
05:27 soumya joined #gluster-dev
05:28 ashiq joined #gluster-dev
05:38 xavih joined #gluster-dev
05:45 anekkunt joined #gluster-dev
05:53 deepakcs joined #gluster-dev
05:53 Gaurav__ joined #gluster-dev
05:55 krishnan_p joined #gluster-dev
06:04 hagarth joined #gluster-dev
06:15 rgustafs joined #gluster-dev
06:20 xan joined #gluster-dev
06:24 soumya joined #gluster-dev
06:24 atalur joined #gluster-dev
06:24 xan left #gluster-dev
06:28 aravindavk joined #gluster-dev
06:38 aravindavk joined #gluster-dev
06:38 aravindavk joined #gluster-dev
06:43 itisravi kshlm: request merge of http://review.gluster.org/#/c/11104/ when free.
06:46 Joe_f joined #gluster-dev
06:52 kshlm itisravi, has netbsd regression been triggered for that change?
06:53 itisravi kshlm: I haven't triggered anything manually.
06:55 kshlm Huh. All active netbsd-regression jobs are hung.
06:55 rafi left #gluster-dev
06:56 rafi joined #gluster-dev
06:56 ashishpandey joined #gluster-dev
07:04 nbalacha joined #gluster-dev
07:11 kdhananjay joined #gluster-dev
07:17 pranithk joined #gluster-dev
07:23 krishnan_p pranithk, did you get a chance to review http://review.gluster.org/#/c/11095/ ?
07:24 krishnan_p pranithk, I have not retriggered regression. I hope you didn't think I am still working on this patch. It is ready to be reviewed. It fails on ./tests/bugs/replicate/bug-880898.t.
07:24 krishnan_p pranithk, this is a known regression failure.
07:26 Joe_f joined #gluster-dev
07:46 soumya joined #gluster-dev
07:46 anrao joined #gluster-dev
07:50 krishnan_p kshlm, what happened to the hung netbsd-regression runs? Do we need to restart them individually?
07:55 pranithk joined #gluster-dev
07:57 Joe_f joined #gluster-dev
07:57 nkhare joined #gluster-dev
08:11 itisravi joined #gluster-dev
08:15 aravindavk joined #gluster-dev
08:33 nbalacha Is /tests/basic/afr/self-heald.t a known spurious failure?
08:34 nbalacha It is holding up http://review.gluster.org/11090
08:34 atinmu nbalacha, I spoke to itisravi
08:35 itisravi nbalacha: try rebasing and resubmitting again instead of editing the commit message
08:35 atinmu nbalacha, I've already rebased from web interface
08:35 nbalacha itisravi, thanks. I think Atin has already done that
08:35 nbalacha atinmu, thanks
08:35 atinmu nbalacha, np
08:36 anekkunt krishnan_p, kshlm, can you please review this patch: http://review.gluster.org/#/c/11120/
08:48 kshlm krishnan_p, Yes. Unfortunately all the queued jobs need to be retriggered manually.
08:54 vimal joined #gluster-dev
08:55 hchiramm_ schandra++ thanks
08:55 glusterbot hchiramm_: schandra's karma is now 8
08:58 ndevos hchiramm_: csim could probably make the wiki read-only? any reason you did not send the request to the infra list?
08:59 hchiramm_ ndevos, ah.. thought of Ccing
09:00 hchiramm_ however missed it
09:02 atinmu rafi, did you guys change your scrum standup schedule?
09:03 rafi atinmu: from onward, we are planning to do the scrum at 4:30 to 5:30
09:04 rjoseph joined #gluster-dev
09:05 rafi atinmu: *from today onward
09:12 pranithk joined #gluster-dev
09:12 csim ndevos: mhh I can make it readonly, not sure I can make it readonly in an elegant way :)
09:12 shubhendu__ joined #gluster-dev
09:13 pranithk joined #gluster-dev
09:13 ndevos csim: hchiramm_ sent an email with a note about it, if he did not forward it yet, I can do so
09:13 hchiramm_ ndevos, please go ahead
09:15 ndevos csim: http://thread.gmane.org/gmane.comp.file-systems.gluster.infra/207
09:15 csim oh, indeed, there is a setting
09:17 csim done
09:17 ndevos hchiramm_: what is the plan to find changes made in the wiki, and sync them to the docs project?
09:17 ndevos csim++ thank you
09:17 glusterbot ndevos: csim's karma is now 1
09:17 csim ndevos: we have a script to convert that to middleman/git based system
09:18 ndevos csim: was that used for creating the glusterdocs repo?
09:18 ndevos I guess schandra would know
09:19 hchiramm_ schandra, do u know how to track recently edited wiki pages ?
09:20 csim ndevos: not sure, I know we wanted to use that for ovirt and for rdo, maybe not for gluster
09:20 csim I get a bit confused between all projects :/
09:20 schandra hchiramm, I am not aware of any automated means of tracking..
09:21 ndevos schandra: http://www.gluster.org/community/documentation/index.php/Special:RecentChanges
09:21 hchiramm_ may be we could make use of the notifications received
09:21 hchiramm_ ndevos++ thanks
09:21 glusterbot hchiramm_: ndevos's karma is now 150
09:22 itisravi_ joined #gluster-dev
09:22 ndevos but, that looks rather ugly, maybe it is easier to find changes in the database?
09:23 hchiramm_ https://github.com/gluster/glusterdocs/pull/23 ndevos schandra real effect of easy contribution :)
09:24 ndevos schandra: this looks nicer: http://www.gluster.org/community/documentation/index.php?namespace=2&invert=1&days=90&title=Special%3ARecentChanges
09:24 schandra ndevos, thanks will check.
09:24 schandra ndevos++
09:24 glusterbot schandra: ndevos's karma is now 151
09:26 ndevos schandra: you can see more history when you change days=90 in the url ;-)
09:27 schandra yes :)
09:27 ndevos oh, maybe not... I cant see before May 6 :-/
09:28 ndevos ah, but you can click the "500" to show more
09:30 ndevos atinmu: can you do the bug triage today?
09:30 schandra ndevos, yes only the "no of changes" works , and not days
09:31 ndevos schandra: I think its a combination of both, but you'll work it out :)
09:31 schandra ndevos, will do (Y)
09:34 atinmu ndevos, yes I will do it with rafi
09:35 ndevos atinmu++ rafi++ thanks!
09:35 glusterbot ndevos: atinmu's karma is now 20
09:35 glusterbot ndevos: rafi's karma is now 14
09:41 kshlm joined #gluster-dev
09:42 ira joined #gluster-dev
10:04 badone_ joined #gluster-dev
10:08 ndevos kkeithley: hmm, whats up with TLSv1_2_method in http://review.gluster.org/11096 ? do I need to change that now, or not?
10:13 krishnan_p Could someone review http://review.gluster.org/11095 ?
10:13 krishnan_p ndevos, I am looking at you ^^ :-)
10:16 krishnan_p pranithk, and you! With this we can strike out sparse-self-heal.t from spurious regression.
10:17 ndevos krishnan_p: YES!
10:18 krishnan_p ndevos, you were saying that this patch was a little hard to wrap your head around. Would it help if I answered any of your latent concerns or questions?
10:19 ndevos krishnan_p: nah, thats ok, its just that there are *so* many other distractions...
10:21 ndevos krishnan_p: should there not be a bug for this?
10:21 krishnan_p ndevos, I can imagine. I am taking some liberties when I ask you to review my patches. Thanks for reviewing
10:21 krishnan_p ndevos, hmm. For reviews, I didn't think so.
10:22 krishnan_p ndevos, I was deferring creating a bug, hoping there would be a bug for the sparse-self-heal.t spurious regression failure.
10:22 ndevos krishnan_p: no, but a bug should contain the problem description that is addressed, its missing in the commit message :)
10:22 krishnan_p ndevos, I can arrange for a descriptive commit log, explaining the problem and the nature of the fix.
10:22 ndevos krishnan_p: yes, that would be appreciated
10:23 krishnan_p pranithk, itisravi_ is there a bug for the sparse-self-heal.t spurious regression failure?
10:23 ndevos krishnan_p: I always like to know why a change was made, and what problem it fixes - preferably in the commit message so that a future "git blame" can refresh my memory
10:23 krishnan_p ndevos, is that preventing you from reviewing the patch? Then I would get to it right away
10:23 itisravi_ krishnan_p: checking
10:23 ndevos krishnan_p: no, thats not blocking me, but something I would like to see before it gets merged
10:24 krishnan_p ndevos, me too. When I first submitted, I was considering this as a good to have fix. With the last patchset, I am convinced that this is how I would fix the problem
10:25 itisravi_ krishnan_p: don't think there is one. Just use the umbrella BZ for the spurious failures.
10:25 krishnan_p itisravi_, and what would that be? Is it specific for AFR?
10:26 ndevos krishnan_p: it looks like a clean(er) solution, but I am not yet able to judge if it addresses the problem you say it does :)
10:26 itisravi_ krishnan_p: 1163543 was the bug for spurious failures, seems like it is modified. I think it would be best to create a new BZ
10:27 itisravi_ s/modified/ MODIFIED
10:28 krishnan_p ndevos, we take call_pool->lock (in fact a TRY_LOCK) in gf_proc_dump_pending_frames, which races with STACK_RESET, which removes frames from a stack, which is still part of the call_pool->all_frames
10:29 krishnan_p ndevos, this is different from how STACK_DESTROY works, where we remove the stack as a whole, from call_pool->all_frames under the call_pool->lock
10:30 krishnan_p ndevos, not all parts of my change are necessary to remove the race condition. While I was at it, I thought it helps maintainability (or eases things) if we used struct list_head instead of the custom-made doubly-linked list
10:31 krishnan_p itisravi_, what was the BZ against which the self-heald.t spurious failure was fixed?
10:31 ndevos krishnan_p: right, I've mainly checked the replacement/usage of the list_head - I'll check STACK_RESET a little more
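A minimal sketch may help make the race above concrete. The code below is a hypothetical illustration using plain pthreads and a hand-rolled list, not the actual call_pool_t/call_frame_t code from review 11095: the statedump path try-locks the pool lock and walks all_frames (as gf_proc_dump_pending_frames does), so any path that unlinks frames — the STACK_RESET case — has to take the same lock, as STACK_DESTROY already does; otherwise the dump can walk through a frame that is being unlinked and freed.

```c
#include <pthread.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

struct list_head { struct list_head *next, *prev; };

static void list_del(struct list_head *e)
{
        e->prev->next = e->next;
        e->next->prev = e->prev;
        e->next = e->prev = e;
}

struct frame {
        int              unique;
        struct list_head list;          /* linked into pool->all_frames */
};

struct call_pool {
        pthread_mutex_t  lock;
        struct list_head all_frames;    /* every live frame in the pool */
};

/* statedump side: try-lock and iterate; skip the dump if the pool is busy */
static void dump_pending_frames(struct call_pool *pool)
{
        if (pthread_mutex_trylock(&pool->lock) != 0)
                return;
        for (struct list_head *p = pool->all_frames.next;
             p != &pool->all_frames; p = p->next) {
                struct frame *f = (struct frame *)((char *)p -
                                  offsetof(struct frame, list));
                printf("pending frame unique=%d\n", f->unique);
        }
        pthread_mutex_unlock(&pool->lock);
}

/* reset side: unlinking a frame must happen under the same pool lock;
 * dropping the lock here would reintroduce the race described above */
static void reset_frame(struct call_pool *pool, struct frame *f)
{
        pthread_mutex_lock(&pool->lock);
        list_del(&f->list);
        pthread_mutex_unlock(&pool->lock);
        free(f);
}
```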
10:35 jiffin ndevos: ping
10:35 glusterbot jiffin: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
10:35 krishnan_p ndevos, OK.
10:35 jiffin ping ndevos
10:35 itisravi_ krishnan_p: there was no bz for self-heald.t
10:35 * ndevos informs that "ping ndevos" is a naked ping too
10:36 kkeithley2 joined #gluster-dev
10:36 krishnan_p itisravi_, is it being merged as an rfc patch?
10:36 ndevos jiffin: whats up?
10:38 jiffin ndevos: I saw ur reply regarding the acl issue
10:38 anekkunt hagarth,  Can you have a look at this patch http://review.gluster.org/#/c/10894/
10:38 krishnan_p itisravi_, I think I wasn't clear. The recent spurious failure in self-heald.t was fixed or is being fixed as we speak
10:38 jiffin ndevos: i have doubts whether invalidating the cache entry in ganesha will work
10:38 itisravi_ krishnan_p: there was no spurious failure fix against self-heald.t
10:39 ndevos jiffin: yes, and you said that the refresh is needed in the xlators, not in ganesha?
10:39 itisravi_ krishnan_p: at least git log tests/basic/afr/self-heald.t doesn't show anything recent.
10:39 jiffin ndevos: yup
10:39 krishnan_p itisravi_, In the last week, I remember seeing self-heald.t mentioned among spurious regression failures. Let me get you the link to the corresponding archives
10:40 jiffin ndevos: more specifically, the access-control translator
10:40 ndevos jiffin: that could well be, I did not have time to test it out, or think more about it
10:40 jiffin ndevos: k
10:40 krishnan_p itisravi_, I am referring to - http://www.gluster.org/pipermail/gluster-devel/2015-June/045458.html
10:40 krishnan_p itisravi_, does this ring a bell?
10:41 krishnan_p itisravi_, anyway never mind. I'd rather open a bug myself.
10:41 itisravi_ krishnan_p: yes that would be good, since your fix is not for the test case itself.
10:41 * krishnan_p thinks we should not need a bugzilla entry for every patch.
10:42 ndevos jiffin: is that access-control on the client side? if that is the case, we could probably use upcall to invalidate the cache there?
10:42 ndevos jiffin: or, could it be the md-cache xlator?
10:42 jiffin ndevos: i don't think access-control is loaded on the client side for nfs-ganesha
10:42 krishnan_p itisravi_, I don't understand why we should need different bugs for ones that fix test cases and ones that fix code, so that existing test case runs successfully
10:43 jiffin ndevos: i am receiving the error from the access-control translator on the brick side
10:43 ndevos jiffin: hmm, okay, I'm not sure about the xlator stack in gfapi
10:43 krishnan_p itisravi_, hypothetically, if a regression test case failed, and the issue was with both the current code and the test case, I would use one bug to send patches for both aspects of the failure
10:43 ndevos jiffin: right, thats rather conclusive :D
10:43 jiffin ndevos: :)
10:44 jiffin ndevos: by default access-control translator is loaded in client stack
10:45 jiffin ndevos: I will send the workaround and get your comments on it.
10:46 ndevos jiffin: yes, please do, it might make it easier to understand :)
10:46 jiffin ndevos: sure
10:47 atinmu joined #gluster-dev
10:48 itisravi_ krishnan_p: agreed, but the existing bugs are in MODIFIED state..
10:48 krishnan_p itisravi_, OK
10:51 firemanxbr joined #gluster-dev
10:52 rjoseph joined #gluster-dev
11:00 rafi1 joined #gluster-dev
11:05 ndevos krishnan_p: I think there is a potential race that could cause problems, I've left comments in the review, nothing else to note
11:07 krishnan_p ndevos, thanks. I am improving the commit log and raising a bug to capture this issue. (not the sparse-self-heal.t)
11:07 ndevos krishnan_p: ok, thanks!
11:10 atinmu joined #gluster-dev
11:12 shubhendu__ joined #gluster-dev
11:12 rjoseph joined #gluster-dev
11:14 krishnan_p ndevos, awesome catch! I can't think of a place where someone needs to use STACK_RESET and STACK_DESTROY concurrently.
11:15 krishnan_p ndevos, Another corollary is that DESTROY and RESET both require all the frames to have unwound, which is in some sense a synchronization among the frames in a stack.
11:15 krishnan_p ndevos, I will see how this race can be avoided. The one involving the last frame.
11:17 ndevos krishnan_p: ah, ok, I could also not think of a use-case that would hit the race, but better safe than sorry :)
11:23 Joe_f joined #gluster-dev
11:32 dlambrig joined #gluster-dev
11:35 nkhare joined #gluster-dev
11:39 hagarth joined #gluster-dev
11:43 ndevos kkeithley_: whats your opinion on http://review.gluster.org/#/c/11107/1/xlators/protocol/client/src/client-rpc-fops.c@507 , drop the variable, or keep it this way?
11:50 krishnan_p ndevos, thanks much for reviewing the changes. I have addressed them. Hope the updated patchset looks good.
11:56 atinmu REMINDER: Gluster Community Bug Triage meeting starting in another 5 minutes in #gluster-meeting
12:00 soumya joined #gluster-dev
12:33 lalatenduM hchiramm: hchiramm_ regarding the patch http://review.gluster.org/#/c/11129/, I think you should start a mail thread on gluster-devel
12:34 hchiramm_ lalatenduM, I already sent a mail to gluster-devel describing our future plan
12:35 lalatenduM hchiramm_: I believe that was part of the doc restructuring plans, like 2 months back, right?
12:35 hchiramm_ no :) , u r missing mails now a days :)
12:35 lalatenduM hchiramm_: yup, most likely , can you point me to the mail
12:36 hchiramm_ http://www.gluster.org/pipermail/gluster-users/2015-May/022065.html lalatenduM
12:36 lalatenduM hchiramm_: thanks
12:38 lalatenduM hchiramm_: ok , I had seen the mail :) and its a huge mail :)
12:38 hchiramm_ :)
12:40 lalatenduM hchiramm_: looks like we did not get many replies to the mail. So in order to bring more attention, we have two options: if you want, you can reply to the original mail with the patch link, or start a new mail asking for review comments. what do you think?
12:40 hchiramm_ lalatenduM, I dont think its required
12:41 hchiramm_ because there is no way to maintain 2 repos
12:41 hchiramm_ its not practically possible
12:41 hchiramm_ and the effort is to avoid duplicate contents
12:42 hchiramm_ that is one of the reasons to introduce a single project called glusterdocs
12:42 lalatenduM hchiramm_: basically the idea is to socialize the new idea, so that by the time the patch merges most of the community knows the change happened in the process
12:42 lalatenduM of documentation
12:43 lalatenduM and they don't feel someone has imposed the idea on them
12:43 hchiramm_ if they want to respond , they can respond to the existing thread
12:43 hchiramm_ its not the first thread on this topic
12:44 hchiramm_ we already sent 2/3 mails on this.
12:45 lalatenduM hchiramm_: basically patches like this are not just code changes, but rather a change in process. So if we take more people along with us during the process, we make things easy for us, i.e. the maintainers
12:46 lalatenduM hchiramm_: however u r free to do what u think is right :)
12:49 lalatenduM hchiramm_: also if you are interested to send a mail , I will do it :)
12:51 lalatenduM s/interested/not interested/
13:03 rafi joined #gluster-dev
13:06 hagarth joined #gluster-dev
13:10 pppp joined #gluster-dev
13:21 hagarth merged the NetBSD umount patch, hopefully our regression runs should get smoother
13:22 hagarth there is still a baffling problem of repos and binaries getting wiped out during a NetBSD regression run.
13:26 ndevos hagarth: yeah, I noticed that too, I wanted to check the logs of a (maybe) died nfs-server, but there was *nothing*
13:30 shyam joined #gluster-dev
13:31 hagarth ndevos: right. I think the umount problem was related to killall glusterfs before umount $N0
13:33 hagarth I think we should declare tomorrow as "Fix NetBSD regression tests day." :)
13:34 ndevos hagarth: yeah, killing before unmounting definitely is wrong :)
13:36 pousley joined #gluster-dev
13:36 ndevos hagarth: anrao is trying to figure out where the deletion of the build comes from; maybe she'll find out soon and NetBSD will be working again tomorrow
13:36 hagarth ndevos: that would be great
13:36 ndevos yes, it would be!
13:37 hagarth anrao: have you been able to hit the problem?
13:38 anrao hagarth: yes
13:38 anrao still checking through the logs
13:38 hagarth anrao: that's great progress!
13:40 ppai joined #gluster-dev
13:40 hagarth anrao: sure, thanks!
13:41 anrao hagarth: will let you know about the progress
13:45 krishnan_p joined #gluster-dev
13:56 kkeithley_ (07:43:38 AM) ndevos: kkeithley_: whats your opinion on http://review.gluster.org/#/c/11107/1/xlators/protocol/client/src/client-rpc-fops.c@507 , drop the variable, or keep it this way?
13:57 ndevos indeed, I said that
13:57 kkeithley_ I don't like adding 5+ lines of code just to log or not log when one line will suffice
13:57 kkeithley_ kinda why I made the comment in the gerrit. ;-)
13:58 ndevos kkeithley_: ok, I'll change it if you dont like it :)
13:58 pranithk joined #gluster-dev
13:59 rjoseph joined #gluster-dev
14:26 deepakcs joined #gluster-dev
14:27 rafi joined #gluster-dev
14:31 lpabon joined #gluster-dev
14:45 msvbhat JustinClift: You there? I get "Configuration Error" while trying to log in to review.gluster.org
14:48 msvbhat JustinClift: ^^ Have any idea?
15:02 ndevos aaah! hagarth, do you know who created the nbslave7f.cloud.gluster.org Jenkins slave? there is no VM for it, it just uses nbslave71.cloud.gluster.org - which then hosts 2 slave processes at the same time :-/
15:13 shubhendu__ joined #gluster-dev
15:15 hchiramm_ anrao++
15:15 glusterbot hchiramm_: anrao's karma is now 6
15:15 pousley joined #gluster-dev
15:17 hchiramm_ csim, Is our wiki readonly now ?
15:18 hchiramm_ if that is the case, it's better to put up a banner which points to the new documentation
15:18 csim hchiramm_: it is
15:18 csim but then we cannot put a banner, since it is readonly :p
15:19 hchiramm_ $wgReadOnly = 'This wiki is currently being upgraded to a newer software version.';
15:19 csim mhh ok, will look
15:19 hchiramm_ I thought if we adjust the above string, it will display as a banner
15:19 csim let's see
15:19 hchiramm_ sure..
15:20 hchiramm_ thanks csim++
15:20 glusterbot hchiramm_: csim's karma is now 2
15:21 pousley joined #gluster-dev
15:21 csim so I just changed it
15:22 * hchiramm_ checking
15:23 hchiramm_ not displaying it as a banner at my end , may be browser cachec
15:23 hchiramm_ cachec/cache
15:23 csim yeah, on http://gluster.org/documentation/About_Gluster/ ?
15:23 pranithk joined #gluster-dev
15:23 firemanxbr_ joined #gluster-dev
15:23 csim mhh no, where is the wiki ?
15:24 hchiramm_ http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.7
15:24 hchiramm_ those are from wiki
15:26 ndevos hchiramm_: whats the text of the banner you would like to see?
15:26 hchiramm_ ndevos, please feel free to suggest :)
15:26 hchiramm_ a pointer to gluster.readthedocs.org
15:27 hchiramm_ and a pointer to https://github.com/gluster/glusterdocs
15:27 ndevos hchiramm_: dont tell me, tell csim :)
15:28 hchiramm_ :P
15:37 JustinClift msvbhat: Still getting the Gerrit error when you try to log in?
15:37 * JustinClift has been ignoring IRC since friday.  Guess that's not good timing. ;)
15:38 lpabon joined #gluster-dev
15:48 soumya joined #gluster-dev
15:48 ndevos soumya: please correct the commit-message from http://review.gluster.org/11141 - it refers to the mainline bug and all
15:53 soumya ndevos, ohh...thanks..will update it
15:57 ndevos hey firemanxbr, did you see my reply about Latinoware?
16:10 JustinClift Hmmm, I think theforge v2 stats collection approach for "# of commits" needs to change
16:10 JustinClift Yeah.
16:10 JustinClift Well, there goes the historical data for that then. Oh well. ;)
16:11 hagarth ndevos++
16:11 glusterbot hagarth: ndevos's karma is now 152
16:12 hagarth @stats
16:12 glusterbot hagarth: I have 3 registered users with 0 registered hostmasks; 1 owner and 1 admin.
16:12 hagarth @karma
16:12 glusterbot hagarth: Highest karma: "ndevos" (152), "lalatenduM" (82), and "kkeithley" (72).  Lowest karma: "<" (-12), "(" (-6), and "typo" (-3).  You (hagarth) are ranked 4 out of 112.
16:12 firemanxbr ndevos, sorry for my delay
16:13 firemanxbr ndevos, today is my bday :D
16:14 firemanxbr ndevos, I'm looking into all the costs for this event (travel + hosting)
16:14 firemanxbr ndevos, I believe I can send some feedback, but tomorrow :)
16:17 hagarth firemanxbr: happy birthday! :)
16:18 atalur joined #gluster-dev
16:18 firemanxbr hagarth, thnkz :D
16:20 ndevos firemanxbr: happy bday!
16:20 firemanxbr ndevos, thnkz master :)
16:20 ndevos firemanxbr: no need to rush the reply on the email, I was only reminding you :)
16:21 firemanxbr ndevos, no problem, I will check with the organizers of the event for the most economical option for the project.
16:22 ndevos firemanxbr: oh, thats great, and maybe I'll find a way to join you there :)
16:23 firemanxbr ndevos, that would be perfect, I always learn a lot from your presentations!
16:23 csim so, does someone remember the url of the wiki?
16:24 ndevos csim: http://www.gluster.org/community/documentation/index.php
16:25 csim grmbl, no banner
16:25 ndevos edit a page and see if it really is read-only?
16:25 csim also, I was at dotscale yesterday, and one guy spoke of the work he did on https://github.com/aphyr/jepsen/tree/master/jepsen
16:25 csim I think his approach might be useful to pursue for gluster
16:26 csim "The administrator who locked it offered this explanation: This wiki is deprecated, please see https://gluster.readthedocs.org/en/latest/ "
16:26 csim ok, we need a bigger banner, or a redirect
16:29 ndevos csim: maybe send a mail or talk to msvbhat about jepsen? we have distaf that does testing for us, maybe we can use some of the jepsen ideas
16:31 csim ndevos: I think I will wait for the video on youtube before sending the link
16:32 csim the guy said the code was not the cleanest, maybe we could help
16:32 csim especially since we have gluster, ceph, and swift, all of which would benefit from such high-level fuzzing
16:33 ndevos csim: yes, if the presentation was recorded, send the link when it's posted
16:34 csim ( the guy was also quite a good speaker )
16:34 ndevos Ceph uses https://github.com/ceph/teuthology which is rather full featured, distaf integrates well with Gluster, but I have no idea what Swift uses
16:34 csim users ?
16:35 ndevos ?
16:35 csim ndevos: users serve as regression test suite :)
16:35 ndevos ah, yeah, probably :D
16:36 ndevos teuthology would have a test that emulates a monkey in a datacenter, pulling random cables and drives and all
16:37 csim oh, nice
16:37 csim (another talk spoke of chaos monkey, and how they shut down a complete DC from time to time)
16:37 csim (at netflix)
16:37 ndevos thats good testing too
16:39 csim let me apply that to our infra :)
16:45 kkeithley_ ndevos: wrt There also does not seem to be a guarantee that TLSv1_2_method() is available when TLS1_2_VERSION is #define'd.
16:46 kkeithley_ My centos5.11 box has openssl/tls1.h:#define TLS1_2_VERSION 0x0303
16:46 jiffin joined #gluster-dev
16:46 kkeithley_ but no TLSv1_2_method() in libssl.so
16:46 kkeithley_ if you hadn't already seen that.
16:54 shubhendu__ joined #gluster-dev
16:55 ndevos kkeithley_: I did not check that, I just took your word and combined that with the change you made
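For context, the concern here is that the TLS1_2_VERSION macro from the OpenSSL headers does not guarantee that libssl itself provides TLSv1_2_method(), as the CentOS 5.11 example shows. Below is a hedged sketch of one way to guard the call, assuming a hypothetical HAVE_TLSV1_2_METHOD symbol detected at configure time (e.g. via AC_CHECK_FUNCS); that symbol is an assumption, not an existing Gluster configure flag.

```c
#include <openssl/ssl.h>

/* pick the strongest method both the header and the library support */
static const SSL_METHOD *
pick_ssl_method(void)
{
#if defined(TLS1_2_VERSION) && defined(HAVE_TLSV1_2_METHOD)
        /* header declares TLS 1.2 and libssl exports the method */
        return TLSv1_2_method();
#else
        /* fall back to the version-negotiating method available everywhere */
        return SSLv23_method();
#endif
}
```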
17:17 Gaurav__ joined #gluster-dev
17:25 shyam joined #gluster-dev
17:30 xavih joined #gluster-dev
17:53 atinmu joined #gluster-dev
17:55 dlambrig1 left #gluster-dev
17:55 atinmu hagarth, I've added a test case @ http://review.gluster.org/#/c/11143/ for the recent volume status issue we faced for 3.7
17:55 hagarth atinmu: great, thanks!
17:55 hagarth I think our regressions are back on track too now
17:56 atinmu hagarth, I see a few NetBSD machines are alive
17:56 atinmu hagarth, however I spotted two new failures, already shared with devel
17:56 hagarth atinmu: yeah, Emmanuel's patch should prevent NetBSD machines from hanging
17:57 atinmu hagarth, one in mainline and the other in release-3.7
17:57 hagarth atinmu: failures in tests are fine. The NetBSD problem was quite involved and we ended up accumulating a lot of regression debt.
17:58 atinmu hagarth, right
18:38 ira joined #gluster-dev
19:05 pousley_ joined #gluster-dev
19:07 firemanxbr joined #gluster-dev
19:19 msvbhat JustinClift: Yes, I still get that, but only in Firefox. Chrome is unusually slow, though.
19:20 rafi joined #gluster-dev
19:39 shyam left #gluster-dev
20:14 lpabon joined #gluster-dev
20:29 shyam joined #gluster-dev
20:39 badone_ joined #gluster-dev
21:10 badone__ joined #gluster-dev
21:34 lpabon joined #gluster-dev
22:15 badone_ joined #gluster-dev
23:38 lpabon joined #gluster-dev
23:43 ira joined #gluster-dev
