
IRC log for #gluster-dev, 2015-03-31


All times shown according to UTC.

Time Nick Message
00:00 bala joined #gluster-dev
00:51 obnox JustinClift: ok, thanks - I just started using it.
00:52 obnox hmm, when I want to submit more than one patch as a patchset for review, should I manually change the Change-Id that the git commit hook puts into the commit message to be the same ID for all patches in the patchset?
00:53 obnox Otherwise the rfc.sh script seems to create separate changes/review requests in gerrit
01:46 JustinClift obnox: I'm not sure.  I don't often submit patches myself... :(
01:46 JustinClift (not in the last few months anyway)
01:48 ilbot3 joined #gluster-dev
01:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:54 rjoseph joined #gluster-dev
03:11 vipulnayyar joined #gluster-dev
03:35 kdhananjay joined #gluster-dev
03:38 atinmu joined #gluster-dev
03:41 itisravi joined #gluster-dev
03:44 shubhendu joined #gluster-dev
04:08 overclk joined #gluster-dev
04:12 kanagaraj joined #gluster-dev
04:18 nkhare joined #gluster-dev
04:34 anoopcs joined #gluster-dev
04:35 spandit joined #gluster-dev
04:41 ppai joined #gluster-dev
04:45 jiffin joined #gluster-dev
04:49 kasturi joined #gluster-dev
04:52 bala joined #gluster-dev
04:52 kotreshhr joined #gluster-dev
04:58 soumya joined #gluster-dev
05:12 rafi joined #gluster-dev
05:15 kshlm joined #gluster-dev
05:20 Manikandan joined #gluster-dev
05:23 aravindavk joined #gluster-dev
05:24 kdhananjay joined #gluster-dev
05:30 kotreshhr joined #gluster-dev
05:32 ndarshan joined #gluster-dev
05:34 vimal joined #gluster-dev
05:37 lalatenduM joined #gluster-dev
05:42 ashiq joined #gluster-dev
05:53 gem joined #gluster-dev
06:13 hagarth joined #gluster-dev
06:14 nishanth joined #gluster-dev
06:14 kotreshhr1 joined #gluster-dev
06:28 soumya joined #gluster-dev
06:45 deepakcs joined #gluster-dev
06:46 nkhare joined #gluster-dev
07:04 atinmu joined #gluster-dev
07:05 kotreshhr joined #gluster-dev
07:08 hagarth joined #gluster-dev
07:17 raghu joined #gluster-dev
07:20 atinmu joined #gluster-dev
07:46 hchiramm joined #gluster-dev
07:47 hagarth joined #gluster-dev
08:18 obnox ok, so if jenkins fails smoke test on some change submitted to gerrit, how can I really tell what is going on? I can't really tell what the problem is from the console output in jenkins, only that one of the test cases fails (chown/00.t in one case, chmod/00.t in the other)
08:22 anoopcs obnox: It may be a spurious failure. You can re-trigger the same by updating the commit message
08:22 obnox anoopcs: thanks. I suspected so.
08:23 obnox anoopcs: still curious if I can see more details: these tests are outside the glusterfs code so I can't simply run them from within the checkout
08:24 anoopcs obnox: you mean to run it locally?
08:24 obnox anoopcs: yep
08:24 obnox anoopcs: or else, see more failure details in jenkins
08:25 anoopcs obnox: We can mount those test-suites
08:26 obnox anoopcs: what does that mean?
08:26 anoopcs At present, I don't remember the location
08:31 anrao joined #gluster-dev
08:31 anoopcs obnox: We can download the logs.
08:31 anoopcs I think
08:31 nkhare joined #gluster-dev
08:36 anoopcs obnox: For http://review.gluster.org/10055, regression tests passed [http://build.gluster.org/job/rackspace-regression-2GB-triggered/6144/consoleFull] but the overall job failed
08:39 obnox anoopcs: yeah, this is the console output.
08:39 obnox but it does not give me a real clue in this case:
08:39 obnox http://build.gluster.org/job/smoke/14831/console
08:39 obnox just says:
08:40 lalatenduM ndevos++
08:40 obnox ...
08:40 glusterbot lalatenduM: ndevos's karma is now 104
08:40 obnox /opt/qa/tools/posix-compliance/tests/chown/00.t .....
08:40 obnox Failed 1/171 subtests
08:40 obnox ...
08:40 obnox Test Summary Report
08:40 obnox -------------------
08:40 obnox /opt/qa/tools/posix-compliance/tests/chown/00.t   (Wstat: 0 Tests: 171 Failed: 1) Failed test:  161
08:40 glusterbot obnox: -----------------'s karma is now -1
08:40 anoopcs Towards the end, there is
08:40 anoopcs smoke.sh returned 2
08:40 anoopcs Process leaked file descriptors.
08:41 obnox anoopcs: right. but isn't that a consequence of the failures?
08:41 obnox it also can't find the logfile to tar
08:42 anrao joined #gluster-dev
08:44 anoopcs obnox: Maybe... I don't know the details of that. JustinClift, are you there?
08:45 anoopcs JustinClift can help you in detail on this particular context
08:46 anoopcs obnox: Anyway try re-triggering the patch.
08:46 JustinClift ?
08:47 JustinClift anoopcs: I've been awake all night, so brain isn't so good atm.
08:47 JustinClift anoopcs: Which context?  The smoke failure?
08:47 anoopcs JustinClift, ok. some failure in smoke test
08:47 anoopcs yep
08:47 JustinClift anoopcs: Ignore it
08:48 anoopcs Just re-trigger, right?
08:48 JustinClift Well, rerun the test.  99.999999% chance it'll pass the 2nd time around
08:48 JustinClift If it does keep failing in the same spot though... that's a real problem that needs to be investigated
08:48 JustinClift Yeah
08:48 JustinClift re-trigger :)
08:48 anoopcs JustinClift: cool..
08:48 obnox JustinClift: retrigger == amend commit message and re-run rfc.sh ?
08:48 anoopcs thanks
08:48 JustinClift I don't think I've _ever_ seen one of those spurious type smoke failures ever fail a 2nd time ;)
08:49 JustinClift obnox: Nope
08:49 JustinClift There's a better way
08:49 * JustinClift looks for the instructions
08:49 JustinClift 1 sec
08:49 JustinClift http://www.gluster.org/community/documentation/index.php/Retrigger_jobs_in_Jenkins
08:49 JustinClift obnox: ^
08:50 obnox JustinClift: thanks
08:50 JustinClift obnox: If you're logged into Jenkins, there should be a "Retrigger" option on the failed job
08:50 JustinClift Sure np
08:50 anoopcs JustinClift: Will that re-trigger all smoke tests on all platforms?
08:50 obnox JustinClift: i need to log in to see the retrigger button, that's the trick :)
08:50 obnox JustinClift++
08:50 glusterbot obnox: JustinClift's karma is now 43
08:51 JustinClift anoopcs: I think it just reruns the failed one only, and updates the resulting votes depending on the result
08:51 JustinClift So, if it's the only one that failed, it's the only one that needs to be rerun
08:51 JustinClift Does that make sense?
08:52 anoopcs JustinClift: Ok. and that will remove the 'X' mark from verified column?
08:53 * obnox searching how to register for an account in jenkins.
08:55 raghu kshlm: can you please review this patch? http://review.gluster.org/#/c/10023/
08:56 ashiq joined #gluster-dev
08:56 obnox heh, it seems I have a user-ID in jenkins, but no password
08:56 JustinClift obnox: What's your username?  I can reset the password for you
08:57 JustinClift anoopcs: If it passes the smoke test after being retriggered (and if that was the only thing that failed), then from memory yeah, it'll turn the X into a green tick
08:58 anoopcs JustinClift: Hmm, ok
08:58 obnox JustinClift: for my builds triggered through gerrit, I am 'obnox@samba.org'
08:59 obnox JustinClift: I have not set a pw yet
09:00 anoopcs obnox: Jenkins and gerrit have different login.
09:01 obnox anoopcs: right, but since gerrit scheduled jenkins jobs for me, I appear in jenkins:
09:01 obnox http://build.gluster.org/user/obnox@samba.org/
09:01 obnox anoopcs: and there is no 'register' button or similar.
09:01 * obnox afk for a few minutes
09:02 JustinClift obnox: Yeah, it's closed registration.  We manually create accounts for people.
09:02 JustinClift It seems like you don't have a proper account yet, so I'll make one for you now. :)
09:03 JustinClift obnox: Guessing preferred username is obnox?
09:03 JustinClift Not "ious" :D
09:03 obnox JustinClift: thanks. exactly :)
09:03 obnox ;)
09:03 JustinClift Doing it now. :)
09:04 obnox thanks. bbl (couple of minutes)
09:04 JustinClift np
09:04 JustinClift I'll email the bits to you
09:14 bala joined #gluster-dev
09:15 pranithk joined #gluster-dev
09:30 obnox JustinClift: thanks, works!
09:30 JustinClift :)
09:43 obnox JustinClift: I can also confirm that a successful second trigger of a previously failed test case updates the gerrit review request
09:43 obnox anoopcs: so the red X vanishes. (the green tick is only added later after more tests, I believe)
09:44 nkhare joined #gluster-dev
09:46 anoopcs obnox: thanks for the update
09:49 lalatenduM joined #gluster-dev
09:50 anoopcs obnox: most probably after regression
09:55 rjoseph joined #gluster-dev
09:55 rafi1 joined #gluster-dev
10:02 obnox anoopcs: ping
10:02 glusterbot obnox: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
10:02 obnox oops
10:03 obnox anoopcs: you worked on  feature/trash, right?
10:07 ashiq joined #gluster-dev
10:15 rjoseph joined #gluster-dev
10:18 ashiq joined #gluster-dev
10:18 obnox rjoseph: hi. got a minute for a question regarding features/trash?
10:20 hgowtham joined #gluster-dev
10:21 aravindavk joined #gluster-dev
10:38 anoopcs obnox: yes
10:40 obnox anoopcs: cool. i was pointed to this code by coverity. it seems that in trash.c, the wiping and storing of the eliminate path will not work correctly:
10:41 obnox store_eliminate_path (and wipe_eliminate_path) take eliminate as pointer, not pointer to pointer
10:41 lalatenduM joined #gluster-dev
10:41 obnox and so the store_eliminate_path can't have a side effect on caller's priv->eliminate
10:41 obnox hence this is leaked and never added to the structure
10:41 obnox also, wipe_eliminate_path frees the memory but does not set priv->eliminate to NULL
10:42 obnox so subsequent NULL checks lead to access after free
10:42 anoopcs obnox: Yes..
10:42 obnox I have a patch here that fixes these issues
10:42 obnox I just wanted to double-check before submitting, that I am not missing a subtle point here...
10:42 anoopcs obnox: we identified it 2 days back
10:42 obnox ah.
10:42 obnox submitting ... :)
10:42 anoopcs no problem sent the path
10:43 anoopcs sorry ppatch
10:43 anoopcs crap
10:43 anoopcs *patch
10:43 obnox ;)
10:43 anoopcs :)
10:43 obnox anoopcs: i get your message
10:43 anoopcs :D
10:43 anoopcs http://review.gluster.org/#/c/10064/ This is it, right?
10:43 obnox wait a minute...
10:44 obnox anoopcs: actually:
10:44 obnox http://review.gluster.org/10068
10:44 obnox this one
10:45 anoopcs So 10068 is dependent on 10064, right?
10:45 obnox let me check
10:45 obnox yeah.
10:46 obnox i am going through various CIDs in the trash module right now
10:46 obnox anoopcs: but 10064 is a very similar class of error
10:46 obnox (handing in a pointer where a side effect on that pointer is intended)
10:47 anoopcs obnox: yeah..
10:47 anoopcs obnox: and the fun is
10:47 anoopcs it's passing trash.t
10:47 obnox anoopcs: i was wondering ... from reading the code, this should make it difficult for the module to work correctly - is the test case not sufficient, maybe?
10:47 obnox also:
10:48 jiffin anoopcs: it seems our trash.t is broken too :(
10:48 obnox jiffin: ah, that would explain
10:48 obnox anoopcs: also, regarding the context of 10064 - remove_trash_path:
10:48 obnox anoopcs: remove_trash_path does the very same thing for internal == true and internal == false.
10:49 obnox so either the check is superfluous or it should do something different
10:49 obnox (like in copy_trash_path where an "internal_op/"  string is added
10:49 * anoopcs checking remove_trash_path
10:49 ira joined #gluster-dev
10:50 obnox anoopcs: I think looking for '/' is ok for remove_trash_path, so one could say the check for internal is not required here
10:50 obnox only if one would like to make sure that it is really created as internal, one could check for the whole string instead
10:50 obnox not sure
10:51 obnox hence asking :)
10:51 jiffin obnox: i will explain it to you
10:51 jiffin that's a nice catch
10:52 jiffin rem_path should be used instead of path
10:52 anoopcs that makes sense
10:52 * JustinClift got errors in trash.t last night
10:53 JustinClift Spurious though, and only showed up a few times
10:53 anoopcs JustinClift: in regression?
10:53 * JustinClift is collecting the info atm
10:53 JustinClift anoopcs: yep, on master
10:53 obnox jiffin: ahh, right.
10:54 obnox in the internal case for copy, another path level is added, so for remove, another level should be removed
10:54 obnox dzang
10:54 anoopcs obnox: you are right.
10:55 * anoopcs waiting for trash.t spurious failure info. . .
10:55 jiffin obnox++ thanks for pointing out
10:55 glusterbot jiffin: obnox's karma is now 1
10:55 anoopcs obnox++
10:55 glusterbot anoopcs: obnox's karma is now 2
10:55 jiffin the trash translator handles internal operations and normal client operations differently
10:56 jiffin internal operations are self heal, rebalance etc which have a negative pid
10:57 jiffin files from internal operations are stored in <trash-directory>/internal_op
10:58 jiffin and for normal client operations files will be stored inside /<trash-directory>/
10:59 JustinClift anoopcs: Found it:
10:59 JustinClift ./tests/features/trash.t                                                            (Wstat: 0 Tests: 65 Failed: 1) Failed test:  57
10:59 JustinClift Failed test: 57
11:00 obnox Finding the failing test seems a little difficult to me, i.e. it requires manual counting in the test script - right or wrong?
11:00 anoopcs obnox: we have a script
11:00 JustinClift anoopcs: I ran two lots of tests overnight.  One with 20 x standard Rackspace 2GB VM's
11:01 kkeithley1 joined #gluster-dev
11:01 JustinClift The results from that are in the email earlier.  Wasn't too bad.
11:01 JustinClift anoopcs: Then I pushed hard, and ran 20 x Rackspace 2GB VM's
11:01 JustinClift These VM's are _slow_
11:01 JustinClift And way underpowered CPU wise
11:01 JustinClift They really show up race conditions and time outs well :)
11:02 JustinClift One of these had the failure in trash.t
11:02 JustinClift I'm cutting-n-pasting the results into an email now
11:02 JustinClift I have the launching of the VM's scripted... but not the result collection. :/
11:03 JustinClift (I will get around to automating the result collection at some point too, so then we can just kick it off from a jenkins job or something maybe)
11:03 anoopcs JustinClift: thanks... As part of portability to NetBSD, we are planning to address spurious failures
11:03 JustinClift anoopcs: I know
11:03 JustinClift anoopcs: It should make our code safer for all OS's, by finding the hidden bugs :)
11:04 * JustinClift hopes
11:04 hagarth Since NetBSD is now voting, I wonder if it will be a bottleneck
11:04 JustinClift It's possible
11:04 JustinClift We may need to increase the # of VM's there
11:04 hagarth JustinClift: yes
11:05 JustinClift I'm sort of worried that the voting system might not work well for it.  But, lets find out and see if it's something we can live with for a few weeks
11:10 pranithk joined #gluster-dev
11:12 hagarth JustinClift: right
11:13 kkeithley_ Gluster Bug Triage in #gluster-meeting in 45 minutes
11:13 nkhare joined #gluster-dev
11:18 Manikandan joined #gluster-dev
11:24 itisravi joined #gluster-dev
11:34 ashiq joined #gluster-dev
11:45 rafi joined #gluster-dev
11:47 firemanxbr joined #gluster-dev
11:50 obnox jiffin: anoopcs: is there already a BZ for the bug in the internal case in remove_trash_path?
11:51 anoopcs obnox: Not yet
11:51 anoopcs you can file one
11:51 anoopcs and update the patch
11:51 obnox anoopcs: I have an add-on patch
11:52 obnox (that only makes sense once the patch with the pointer handling is in)
11:53 obnox (after all it is a separate issue)
11:54 kkeithley_ Gluster Bug Triage in #gluster-meeting in 5 minutes
11:55 anoopcs As of now, you have not sent a patch for the internal case, right?
11:55 anoopcs obnox
11:57 anoopcs joined #gluster-dev
11:58 rjoseph joined #gluster-dev
12:00 soumya joined #gluster-dev
12:02 obnox anoopcs: no, because there is no BZ yet
12:02 anoopcs obnox: Ok. Is that reported in Coverity?
12:03 obnox anoopcs: no, this is a semantic problem I observed
12:04 anoopcs anoopcs: Ok.
12:04 anoopcs obnox: :)
12:05 itisravi_ joined #gluster-dev
12:17 DV joined #gluster-dev
12:19 rjoseph joined #gluster-dev
12:20 hagarth joined #gluster-dev
12:21 kanagaraj joined #gluster-dev
12:22 anoopcs joined #gluster-dev
12:36 pppp joined #gluster-dev
12:39 rjoseph joined #gluster-dev
12:45 kdhananjay joined #gluster-dev
12:59 shyam joined #gluster-dev
13:17 rafi joined #gluster-dev
13:20 gem joined #gluster-dev
13:58 shubhendu joined #gluster-dev
14:07 nishanth joined #gluster-dev
14:08 bala1 joined #gluster-dev
14:30 _Bryan_ joined #gluster-dev
14:37 atinmu joined #gluster-dev
14:37 soumya joined #gluster-dev
14:45 wushudoin joined #gluster-dev
15:12 bala joined #gluster-dev
15:36 kotreshhr left #gluster-dev
15:44 lalatenduM joined #gluster-dev
15:47 purpleidea joined #gluster-dev
15:55 purpleidea joined #gluster-dev
16:32 bala joined #gluster-dev
16:41 rjoseph joined #gluster-dev
17:01 vipulnayyar joined #gluster-dev
17:46 purpleid1a joined #gluster-dev
17:47 lalatenduM joined #gluster-dev
18:14 purpleidea joined #gluster-dev
20:34 anrao joined #gluster-dev
21:13 badone_ joined #gluster-dev
22:11 purpleidea joined #gluster-dev
22:11 purpleidea joined #gluster-dev
