IRC log for #gluster-dev, 2015-05-27

All times shown according to UTC.

Time Nick Message
00:41 shyam joined #gluster-dev
01:31 kkeithley_ joined #gluster-dev
01:41 pranithk joined #gluster-dev
02:54 pranithk joined #gluster-dev
03:01 kdhananjay joined #gluster-dev
03:10 pranithk kdhananjay: Is http://review.gluster.com/10880 good to go?
03:11 kdhananjay pranithk: I'm reviewing it one more time just to be sure it's all OK, as we speak. :)
03:11 overclk joined #gluster-dev
03:11 pranithk kdhananjay: I am going to send a backport as well. If you find something I will resubmit both...
03:12 kdhananjay pranithk: Oh OK. Is it a fix for one of those spurious failures?
03:12 pranithk kdhananjay: Yes. All ec eio errors are fixed with these patches
03:12 pranithk kdhananjay: well at least the known ones ;-)
03:13 kdhananjay pranithk: OK. :)
03:24 pranithk joined #gluster-dev
03:24 pranithk kdhananjay: Sorry, there was some network problem, so got disconnected..
03:25 kdhananjay pranithk: Ok. np. I said nothing. :)
03:25 pranithk kdhananjay: Are you able to do git fetch on your local repo? I get the following error: ssh_exchange_identification: Connection closed by remote host
03:25 kdhananjay checking ...
03:26 kdhananjay pranithk: Same here.
03:26 pranithk kdhananjay: Let me post on gluster-devel
03:26 kdhananjay pranithk: Yep.
03:31 pranithk kdhananjay: I sent the mail. I will get ready and come to office. Will address any comments you have there... cya in office.
03:33 krishnan_p joined #gluster-dev
03:36 itisravi joined #gluster-dev
03:46 hagarth joined #gluster-dev
03:50 shubhendu joined #gluster-dev
03:57 pppp joined #gluster-dev
04:00 ashiq joined #gluster-dev
04:06 pppp joined #gluster-dev
04:08 atinmu joined #gluster-dev
04:17 rjoseph joined #gluster-dev
04:19 ashishpandey joined #gluster-dev
04:40 ndarshan joined #gluster-dev
04:50 rafi joined #gluster-dev
04:50 jiffin1 joined #gluster-dev
04:52 jiffin1 joined #gluster-dev
04:53 jiffin joined #gluster-dev
04:56 Joe_f joined #gluster-dev
05:07 schandra joined #gluster-dev
05:08 hgowtham joined #gluster-dev
05:13 Manikandan joined #gluster-dev
05:17 sakshi joined #gluster-dev
05:19 hagarth joined #gluster-dev
05:22 ppai joined #gluster-dev
05:23 poornimag joined #gluster-dev
05:24 aravindavk joined #gluster-dev
05:28 deepakcs joined #gluster-dev
05:32 gem joined #gluster-dev
05:34 Manikandan joined #gluster-dev
05:34 itisravi joined #gluster-dev
05:35 surabhi joined #gluster-dev
05:35 soumya joined #gluster-dev
05:41 vimal joined #gluster-dev
05:48 kdhananjay joined #gluster-dev
05:48 kdhananjay left #gluster-dev
05:48 kdhananjay joined #gluster-dev
05:48 arao joined #gluster-dev
05:54 Anjana joined #gluster-dev
05:56 Apeksha joined #gluster-dev
05:58 Gaurav_ joined #gluster-dev
05:58 atalur joined #gluster-dev
05:59 kanagaraj joined #gluster-dev
06:06 raghu joined #gluster-dev
06:08 kdhananjay joined #gluster-dev
06:14 anekkunt joined #gluster-dev
06:36 Saravana joined #gluster-dev
06:38 krishnan_p joined #gluster-dev
06:42 soumya joined #gluster-dev
06:45 pranithk joined #gluster-dev
06:46 krishnan_p joined #gluster-dev
06:51 lalatenduM joined #gluster-dev
06:55 krishnan_p Did anyone see rfc.sh on release-3.7 branch complaining "ERROR: Unrecognized email address: 'NetBSD Build System'
06:55 krishnan_p #10:
06:55 krishnan_p Tested-by: NetBSD Build System
06:55 krishnan_p " ?
06:56 pranithk krishnan_p: I just pushed some patches, it went fine...
06:56 pranithk krishnan_p: what is the content of commit description?
06:56 pranithk krishnan_p: ah! tested-by
06:56 krishnan_p pranithk, I could still push the patch. But checkpatch.pl would complain
06:57 krishnan_p pranithk, yep :(
06:57 pranithk krishnan_p: I generally remove these things when I backport, so never ran into them :-(.
06:57 anrao joined #gluster-dev
06:58 rraja joined #gluster-dev
07:00 rafi krishnan_p: just remove that line :)
07:03 kotreshhr joined #gluster-dev
07:04 krishnan_p pranithk, rafi, aah! Let me try
07:23 kshlm joined #gluster-dev
07:24 hchiramm_ joined #gluster-dev
07:37 soumya joined #gluster-dev
07:44 hchiramm_ 3.6.4beta1 rpms are ready @http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.6.4beta1/
07:50 arao joined #gluster-dev
08:11 lalatenduM joined #gluster-dev
08:33 pranithk xavih: pm
08:42 anekkunt joined #gluster-dev
08:45 raghu hchiramm_:  cool. Will send a notification to the mailing lists
08:48 nishanth joined #gluster-dev
08:50 Saravana joined #gluster-dev
08:53 arao joined #gluster-dev
08:59 arao joined #gluster-dev
09:08 soumya joined #gluster-dev
09:24 hgowtham joined #gluster-dev
09:37 hchiramm_ raghu, thanks
09:49 hagarth joined #gluster-dev
09:53 anekkunt joined #gluster-dev
09:58 hagarth itisravi: ping, should we remove 1015990.t from is_bad_test() ?
09:59 itisravi hagarth: checking...
10:01 itisravi hagarth: This is also a volume stop+volume delete failure. I think we should remove this as well.
10:02 hagarth itisravi: cool, will you be sending a patch?
10:02 itisravi hagarth: I can.
10:02 hagarth itisravi: thanks, while you are at it can you also remove tests/bugs/glusterfs/bug-867253.t?
10:03 hagarth I was looking into that one and it is no longer happening even on RAX vms.
10:03 itisravi hagarth: sure
10:03 hagarth itisravi: thanks!
10:03 itisravi hagarth: np :)
10:17 itisravi hagarth: I think removing sparse-file-self-heal.t is a good idea too
10:17 hagarth itisravi: right
10:19 pranithk1 joined #gluster-dev
10:31 Manikandan joined #gluster-dev
10:38 rafi ndevos: can you review the patch http://review.gluster.org/#/c/10870/ ?
10:42 ira joined #gluster-dev
10:45 ira joined #gluster-dev
10:51 Manikandan joined #gluster-dev
10:52 rafi1 joined #gluster-dev
10:54 rafi joined #gluster-dev
10:59 ndevos rafi: ah! just looking at it and hagarth hit the [submit] button
10:59 hagarth ndevos: it looked good to me ;)
10:59 ndevos hagarth: to me too! I almost +2'd it
11:00 hagarth ndevos: cool :)
11:00 hagarth ndevos: agenda updated
11:00 rafi ndevos: hagarth : thanks a lot
11:00 pranithk joined #gluster-dev
11:01 ndevos hagarth++ okay, thanks!
11:01 glusterbot ndevos: hagarth's karma is now 61
11:01 rafi ndevos: hagarth : looks like tiering is clean from blocking :)
11:02 rafi ndevos++ hagarth++  thanks for your reviews
11:02 glusterbot rafi: ndevos's karma is now 135
11:02 glusterbot rafi: hagarth's karma is now 62
11:02 hagarth rafi: looks like that, good work there!
11:02 ndevos rafi: we will see if any regression tests fail ;-)
11:02 rafi ndevos: fingers crossed ;-)
11:18 soumya joined #gluster-dev
11:23 firemanxbr joined #gluster-dev
11:23 kshlm Did anyone start using slave0.cloud.gluster.org?
11:23 kshlm I was using it, but I got disconnected and cannot ssh into it now.
11:30 ndevos REMINDER: Gluster Community meeting starts in 30 minutes from now in #gluster-meeting
11:31 xrsanet_ joined #gluster-dev
11:39 poornimag joined #gluster-dev
11:39 hagarth ndevos: I think I will join a few minutes after the meeting starts.
11:40 ndevos hagarth: sure, whatever works for you
11:48 spalai joined #gluster-dev
11:50 ndarshan joined #gluster-dev
11:58 ndevos REMINDER: Gluster Community meeting starts in 2 minutes from now in #gluster-meeting
12:03 jdarcy joined #gluster-dev
12:04 rafi1 joined #gluster-dev
12:07 surabhi joined #gluster-dev
12:08 kaushal_ joined #gluster-dev
12:08 ppai joined #gluster-dev
12:13 hagarth joined #gluster-dev
12:13 kshlm JustinClift, If you are free, could you check what's wrong with slave0.cloud.gluster.org?
12:14 rafi joined #gluster-dev
12:14 kshlm I was running a test on it, and got disconnected. Now I'm not able to ssh into it.
12:17 sankarshan joined #gluster-dev
12:17 JustinClift kshlm: Sure.  1 sec
12:17 JustinClift Ahh.  It seems like that random bug which causes the machine to need to reboot
12:17 kshlm Is that a bug in rackspace or with gluster?
12:17 JustinClift kshlm: I'm not sure what causes it.  I think it's something to do with running our regression tests multiple times on the same VM
12:18 JustinClift After a while things just break, and it needs rebooting
12:18 JustinClift kshlm: Two options here...
12:18 JustinClift a) I can reboot it using the Rackspace console.  Very simple, and it'll be ok after that.
12:18 JustinClift b) You can try logging into it using the Rackspace console, in case you want to look into why this weird problem happens too
12:19 JustinClift b) is likely to be a PITA ;)
12:19 JustinClift Which do you want to go with?
12:20 kshlm I was asking raghu to log into the console. So that we could debug it.
12:20 kshlm If I can do it without bugging raghu, I'd be happy.
12:22 JustinClift That's confusing to me
12:22 JustinClift Is that more a) or b) approach? :)
12:22 JustinClift eg do you want me to reboot it, so you can get on with investigating your issue?
12:22 JustinClift I'm in the Rackspace console atm, so this is a 5 second thing...
12:25 kshlm b)
12:27 krishnan_p JustinClift, would it be possible for me to use gerrit query command line tool on review.gluster.org?
12:27 krishnan_p JustinClift, It would be greatly helpful for a maintainer.
12:28 krishnan_p JustinClift, this is what I am referring to - https://gerrit.googlecode.com/svn-history/r6507/documentation/2.2.1/cmd-query.html
12:28 kshlm krishnan_p, you should be able to do it already.
12:28 kshlm `ssh <yourgerritusername>@review.gluster.org gerrit`
12:29 kshlm Every registered user should be able to run the query commands.
12:29 kshlm You'll need special permissions to do other stuff though.
12:29 JustinClift krishnan_p: Yeah, in theory that should already work. :)
12:29 JustinClift krishnan_p: If you're wanting access to the backend gsql interface remotely though, we need to grant you specific permissions
12:29 krishnan_p JustinClift, kshlm thanks.
12:30 krishnan_p JustinClift, I wouldn't know what to do :)
12:30 JustinClift And really, you can make the whole setup permanently fail with gsql
12:30 kshlm gsql is only allowed for the admins.
12:30 JustinClift Yeah, that's why ;)
12:30 kshlm :D
12:30 JustinClift kshlm: So, for the slave0 VM...
12:30 kshlm option b.
12:31 kshlm I'll like to snoop around.
12:32 kshlm On the topic of gsql, yesterday deepakcs was asking for his accounts on gerrit to get merged.
12:32 JustinClift Sure.  To do that, you need to enable java in your browser, then login to the VM using the Rackspace web based UI
12:32 kshlm Cool with me.
12:32 JustinClift kshlm: I don't remember if you have a Rackspace login yet... any idea?
12:32 kshlm I don't.
12:32 JustinClift I can create one for you if not
12:32 JustinClift k, gimme a few minutes
12:33 JustinClift I'll email your new login details to you :)
12:36 rafi1 joined #gluster-dev
12:37 krishnan_p JustinClift, kshlm gerrit query worked!
12:38 kshlm krishnan_p, you could also get the same using the rest api. But the only problem is it returns it in json, unlike the gerrit command which returns text
12:39 hchiramm_ krishnan_p, maybe you can share the command among the other maintainers :)
12:39 kotreshhr left #gluster-dev
12:40 ashiq joined #gluster-dev
12:40 krishnan_p kshlm, gerrit query also returns json only, I thought
12:41 kshlm you have an option `--format=text` I think which returns text.
12:41 glusterbot kshlm: `'s karma is now -1
12:41 krishnan_p hchiramm, most definitely
12:41 krishnan_p kshlm, yeah. I just did that. Need to see which one is more script friendly.
12:42 kshlm shell script friendly or python script friendly?
12:42 krishnan_p kshlm, preferably shell. I am OK with python.
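
For reference, a minimal sketch of the REST variant kshlm mentions, assuming the standard Gerrit /changes/ endpoint and Python 3 (the query string and printed fields are illustrative, not taken from the log):

    import json
    import urllib.request

    # Illustrative query: open glusterfs changes on the release-3.7 branch.
    url = ("https://review.gluster.org/changes/"
           "?q=status:open+project:glusterfs+branch:release-3.7")

    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")

    # Gerrit prepends the line )]}' to its JSON responses to defeat XSSI;
    # drop that first line before parsing.
    changes = json.loads(body.split("\n", 1)[1])

    for change in changes:
        print(change["_number"], change["subject"])

The ssh form kshlm showed earlier (`ssh <username>@review.gluster.org gerrit query --format=TEXT status:open ...`) returns the plain-text output being compared here.
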
12:42 raghu hagarth: the uss.t fix, do you think it has to be backported to release-3.6 branch as well?
12:43 hagarth raghu: yes, let us do that for all branches ..3.7 and 3.6
12:44 raghu hagarth: sure. I was certain that it is needed for 3.7. Was not sure about 3.6 as there were no spurious failures in uss.t in that branch
12:45 JustinClift kshlm: Your Rackspace login details are in your Inbox.  Or should be.
12:45 hagarth raghu: probably we've been lucky there
12:45 kshlm JustinClift, they just dropped in.
12:45 JustinClift kshlm: Please let me know if they don't work.  We had that happen once, so it's not always perfect. ;)
12:45 kshlm thanks.
12:45 JustinClift Cool
12:46 raghu hagarth: sure. Will backport it to both the branches.
12:49 kshlm JustinClift, login worked.
12:49 JustinClift kshlm: Cool :)
12:50 shyam joined #gluster-dev
13:12 kshlm hagarth, do we wait for both regressions to pass for backport of spurious failure fixes?
13:12 hagarth kshlm: yes
13:12 kshlm Oh. Okay.
13:20 kaushal_ joined #gluster-dev
13:25 rafi joined #gluster-dev
13:36 sankarshan joined #gluster-dev
13:37 kshlm Ah damn! I accidentally rebooted slave0. :-\
13:46 arao joined #gluster-dev
13:46 JustinClift kshlm: Well, you have access for the next time now. ;)
13:48 kshlm Yeah.
13:48 kshlm But they really shouldn't put the refresh and ctrl-alt-del buttons beside each other.
13:49 sankarshan joined #gluster-dev
13:51 JustinClift Oops
13:53 anrao ndevos++
13:53 hchiramm_ ndevos++ thanks
13:53 glusterbot anrao: ndevos's karma is now 136
13:53 glusterbot hchiramm_: ndevos's karma is now 137
14:07 atinmu joined #gluster-dev
14:10 JustinClift kshlm: Can I CC gluster-infra on my reply to you about busted vm finding?
14:10 kshlm Didn't I cc gluster-infra?
14:10 JustinClift Nope
14:11 JustinClift You can resend it if you want to include them from the start in the email trail, and I'll reply to that?
14:11 kshlm I thought I had.
14:11 JustinClift Meh, I'll just add them in my reply :)
14:12 kshlm I'll resend.
14:12 JustinClift Doh
14:12 JustinClift I just hit Send :(
14:12 kshlm I still hadn't. So no problems :)
14:12 JustinClift :)
14:13 JustinClift Sorry, I really didn't see your response here in time :/
14:15 kshlm I've started the test loop again.
14:15 kshlm I think the problems have to do with the snapshot tests and usage of lvm.
14:24 JustinClift kshlm: k.  The weird behaviour of ssh dropping connection and refusing login happens on NetBSD too
14:25 JustinClift Which I'd be *really* surprised if that runs on NetBSD ;)
14:25 JustinClift Meaning the snapshot/lvm bits
14:27 wushudoin joined #gluster-dev
14:35 ashiq joined #gluster-dev
14:38 gem joined #gluster-dev
14:41 rtalur joined #gluster-dev
14:43 rtalur56 joined #gluster-dev
14:45 hagarth JustinClift: we seem to have about 6 VMs in offline state. Are that many VMs being used for debugging?
15:01 arao joined #gluster-dev
15:05 kshlm joined #gluster-dev
15:05 hagarth hmm, lot of VMs are offline since Jenkins failed to launch slave agent
15:08 ndevos JustinClift: an email from me is put in the moderation queue for gluster-infra - 100516 bytes with a limit of 40 KB
15:09 ndevos it contains a screen shot with Jenkins GitHub/Oauth config options
15:09 kshlm joined #gluster-dev
15:17 atinmu hagarth, kshlm : I am thinking of introducing force option in volume set
15:22 hagarth atinmu: what would the behavior be?
15:22 atinmu hagarth, currently we do not have an option to bump down the op-version
15:22 arao joined #gluster-dev
15:23 atinmu hagarth, I am thinking of handling it with a vol set force option
15:23 atinmu hagarth, I feel with that we can have test cases for heterogeneous cluster
15:24 atinmu hagarth, at least we can hit those code paths which are mostly missed or ignored
15:24 hagarth atinmu: sounds ok to me .. might be worth starting a discussion on gluster-devel
15:24 atinmu hagarth, yeah, I will drop a mail shortly
15:25 JustinClift ndevos: Just approved it
15:26 hagarth atinmu: I think we need to start populating distaf with quite a bit
15:26 JustinClift hagarth: If there's no message about the VMs being used for debugging, then no.
15:26 JustinClift kshlm: We have some more VMs with that weird behaviour if you want to investigate one?
15:26 hagarth atinmu: I have also been playing around with gluster in docker recently .. that seems like an easy way to set up / tear down heterogeneous clusters
15:27 JustinClift kshlm: If not, we can just reboot them
15:27 hagarth JustinClift: yes, noted that. everything else seems to be down due to ssh failing from jenkins master.
15:27 atinmu hagarth, yeah, with the current regression framework it's difficult to write tests for a heterogeneous cluster
15:27 JustinClift hagarth: I'll leave slave20 alone for the moment in case kshlm wants it
15:27 JustinClift The others I'll reboot
15:27 hagarth JustinClift: ok cool
15:27 Gaurav_ joined #gluster-dev
15:28 hagarth can anybody do a quick review of http://review.gluster.org/10939 ? this is a backport from mainline.
15:33 hagarth atinmu: are you looking into quorum.t failure?
15:34 atinmu hagarth, I was and figured out that there was an explicit kill issued for glusterd
15:34 atinmu hagarth, glusterd went down and rest of the commands failed
15:34 hagarth atinmu: so a real failure?
15:34 atinmu hagarth, I think someone was using slave25 machine
15:34 atinmu hagarth, no
15:34 hagarth atinmu: ah ok
15:34 atinmu hagarth, it was a manual interruption
15:35 hagarth atinmu: has anybody re-triggered that test?
15:35 atinmu hagarth, yes I did
15:35 hagarth atinmu: cool, thanks
15:37 atinmu hagarth, reviewed 10939 with +1
15:37 hagarth atinmu: thanks
15:38 hagarth atinmu: merging 10937 now
15:39 jiffin joined #gluster-dev
15:42 atinmu hagarth, thanks
15:45 JustinClift Meh, I'm going to reboot slave20 too
15:45 JustinClift If kshlm wants something to debug, he only needs to wait a day.  There will likely be more tomorrow. ;)
15:47 kdhananjay joined #gluster-dev
15:49 kshlm JustinClift, I don't need to wait. slave0 is down again.
15:49 kshlm I'll take a look after dinner.
15:55 spalai joined #gluster-dev
15:55 JustinClift kshlm: No worries. ;)
15:59 gem joined #gluster-dev
16:00 deepakcs joined #gluster-dev
16:02 atinmu joined #gluster-dev
16:15 msvbhat Guys, what's the best way to fetch the pathinfo (where the file resides on bricks) from the mountpoint?
16:15 msvbhat I can use pathinfo xattr on fuse mount
16:15 msvbhat But that fails on NFS mount
16:15 hagarth msvbhat: NFSv3 doesn't support xattrs
16:15 msvbhat Is there something which works across different mounts?
16:16 hagarth msvbhat: it would be nice to write a tool based on libgfapi
16:17 msvbhat hagarth: Hmm...
16:17 msvbhat So libgfapi must be having an api to getxattr right?
16:17 * msvbhat checks that
16:18 hagarth msvbhat: yes
16:18 msvbhat hagarth: glfs_getxattr
16:19 msvbhat hagarth: Cool. Thanks. Will see how can we make use of this
16:19 hagarth msvbhat: yw
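
As an aside, the FUSE-mount case msvbhat describes needs nothing beyond Python 3.3+ on Linux; a minimal sketch (mount point and file name are placeholders, and reading a trusted.* xattr may require root):

    import os

    # trusted.glusterfs.pathinfo is a virtual xattr that glusterfs resolves
    # to the brick paths backing the file; as discussed above, this only
    # works on a FUSE mount, not over NFSv3.
    info = os.getxattr("/mnt/glusterfs/file.txt", "trusted.glusterfs.pathinfo")
    print(info.decode("utf-8"))
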
16:36 JustinClift hagarth msvbhat: Something command line like the FB guys were showing for their nfs-* tools, but using gfapi calls?  Hmmm... what's a good prefix name...
16:36 JustinClift gf-* ?
16:36 JustinClift gfs-*?
16:37 msvbhat Yeah, gfs-ls
16:37 msvbhat gfs-getxattr
16:38 msvbhat JustinClift: Something like that would be good
16:38 msvbhat Or better glfs-*
16:38 msvbhat JustinClift: That way we'd be consistent with the libgfapi APIs :)
16:39 RaSTar I agree glfs-*
16:44 soumya joined #gluster-dev
17:10 JustinClift glfs- is more to type though
17:10 * JustinClift reckons making it simpler to type is more important here
17:11 JustinClift "nfs" is easy to type
17:11 JustinClift "glfs" isn't so much :/
17:11 JustinClift "gfs" isn't too bad
17:11 JustinClift (same row of keys, one hand)
17:11 JustinClift msvbhat rastar_afk: ^
17:12 JustinClift msvbhat: You're probably writing it though, so up to you. ;)
17:12 * JustinClift points out that if people use it + it's easy, that helps adoption
17:12 JustinClift Just saying :D
17:20 hagarth i would prefer glfs .. gfs has a global file system connotation too
17:28 JustinClift Doh ;)
17:29 JustinClift gf ?
17:29 JustinClift Meh, I suppose if ppl care enough for a shorter thing, they can alias it :D
17:34 hagarth JustinClift: maybe gl .. short for gluster :D
18:04 hagarth JustinClift: same problem with NetBSD vms?
18:04 hagarth the NetBSD regression/smoke queues seem quite long
18:09 msvbhat JustinClift: I only suggested glfs because it will be consistent with the gfapi APIs, which start with glfs_*
18:09 msvbhat We can have a vote later :)
18:14 gem joined #gluster-dev
18:23 JustinClift msvbhat: Good point
18:23 JustinClift hagarth: I'm not sure.  I've tried rebooting them, but they're not coming back.
18:25 JustinClift hagarth: Just emailed Manu to alert him
18:25 * JustinClift signs off for the night though
18:28 spalai joined #gluster-dev
18:33 Gaurav_ joined #gluster-dev
18:53 ndevos msvbhat: glfs-* has my preference, and of course you can code it in python too!
18:53 ndevos from gluster import api as glfs
18:54 ndevos I think hchiramm_ is pursuing a Fedora package for it, not sure of its status
18:55 ndevos https://github.com/gluster/libgfapi-python for the repo
19:03 dlambrig1 left #gluster-dev
19:03 msvbhat ndevos: Thanks. Yeah actually I was thinking of using python too
19:04 msvbhat ndevos: I'm going to talk to Aravinda for some advice about it tomorrow
19:05 msvbhat And rastar_afk as well. He had some idea about it already
19:10 ndevos msvbhat: ah, okay, but if you are interested in implementing certain xattr checks in distaf, you should probably just use the python-libgfapi directly?
19:14 msvbhat ndevos: Yeah, I can do that too. I can just import the python bindings, do glfs_init and then do some xattr check using glfs_* api
19:14 msvbhat ndevos: That should solve my problem for now anyway
19:14 ndevos msvbhat: yes, that's why the python bindings are there :)
19:16 msvbhat ndevos: Yeah :)
19:17 msvbhat ndevos: But the tool would be quite useful nevertheless
19:18 msvbhat ndevos: At least for some folks in QE, who can exploit it :)
19:21 hagarth joined #gluster-dev
19:24 pousley joined #gluster-dev
19:27 ndevos msvbhat: oh, yes, it would surely be helpful, no doubt about that!
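
A rough sketch of the gfapi-based check msvbhat outlines above, using the libgfapi-python bindings just linked — the Volume, mount() and getxattr() names are taken from that repo as of this writing and may differ between versions, and host, volume and file names are placeholders:

    from gluster import gfapi  # libgfapi-python, linked above

    # Talk to the volume over gfapi instead of through a mount point, so
    # the same xattr check works however clients access the volume.
    vol = gfapi.Volume("server1", "testvol")
    vol.mount()  # wraps glfs_init()

    # Same virtual xattr as on a FUSE mount, fetched via glfs_getxattr().
    print(vol.getxattr("/file.txt", "trusted.glusterfs.pathinfo", 4096))

    vol.umount()
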
19:40 nishanth joined #gluster-dev
19:41 lpabon joined #gluster-dev
20:36 badone_ joined #gluster-dev
21:41 lkoranda joined #gluster-dev
21:41 csaba joined #gluster-dev
21:57 lkoranda joined #gluster-dev
22:01 csaba joined #gluster-dev
22:09 lkoranda joined #gluster-dev
22:48 nkhare joined #gluster-dev
