
IRC log for #gluster-dev, 2016-08-25


All times shown according to UTC.

Time Nick Message
00:21 shyam joined #gluster-dev
01:47 ilbot3 joined #gluster-dev
01:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:51 hagarth joined #gluster-dev
02:54 magrawal joined #gluster-dev
03:25 Manikandan joined #gluster-dev
03:36 atinm joined #gluster-dev
03:41 jlrgraham joined #gluster-dev
03:41 EinstCrazy joined #gluster-dev
03:44 Byreddy joined #gluster-dev
03:48 ramky joined #gluster-dev
04:02 itisravi joined #gluster-dev
04:06 nishanth joined #gluster-dev
04:06 shubhendu joined #gluster-dev
04:12 spalai joined #gluster-dev
04:27 rafi joined #gluster-dev
04:32 aspandey joined #gluster-dev
04:32 spalai left #gluster-dev
04:33 itisravi joined #gluster-dev
04:34 nbalacha joined #gluster-dev
04:38 ankitraj joined #gluster-dev
04:38 kshlm joined #gluster-dev
04:40 ankitraj joined #gluster-dev
04:44 skoduri joined #gluster-dev
04:45 itisravi joined #gluster-dev
04:45 spalai joined #gluster-dev
04:46 spalai left #gluster-dev
04:48 jiffin joined #gluster-dev
04:49 asengupt joined #gluster-dev
04:59 aravindavk_ joined #gluster-dev
05:05 rastar joined #gluster-dev
05:09 EinstCrazy joined #gluster-dev
05:10 ndarshan joined #gluster-dev
05:12 nbalacha nigelb, any luck with the FreeBSD machine?
05:13 ppai joined #gluster-dev
05:15 nigelb nbalacha: Yeah. Support replied. I need to get into the machine and reset the network. Doing that now.
05:15 mchangir joined #gluster-dev
05:19 ankitraj joined #gluster-dev
05:23 karthik_ joined #gluster-dev
05:23 nbalacha aravindavk_, ping
05:23 glusterbot nbalacha: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
05:23 aravindavk_ pong nbalacha
05:24 nbalacha aravindavk_, do we still need to use USE_EVENTS for the event code?
05:24 aravindavk_ nbalacha: not required; if extra variables are used only for events it is good to keep them within USE_EVENTS, else not required
05:25 aravindavk_ nbalacha: if the events feature is disabled an empty function is added, so the code will not break
05:25 nbalacha aravindavk_, ok
05:25 nbalacha thanks
05:25 aravindavk_ nbalacha: np
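[editor's note: the USE_EVENTS pattern discussed above can be sketched roughly as below. This is a minimal illustration, not Gluster's actual events API; `emit_event`, `events_emitted`, and `volume_start` are invented names for the example.]

```c
/* Minimal sketch (not Gluster's actual API) of the pattern discussed
 * above: when the events feature is compiled out, an empty stub keeps
 * every caller building, so plain event calls need no guard at all.
 * Only extra variables that exist solely to build an event payload
 * need to sit inside #ifdef USE_EVENTS. */
#include <stdio.h>

static int events_emitted = 0;   /* for illustration only */

#ifdef USE_EVENTS
static void emit_event(const char *msg)
{
    printf("EVENT: %s\n", msg);
    events_emitted++;
}
#else
/* feature disabled: empty function, so callers compile unchanged */
static void emit_event(const char *msg)
{
    (void)msg;
}
#endif

static void volume_start(void)
{
    emit_event("volume started");   /* safe with or without USE_EVENTS */

#ifdef USE_EVENTS
    /* variables used only for the event payload stay inside the
     * guard, so builds without the feature get no unused-variable
     * warnings */
    char detail[64];
    snprintf(detail, sizeof(detail), "bricks=%d", 2);
    emit_event(detail);
#endif
}
```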
05:34 Bhaskarakiran joined #gluster-dev
05:40 Muthu_ joined #gluster-dev
05:43 msvbhat joined #gluster-dev
05:44 ppai joined #gluster-dev
05:52 kotreshhr joined #gluster-dev
05:56 mchangir joined #gluster-dev
06:00 skoduri joined #gluster-dev
06:07 atalur joined #gluster-dev
06:07 ashiq joined #gluster-dev
06:08 hgowtham joined #gluster-dev
06:12 Manikandan joined #gluster-dev
06:17 ppai joined #gluster-dev
06:22 kdhananjay joined #gluster-dev
06:45 atinm joined #gluster-dev
06:51 Manikandan joined #gluster-dev
06:53 kotreshhr joined #gluster-dev
07:00 aravindavk joined #gluster-dev
07:05 devyani7 joined #gluster-dev
07:15 asengupt joined #gluster-dev
07:31 Saravanakmr joined #gluster-dev
07:37 skoduri joined #gluster-dev
07:38 mchangir joined #gluster-dev
07:40 atinm joined #gluster-dev
07:40 kotreshhr joined #gluster-dev
08:08 Yingdi joined #gluster-dev
08:10 mchangir joined #gluster-dev
08:11 pur joined #gluster-dev
08:11 Manikandan joined #gluster-dev
08:21 s-kania joined #gluster-dev
08:29 aravindavk joined #gluster-dev
08:30 ankitraj joined #gluster-dev
08:32 Muthu_ joined #gluster-dev
08:39 ndevos magrawal: left a note there, please add a test-case and maybe some comments in the commit message and code
08:41 magrawal ndevos,sure,thanks
08:44 jiffin atinm: I have already created the feature page for nfs-ganesha improvements and have mentioned it in the commit msg, so can you please remove the -2 on http://review.gluster.org/#/c/14906/4?
08:57 aspandey joined #gluster-dev
08:58 rafi joined #gluster-dev
09:10 Manikandan_ joined #gluster-dev
09:12 mchangir ndevos, please do the honors for http://review.gluster.org/15297
09:15 ndevos mchangir: done!
09:17 mchangir ndevos, thanks
09:18 bkunal joined #gluster-dev
09:18 ndevos atinm, overclk: do you know where the latest dht2 sources are kept?
09:19 * ndevos was looking for them under github.com/gluster/ ... but there is only a glusterd2 repo
09:20 ndevos oh, maybe it already is in the experimental directory?
09:20 ndevos https://github.com/gluster/glusterfs/tree/master/xlators/experimental/dht2
09:20 ndevos bkunal: ^ that would be the master branch of the standard glusterfs repo
09:20 bkunal ndevos, ok
09:23 bkunal ndevos, how will I build this? I will have to build the complete source again ...right?
09:24 atinm jiffin, done
09:24 bkunal ndevos, but how to replace dht with dht2..any idea?
09:24 riyas joined #gluster-dev
09:29 bkunal ndevos, I don't think enough code is available @ https://github.com/gluster/glusterfs/tree/master/xlators/experimental/dht2
09:29 atinm bkunal, what are you trying to do here?
09:30 bkunal atinm, I am willing to play with dht2.
09:30 atinm bkunal, as the name implies, its pretty much experimental
09:30 bkunal atinm, I used the source from https://github.com/ShyamsundarR/glusterfs/tree/gl40_dht_playground, but installing the built rpm is giving an error
09:31 atinm bkunal, that's an obsolete one
09:31 bkunal atinm, that is why I was looking for a place where I can get latest source
09:32 atinm bkunal, yes the one which you are looking at is the latest source
09:32 atinm bkunal, if you have more questions, you can get in touch with Shyam
09:32 bkunal atinm, sure
09:32 bkunal atinm++ ndevos++
09:32 glusterbot bkunal: atinm's karma is now 66
09:33 glusterbot bkunal: ndevos's karma is now 305
09:39 jiffin atinm: thanks
09:44 magrawal ndevos:ping
09:48 aspandey joined #gluster-dev
09:49 itisravi joined #gluster-dev
09:54 msvbhat joined #gluster-dev
10:09 ndevos magrawal: pong
10:10 rastar joined #gluster-dev
10:10 magrawal ndevos, i saw your comment on my patch; in case of auth.allow/auth.reject, * and *.example.com are not valid values
10:10 magrawal ndevos, i can create a test file that validates plain names like example.com only
10:14 ndevos magrawal: well, a server would be called www.example.com or such, and it would be nice to be able to allow/deny *.example.com too
10:14 magrawal ndevos,will check
10:15 ndevos magrawal: but yeah, for now, adding a test-case that accepts hostnames like client.local, server.example.com and client.place.example.com is sufficient
10:16 ndevos magrawal: just mention clearly in the commit message that matching with *.domain is not supported
10:16 magrawal ndevos,sure
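[editor's note: for context on the wildcard question above, POSIX fnmatch(3) provides exactly the `*.example.com` glob semantics being discussed. A hedged sketch follows; `host_matches` is an invented helper, not Gluster's actual auth.allow/auth.reject code.]

```c
/* Rough sketch of "*.domain" matching with POSIX fnmatch(3); this
 * only illustrates the glob semantics under discussion, not the
 * real Gluster implementation. */
#include <fnmatch.h>

static int host_matches(const char *pattern, const char *host)
{
    /* Without FNM_PATHNAME, '*' also matches across dots, so the
     * pattern "*.example.com" matches "www.example.com" and
     * "client.place.example.com", but not bare "example.com"
     * (the literal leading dot must still be present). */
    return fnmatch(pattern, host, 0) == 0;
}
```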
10:22 bfoster joined #gluster-dev
10:30 riyas joined #gluster-dev
10:31 shyam joined #gluster-dev
10:33 aravindavk_ joined #gluster-dev
10:42 rafi joined #gluster-dev
10:52 poornima joined #gluster-dev
10:58 aravindavk joined #gluster-dev
11:07 atalur_ joined #gluster-dev
11:09 lalatenduM joined #gluster-dev
11:18 rastar joined #gluster-dev
11:30 devyani7 joined #gluster-dev
11:45 asengupt joined #gluster-dev
11:51 aspandey joined #gluster-dev
11:57 * kkeithley is not seeing a way to edit commit messages in gerrit
11:57 kkeithley oh, I found it I think
11:58 kkeithley except editing it and rerunning smoke tests erases any review votes!
11:59 anoopcs at least we have the regression results retained
11:59 anoopcs which is great.
11:59 kdhananjay joined #gluster-dev
12:02 kkeithley yes, that's A Good Thing.
12:11 ankitraj joined #gluster-dev
12:20 [o__o] joined #gluster-dev
12:26 mchangir joined #gluster-dev
12:27 shyam joined #gluster-dev
12:38 ramky joined #gluster-dev
12:58 shyam joined #gluster-dev
13:09 julim_ joined #gluster-dev
13:11 ndevos kkeithley: which change was giving you troubles?
13:14 asengupt joined #gluster-dev
13:17 julim_ joined #gluster-dev
13:18 nbalacha joined #gluster-dev
13:25 hagarth joined #gluster-dev
13:37 Jules- joined #gluster-dev
13:43 ndevos kkeithley: have you ever checked the gfapi+upcall code? maybe review http://review.gluster.org/15191 then ;-)
13:44 kkeithley http://review.gluster.org/15258
13:44 kkeithley ndevos: ^^
13:44 kkeithley I've looked at it, but not very carefully yet
13:45 jobewan joined #gluster-dev
13:46 Manikandan joined #gluster-dev
13:51 Manikandan joined #gluster-dev
13:55 dlambrig joined #gluster-dev
14:02 hagarth joined #gluster-dev
14:03 aravindavk joined #gluster-dev
14:07 shyam joined #gluster-dev
14:33 nbalacha nigelb, any luck with netbsd?
14:37 nbalacha nigelb, sorry - FreeBSD
14:52 spalai joined #gluster-dev
14:56 jiffin joined #gluster-dev
14:56 post-factum nbalacha: around?
14:56 nbalacha post-factum, yes.
14:57 nbalacha post-factum, I'm afraid I did not get to look into the bug
14:57 post-factum nbalacha: http://termbin.com/66ud here is the script i use to try to trigger leaks
14:57 nbalacha I will take a look at it and get back
14:57 post-factum nbalacha: it is leaking right now :). byte by byte
14:58 nbalacha post-factum, thanks for the script - I will try it out
14:58 post-factum nbalacha: i'll update bz with recent observations
15:01 nbalacha dlambrig1, i'm in another meeting. will join late
15:04 rafi joined #gluster-dev
15:18 spalai left #gluster-dev
15:18 ndevos kkeithley: it seems to do patch #2 now, you fixed it!
15:19 kkeithley yeah
15:19 ndevos kkeithley: maybe those regression tests were still stuck in the queue? *someone* posted an insane amount of patches
15:19 kkeithley they should fire that guy
15:20 ndevos he's only doing it for the stats - http://projects.bitergia.com/redhat-glusterfs-dashboard/browser/scm.html
15:23 spalai joined #gluster-dev
15:24 kkeithley what a poser
15:27 pranithk1 joined #gluster-dev
15:32 ndevos hmm, bugzilla only gives me proxy errors now :-/
15:40 nigelb nbalacha: No :(
15:40 nigelb nbalacha: I'll bring up a new machine instead tomorrow.
15:40 nigelb We have freebsd stuff on ansible.
15:40 nigelb It should *mostly* work.
16:00 nbalacha nigelb, thanks
16:10 hagarth joined #gluster-dev
16:21 jiffin joined #gluster-dev
16:44 Manikandan joined #gluster-dev
16:59 kotreshhr left #gluster-dev
17:28 msvbhat joined #gluster-dev
17:33 ankitraj joined #gluster-dev
17:35 post-factum pranithk1: around?
17:37 pranithk1 post-factum: yes
17:37 pranithk1 post-factum: I am in U.S.A for the last two weeks and next 3 weeks. this is day time here.
17:38 post-factum pranithk1: oh, have a nice trip :)
17:39 post-factum pranithk1: i'd like to ask you about backporting http://review.gluster.org/#/c/15302/ once again
17:39 pranithk1 post-factum: I hope to, I became an uncle day before yesterday, which is the reason for the trip.
17:39 pranithk1 post-factum: let me take a look
17:39 post-factum pranithk1: should that be done?
17:39 pranithk1 post-factum: well one of the reasons :-)
17:39 post-factum pranithk1: congrats!
17:40 pranithk1 post-factum: I was hoping he/she (don't know gender..) would respond to my query. I don't know if it is theoretical or a real one
17:40 pranithk1 post-factum: thanks :-)
17:40 post-factum pranithk1: i could email him/her directly
17:41 pranithk1 post-factum: We are also working on container persistent storage. Which is the other reason I am in the US
17:41 pranithk1 post-factum: Hmm.. feel free to put me in CC. I would like to know the use-case which led to the leak before backporting.
17:42 post-factum pranithk1: okay, will do
17:42 pranithk1 post-factum: Do you know of anyone using containers+gluster? if yes point to us :-)
17:43 pranithk1 post-factum: well container storage in general would do.
17:43 post-factum pranithk1: i ran lxc on top of ceph rbd some time ago
17:44 pranithk1 post-factum: oh, what was the usecase if you don't mind me asking..
17:44 pranithk1 post-factum: providing storage for the container is it?
17:44 pranithk1 post-factum: What kind of workload inside the container would also help
17:44 post-factum pranithk1: we were running haproxy/nginx/dnsbalancer for balancing mysql, web and dns traffic across various backends
17:45 post-factum pranithk1: those were different containers
17:45 pranithk1 post-factum: cool. Thanks for this input
17:45 post-factum pranithk1: now we do the same but in kvm
17:45 post-factum pranithk1: lxc brings much lower overhead, but kvm is more robust
17:46 pranithk1 post-factum: wow, that seems interesting. People are generally moving from VMs to containers. I wonder why the reverse move
17:46 pranithk1 post-factum: ah!
17:46 pranithk1 post-factum: as in security?
17:46 pranithk1 post-factum: I mean what you meant by 'robust'
17:46 post-factum pranithk1: we ran into an issue where haproxy got stuck in D or Z state and we couldn't get rid of it without rebooting the host system
17:46 pranithk1 post-factum: ah! got it. How long ago was this?
17:46 post-factum pranithk1: after that i abandoned lxc in centos 7 for the near future ;)
17:47 post-factum pranithk1: umm. half a year or year ago
17:47 pranithk1 post-factum: yeah, whatever makes us happy I guess :-)
17:47 pranithk1 post-factum: containers is relatively new tech compared to VMs, so I guess they will get there....
17:48 post-factum pranithk1: i really want to have lxc with proper isolation and live migration, but that is not possible in centos unfortunately. to use that one needs a very recent kernel and something like lxd to orchestrate
17:49 pranithk1 post-factum: oh. lxd is the one driven by ubuntu guys right?
17:50 post-factum pranithk1: yup, but i hope it would become distro-agnostic somewhen, unlike unity and mir
17:50 post-factum pranithk1: or systemd-nspawn will obsolete it ;)
17:52 pranithk1 post-factum: I need to read up on these things. Thanks for the tip
17:52 post-factum pranithk1: np
17:53 post-factum pranithk1: as for shared storage... i guess it is pretty trivial to use cephfs/glusterfs for it as those are POSIX-compliant. however, using ceph rbd via kernel nbd is one of the options as well
17:54 pranithk1 post-factum: agree
17:55 pranithk1 post-factum: Using glusterfs as shared storage inside VMs is something quite a lot of people do. You guys do the same?
17:56 post-factum pranithk1: yep, web-backends use that extensively. also, we'd like to use that for mail boxes, but, unfortunately, fuse clients leak
17:56 post-factum pranithk1: that is why i'm creating memleak BZs and pinging nbalacha periodically ;)
17:57 post-factum pranithk1: also we use glusterfs for samba filestorage, asterisk sound files and other crap
17:57 pranithk1 post-factum: We are actually coming up with tests to be run before we make releases. Memory leak tests are going to be one of the top ones.
17:57 pranithk1 post-factum: nice!
17:58 post-factum pranithk1: for the last half a year it leaks much less, but still leaks
17:58 pranithk1 post-factum: Thanks for all the help you provided, we found quite a few leaks
17:58 post-factum pranithk1: as you may notice, i dance around various reviews that contain "leak" word heavily
17:58 post-factum pranithk1: yep
17:59 post-factum pranithk1: btw, sent an email just now
17:59 pranithk1 post-factum: Of course I do. You backport them even before we realize the master patch is merged :-)
17:59 post-factum pranithk1: because we use them in production even before they hit master branch in the last revision
17:59 pranithk1 post-factum: I didn't get the mail :-/, let me wait for a bit.
18:00 pranithk1 post-factum: So you keep doing upgrades as soon as you have these patches?
18:00 post-factum pranithk1: as of 3.7.14 I've stuck with 5 to 7 patches and won't do a rolling upgrade until .15 is out. it is pretty stable for us already (except mailboxes)
18:01 post-factum pranithk1: but before that, yes. i cherry-picked another patch, rebuilt rpms and did an upgrade
18:02 post-factum pranithk1: 3.7.14 is the first gluster version for us that doesn't generate tons of warnings in logs
18:02 post-factum pranithk1: and leaks much less than 3.7.6 did
18:03 post-factum pranithk1: also, it seems i've pushed everything we have up to date to .15 already
18:03 pranithk1 post-factum: Good. Process around releases is something we are discussing. 1) Leak-based runs 2) Upgrade-related runs 3) Based on your tip, (number of logs)
18:03 pranithk1 post-factum: good strategy :-)
18:03 pranithk1 post-factum: How do you guys monitor things on gluster?
18:03 pranithk1 post-factum: got the mail
18:04 post-factum pranithk1: we use zabbix extensively. it monitors bricks availability and memory usage (for bricks, glusterd, shd and nfs)
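[editor's note: a small illustration of the kind of per-daemon memory figure such monitoring collects. This is a hedged, Linux-only sketch reading VmRSS from /proc; `rss_kb` is an invented helper and the actual Zabbix items are not shown.]

```c
/* Linux-only sketch: read a process's resident set size (in kB) from
 * its /proc/<pid>/status file - the same figure a monitoring item
 * tracking a brick/glusterd/shd/nfs process would collect over time. */
#include <stdio.h>
#include <string.h>

static long rss_kb(const char *status_path)   /* e.g. "/proc/self/status" */
{
    FILE *f = fopen(status_path, "r");
    if (!f)
        return -1;

    char line[256];
    long kb = -1;
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            sscanf(line + 6, "%ld", &kb);   /* value is reported in kB */
            break;
        }
    }
    fclose(f);
    return kb;
}
```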
18:05 post-factum pranithk1: it should be graylisted, i guess ;)
18:05 pranithk1 post-factum: why?
18:06 post-factum pranithk1: email graylisting happens. i've noticed 5 mins delay for you
18:06 post-factum pranithk1: nevertheless, you've got it
18:06 pranithk1 post-factum: oh you are talking about the mail. I got it now...
18:07 post-factum pranithk1: more on leaks. the most crappy thing with the last sort of leak i face with mailboxes is that i cannot detect it via valgrind. it suggests everything is okay there.
18:07 post-factum pranithk1: also, valgrind'ing gluster is pretttyyyy sloooowww
18:07 pranithk1 post-factum: I know! :-(
18:08 pranithk1 post-factum: When valgrind wasn't catching leaks, I used to use a different leak-checker than memcheck, let me look it up
18:08 post-factum pranithk1: let me know, would be useful
18:08 post-factum pranithk1: also, this leak is unstable. it went from 64M to 201M and stuck there
18:09 * post-factum is doing tests right now
18:10 post-factum pranithk1: i tend to follow the main testing rule: "first, recreate the issue reliably"
18:10 pranithk1 post-factum: Using this I remember I fixed one of the bugs. It is a bit different from the memcheck tool though http://valgrind.org/docs/manual/ms-manual.html
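[editor's note: the "valgrind suggests everything is okay" behaviour described above typically comes from memory that keeps growing but stays reachable; memcheck files it under "still reachable" rather than "definitely lost", while massif's heap profile shows the growth over time. A hedged sketch of such a pattern follows; `cache_add` and the node layout are invented for illustration.]

```c
/* Sketch of a leak memcheck goes easy on: every allocation stays
 * reachable through a global list, so memcheck reports it only as
 * "still reachable", yet the heap grows without bound. Massif
 * (valgrind --tool=massif ./prog; then ms_print massif.out.<pid>)
 * shows the growth over time instead. */
#include <stdlib.h>

struct node {
    struct node *next;
    char payload[1024];
};

static struct node *cache = NULL;   /* never freed, always reachable */
static size_t cache_len = 0;

static int cache_add(void)
{
    struct node *n = calloc(1, sizeof(*n));
    if (!n)
        return -1;
    n->next = cache;       /* push onto the ever-growing list */
    cache = n;
    cache_len++;
    return 0;
}
```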
18:11 post-factum pranithk1: The program will execute (slowly).
18:12 pranithk1 post-factum: hey, I need to run to a meeting now. Let me know if you find something using this massif tool
18:12 post-factum pranithk1: yep, I will try it tomorrow
18:12 post-factum pranithk1: many thanks
18:23 hagarth joined #gluster-dev
19:06 hagarth joined #gluster-dev
19:11 ashiq joined #gluster-dev
19:12 hchiramm joined #gluster-dev
20:05 shyam joined #gluster-dev
20:53 ndevos pranithk, hagarth, amye-away: got any details from this new contributor? http://review.gluster.org/#/q/owner:Ryan%20Ding+status:open
20:54 * ndevos leaves for the day, ttyl!
20:57 shyam joined #gluster-dev
21:10 pranithk ndevos: What info are you looking for? I merged one of his patches and Pointed him to another patch susant is working on as a replacement for one of his other patches
21:41 hagarth pranithk: I think he is looking for some metadata about the contributor :)
22:20 shyam joined #gluster-dev
23:12 shyam joined #gluster-dev
23:50 shyam joined #gluster-dev
