
IRC log for #gluster-dev, 2016-09-07


All times shown according to UTC.

Time Nick Message
00:26 ankitraj joined #gluster-dev
01:05 hagarth joined #gluster-dev
01:55 EinstCrazy joined #gluster-dev
02:03 EinstCrazy joined #gluster-dev
02:08 skoduri joined #gluster-dev
03:09 nishanth joined #gluster-dev
03:13 magrawal joined #gluster-dev
03:23 spalai joined #gluster-dev
03:33 spalai joined #gluster-dev
03:45 spalai joined #gluster-dev
03:49 rafi joined #gluster-dev
03:54 mchangir joined #gluster-dev
04:01 atinm joined #gluster-dev
04:02 nbalacha joined #gluster-dev
04:05 riyas joined #gluster-dev
04:05 msvbhat joined #gluster-dev
04:06 itisravi joined #gluster-dev
04:07 sanoj joined #gluster-dev
04:07 shubhendu joined #gluster-dev
04:14 shubhendu joined #gluster-dev
04:21 itisravi joined #gluster-dev
04:53 atinm joined #gluster-dev
04:55 aspandey joined #gluster-dev
04:59 msvbhat joined #gluster-dev
05:00 karthik_ joined #gluster-dev
05:04 ashiq joined #gluster-dev
05:06 aravindavk joined #gluster-dev
05:08 ankitraj joined #gluster-dev
05:12 rafi joined #gluster-dev
05:14 nbalacha anoopcs, got a minute?
05:25 kdhananjay joined #gluster-dev
05:28 ppai joined #gluster-dev
05:31 EinstCrazy joined #gluster-dev
05:32 kotreshhr joined #gluster-dev
05:38 atinm joined #gluster-dev
05:39 Bhaskarakiran joined #gluster-dev
05:41 hgowtham joined #gluster-dev
05:45 anoopcs nbalacha, Yes.
05:52 hchiramm joined #gluster-dev
05:56 Muthu joined #gluster-dev
05:56 jiffin joined #gluster-dev
05:57 Manikandan joined #gluster-dev
06:01 nishanth joined #gluster-dev
06:10 EinstCrazy joined #gluster-dev
06:17 Muthu_ joined #gluster-dev
06:18 itisravi joined #gluster-dev
06:18 mchangir joined #gluster-dev
06:42 asengupt joined #gluster-dev
06:43 k4n0 joined #gluster-dev
06:48 rastar joined #gluster-dev
06:49 devyani7 joined #gluster-dev
06:50 aravindavk ping ppai
06:52 ppai aravindavk, pong
06:52 kdhananjay joined #gluster-dev
06:54 karthik_ joined #gluster-dev
06:58 shubhendu joined #gluster-dev
07:00 aravindavk ppai: which repo for 3.9 docs? Do we maintain versioned docs?
07:00 xavih joined #gluster-dev
07:01 nishanth joined #gluster-dev
07:03 ppai aravindavk, there are no versioned docs
07:03 EinstCrazy joined #gluster-dev
07:05 kshlm joined #gluster-dev
07:06 aravindavk ppai: ok, shall I send a pull request for master? If documentation is specific to only one release, do we need to mention that in the doc?
07:06 ppai aravindavk: that is correct
07:06 ppai aravindavk: A note at the feature heading section such as - "New in 3.9" should suffice
07:07 aravindavk ppai: ok
07:17 Saravanakmr joined #gluster-dev
07:18 skoduri joined #gluster-dev
07:21 k4n0 skoduri, ping
07:21 glusterbot k4n0: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
07:21 k4n0 skoduri, I was looking for any online cli docs for nfs-ganesha
07:21 k4n0 skoduri, can you help?
07:23 anoopcs jiffin, ^^
07:23 post-factum k4n0: https://github.com/nfs-ganesha/nfs-ganesha/wiki no?
07:24 k4n0 post-factum, thanks
07:25 post-factum k4n0: the doc is a little bit messy (and i really have no idea if that is updated), so do not hesitate to ask more specific questions
07:25 k4n0 post-factum, will do
07:26 anoopcs You can also ask on #ganesha
07:34 nbalacha joined #gluster-dev
07:36 EinstCrazy joined #gluster-dev
08:12 EinstCrazy joined #gluster-dev
08:23 devyani7 joined #gluster-dev
08:32 itisravi joined #gluster-dev
08:34 EinstCra_ joined #gluster-dev
08:35 hchiramm joined #gluster-dev
08:43 poornima joined #gluster-dev
08:47 ndevos pkalever: I don't seem to be able to add you as a reviewer to http://review.gluster.org/15418
08:48 pkalever ndevos: I have added it myself for now, please use Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
08:54 riyas joined #gluster-dev
08:54 Bhaskarakiran joined #gluster-dev
08:54 pkalever joined #gluster-dev
08:54 poornima joined #gluster-dev
08:54 rjoseph|afk joined #gluster-dev
08:54 kdhananjay joined #gluster-dev
08:54 itisravi joined #gluster-dev
08:54 jiffin joined #gluster-dev
08:54 ashiq joined #gluster-dev
08:54 sac joined #gluster-dev
08:54 magrawal joined #gluster-dev
08:55 lalatenduM joined #gluster-dev
08:55 skoduri|training joined #gluster-dev
08:55 Saravanakmr joined #gluster-dev
08:55 mchangir joined #gluster-dev
08:55 aspandey joined #gluster-dev
08:55 sanoj joined #gluster-dev
08:55 atinm joined #gluster-dev
08:55 hchiramm joined #gluster-dev
08:55 rastar joined #gluster-dev
08:55 nbalacha joined #gluster-dev
08:56 hgowtham joined #gluster-dev
08:56 kshlm joined #gluster-dev
08:56 Manikandan joined #gluster-dev
08:56 EinstCrazy joined #gluster-dev
08:56 shubhendu joined #gluster-dev
08:57 kotreshhr joined #gluster-dev
08:57 misc nigelb: so for the 2 builders, we want 2G of RAM, 2 vCPUs, any specific disk partition?
08:59 nishanth joined #gluster-dev
09:01 nigelb I'm just looking at existing machines.
09:01 nigelb about 30 GB
09:02 nigelb Remember we need an xfs partition as well.
09:02 misc is it ok if all are on the same partition ?
09:02 nigelb of about 1 GB.
09:02 misc mhhh
09:02 ppai joined #gluster-dev
09:02 nigelb heh.
09:03 misc that requires to think on how I can automate this
09:03 nigelb let's solve that now because we're definitely going to need it.
09:03 Muthu_ joined #gluster-dev
09:05 misc we also have nothing that wouldn't be trashable ?
09:05 devyani7 joined #gluster-dev
09:06 asengupt joined #gluster-dev
09:06 misc (like, if we want to just reinstall)
09:07 nigelb I didn't understand.
09:08 kdhananjay joined #gluster-dev
09:09 shubhendu joined #gluster-dev
09:11 kshlm ndevos, 3.7.15 can be pushed to the main repo.
09:11 kshlm I've replied to you on the 3.7.15 tagging thread in the maintainers list.
09:14 ndevos kshlm: done, I expect they get signed+pushed later today
09:14 kshlm ndevos++ Thanks!
09:14 glusterbot kshlm: ndevos's karma is now 308
09:15 misc nigelb: there is no need for something like a separate partition for data and system, i.e. if I need to reinstall the builder from scratch?
09:15 misc (cause for now, that's a bit experimental)
09:16 poornima joined #gluster-dev
09:16 k4n0 joined #gluster-dev
09:16 nigelb misc: oh, if we want to re-install, we start from scratch.
09:16 nigelb My theory is there's no data we particularly want to save in that scenario.
09:17 misc oki
09:18 misc (cause I was also planning to have data on a separate partition and be smart with it, but that's not done yet)
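
The builder storage being discussed here (a small dedicated xfs partition alongside the root filesystem, roughly 1 GB for the regression tests) could be carved out by hand as below. This is a minimal sketch only, not the actual provisioning; the device name (/dev/vdb), size, and mount point (/d) are assumptions.

    # Assumes the builder VM gets a second disk at /dev/vdb and that the
    # tests expect an xfs filesystem mounted at /d. Adjust names and sizes.
    parted -s /dev/vdb mklabel gpt
    parted -s /dev/vdb mkpart primary xfs 1MiB 1GiB
    mkfs.xfs /dev/vdb1
    mkdir -p /d
    echo '/dev/vdb1 /d xfs defaults 0 0' >> /etc/fstab
    mount /d
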
09:21 jiffin k4n0: Hi
09:21 k4n0 jiffin, hey
09:22 jiffin you can refer http://gluster.readthedocs.io/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Intergration/ for nfs-ganesha -gluster integration
09:32 devyani7 joined #gluster-dev
09:33 k4n0 jiffin, thanks, will check
09:39 EinstCrazy joined #gluster-dev
09:40 msvbhat joined #gluster-dev
09:43 rafi joined #gluster-dev
09:49 itisravi joined #gluster-dev
10:00 shubhendu joined #gluster-dev
10:08 sanoj joined #gluster-dev
10:08 kshlm joined #gluster-dev
10:09 Muthu_ joined #gluster-dev
10:09 ashiq joined #gluster-dev
10:11 nigelb kshlm: what are you testing when you test upgrades?
10:11 nigelb (I'm curious about the flow to automate)
10:23 kshlm nigelb, I test the toughest upgrade scenario, i.e. a rolling upgrade with no downtime and with active IO.
10:23 kshlm I have a cluster with replicate and disperse volumes.
10:24 kshlm Clients have the volume mounted and do active IO (iozone or dbench mainly)
10:25 kshlm Kill all gluster processes on a server and upgrade. Clients should continue to work (with minor rise in latencies due to any failovers).
10:26 kshlm Then I restart gluster on the server and launch a heal. The heal should complete with the clients still continuing to run.
10:26 nishanth joined #gluster-dev
10:26 kshlm After heal is complete, move onto the next server.
10:27 kshlm After upgrade of each server, I also check to make sure that the cluster is still all peered correctly and in a Befriended state.
10:29 kshlm Then at the end remount clients (before upgrading them) to make sure they mount from the upgraded servers.
10:29 kshlm And after that upgrade and remount clients.
10:30 kshlm These are just basic tests to ensure the core of GlusterFS isn't broken.
10:30 kshlm I don't test snapshots, geo-rep, gfapi etc. which also need to be tested.
10:30 kshlm nigelb, Was this enough? Or do you need more?
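
The per-server procedure kshlm describes above could be scripted roughly as in the sketch below. It assumes systemd, RPM-based servers and a volume named "testvol"; package and volume names are placeholders rather than the actual test harness.

    # Per-server rolling-upgrade step (sketch, run on one server at a time)
    systemctl stop glusterd             # stop the management daemon
    pkill glusterfsd || true            # stop brick processes on this server
    pkill glusterfs  || true            # stop any remaining gluster processes
    yum -y update glusterfs-server      # upgrade the packages
    systemctl start glusterd            # bring the server back

    gluster volume heal testvol         # launch heal on the upgraded server

    # wait until every brick reports zero pending heal entries
    while gluster volume heal testvol info | grep 'Number of entries:' | grep -qv ' 0$'; do
        sleep 30
    done

    gluster peer status                 # verify the cluster is still peered correctly
    gluster volume status testvol       # verify bricks and self-heal daemons are up
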
10:40 nbalacha joined #gluster-dev
10:48 nigelb kshlm: oh fun.
10:49 nigelb kshlm: This should be interesting to automate.
10:55 misc yup
10:55 misc ansible for test cases :p
10:55 kshlm I'm using ansible for parts of it as well.
10:55 poornima joined #gluster-dev
10:56 kshlm Mainly for the provisioning part (well just installing).
10:57 kshlm I create volumes, mount, do IO, heal, check status, etc. etc. by hand.
10:57 kshlm I should probably use gdeploy, but I'm a little lazy to move right now.
10:59 ppai joined #gluster-dev
11:00 nigelb kshlm: I'm planning on using gdeploy for the performance stuff.
11:02 * misc did find gdeploy to be a bit complicated and curious
11:03 ira joined #gluster-dev
11:13 nigelb http://2586f5f0.ngrok.io/ <-- failure stats looks more practical now :)
11:13 glusterbot nigelb: <'s karma is now -15
11:13 nigelb ...
11:15 rraja joined #gluster-dev
11:18 misc "1 weeks"
11:18 * misc screams silently inside
11:18 misc but nice job
11:21 nigelb Hey, it's a work in progress.
11:21 nigelb I need to make sure you can't do more than a few weeks :)
11:23 misc ok so
11:23 misc do I push the builder creation now, or do I get lunch ?
11:24 post-factum nbalacha: i've updated BZ 1369364 (about FUSE memory leak) with Massif profiling, hoping that could be helpful in debugging the issue. let me know if I could do anything else within this BZ
11:24 nigelb misc: lunch.
11:24 nbalacha post-factum, thanks. I started looking at the issue - I couldn't find anything in the statedump though I do see the mem usage going up
11:25 post-factum nbalacha: at least you have my logs now and a reliable way to reproduce it
11:25 nbalacha post-factum, yes, that is a bug help
11:25 nbalacha *big help
11:25 post-factum nbalacha: bug help is ok
11:25 nishanth joined #gluster-dev
11:25 post-factum :D
11:25 nbalacha :)
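
For reference, the client-side statedump nbalacha mentions can be captured from the FUSE mount as sketched below. The dump directory shown is the usual default and may differ if statedump-path has been changed; treat the paths as assumptions.

    # Ask the FUSE client to write a statedump, then inspect its inode table
    pid=$(pidof glusterfs)                 # the FUSE client process
    kill -USR1 "$pid"                      # SIGUSR1 triggers a statedump
    sleep 2
    ls -t /var/run/gluster/glusterdump.$pid.dump.* | head -1   # newest dump file
    grep -A3 'itable' /var/run/gluster/glusterdump.$pid.dump.* | head   # inode table counters
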
11:27 misc nigelb: ok. now to find where to get lunch in this place
11:27 misc there is sushi near the corner, I guess I can try, tired of pizza :)
11:34 kdhananjay joined #gluster-dev
11:39 post-factum anyone kick that spammer in #gluster
11:41 nigelb don't have permission :(
11:43 post-factum he's already left
11:43 kkeithley Maybe someone can convince hagarth or JoeJulian to give a few more people channelops privs
11:46 post-factum kkeithley: JoeJulian already suggested that, no one wants to :)
11:47 shubhendu joined #gluster-dev
11:47 kkeithley when did he do that?
11:47 kkeithley I didn't see it.
11:48 post-factum kkeithley: a month or two ago
11:49 nigelb I offered to help.
11:49 nigelb He gave me access
11:49 nigelb and somehow it didn't retain the access.
11:50 ramky joined #gluster-dev
11:53 kshlm Can't glusterbot be configured to give OP to whoever had been opped?
11:54 kshlm kkeithley had given OP on #gluster-meeting to me, but that didn't stick as well.
11:54 kshlm On that note,
11:54 nigelb that's how I was given access.
11:54 kshlm Weekly community meeting starts in 5 minutes in #gluster-meeting
11:54 nigelb My guess is glusterbot had some corruption?
11:55 kshlm I also think CHANSERV can be configured for it.
11:55 nigelb it was.
11:55 kshlm We don't have Chanserv in the rooms though.
11:55 nigelb don't need to.
11:55 nigelb chanserv works its magic without being in the room.
11:56 kshlm Huh, then what's the advantage of getting Chanserv in the room?
11:56 kshlm Lots of rooms have it.
11:57 kkeithley kshlm: maybe the chanserv thinks you're really kaushal_
11:58 kshlm kkeithley, Should that matter? My freenode account was created as kaushal_ but kshlm is linked with it.
11:58 kkeithley nope, nm. I've given you +O again
11:59 kshlm Let me test it.
11:59 kshlm Cool. Op is retained over /part /join
12:00 kkeithley now we just need some more ops here and in #gluster who are actually around a good amount of time.
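
For the record, the auto-op being discussed is granted through ChanServ flags on freenode: +O is the auto-op flag that sticks across /part and /join, while +o only allows requesting op. A sketch with a placeholder nick (exact syntax may vary with the services in use):

    /msg ChanServ FLAGS #gluster-dev somenick +O
    /msg ChanServ FLAGS #gluster-dev somenick
    /msg ChanServ FLAGS #gluster-dev

The second form shows the flags a single nick holds; the last lists the channel's whole access list.
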
12:17 ppai joined #gluster-dev
12:25 mchangir joined #gluster-dev
12:26 nishanth joined #gluster-dev
12:39 ashiq joined #gluster-dev
12:42 k4n0 joined #gluster-dev
12:44 kkeithley nigelb: http://review.gluster.org/15410 is the 3.8 strfmt patch
13:01 shyam joined #gluster-dev
13:04 nbalacha joined #gluster-dev
13:09 kkeithley kshlm++
13:09 glusterbot kkeithley: kshlm's karma is now 112
13:10 kkeithley samikshan++
13:10 glusterbot kkeithley: samikshan's karma is now 3
13:14 post-factum kshlm++
13:14 glusterbot post-factum: kshlm's karma is now 113
13:20 ndevos nigelb: for some patches smoke does not seem to do voting? http://review.gluster.org/15411 is one of them
13:31 nigelb ndevos: Is there a bug?
13:32 ndevos nigelb: I don't know, it seems to have been happening for a few (?) days?
13:33 nigelb Please file a bug. I think it's something I fixed today, but I'll hunt it down anyhow.
13:33 ndevos nigelb: did something change related to smoke tests? maybe with the introduction of the strfmt one?
13:34 nigelb that's what I think.
13:35 nigelb so I think when I fixed strfmt tests to not vote, it may have caused this side effect.
13:40 nigelb ndevos: so, file a bug and in the meanwhile, "recheck smoke" should get you a green.
13:40 ndevos nigelb: yeah, I'll file a bug in case I notice it happening again
13:47 mchangir joined #gluster-dev
13:47 shyam joined #gluster-dev
13:48 JoeJulian Is that everybody?
13:54 jiffin1 joined #gluster-dev
13:54 kkeithley don't you want +O
13:54 kkeithley versus +o
13:55 atinm ndevos, would you be able to take a look at http://review.gluster.org/#/c/15367/ ?
13:56 kkeithley s/you/we/
13:56 ndevos atinm: yeah, I'm planning to look at it later
13:56 atinm ndevos, thanks!
13:58 kkeithley JoeJulian: I think we need (want) +O so we always get channel op when we join.
13:59 msvbhat joined #gluster-dev
14:09 jiffin1 joined #gluster-dev
14:10 mchangir joined #gluster-dev
14:17 JoeJulian kkeithley: did so
14:19 ankitraj joined #gluster-dev
14:30 EinstCrazy joined #gluster-dev
14:36 mchangir joined #gluster-dev
14:42 gvandeweyer joined #gluster-dev
14:43 lkoranda joined #gluster-dev
14:55 kkeithley JoeJulian: thanks
15:00 Bhaskarakiran joined #gluster-dev
15:02 kkeithley JoeJulian: I see that several of us have +O in #gluster-meeting, but I'm not able to check the flags here or in #gluster.
15:02 kkeithley ...  not authorized to perform this operation.
15:10 riyas joined #gluster-dev
15:16 wushudoin joined #gluster-dev
15:19 aravindavk joined #gluster-dev
15:24 shubhendu joined #gluster-dev
15:26 JoeJulian kkeithley: ok, try it now.
15:27 kkeithley JoeJulian: works. thanks
15:27 EinstCrazy joined #gluster-dev
15:34 skoduri joined #gluster-dev
15:35 Bhaskarakiran joined #gluster-dev
15:44 rraja joined #gluster-dev
15:46 devyani7 joined #gluster-dev
15:49 ppai joined #gluster-dev
15:49 anmol joined #gluster-dev
16:00 hagarth joined #gluster-dev
16:01 ashiq_ joined #gluster-dev
16:03 jiffin joined #gluster-dev
16:06 nigelb misc++
16:06 glusterbot nigelb: misc's karma is now 35
16:07 EinstCrazy joined #gluster-dev
16:09 rafi joined #gluster-dev
16:27 Bhaskarakiran joined #gluster-dev
16:27 msvbhat joined #gluster-dev
16:38 baojg joined #gluster-dev
16:40 k4n0 joined #gluster-dev
16:52 nbalacha joined #gluster-dev
16:55 misc nigelb: shouldn't you go sleep?
17:03 jiffin joined #gluster-dev
17:08 hchiramm joined #gluster-dev
17:10 rraja joined #gluster-dev
17:11 hagarth joined #gluster-dev
17:21 shyam joined #gluster-dev
17:22 baojg joined #gluster-dev
17:45 post-factum heh, samba 4.5 released
17:45 post-factum obnox++ ira++
17:45 glusterbot post-factum: obnox's karma is now 7
17:45 glusterbot post-factum: ira's karma is now 5
18:01 shaunm joined #gluster-dev
18:20 pranithk1 joined #gluster-dev
18:22 ChrisHolcombe joined #gluster-dev
18:39 pranithk1 post-factum: are you there? I want to talk to you about the massif output
18:40 post-factum pranithk1: hey
18:40 post-factum pranithk1: please go on
18:40 pranithk1 post-factum: Just updated https://bugzilla.redhat.com/show_bug.cgi?id=1369364#c22
18:40 glusterbot Bug 1369364: medium, medium, ---, nbalacha, NEW , Huge memory usage of FUSE client
18:40 pranithk1 post-factum: Thinking you may not be available :-)
18:41 pranithk1 post-factum: it is a bit late for you :-)
18:41 post-factum 21:41 here
18:42 post-factum pranithk1: correct, RSS was 900M
18:42 pranithk1 post-factum: It is surprising because heap memory is only ~100MB
18:43 post-factum pranithk1: should I try re-run with --pages-as-heap=yes?
18:43 pranithk1 post-factum: let me check that
18:44 pranithk1 post-factum: As per the documentation we don't get stack-traces...
18:45 pranithk1 post-factum: I am still reading
18:47 rastar joined #gluster-dev
18:48 pranithk1 post-factum: seems okay man, let us do it
18:48 pranithk1 post-factum: I checked it here: https://blog.mozilla.org/nnethercote/2010/12/09/memory-profiling-firefox-with-massif/
18:48 post-factum pranithk1: so, --pages-as-heap=yes is okay to try?
18:49 pranithk1 post-factum: let's try, because at the moment, things seem fine. And one more suggestion
18:49 pranithk1 post-factum: Is it possible to delete all the files you create in the test before stopping the profiling?
18:50 post-factum pranithk1: no, it is production mailboxes :D
18:50 pranithk1 post-factum: okay sir, then no
18:50 pranithk1 post-factum: :-)
18:50 post-factum pranithk1: i just stat them, no reads, no writes
18:50 pranithk1 post-factum: Does it even OOM?
18:51 pranithk1 post-factum: I think the process by default will use around 70MB heap because of memory pools
18:51 pranithk1 post-factum: 30MB for inodes is fine.
18:51 pranithk1 post-factum: What we need to find out is where these numbers like 900MB are coming from
18:51 post-factum pranithk1: it does not OOM on the test stand, because we have 12G there, but it really does OOM in the mail VM because we have 3G there
18:51 post-factum OOMing with 3G of RAM is awful
18:51 pranithk1 post-factum: Now you are talking
18:52 pranithk1 post-factum: I suspect it to be inode-leak in that case.
18:52 post-factum consuming 1G is okay if it does not grow, but it grows and OOMs eventually
18:52 post-factum pranithk1: you'll have new results with --pages-as-heap=yes in 12 hours
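
For context, a massif run with --pages-as-heap=yes against the FUSE client might look like the sketch below; that option counts all mapped pages rather than only malloc'd heap, which may help explain the gap between the ~100MB heap and the ~900MB RSS. The host, volume, and mount point names are placeholders, not the actual setup.

    # Run the FUSE client in the foreground (-N) under massif
    valgrind --tool=massif --pages-as-heap=yes \
             --massif-out-file=/tmp/massif.out.%p \
             glusterfs -N --volfile-server=server1 --volfile-id=myvol /mnt/myvol
    # ... run the workload, then unmount /mnt/myvol to stop the client ...
    ms_print /tmp/massif.out.*          # render the allocation snapshots
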
18:54 post-factum pranithk1: any other suggestions?
18:55 pranithk1 post-factum: Are you sure you are emulating all the syscalls the applications are doing?
18:55 pranithk1 post-factum: Otherwise we may not hit those code paths?
18:55 post-factum pranithk1: i tried several approaches before: simulating reading, removing, writing etc
18:55 post-factum pranithk1: only stat'ing causes fast and reliable RSS growth
18:56 post-factum pranithk1: nbalacha also sees that as she told me today
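
The stat-only reproducer post-factum describes can be as simple as repeatedly walking the mount and stat'ing every entry while watching the client's RSS; a sketch, with the mount point as a placeholder:

    # Walk the FUSE mount and stat everything; no reads, no writes
    while true; do
        find /mnt/myvol -xdev -exec stat {} + > /dev/null
        ps -o rss= -p "$(pidof glusterfs)"   # print the client's RSS after each pass
    done
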
18:56 pranithk1 post-factum: RSS growth is not necessarily a leak. In other words, were you able to get the process to OOM only by doing stats?
18:56 pranithk1 post-factum: On your VMs with 3GB RAM I mean
18:56 post-factum pranithk1: well, real dovecot workload leads to oom
18:57 post-factum pranithk1: did not do pure stat testing on it
18:57 post-factum pranithk1: but if it grows beyond 3G, it will be killed by the OOM killer, obviously
18:57 pranithk1 post-factum: I just want us to make sure we are not spending our time on something that may not lead to OOMs
18:57 post-factum pranithk1: remember, i did drop_caches as well
18:58 pranithk1 post-factum: Drop caches doesn't cause inodes to be forgotten
18:58 pranithk1 post-factum: So it doesn't do much
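
The drop_caches step mentioned here is the kernel knob below; it flushes the kernel's page cache, dentries and inodes, but the glusterfs client holds its own inode table and memory pools in userspace, which is why it is not expected to shrink the client's RSS.

    sync
    echo 3 > /proc/sys/vm/drop_caches   # drops kernel-side caches only
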
18:58 post-factum pranithk1: anyway, memory pressure does not cause rss to shrink as well under real workload
18:59 pranithk1 post-factum: Is there a possibility to do a test similar to the dovecot workload which you saw lead to the OOM killer?
18:59 pranithk1 post-factum: oh it is not shrinking?
18:59 pranithk1 post-factum: Good, then that is what we need to find out why.
19:00 post-factum pranithk1: i guess, anyway, we need pages-as-heap profiling first
19:00 pranithk1 post-factum: If we keep doing stats do you eventually see any drops?
19:00 post-factum pranithk1: drops in rss?
19:00 pranithk1 post-factum: yes yes, go ahead with that, I am curious to find out the RSS 900MB anyway
19:00 pranithk1 post-factum: yes, drops in rss or stabilizing
19:01 post-factum pranithk1: occasionally rss drops by 2 to 3 megs, but then it grows further
19:01 pranithk1 post-factum: okay...
19:02 pranithk1 post-factum: So maybe there are more code paths which are leaking beyond this one, because this is not the exact dovecot workload?
19:02 post-factum pranithk1: also, at the beginning, rss grows faster, and i guess, that is because of various caches
19:02 post-factum pranithk1: sure there are lots of other leaks, and we will hunt them all down eventually :)
19:03 pranithk1 post-factum: cool
19:03 post-factum pranithk1: let's deal with stat first
19:03 pranithk1 post-factum: okay then, I will move on to other work I need to complete. At the moment I am just curious about the 900M whereas heap is just 100M
19:03 pranithk1 post-factum: yeah
19:03 post-factum pranithk1: i hope massif will give you an answer
19:04 post-factum pranithk1: for now, i dunno as well
19:04 pranithk1 post-factum: well they made firefox better using this tool, that is why I am hopeful :-)
19:04 post-factum pranithk1: as you may see, memcheck does not show anything either
19:04 post-factum pranithk1: that does not mean firefox is a great browser, unfortunately
19:05 post-factum pranithk1: basically, all browsers consume lots of ram :/
19:05 pranithk1 post-factum: yeah... memcheck doesn't consider things that are still reachable as leaks
19:05 pranithk1 post-factum: yeah, but they were able to make good progress using this tool is what I am trying to say :-)
19:06 pranithk1 post-factum: As long as you have a reference to it somewhere, it is not a leak
19:06 pranithk1 post-factum: That is the kind of leak I fixed all those years back. I completely forgot about this tool
19:06 pranithk1 post-factum: I had to read documentation again to refresh memory :-)
19:07 post-factum pranithk1: good. i'll let you know the results via BZ. i guess you are in GMT-N time so far
19:07 pranithk1 post-factum: I am in Fremont SFO, so it is afternoon for me now
19:07 pranithk1 post-factum: for two more weeks I will be in this TZ
19:08 pranithk1 post-factum: attach the massif output on the same BZ, let's see
19:08 post-factum in 2 days i'm leaving my current job, and will have less resources to debug memleaks within old setup
19:08 post-factum however, i hope, RH have some test playgrounds to play with gluster further :)
19:09 pranithk1 post-factum: You can reach out to us :-) and we can give you I hope
19:09 pranithk1 post-factum: What is your day job? linux kernel?
19:09 post-factum pranithk1: "we", yeah. i guess i will have a possibility to take them on my own
19:09 post-factum pranithk1: correct, kernel
19:10 pranithk1 post-factum: Is it for support?
19:10 post-factum pranithk1: aye, technical support engineering
19:11 pranithk1 post-factum: let's see if you can help with gluster support also. We can push for that too if you are interested enough.
19:14 post-factum pranithk1: just give me several months to accommodate
19:14 post-factum pranithk1: i started to move in kernel direction before doing gluster contributions :)
19:15 pranithk1 post-factum: :-), wait, I will introduce you to one of the leads in gluster support engineering in Redhat
19:16 raghu joined #gluster-dev
19:16 post-factum pranithk1: oh
19:16 jiffin joined #gluster-dev
19:16 post-factum pranithk1: i believe they are already aware of me, ndevos told them
19:17 pranithk1 raghu: do you know post-factum is joining support for kernel?
19:17 post-factum raghu: hey!
19:17 pranithk1 raghu: apparently he is joining in some 2-3 days
19:17 raghu pranithk1: Ahh great :)
19:18 raghu post-factum: Welcome :)
19:18 pranithk1 raghu: I was asking him if he is interested in support for gluster and this is what he has to say "just give me several months to accommodate, i started to move in kernel direction before doing gluster contributions"
19:18 pranithk1 raghu: So I wanted to let you know that as soon as you see availability give your best shot to pull him :-P
19:19 raghu pranithk1: Sure. Definitely. :)
19:19 post-factum pranithk1: 2 weeks in fact
19:19 post-factum raghu: oh, lovely
19:19 post-factum raghu: talk to my manager first :P
19:19 pranithk1 post-factum: oh a short break in between jobs
19:20 post-factum pranithk1: yup, some time to end all deals here. i need to relocate to another country
19:20 raghu post-factum: Ha ha. Sure. Once you are settled and ready I am more than happy to have you for gluster
19:21 post-factum raghu: okay, i'll be ready!
19:21 pranithk1 post-factum: wow, which country to which country?
19:21 post-factum pranithk1: heh, from Ukraine, my homeland, to Czech Republic, Brno hub
19:21 post-factum pranithk1: hope to see you @ devconf
19:21 pranithk1 post-factum: ah! may be :-)
19:27 k4n0 joined #gluster-dev
20:16 shyam joined #gluster-dev
22:06 hagarth joined #gluster-dev
