IRC log for #gluster-dev, 2016-04-26


All times are shown in UTC.

Time Nick Message
00:11 shaunm joined #gluster-dev
00:29 hellboy2k8 joined #gluster-dev
01:31 EinstCrazy joined #gluster-dev
02:28 EinstCrazy joined #gluster-dev
02:41 EinstCrazy joined #gluster-dev
02:44 EinstCrazy joined #gluster-dev
03:14 josferna joined #gluster-dev
03:35 aspandey joined #gluster-dev
03:35 kshlm joined #gluster-dev
03:44 overclk joined #gluster-dev
03:51 skoduri joined #gluster-dev
04:01 shubhendu joined #gluster-dev
04:03 nbalacha joined #gluster-dev
04:04 rafi joined #gluster-dev
04:08 itisravi joined #gluster-dev
04:20 kdhananjay joined #gluster-dev
04:21 shubhendu joined #gluster-dev
04:22 gem joined #gluster-dev
04:24 atinm joined #gluster-dev
04:30 shubhendu joined #gluster-dev
04:31 mchangir joined #gluster-dev
04:32 jiffin joined #gluster-dev
04:35 rafi1 joined #gluster-dev
04:35 EinstCra_ joined #gluster-dev
04:50 rafi joined #gluster-dev
04:57 karthik___ joined #gluster-dev
05:01 jiffin1 joined #gluster-dev
05:06 ndarshan joined #gluster-dev
05:19 rafi1 joined #gluster-dev
05:24 rafi joined #gluster-dev
05:27 hgowtham joined #gluster-dev
05:31 pkalever joined #gluster-dev
05:37 prasanth joined #gluster-dev
05:38 shubhendu joined #gluster-dev
05:39 Apeksha joined #gluster-dev
05:45 rafi1 joined #gluster-dev
05:47 aravindavk joined #gluster-dev
05:49 atalur joined #gluster-dev
05:49 vimal joined #gluster-dev
05:55 asengupt joined #gluster-dev
06:04 Bhaskarakiran joined #gluster-dev
06:07 poornimag joined #gluster-dev
06:08 spalai joined #gluster-dev
06:16 spalai left #gluster-dev
06:18 kdhananjay joined #gluster-dev
06:27 rafi joined #gluster-dev
06:28 vmallika joined #gluster-dev
06:29 ppai joined #gluster-dev
06:31 hchiramm joined #gluster-dev
06:35 Manikandan joined #gluster-dev
06:36 rafi1 joined #gluster-dev
06:43 rraja joined #gluster-dev
06:46 mchangir joined #gluster-dev
06:47 atinm joined #gluster-dev
06:47 nbalacha joined #gluster-dev
06:50 hellboy2k8 joined #gluster-dev
06:59 ashiq joined #gluster-dev
07:00 pg joined #gluster-dev
07:08 kshlm joined #gluster-dev
07:15 Debloper joined #gluster-dev
07:19 Saravanakmr joined #gluster-dev
07:25 rastar joined #gluster-dev
07:39 nbalacha joined #gluster-dev
07:41 mchangir joined #gluster-dev
07:43 atinm joined #gluster-dev
07:58 spalai joined #gluster-dev
08:12 itisravi joined #gluster-dev
08:32 atinm kshlm, I need a review from you - http://review.gluster.org/#/c/14069
08:32 atinm kshlm, its a regression
08:33 aravindavk joined #gluster-dev
08:50 vimal joined #gluster-dev
08:52 kdhananjay joined #gluster-dev
09:03 pranithk1 joined #gluster-dev
09:19 hellboy2k8 joined #gluster-dev
09:52 pkalever joined #gluster-dev
09:54 atinm joined #gluster-dev
10:07 kkeithley1 joined #gluster-dev
10:16 josferna joined #gluster-dev
10:22 josferna joined #gluster-dev
10:53 rastar joined #gluster-dev
10:54 poornimag joined #gluster-dev
10:56 pkalever joined #gluster-dev
10:56 pg joined #gluster-dev
10:57 atinm joined #gluster-dev
11:09 hellboy2k8 joined #gluster-dev
11:12 rastar joined #gluster-dev
11:14 gem joined #gluster-dev
11:16 ira joined #gluster-dev
11:37 hellboy2k8 joined #gluster-dev
11:37 kshlm joined #gluster-dev
11:45 Manikandan joined #gluster-dev
11:48 lpabon joined #gluster-dev
12:01 post-factum any meeting today?
12:04 gem joined #gluster-dev
12:07 jiffin post-factum: sorry i need to host today's community meeting
12:08 post-factum emm, community meeting should happen tomorrow, afaik
12:10 jiffin Sorry for the late notice; the gluster bug triage meeting will start on #gluster-meeting
12:11 atinm kshlm, query
12:11 kshlm Yeah
12:12 atinm kshlm, I saw your comment on 14069
12:12 jiffin post-factum: btw thanks for alerting, post-factum++
12:12 glusterbot jiffin: post-factum's karma is now 10
12:12 overclk joined #gluster-dev
12:13 atinm kshlm, this is done only when the key is cluster.op-version, and the all-volumes check is already taken care of inside that check
12:13 post-factum no problem
12:13 atinm kshlm, so you wouldn't hit this code until gluster volume set all cluster.op-version <value> is executed
12:14 atinm kshlm, am I misunderstanding it?
12:14 kshlm atinm, Really? Maybe I'm looking at a different part of the stage_volume code.
12:14 kshlm That function is really huge and easy to get lost in. And I contributed a large part to its hugeness.
12:15 atinm kshlm, can you check again?
12:15 atinm kshlm, look at line 1102 on that patch
12:16 rastar joined #gluster-dev
12:17 post-factum atinm: kshlm: does that regression mean that i shouldn't bump opversion with 3.7.11?
12:17 atinm post-factum, which regression are you talking about?
12:18 kshlm atinm, Ok. You are right.
12:18 post-factum atinm: http://review.gluster.org/#/c/14069
12:18 kshlm post-factum, Unless you try to set cluster.op-version to something lower, you shouldn't worry.
12:18 post-factum oh, i see. no, i'm not going to lower it :)
12:18 post-factum thanks!
12:19 kshlm In any case, this won't work; the regression is that it doesn't throw an error.
12:19 atinm post-factum, this doesn't affect any functionality, it's just that it wasn't throwing a failure message to the CLI
12:19 atinm post-factum, we fixed it and then regressed it
12:19 post-factum ok, that is a usual thing in development :D
12:20 kshlm For a long time trying to set a lower cluster.op-version was silently ignored, but it was recently changed to error out.
12:20 * atinm can expect kshlm to merge the patch now :)
12:20 kshlm Now another change causes it to not error out, but the op-version isn't lowered anyway.
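
For reference, a rough sketch of the behaviour being discussed. The version numbers below are only placeholders, and the glusterd.info path assumes glusterd's default working directory:

    # the cluster op-version currently recorded by glusterd
    grep operating-version /var/lib/glusterd/glusterd.info

    # raising the op-version is the supported direction
    gluster volume set all cluster.op-version 30710

    # trying to lower it is rejected; the regression in review 14069 is only that
    # the rejection stopped being reported as a CLI error. The op-version itself
    # was never actually lowered.
    gluster volume set all cluster.op-version 30700
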
12:21 atinm kshlm, I need another review from you for http://review.gluster.org/#/c/14075
12:27 kkeithley_ meh, how do I get rid of a Verified-1 that Gluster Build System added?
12:28 kkeithley_ http://review.gluster.org/#/c/13919/3
12:32 rraja joined #gluster-dev
12:36 jiffin atalur, pranithk1, kdhananjay: can u guys please triage BZ1329344
12:36 jiffin ?
12:36 jiffin https://bugzilla.redhat.com/show_bug.cgi?id=1329344
12:36 glusterbot Bug 1329344: unspecified, unspecified, ---, bugs, NEW , heal-info slow response while IO is in progress
12:37 hagarth joined #gluster-dev
12:40 jiffin atinm, kshlm: can u guys please look into https://bugzilla.redhat.com/show_bug.cgi?id=1328994?
12:40 glusterbot Bug 1328994: unspecified, unspecified, ---, bugs, NEW , When a feature fails needing a higher opversion, the message should state what version it needs.
12:50 kkeithley_ blech. whoever thought it was a good idea to move all the %pre, %post, %preun, %postun sections in the glusterfs.spec(.in)?
12:50 ndevos uh, didn't you do that?
12:53 Debloper joined #gluster-dev
12:59 EinstCrazy joined #gluster-dev
13:01 kkeithley_ no, I just went along with it.
13:02 jiffin ndevos++, kkeithley_++, Saravanakmr++, rafi++ hgowtham++
13:02 glusterbot jiffin: ndevos's karma is now 245
13:02 glusterbot jiffin: kkeithley_'s karma is now 6
13:02 glusterbot jiffin: Saravanakmr's karma is now 5
13:02 glusterbot jiffin: rafi's karma is now 46
13:02 glusterbot jiffin: hgowtham's karma is now 21
13:02 hgowtham jiffin++
13:02 glusterbot hgowtham: jiffin's karma is now 34
13:02 Saravanakmr jiffin++
13:02 glusterbot Saravanakmr: jiffin's karma is now 35
13:02 jiffin kkeithley++
13:02 glusterbot jiffin: kkeithley's karma is now 114
13:02 ndevos jiffin++
13:02 glusterbot ndevos: jiffin's karma is now 36
13:03 post-factum jiffin++
13:03 glusterbot post-factum: jiffin's karma is now 37
13:05 kkeithley_ (IIRC it was Harsha, not that I'd try to "fix the blame")
13:21 mchangir joined #gluster-dev
13:33 post-factum rastar: here?
13:41 kkeithley_ http://review.gluster.org/#/c/13919/3
13:48 josferna joined #gluster-dev
13:48 penguinRaider_ joined #gluster-dev
13:56 spalai left #gluster-dev
14:02 josferna joined #gluster-dev
14:06 pkalever left #gluster-dev
14:30 kkeithley_ ndevos++
14:30 glusterbot kkeithley_: ndevos's karma is now 246
14:30 kkeithley_ ndevos: what was the magic?
14:30 shaunm joined #gluster-dev
14:33 gem joined #gluster-dev
14:37 post-factum https://github.com/gluster/samba-glusterfs/blob/master/src/vfs_glusterfs.c#L182
14:37 post-factum does this work only within 1 connection but not for different users/connections?
14:38 post-factum rastar: ^^ ?
14:41 anoopcs post-factum, Why are you looking at samba-glusterfs?
14:42 post-factum anoopcs: i was wondering whether i could replace 3 fuse mountpoints with vfs_glusterfs
14:42 post-factum anoopcs: it turned out vfs_glusterfs eats LOTS of memory
14:42 anoopcs post-factum, Check directly inside Samba source for latest updates.
14:42 post-factum anoopcs: it is the same there
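
For context, replacing a FUSE mountpoint with vfs_glusterfs means exporting the share through libgfapi, roughly like the smb.conf sketch below; the share name, volume name, volfile server and log path are all hypothetical:

    [docs]
        vfs objects = glusterfs
        path = /
        glusterfs:volume = myvol
        glusterfs:volfile_server = localhost
        glusterfs:logfile = /var/log/samba/glusterfs-myvol.log
        read only = no

The connection cache in the vfs_glusterfs.c code linked above appears to be per smbd process, and Samba forks one smbd per client, so each client still ends up with its own libgfapi instance; that is the per-connection footprint discussed next.
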
14:43 rastar joined #gluster-dev
14:44 * kkeithley_ doesn't like that vfs_glusterfs uses glfs_ prefixes.
14:44 anoopcs post-factum, It may be the same but it's better to track it from the samba source.
14:44 * post-factum does not like that vfs_glusterfs does not reuse existing connections to gluster server
14:47 anoopcs post-factum, I think it reuses them, if the connection request comes from the same volume. Or am I missing something?
14:47 anoopcs post-factum, Can you explain the scenario?
14:47 post-factum as far as i understand, samba creates a separate process for each user connection
14:47 anoopcs post-factum, Yes. you are right.
14:48 anoopcs one smbd per client machine
14:48 post-factum if the volume is reused by the same user within 1 connection (iow, within 1 thread), it is ok
14:48 post-factum if separate users use the same volume, each user will have its own connection
14:48 post-factum if i have 200 users, there will be 200 connections to gluster server, and that takes a lot of RAM
14:49 * ndevos wonders how other Samba users handle that
14:50 ndevos 200 users does not sound very much...
14:50 post-factum have you heard about large deploymenets?
14:50 sakshi joined #gluster-dev
14:50 post-factum *deployments
14:50 ndevos I'm not following the Samba part much, but I guess anoopcs or others in his team should know about them
14:51 post-factum with fuse mountpoints, my samba setup eats ~512M of RAM
14:52 post-factum with vfs_glusterfs 2G were exhausted at the beginning of workday
14:52 post-factum +512M of swap
14:52 post-factum :)
14:53 post-factum i see each smbd process eating ~100M of RAM, it is not that much. but every connection eats 100M, and in total that adds up to a lot
14:53 post-factum so, vfs_glusterfs is just unusable for deployments that differ from home setups
14:54 anoopcs ira, ^^
14:54 post-factum am I missing something? correct me if I do stupid things
14:55 post-factum (fuse mountpoint works just fine, but if there is a way to avoid using it, i'd rather stick to it)
14:55 ira Where are you getting the 100MB number?
14:56 post-factum htop, RES column
14:56 post-factum here is VM memory consumption chart: http://goo.gl/lnVNpK
14:57 post-factum umm. this one: http://goo.gl/OXtbA6
14:57 post-factum 00:30 — switched to vfs_glusterfs
14:57 post-factum 07:45 — ppl started to use samba
14:58 post-factum 09:30 — reverted back to fuse
14:59 ira RES includes shared pages :/
15:00 ira Yes, I'd expect vfs_glusterfs to take up more ram.
15:00 ira and resources.
15:00 post-factum anyway, according to htop, free -m and zabbix, ram was exhausted
15:00 rastar post-factum: your observation is correct
15:00 post-factum rastar: :(
15:00 rastar post-factum: glusterfs (gluster client process) footprint is around 100M
15:00 post-factum rastar: any possibility to share gluster connections between smbd processes?
15:01 post-factum otherwise it looks like huge waste
15:01 misc hagarth: so, I found the setting for the gerrit http auth, but I need to restart gerrit
15:01 rastar post-factum: because smbd spawns a different process for each Windows client, even for the same volume we take up 100M x number of win-clients worth of RAM
15:01 misc and I forgot if I can do that or if that's gonna break stuff :/
15:01 ira Why is the client side stack taking up 100M?
15:02 rastar ira: we take up around 32M + 16M for pre-registered mempools
15:02 rastar ira: so it is somewhere around 60M at start, but with some IO we go up to 100M and stabilize
15:03 rastar post-factum: I don't know of any. The current architecture of smb vfs plugin does not allow that unless we have a gluster client daemon running somewhere
15:03 overclk joined #gluster-dev
15:03 rastar post-factum: how many Samba servers do you have?
15:04 post-factum rastar: it is 1 VM
15:04 ira rastar: Even if we did... we'd lose any advantage of trying to use vfs_glusterfs.
15:04 rastar post-factum: I have seen admins placing a limit of X on the max smbd processes in smb.conf
15:04 rastar post-factum: actually no,
15:04 post-factum rastar: we have limit of 256
15:05 post-factum now there are 56 smbd processes
15:05 rastar the ideal setup is to have smbd running on all nodes of the trusted storage pool, clustered using smbd, with a set of virtual IPs handed out in round-robin fashion to clients
15:05 post-factum it would be 5.6G of RAM with vfs_
15:05 rastar clustered using ctdb
15:06 post-factum 56 clients: 5.6G of RAM vs 457M (using FUSE)
15:06 rastar then if you place a limit of say 40 processes per node and have a 4 node setup you can still handle 160 clients
15:06 rastar :(
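
The per-node cap rastar mentions is a single setting in the [global] section of smb.conf; a sketch, with the figure of 40 taken from the example above:

    [global]
        # cap the number of concurrent smbd processes;
        # connection attempts beyond this limit fail
        max smbd processes = 40
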
15:07 rastar post-factum: I get your point, if all nodes of gluster cluster are VMs on same hypervisor, we are still wasting RAM
15:07 post-factum rastar: no
15:07 hagarth misc: ok, when can we schedule a downtime?
15:07 post-factum rastar: we use 2 hardware nodes for gluster replica
15:07 post-factum rastar: and 1 VM for samba
15:07 post-factum with FUSE VM eats 512M
15:08 post-factum with vfs_ it would take more than 6G
15:08 post-factum that is the point
15:09 misc hagarth: depends, that's just a gerrit restart, what would be the impact?
15:09 rastar post-factum: :( yes..checking one more time if there is any workaround..
15:10 post-factum rastar: either we should have a separate smbd-like process to proxy all connections to the gluster cluster (maintaining a common pool)
15:10 post-factum rastar: or it is impossible with current samba architecture
15:10 ira joined #gluster-dev
15:11 rastar post-factum: yes it does not seem possible with samba architecture
15:11 ira Sorry, I got dropped.
15:11 ira Pardon my asking, why is this VM limited to 2G?
15:12 post-factum ira: because I do not want to give 20G to samba VM just to get rid of FUSE :)
15:14 ira post-factum: You have 2000 users, for 1 VM?
15:14 post-factum 200
15:14 post-factum not that much for mostly idling connections
15:14 post-factum but even idle connection consumes ram
15:15 rastar post-factum: I am just thinking aloud here. What if you placed a proxy in between the win clients and the samba server and used it to do port forwarding in such a way that they all come from the same IP (that of the proxy)?
15:15 ira rastar: Won't matter... it is per connection.
15:16 post-factum rastar: i thought about cifs proxy, but haven't found any solution
15:16 rastar ira: yes, I always mistake it to be per IP.
15:16 post-factum rastar: and yes, it is about connections
15:17 rastar post-factum: scratch that idea. what is your workload? The only immediate solution I can think of is reducing the glusterfs client footprint based on your workload
15:18 post-factum rastar: uploading small videos, reading small sound files, storing documents, uploading/downloading large iso files occasionally, uploading jpg files etc
15:18 ira post-factum: You aren't doing anything with clustering here, are you?
15:18 ira (clustered Samba.)
15:18 post-factum nope, not yet, but considering that for HA
15:19 ira As long as you avoid that, just use FUSE.  It won't do as well for streaming workloads, but you won't hit the major issues we have until you do clustering.
15:19 post-factum for sure, I can find 20G for samba, but that is not what i want, as you understand
15:19 * ira nods.
15:20 post-factum ira: i won't do clustering for spreading the load, just for failover
15:20 ira If you bring in CTDB, you'll need the backend.
15:20 ira Otherwise, I suspect, it may work fine.
15:20 ira I'm not recommending this... but... it is what it is.
15:21 rastar post-factum: If perf optimization for re-reading same blocks is not so important you can disable io-cache. That will save 32M per connection.
15:21 post-factum rastar: :(
15:21 post-factum rastar: that still does not solve the basic issue of sharing the connections
15:22 post-factum ok, rastar, ira, i've got your ideasm but see no solution :)
15:22 post-factum thanks, anyway!
15:22 post-factum *ideas,
15:22 rastar post-factum: :)
15:22 post-factum i'm fine with fuse
15:22 ira post-factum: I think you'd do well to disable io-cache with samba anyway.
15:22 ira And read ahead/write behind.
15:23 post-factum ira: ok, will try that just to get bare numbers
15:24 post-factum ira: i doubt it will lower memory consumption to acceptable level
15:24 ira Well, the other question is what the 64MB that was mentioned actually is.
15:24 ira I doubt it will match FUSE's consumption.
15:24 ira But it should do better...
15:24 post-factum ira: that is what i want :D
15:25 post-factum had to leave the office
15:25 ira post-factum: I wouldn't expect it. The question is, can we get the number to something reasonable ;)
15:25 post-factum feel free to contact me here in several hours
15:25 post-factum again, thanks rastar++ ira++
15:25 glusterbot post-factum: rastar's karma is now 34
15:25 glusterbot post-factum: ira's karma is now 4
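
For reference, the client-side caches rastar and ira suggest disabling are ordinary per-volume options; a sketch, where myvol stands in for the real volume name and the actual memory saving depends on the workload:

    # turn off the client-side caching translators for this volume
    gluster volume set myvol performance.io-cache off
    gluster volume set myvol performance.read-ahead off
    gluster volume set myvol performance.write-behind off

Disabling write-behind in particular can noticeably slow writes, so it is worth re-measuring both memory use and throughput afterwards.
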
15:29 nbalacha joined #gluster-dev
15:31 wushudoin joined #gluster-dev
15:32 pkalever joined #gluster-dev
15:47 ndevos kkeithley_: care to add your +1 on http://review.gluster.org/14035 again?
15:47 ndevos rjoseph: and are you ok with http://review.gluster.org/14035 too? the svc_* rename to gf_svc_*
15:51 atinm joined #gluster-dev
16:06 Manikandan joined #gluster-dev
16:07 overclk joined #gluster-dev
16:18 atinm rastar, have we changed anything recently in our tests so that cleanup gets called automatically after every test?
16:18 atinm rastar, somehow even if I comment out the cleanup at the end, the test wipes out everything
16:51 jiffin joined #gluster-dev
16:51 rafi joined #gluster-dev
17:07 rafi joined #gluster-dev
17:31 rafi1 joined #gluster-dev
17:44 rraja joined #gluster-dev
17:47 spalai joined #gluster-dev
17:52 spalai1 joined #gluster-dev
18:39 jiffin joined #gluster-dev
18:41 rastar joined #gluster-dev
18:44 jiffin joined #gluster-dev
18:47 jiffin joined #gluster-dev
18:52 pkalever left #gluster-dev
18:54 penguinRaider_ joined #gluster-dev
19:16 sakshi joined #gluster-dev
19:27 luizcpg joined #gluster-dev
23:18 luizcpg joined #gluster-dev
