
IRC log for #gluster-dev, 2017-02-21


All times shown according to UTC.

Time Nick Message
00:07 atm joined #gluster-dev
00:21 obnox vbellur: that was not precisely what I meant... as it is an absolute goal without reference to e.g. the memory available in a node
00:21 obnox vbellur: what I meant was more along the lines of this:
00:22 obnox vbellur: currently we have a 1:1 correspondence between bricks as part of a volume and brick daemon processes, each of which has a certain memory footprint, adding up to a certain memory requirement for a given volume
00:23 obnox vbellur: e.g. assuming a memory footprint of 300MB for a brick server, we end up requiring a total of M * N * 300 MB of RAM for an MxN volume (across the cluster, not per node)
00:24 obnox e.g. for a 3xN volume, roughly N GB of RAM (across the nodes)
00:25 obnox e.g. the rough memory requirement for 100 3x1 volumes in a 3-node cluster is about 30GB per node
00:25 obnox (not counting snapshot and quota requirements)
00:26 obnox vbellur: so the actual question was: by how much can brick mux be expected to cut this memory requirement
00:27 obnox vbellur: e.g. is it as simple as one brick server will be serving a number X of bricks, hence it will be 1/X times the original requirement? And if so, what would X be?
00:35 obnox ok... I read in a mail from Jeff on Nov 04 that he expects to be able to host 1000 bricks on a 32GB machine this way. That would be a factor of 10, i.e. we would be able to host 1000 instead of 100 3x1 volumes on a 3-node cluster with 32GB RAM on the nodes.
00:35 obnox (ignoring the inability of glusterd to handle that, for a moment...)
00:35 obnox and with that ... /me goes to bed ;-)
00:36 obnox any further details / pointers are very welcome!
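
The back-of-envelope arithmetic above, as a minimal Python sketch; the 300 MB per-brick footprint and the multiplexing factor are assumed inputs taken from the discussion, not measured values:

    # Estimate brick-daemon RAM for replica x distribute (MxN) volumes.
    BRICK_MB = 300  # assumed per-brick-daemon footprint, from the discussion

    def cluster_ram_gb(replica, distribute, volumes=1, mux_factor=1):
        """Total RAM across the cluster; mux_factor > 1 models brick
        multiplexing packing several bricks into one process footprint."""
        bricks = replica * distribute * volumes
        return bricks * BRICK_MB / mux_factor / 1024.0

    # 100 3x1 volumes on a 3-node cluster, one daemon per brick:
    print(cluster_ram_gb(3, 1, volumes=100) / 3)   # ~29.3 GB per node

    # The same volumes with a hypothetical mux factor of 10:
    print(cluster_ram_gb(3, 1, volumes=100, mux_factor=10) / 3)  # ~2.9 GB per node
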
01:03 atm joined #gluster-dev
01:22 nishanth joined #gluster-dev
02:21 nigelb Gerrit will be down for 5 mins as I get replication working for gluster-block
02:22 shyam Ah :)
02:36 aravindavk joined #gluster-dev
03:09 atm joined #gluster-dev
03:30 vimal joined #gluster-dev
03:43 atinm joined #gluster-dev
03:56 Humble joined #gluster-dev
03:57 magrawal joined #gluster-dev
04:02 gyadav joined #gluster-dev
04:03 nishanth joined #gluster-dev
04:05 itisravi joined #gluster-dev
04:29 atm joined #gluster-dev
04:30 kdhananjay joined #gluster-dev
04:32 hgowtham joined #gluster-dev
04:41 vimal joined #gluster-dev
04:50 jiffin joined #gluster-dev
04:55 ndarshan joined #gluster-dev
04:56 nbalacha joined #gluster-dev
04:56 jiffin joined #gluster-dev
05:07 rafi joined #gluster-dev
05:08 sanoj joined #gluster-dev
05:08 skumar joined #gluster-dev
05:14 msvbhat joined #gluster-dev
05:15 sanoj joined #gluster-dev
05:16 karthik_us joined #gluster-dev
05:18 nbalacha joined #gluster-dev
05:29 Shu6h3ndu joined #gluster-dev
05:29 ankitr joined #gluster-dev
05:30 prasanth joined #gluster-dev
05:30 itisravi joined #gluster-dev
05:35 aravindavk joined #gluster-dev
05:36 ppai joined #gluster-dev
05:36 nbalacha joined #gluster-dev
05:39 apandey joined #gluster-dev
05:40 ndarshan joined #gluster-dev
05:42 skoduri joined #gluster-dev
05:44 msvbhat joined #gluster-dev
05:46 riyas joined #gluster-dev
05:47 karthik_us joined #gluster-dev
05:54 Humble joined #gluster-dev
05:56 Saravanakmr joined #gluster-dev
06:03 asengupt joined #gluster-dev
06:03 kotreshhr joined #gluster-dev
06:03 ndarshan joined #gluster-dev
06:06 rastar joined #gluster-dev
06:12 pkalever joined #gluster-dev
06:18 pkalever nigelb++ Thanks for bootstrapping gerrit for gluster-block
06:18 glusterbot pkalever: nigelb's karma is now 45
06:20 rjoseph joined #gluster-dev
06:27 itisravi_ joined #gluster-dev
06:35 pranithk1 joined #gluster-dev
06:36 apandey joined #gluster-dev
06:41 karthik_us joined #gluster-dev
07:09 Shu6h3ndu_ joined #gluster-dev
07:15 msvbhat joined #gluster-dev
07:23 rastar joined #gluster-dev
07:24 Shu6h3ndu joined #gluster-dev
07:25 kshlm ppai, you wanted me to merge this, https://github.com/gluster/glusterd2/pull/242 ?
07:29 k4n0 joined #gluster-dev
07:38 ppai joined #gluster-dev
07:56 hgowtham joined #gluster-dev
08:01 h4xr joined #gluster-dev
08:04 ashiq joined #gluster-dev
08:11 rastar joined #gluster-dev
08:14 sanoj joined #gluster-dev
08:22 ndevos anoopcs: care to +1 https://review.gluster.org/16649 if you do not have any other concerns? I can then merge and backport it to 3.10
08:44 anoopcs ndevos, Ah.. Somehow I missed your update.
08:44 chawlanikhil24 joined #gluster-dev
08:45 chawlanikhil24 Hi, I was setting up gluster using AWS servers
08:46 chawlanikhil24 But I'm facing a lot of issues, like the probe endpoint is not connected
08:46 chawlanikhil24 although I installed gluster over RHEL on the server
08:47 chawlanikhil24 karthik_us, ppai, kshlm, anyone?
08:48 ppai chawlanikhil24, Hi. Could you provide logs and the output of the operation that you performed?
08:48 anoopcs chawlanikhil24, Would you mind moving this conversation to #gluster which is more appropriate?
08:50 BatS9 joined #gluster-dev
08:52 itisravi_ joined #gluster-dev
08:57 anoopcs ndevos, Hm... https://fedoraproject.org/wiki/Packaging:Python#The_.25python_provide_macro says that %python_provide will add an Obsoletes: part too if called with a python2- argument.
08:57 nishanth joined #gluster-dev
08:57 hgowtham joined #gluster-dev
08:57 ndevos anoopcs: hmm, that does not seem to be happening?
08:59 anoopcs Is it?
09:00 anoopcs ndevos, Anyway we need to specify the version, which may not be possible with %python_provide
09:01 ndevos anoopcs: there definitely wasn't an Obsoletes in the CentOS-7 RPMs
09:01 anoopcs Ok.
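
A minimal spec-file sketch of the two approaches being compared here; the python2-gluster / python-gluster package names are hypothetical stand-ins:

    # Macro form: per the Fedora wiki this should emit Provides: and, for a
    # python2-* argument, an Obsoletes: too -- but apparently not with the
    # CentOS 7 macros, as observed above.
    %{?python_provide:%python_provide python2-gluster}

    # Explicit, versioned form that works regardless of the macro version:
    Provides:  python-gluster = %{version}-%{release}
    Obsoletes: python-gluster < %{version}-%{release}
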
09:12 ndevos ppai++ anoopcs++ thankd!
09:12 glusterbot ndevos: ppai's karma is now 20
09:12 glusterbot ndevos: anoopcs's karma is now 43
09:12 ndevos *thanks
09:15 pranithk1 joined #gluster-dev
09:18 mchangir joined #gluster-dev
09:23 ShwethaHP joined #gluster-dev
09:26 Shu6h3ndu joined #gluster-dev
09:31 rjoseph joined #gluster-dev
09:35 itisravi_ joined #gluster-dev
09:54 pranithk1 xavih: hey, merged the blocker patch for 3.10. If https://review.gluster.org/#/c/16697 passes regressions before you sign off for the day, merge it. I already gave +2
09:54 xavih pranithk1: yes, I've already seen it. I was doing the backport when you uploaded it :)
09:55 xavih pranithk1: I don't have merge rights on 3.10 branch
09:55 pranithk1 xavih: ah! I see. then let them merge it.
09:55 pranithk1 xavih: One of these days I will ping you and find out about the whole mmap and SELinux issues which led to all these problems...
09:56 xavih pranithk1: ok
09:56 pranithk1 xavih: I think today and tomorrow Ashish and I will try to finish off the transaction and self-heal issues we found. Will you be available?
09:57 xavih pranithk1: yes, most probably I'll be here
09:58 pranithk1 xavih: great!
10:02 apandey joined #gluster-dev
10:02 skoduri_ joined #gluster-dev
10:08 hgowtham joined #gluster-dev
10:09 aravindavk joined #gluster-dev
10:12 apandey joined #gluster-dev
10:17 pranithk1 joined #gluster-dev
10:22 msvbhat joined #gluster-dev
10:26 kotreshhr joined #gluster-dev
10:27 kotreshhr1 joined #gluster-dev
10:30 msvbhat joined #gluster-dev
10:41 rjoseph joined #gluster-dev
10:51 pkalever nigelb: I have noticed that Reviewed-by and Reviewed-on are absent after merge. Can we do something about it?
10:57 humblec joined #gluster-dev
11:01 nigelb pkalever: can you add a comment on the bug? I'm actually away today. I'll change it tomorrow.
11:01 pkalever nigelb: sure!
11:02 kotreshhr1 left #gluster-dev
11:33 Saravanakmr #REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC ( start ~in 27 minutes) in #gluster-meeting
11:40 msvbhat joined #gluster-dev
12:03 rastar joined #gluster-dev
12:16 kkeithley Saravanakmr++
12:16 glusterbot kkeithley: Saravanakmr's karma is now 32
12:17 Saravanakmr kkeithley++ rafi++ jiffin++ ndevos++
12:17 glusterbot Saravanakmr: kkeithley's karma is now 164
12:17 glusterbot Saravanakmr: rafi's karma is now 59
12:17 glusterbot Saravanakmr: jiffin's karma is now 62
12:17 glusterbot Saravanakmr: ndevos's karma is now 336
12:20 rjoseph joined #gluster-dev
12:25 ndevos Saravanakmr++
12:26 glusterbot ndevos: Saravanakmr's karma is now 33
12:26 msvbhat joined #gluster-dev
12:34 gyadav joined #gluster-dev
12:37 ppai joined #gluster-dev
12:49 jiffin1 joined #gluster-dev
13:04 bfoster joined #gluster-dev
13:11 bfoster joined #gluster-dev
13:16 humblec joined #gluster-dev
13:18 skoduri joined #gluster-dev
13:19 karthik_us joined #gluster-dev
13:20 msvbhat joined #gluster-dev
13:32 rraja joined #gluster-dev
13:36 karthik_us joined #gluster-dev
13:37 ira joined #gluster-dev
13:40 * ndevos hits his head against the brick multiplexing change, gfapi won't load .vol files anymore?
13:40 shyam joined #gluster-dev
13:41 * misc puts a cushion on the wall to protect ndevos' head
13:41 anoopcs :-)
13:41 skoduri :)
13:48 atinm joined #gluster-dev
14:08 jiffin joined #gluster-dev
14:13 rafi joined #gluster-dev
14:14 rafi joined #gluster-dev
14:18 lpabon joined #gluster-dev
14:22 gem joined #gluster-dev
14:25 Humble joined #gluster-dev
14:44 ankitr joined #gluster-dev
14:51 nbalacha joined #gluster-dev
14:56 aravindavk joined #gluster-dev
15:02 rjoseph joined #gluster-dev
15:04 rraja joined #gluster-dev
15:10 pkalever left #gluster-dev
15:24 atinm joined #gluster-dev
15:27 aravindavk joined #gluster-dev
15:29 gyadav joined #gluster-dev
16:04 msvbhat joined #gluster-dev
16:06 wushudoin joined #gluster-dev
16:07 wushudoin joined #gluster-dev
16:12 kkeithley why did my reply-all, which included gluster-dev@gluster.org, get bounced by gluster.org? "User unknown"
16:17 kkeithley misc, nigelb: ^^^
16:17 kkeithley misc: the coverity reports are big, so make sure the new machine has lots of disk space.
16:19 kkeithley NM. Because it's supposed to be gluster-devel@gluster.org. The email I replied-all to had it wrong.
16:27 msvbhat joined #gluster-dev
16:28 misc nope, I did it wrong first :)
16:35 * misc verifies the size
16:35 skoduri joined #gluster-dev
16:39 misc 21G
16:42 misc so 2.2G per branch for coverity
16:44 misc I guess that if we reduce coverity bugs to 0, we consume less :)
16:46 riyas joined #gluster-dev
17:00 jiffin joined #gluster-dev
17:03 shyam gyadav: Hi, looks like you put up the same fix for https://bugzilla.redhat.com/show_bug.cgi?id=1421724
17:03 glusterbot Bug 1421724: low, low, ---, gyadav, POST , glusterd log is flooded with stale disconnect rpc messages
17:04 shyam gyadav: See, https://review.gluster.org/#/c/16699/1
17:04 shyam gyadav: Maybe you want to score the one above as +2, as we are waiting on this to tag RC1 for 3.10
17:15 kkeithley misc: yeah, and pigs might fly someday too.
17:20 misc kkeithley: ok, so I found that we need a job to clean 3.9
17:21 kkeithley oh, oops
17:21 misc I am adding that to ansible
17:22 misc in fact, a script that detects the version would make sure we do not forget
17:30 shyam kkeithley: if we release 3.10 RC1, then do we need to rebuild the packages when we call that *the* 3.10 release? (from a package naming perspective)
17:32 vbellur obnox: noted your comments.. I don't think there has been a comprehensive study on the number of bricks vs. resources; the plan is to get there to provide appropriate guidance for deployments
17:32 gyadav_ joined #gluster-dev
17:46 obnox vbellur: ok thanks!
17:47 atinm joined #gluster-dev
18:32 vbellur joined #gluster-dev
18:33 kkeithley shyam: yes, but more because, e.g., in Launchpad and the openSUSE Build Service there isn't any way to rename the files.
18:34 kkeithley Also the CentOS pkgs in the Storage SIG.
18:35 kkeithley And for Fedora and Debian I will sign the 3.10 GA pkgs with a 3.10 gpg key (which I have not created yet). The RC0 pkgs were signed with the 3.9 gpg key.
18:37 kkeithley The number of pkgs to sign is small, and for CentOS, Ubuntu, and SuSE building is fire-and-forget, so rebuilding is not an onerous task.
18:38 kkeithley Also the tar file created by the Jenkins release has .../glusterfs-3.10.0rcX/... for RC releases, and the GA tar file will be .../glusterfs-3.10.0/....
18:40 kkeithley So I couldn't even just rebuild 3.10.0 from the 3.10.0rc1 tar file. Not without some minor hackery to the pkg files.
18:41 kkeithley So I'd just tag V3.10.0rc1 and release. Whenever you're ready, tag V3.10.0 and release. And don't worry about the package building.
18:41 kkeithley IMO
19:19 shyam joined #gluster-dev
19:39 shyam joined #gluster-dev
20:03 vbellur joined #gluster-dev
20:37 glustin joined #gluster-dev
21:37 shyam joined #gluster-dev
22:05 Acinonyx joined #gluster-dev
22:25 rastar joined #gluster-dev
23:37 nishanth joined #gluster-dev
