
IRC log for #gluster-dev, 2016-10-14


All times shown according to UTC.

Time Nick Message
01:01 hagarth joined #gluster-dev
01:30 luizcpg joined #gluster-dev
01:48 ilbot3 joined #gluster-dev
01:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:37 sankarshan joined #gluster-dev
02:42 luizcpg_ joined #gluster-dev
03:02 jiffin joined #gluster-dev
03:09 jiffin joined #gluster-dev
03:16 nbalacha joined #gluster-dev
03:25 magrawal joined #gluster-dev
03:39 mchangir|afk joined #gluster-dev
04:02 luizcpg_ joined #gluster-dev
04:11 shubhendu joined #gluster-dev
04:15 itisravi joined #gluster-dev
04:28 atinm joined #gluster-dev
04:45 ppai joined #gluster-dev
04:55 msvbhat joined #gluster-dev
04:59 jiffin joined #gluster-dev
05:06 rafi joined #gluster-dev
05:07 apandey joined #gluster-dev
05:12 ankitraj joined #gluster-dev
05:13 ndarshan joined #gluster-dev
05:19 hgowtham joined #gluster-dev
05:28 riyas joined #gluster-dev
05:29 skoduri joined #gluster-dev
05:42 kotreshhr joined #gluster-dev
05:49 asengupt joined #gluster-dev
05:49 rafi joined #gluster-dev
05:54 mchangir|afk joined #gluster-dev
06:00 Saravanakmr joined #gluster-dev
06:05 sanoj joined #gluster-dev
06:09 ramky joined #gluster-dev
06:24 gem joined #gluster-dev
06:37 apandey joined #gluster-dev
06:38 msvbhat joined #gluster-dev
06:39 kdhananjay joined #gluster-dev
06:40 Muthu joined #gluster-dev
06:44 devyani7_ joined #gluster-dev
07:02 rafi joined #gluster-dev
07:02 nishanth joined #gluster-dev
07:06 riyas joined #gluster-dev
07:09 rastar joined #gluster-dev
07:48 spalai joined #gluster-dev
08:23 George_ joined #gluster-dev
08:31 GeorgeLian joined #gluster-dev
08:31 rraja joined #gluster-dev
08:33 GeorgeLian Hello, is there any expert here for the write-behind and/or md-cache xlators?
08:35 GeorgeLian We have an issue with fstat when the write-behind feature is enabled
08:36 GeorgeLian the size returned by fstat seems not to include the cached data that write-behind has not yet committed to the server
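(A rough shell approximation of the behaviour GeorgeLian describes; a sketch only, since the actual issue concerns fstat on an open fd, and the mount point and file name here are illustrative:)

    # Keep an fd open so a close-time flush does not mask the write-behind cache.
    exec 3>/mnt/testvol/probe
    # Buffer 4 KiB through write-behind without closing the file.
    printf 'x%.0s' {1..4096} >&3
    # The reported size may lag the data sitting in the write-behind cache
    # until the xlator flushes it to the brick.
    stat --format='size=%s' /mnt/testvol/probe
    exec 3>&-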
08:37 jiffin GeorgeLian: Raghavendra G will be the right person to answer ur question
08:37 jiffin I guess he is not online atm
08:38 jiffin You should probably send a mail to gluster ML ccing him
08:38 GeorgeLian OK, thanks a lot
08:38 jiffin Raghavendra G <raghavendra@gluster.com>
08:40 wyklq joined #gluster-dev
08:47 pranithk1 joined #gluster-dev
08:55 wyklq left #gluster-dev
08:59 ramky joined #gluster-dev
09:00 ramky_ joined #gluster-dev
09:06 pranithk1 joined #gluster-dev
09:12 rraja joined #gluster-dev
09:45 jiffin1 joined #gluster-dev
09:45 skoduri joined #gluster-dev
09:50 gem joined #gluster-dev
09:50 atinm ndevos, can you please take a look at http://review.gluster.org/#/c/15631/ ?
09:56 gem_ joined #gluster-dev
10:08 shubhendu joined #gluster-dev
10:11 ppai joined #gluster-dev
10:16 bfoster joined #gluster-dev
10:22 jiffin joined #gluster-dev
10:23 bfoster joined #gluster-dev
10:43 ndevos atinm: that bz depends-on/blocks structure is a mess! downstream bugs can *NEVER* block upstream community ones :-/
11:13 nishanth joined #gluster-dev
11:13 pranithk1 joined #gluster-dev
11:31 msvbhat joined #gluster-dev
11:32 pranithk1 joined #gluster-dev
11:37 devyani7_ joined #gluster-dev
12:00 ndevos hmm, "full stack" developers only need to know php, javascript and js? (isn't js javascript?!)
12:00 mchangir|afk joined #gluster-dev
12:15 kotreshhr left #gluster-dev
12:15 nishanth joined #gluster-dev
12:26 atinm ndevos, as I said earlier, our cloning practice needs to be changed
12:30 pranithk1 joined #gluster-dev
12:30 pranithk1 joined #gluster-dev
12:31 pranithk1 joined #gluster-dev
12:34 luizcpg joined #gluster-dev
12:47 ndevos atinm: I think people should just try and follow the reporting guidelines :-/ no idea how we can make people more aware
12:47 ndevos maybe patches that do not have correct bzs should not get merged
12:51 mchangir|afk joined #gluster-dev
12:51 akanksha_ joined #gluster-dev
13:09 nbalacha joined #gluster-dev
13:18 shyam joined #gluster-dev
13:27 shaunm joined #gluster-dev
13:31 lpabon joined #gluster-dev
13:37 mchangir|afk joined #gluster-dev
13:38 itisravi joined #gluster-dev
13:42 spalai left #gluster-dev
14:04 wushudoin joined #gluster-dev
14:11 anrao joined #gluster-dev
14:15 anrao joined #gluster-dev
14:16 ankitraj joined #gluster-dev
14:21 jiffin joined #gluster-dev
14:24 shyam joined #gluster-dev
14:34 skoduri joined #gluster-dev
14:35 anrao joined #gluster-dev
14:39 mchangir|afk joined #gluster-dev
14:53 anrao_ joined #gluster-dev
15:00 gem_ joined #gluster-dev
15:09 anrao joined #gluster-dev
15:09 shyam joined #gluster-dev
15:50 gem joined #gluster-dev
15:52 nbalacha joined #gluster-dev
16:10 gem joined #gluster-dev
16:13 shubhendu joined #gluster-dev
17:13 mchangir|afk joined #gluster-dev
17:27 akanksha_ Hello, I need a bit of help setting up a sample GlusterFS installation. I am new here. I was trying to set up a two-node cluster using two Fedora 22 VMs in virtual manager. However, I was not able to work out how much memory the two VMs will need.
17:27 akanksha_ I am following this tutorial
17:31 akanksha_ http://gluster.readthedocs.io/en/latest/Quick-Start-Guide/Quickstart/
17:35 shyam akanksha_: Well you need enough for the VMs to run (say a GB each, I am pulling this number at random), but do you really need VMs to play with Gluster in what you are attempting to do?
17:36 akanksha_ I am supposed to run some benchmark tests
17:36 akanksha_ https://github.com/gluster/gbench/tree/master/bench-tests/bt-0000-0001
17:36 akanksha_ This one.
17:36 akanksha_ shyam:
17:37 shyam akanksha_: Ah! ok... I would keep it simpler and create a gluster volume on the host, rather than worrying about VMs for that
17:37 akanksha_ Oh alright. Can you point me to some resources for that?
17:37 shyam akanksha_: I know the quickstart says to have at least 2 nodes
17:37 shyam That is (well) not a must though
17:38 shyam akanksha_: Do you have gluster sources on your laptop?
17:38 akanksha_ So I don't need the nodes. I can simply set up GlusterFS on my host?
17:38 shyam yup
17:38 akanksha_ Will that muck things up?
17:38 akanksha_ I am sorry for asking a basic question. I am just new here
17:39 shyam No, it will not do much mucking... Do you have the gluster sources cloned? Cos' then I can paste a test script that can be run to create a gluster volume on your laptop (or it is (should be) as easy as about 4 lines of CLIs)
17:40 anrao joined #gluster-dev
17:40 akanksha_ shyam: I don't have the gluster sources yet. Can I just use this? https://launchpad.net/~gluster
17:40 anrao_ joined #gluster-dev
17:40 akanksha_ My host is ubuntu 14.04
17:41 akanksha_ If you could paste the script that will be great too :D
17:42 shyam akanksha_: Ok so you get the packages installed from the repo above (sorry if these are not Ubuntu terms, but...) that should be fine
17:42 shyam kkeithley: ^^^ if you are around, the above should be ok right?
17:43 luizcpg joined #gluster-dev
17:45 shyam akanksha_: once you do that... these steps should get you a local gluster volume and a FUSE (client) mount of the same...
17:45 * shyam is working on the steps
17:48 pcaruana joined #gluster-dev
17:52 shyam akanksha_: https://paste.fedoraproject.org/450233/14764675/ the steps
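(The paste link is ephemeral, so here is a hedged sketch of the kind of steps it would contain for a single-node volume plus a FUSE mount of it; the brick path and volume name are illustrative, and "force" is needed because the brick sits on the root partition:)

    # Create a brick directory, a single-brick volume, and a FUSE mount of it.
    mkdir -p /srv/gluster/brick1
    gluster volume create testvol $(hostname):/srv/gluster/brick1 force
    gluster volume start testvol
    mkdir -p /mnt/testvol
    mount -t glusterfs $(hostname):/testvol /mnt/testvol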
18:13 akanksha_ Thanks a lot shyam
18:13 akanksha_ :D
18:27 dlambrig_ joined #gluster-dev
18:32 raghu joined #gluster-dev
18:32 raghu pranithk1: there?
18:32 pranithk1 raghu: yes
18:32 pranithk1 raghu: Are you johnny/Du?
18:33 raghu pranithk1: johnny
18:33 raghu pranithk1: I have a question about the xattrop handles present in the <brick>/.glusterfs/indices/xattrop directory
18:33 raghu They are newly created every time a brick restarts. Is that correct?
18:34 spalai joined #gluster-dev
18:34 spalai left #gluster-dev
18:39 pranithk1 raghu: You mean the ones with 'xattrop-'
18:39 pranithk1 raghu: yes
18:41 shyam pranithk1: What are those files? I am curious, as I saw them as well in some testing that I was performing.
18:41 pranithk1 shyam: They are base files to which gfid-string files get hardlinked to
18:42 pranithk1 shyam: The stale ones get deleted eventually...
18:42 shyam oh, ok, so the other GFID files in there are hard-linked to this xattrop-fileK file, not to the actual GFID in .glusterfs space, did I get that right?
18:43 shyam within there: meaning inside <brick>/.glusterfs/indices/xattrop
18:43 akanksha__ joined #gluster-dev
18:43 pranithk1 shyam: yeah yeah, the thinking is that you can essentially place the indices directory on a different disk too.
18:43 shyam pranithk1: cool!
18:43 shyam So nothing there *links* to the real space, understood (now back to that experiment and seeing it myself :) )
18:43 pranithk1 shyam: Although it is not exposed as an option so far because no one is keen on doing it, at least as of now...
18:43 pranithk1 shyam: yeah
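(To see the layout pranithk1 describes, one can list the index directory on a brick; the brick path is illustrative:)

    # Gfid-named entries share an inode number (hard links) with the current
    # xattrop-<uuid> base file inside the brick's index directory.
    ls -li /srv/gluster/brick1/.glusterfs/indices/xattrop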
18:44 pranithk1 shyam: Do you know more about the outreachy stuff? Seems like soumya is from my college :-)
18:44 pranithk1 shyam: What is the goal of this?
18:45 pranithk1 shyam: I saw the landing page...
18:45 pranithk1 shyam: it is to increase participation?
18:46 shyam pranithk1: ok... well, that would be one goal
18:46 shyam It is like  GSoC
18:46 gem joined #gluster-dev
18:46 overclk_ joined #gluster-dev
18:46 shyam So simply put consider it to be 'like a' GSoC for all purposes...
18:48 owlbot` joined #gluster-dev
18:49 pranithk1 shyam: Okay...
18:51 raghu` joined #gluster-dev
18:51 raghu` pranithk1: Did you answer it? I might have missed it. I got disconnected
18:53 pranithk1 raghu`: Yes, copy pasting it would be a problem... shyam also asked more questions. May be you should check out https://botbot.me/freenode/gluster-dev/ ?
18:55 hagarth pranithk1: I would recommend generating a different volfile for gfapi in 3.9
18:55 hagarth pranithk1: with client-io-threads, the number of threads in an application can be mind boggling with gfapi
18:55 raghu` pranithk1: Yes. Say there are gfids that need healing which are present in the xattrop directory and hardlinked to the xattrop handle. Now if the brick restarts and we have a new xattrop handle, what happens?
18:56 pranithk1 raghu`: New gfids will be linked to newer handle
18:56 pranithk1 raghu`: Older ones will stay as is. When an xattrop-fileK handle is not the current handle and has only one link, it is deleted.
18:57 raghu` pranithk1: ok. So older xattrop handle linked gfids will be healed
18:57 pranithk1 hagarth: thinking
18:57 raghu` pranithk1: is that correct? Or can it cause some problems for healing?
18:57 pranithk1 raghu`: It doesn't matter what handle gfids are linked to
18:57 pranithk1 raghu`: It is just an implementation detail
18:58 raghu` pranithk1: Yes. I saw the code, where index does a readdir of the xattrop directory and returns the gfids to self-heal daemon
18:58 raghu` pranithk1: But wanted to clarify with you
18:58 pranithk1 raghu`: One of the reasons for the refresh is that older filesystems had limits on the number of hardlinks; a restart of a brick is a good time for a refresh
18:59 raghu` pranithk1: Hmm, OK. So in no way multiple xattrop handles being present should cause a problem to the self-healing process. Is it right?
18:59 tdasilva joined #gluster-dev
18:59 pranithk1 raghu`: Yes sir
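(Per pranithk1's cleanup rule above, a stale base file is one that is no longer current and has a link count of 1; a quick way to inspect the counts, again with an illustrative brick path:)

    # %h prints the hard-link count; a count of 1 on a non-current base file
    # marks it as eligible for deletion.
    stat -c '%h %n' /srv/gluster/brick1/.glusterfs/indices/xattrop/xattrop-*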
19:00 hagarth pranithk1: client-io-threads would be bad for gnfs and cause the same kind of contentions that Jeff is dabbling with
19:00 pranithk1 hagarth: gnfs doesn't have io-threads enabled by default.
19:01 pranithk1 hagarth: only fuse/gfapi
19:02 hagarth pranithk1: ok..even for fuse unless there are as many cores, we may cause thrashing
19:04 pranithk1 hagarth: It starts with just 1 thread. It increases only if the workload is that demanding
19:06 hagarth pranithk1: right, idle threads do not thrash .. mostly active threads do
19:09 pranithk1 hagarth: We can set a lower maximum for the number of threads if that helps, but it is time we start enabling this by default.
19:12 hagarth pranithk1: hmm, what necessitates us to enable it by default for all use cases?
19:13 pranithk1 hagarth: Better performance; the fuse I/O thread is becoming a bottleneck
19:15 hagarth pranithk1: more threads != better performance if the hardware cannot support it .. moreover we might incur a performance penalty as Jeff is observing with increased io-threads
19:18 raghu` hagarth: pranithk1: Can we just set a limit of, say, 2 or 4 threads? Let's not spawn any more threads than that. Would that help?
19:18 pranithk1 hagarth: I agree. So far I have seen two perf numbers where the performance was better. https://bugzilla.redhat.com/show_bug.cgi?id=1349953, I forget the other one.
19:18 glusterbot Bug 1349953: unspecified, unspecified, ---, pkarampu, ASSIGNED , thread CPU saturation limiting throughput on write workloads
19:19 hagarth raghu`: we might not get performance benefits then
19:20 hagarth pranithk1: all we need is for the fuse thread to not be doing the encoding, right?
19:21 raghu` hagarth: pranithk1: What encoding?
19:21 pranithk1 hagarth: It helped replication too. EC had 3.5X and AFR had 1.5X perf improvement
19:21 pranithk1 raghu`: EC encoding.
19:24 hagarth pranithk1: have we measured the impact of enabling this with intensive workloads and cpu core limited hardware?
19:24 hchiramm_ joined #gluster-dev
19:26 pranithk1 hagarth: Could you give an example? How many cores?
19:29 hagarth pranithk1: maybe 2 or 4 cores with the kind of tests being run in 1349953?
19:29 pranithk1 hagarth: I think the number of threads that manoj set in the bz is 4 if I remember correctly
19:31 pranithk1 hagarth: oh, I think it was event-threads. I can speak to Manoj about it. I think he didn't catch up on mails yet.
19:31 hagarth pranithk1: right.. having one such comparison would be helpful
19:32 pranithk1 hagarth: I can ask him for that...
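(For reference, the knobs under discussion are per-volume options; a sketch assuming a volume named testvol and the option names from the 3.x volume-set table:)

    # Enable the client-side io-threads xlator and cap its thread pool,
    # along the lines raghu` suggests.
    gluster volume set testvol performance.client-io-threads on
    gluster volume set testvol performance.io-thread-count 4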
19:39 Acinonyx joined #gluster-dev
19:47 luizcpg joined #gluster-dev
20:27 raghu joined #gluster-dev
20:29 shaunm joined #gluster-dev
20:41 Acinonyx_ joined #gluster-dev
22:12 hchiramm_ joined #gluster-dev
23:44 gem joined #gluster-dev
