
IRC log for #gluster, 2015-09-12


All times shown according to UTC.

Time Nick Message
00:09 dlambrig joined #gluster
00:13 theron joined #gluster
00:20 dlambrig joined #gluster
00:32 chirino joined #gluster
00:57 ChrisHolcombe joined #gluster
01:02 m0zes joined #gluster
01:16 niknakpaddywak joined #gluster
01:32 badone joined #gluster
01:34 jcastill1 joined #gluster
01:39 jcastillo joined #gluster
01:39 dgandhi joined #gluster
01:45 Pupeno joined #gluster
01:50 rafi joined #gluster
01:50 hchiramm_home joined #gluster
01:59 haomaiwang joined #gluster
02:01 haomaiwang joined #gluster
02:15 m0zes joined #gluster
02:16 baojg joined #gluster
02:23 baojg joined #gluster
02:32 RedW joined #gluster
02:41 haomaiwa_ joined #gluster
02:42 atalur joined #gluster
03:01 haomaiwa_ joined #gluster
03:08 TheSeven joined #gluster
03:13 gem joined #gluster
03:20 atalur joined #gluster
03:42 David_Varghese hello, i have 6 vms and replicate to all 6. is it a bad practice to have the gluster client on all 6 vms? im using it to LB web traffic with haproxy
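
(This question got no reply in the log; for reference, a 6-way replica volume like the one described could be created as below. Hostnames and brick paths are illustrative, and replica 6 is unusually high, since every write goes to all six bricks.)

    # create a replica-6 volume across the six VMs (names hypothetical)
    gluster volume create webvol replica 6 \
        vm1:/bricks/web vm2:/bricks/web vm3:/bricks/web \
        vm4:/bricks/web vm5:/bricks/web vm6:/bricks/web
    gluster volume start webvol
    # each VM can then mount the volume locally for its web root
    mount -t glusterfs localhost:/webvol /var/www
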
03:50 nangthang joined #gluster
03:51 kshlm joined #gluster
03:53 skoduri joined #gluster
04:01 TheCthulhu joined #gluster
04:19 gem joined #gluster
04:25 kshlm joined #gluster
04:25 skoduri joined #gluster
04:51 kshlm joined #gluster
04:54 jiffin joined #gluster
04:54 beeradb joined #gluster
05:19 jiffin joined #gluster
05:49 baojg joined #gluster
05:49 64MADTQNI joined #gluster
06:01 haomaiwang joined #gluster
06:05 rafi joined #gluster
06:07 maveric_amitc_ joined #gluster
06:22 rafi joined #gluster
06:25 jcastill1 joined #gluster
06:30 jcastillo joined #gluster
06:42 rafi1 joined #gluster
07:01 haomaiwa_ joined #gluster
07:02 atalur joined #gluster
07:04 kshlm joined #gluster
07:08 leeyaa joined #gluster
07:08 leeyaa hello
07:08 glusterbot leeyaa: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
07:09 leeyaa which volume type offers best performance (not looking for redundancy)
07:09 leeyaa is it distributed ?
07:17 ppai joined #gluster
07:21 doekia joined #gluster
07:28 atalur joined #gluster
07:38 jiffin leeyaa: yeah it is distributed
07:42 leeyaa thanks jiffin
07:43 jiffin leeyaa: np
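
(A plain distributed volume, i.e. no replica count, hash-distributes whole files across bricks, so there is no redundancy. A minimal sketch with hypothetical hostnames and brick paths:)

    # two bricks, no redundancy: each file lands on exactly one brick
    gluster volume create fastvol server1:/bricks/b1 server2:/bricks/b2
    gluster volume start fastvol
    gluster volume info fastvol    # Type should show "Distribute"
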
08:01 haomaiwa_ joined #gluster
08:03 haomaiwa_ joined #gluster
08:04 leeyaa jiffin: and if you lose one brick you lose the data on that brick, is that correct ?
08:06 jiffin in case of distributed, yes
08:07 leeyaa jiffin: what happens if one file is split across bricks? or distributed does not work that way
08:07 jiffin leeyaa: in distributed, the entire file will be in a single brick
08:08 leeyaa jiffin: hm how come i got 2x the write throughput in one of my tests if the file is on a single disk? does it write to all disks and then move it or something ?
08:09 jiffin leeyaa: if u want to split a file across different bricks, you can use a shard volume
08:09 leeyaa jiffin: i am just trying to understand how distributed works
08:09 jiffin leeyaa: k
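
(Sharding, mentioned above, splits large files into fixed-size chunks that are distributed across bricks. In 3.7 it is enabled per volume; the option names are real, the volume name and block size are illustrative:)

    # split large files into 64MB chunks spread over the bricks
    gluster volume set fastvol features.shard on
    gluster volume set fastvol features.shard-block-size 64MB
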
08:10 jiffin leeyaa: if u are using the fuse client, then the file will be written to that brick only (glusterfs-server)
08:11 leeyaa jiffin: i understand. but why do you get faster performance for writes
08:11 jiffin leeyaa: maybe i didn't understand your context correctly
08:11 leeyaa well
08:12 jiffin leeyaa: are you getting better performance for writes only?
08:13 leeyaa jiffin: no reads and writes
08:13 leeyaa around 2x faster compared to a dedicated disk without gluster
08:14 leeyaa but thats just simple dd
08:15 leeyaa if it is writing to single disk i thought i should not be getting increased speeds. maybe im missing something
08:15 jiffin how are you accessing the gluster volume?
08:16 leeyaa locally, ive mounted it to the same server via fstab : vm-nfs1.bg.hm:/data /mnt/nfs glusterfs auto,rw 0 0
08:17 leeyaa it is exactly what i need since ill be using this volume for package building and i need it fast. just need to understand why it is that way ;p
08:17 jiffin correct me if i am wrong, u are comparing the write performance of a dedicated disk with a locally mounted gluster volume
08:18 leeyaa yes
08:18 leeyaa two nodes, two vms, gluster has a dedicated disk for each brick. if i test the same drives outside gluster (e.g. sda) i get 2 times slower performance
08:19 jiffin well, since you are using the native fuse client, it can happen, as it requires lots of context switching in the kernel fuse module
08:19 leeyaa you mean to be faster ?
08:20 jiffin you are saying gluster is faster than dedicated disk?
08:20 jiffin i thought it was the other way around
08:20 leeyaa yes
08:20 leeyaa thats why it is weird to me
08:20 jiffin k. it seems weird to me too
08:20 leeyaa if it is writing to a single disk it is not supposed to be faster
08:21 jiffin maybe its due to caching effects from the performance translators in the glusterfs client stack
08:21 jiffin i am not sure
08:21 leeyaa https://bpaste.net/show/1cd02de4c550
08:21 jiffin you can drop a mail in gluster-devel mailing list
08:22 leeyaa yeah ill research it. if i can get a consistent performance boost in other tests ill probably use it not only for my lab but other places where i need more speed
08:22 jiffin k
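
(The caching theory above is easy to test: make dd bypass or flush the page cache, and optionally switch off the client-side caching translators for the run. The volume name "data" is taken from the fstab line earlier; sizes are illustrative. O_DIRECT may be refused on some fuse mounts, in which case conv=fdatasync is the fallback.)

    # measure the storage path rather than RAM caching
    dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=1024 oflag=direct
    dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=1024 conv=fdatasync
    # temporarily disable the caching translators on the volume
    gluster volume set data performance.write-behind off
    gluster volume set data performance.io-cache off
    gluster volume set data performance.read-ahead off
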
08:22 leeyaa ill try adding 2 more bricks later
08:23 jiffin i am going out for the lunch
08:23 jiffin bye
08:31 ekuric joined #gluster
08:32 night joined #gluster
08:43 skoduri joined #gluster
08:52 leeyaa erm thats weird, why does df report the wrong disk usage for gluster volumes
08:52 leeyaa https://bpaste.net/show/ffb87469a625
08:52 leeyaa it should be nearly 20gb
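
(When df on a gluster mount looks wrong, comparing it with the per-brick numbers usually shows where the space went. The volume name "data" is again taken from the log; the brick path is hypothetical:)

    df -h /mnt/nfs                      # what the client sees (aggregate of the bricks)
    gluster volume status data detail   # per-brick size, free space and inode counts
    du -sh /bricks/b1                   # actual usage on a brick's local filesystem
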
09:01 haomaiwang joined #gluster
09:15 qubozik joined #gluster
09:21 Pupeno joined #gluster
09:21 Pupeno joined #gluster
09:23 ekuric joined #gluster
09:30 spalai joined #gluster
09:35 nishanth joined #gluster
09:49 baojg joined #gluster
09:50 Pupeno joined #gluster
10:01 haomaiwa_ joined #gluster
10:13 LebedevRI joined #gluster
10:16 yangfeng joined #gluster
11:00 kxseven joined #gluster
11:01 haomaiwa_ joined #gluster
11:44 mhulsman joined #gluster
11:44 qubozik joined #gluster
11:49 zhangjn joined #gluster
11:50 zhangjn joined #gluster
11:50 spalai joined #gluster
11:51 zhangjn joined #gluster
11:52 shyam joined #gluster
12:00 mhulsman joined #gluster
12:01 haomaiwa_ joined #gluster
12:27 mhulsman joined #gluster
12:47 zhangjn joined #gluster
12:57 onorua joined #gluster
12:58 rafi joined #gluster
12:59 onorua guys, I have a problem with migration from 3.5 to 3.7, the local NFS server doesn't work, with error: [rpcsvc.c:1370:rpcsvc_program_register_portmap] 0-rpc-service: Could not register with portmap 100005 3 38465
13:00 onorua I've searched on google, people recommended removing -w from rpcbind - I did but no luck
13:00 onorua what else can I try?
13:01 onorua the interesting part is that I've updated 2 servers, one is working fine, the other one doesn't work
13:01 yosafbridge joined #gluster
13:01 haomaiwa_ joined #gluster
13:09 bfoster joined #gluster
13:15 rafi1 joined #gluster
13:23 shyam joined #gluster
13:28 hagarth onorua: check if you have kernel nfs server running on the server where it is failing
13:29 hagarth onorua: this might help - http://www.gluster.org/community/documentation/index.php/Gluster_3.1:_NFS_Frequently_Asked_Questions
13:29 onorua hagarth: I've done it already, there is nothing like this
13:30 onorua I've also run rpcinfo -p
13:30 onorua to check that there is no such binding, but it shows me only portmapper in services
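
(Program 100005 is mountd, so something has either taken the port or left a stale registration with rpcbind. A rough checklist for the failing server; service names vary by distro, and the commands need root:)

    rpcinfo -p                    # is a mountd (100005) already registered?
    systemctl stop nfs-server     # make sure kernel NFS is down ("service nfs stop" on older distros)
    rpcinfo -d 100005 3           # drop a stale mountd v3 registration, if one was left behind
    systemctl restart glusterd    # let the gluster NFS server re-register with rpcbind
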
13:40 onorua what is the best option to ask for help with all the information?
13:49 rafi joined #gluster
13:58 dlambrig left #gluster
14:01 haomaiwa_ joined #gluster
14:14 shyam joined #gluster
14:38 atalur joined #gluster
14:50 kalzz joined #gluster
14:58 raghu joined #gluster
15:01 haomaiwa_ joined #gluster
15:11 hagarth onorua: sending a mail on gluster-users could be useful
15:19 chirino joined #gluster
15:39 shaunm joined #gluster
15:44 atalur joined #gluster
15:44 rafi joined #gluster
16:01 haomaiwa_ joined #gluster
16:11 atalur joined #gluster
16:15 maveric_amitc_ joined #gluster
16:31 julim joined #gluster
17:01 xiu joined #gluster
17:01 haomaiwa_ joined #gluster
17:19 shyam joined #gluster
17:22 dgandhi joined #gluster
17:26 atalur joined #gluster
17:32 julim joined #gluster
17:34 papamoose joined #gluster
17:38 gem joined #gluster
17:40 skoduri joined #gluster
17:41 htrmeira left #gluster
17:43 htrmeira joined #gluster
18:01 haomaiwa_ joined #gluster
18:33 julim joined #gluster
18:36 spalai joined #gluster
18:44 rafi1 joined #gluster
18:46 TheCthulhu2 joined #gluster
18:57 beeradb joined #gluster
18:58 DavidVargese joined #gluster
18:59 DavidVargese hello, im trying to copy 1.2GB files to gluster. its very slow and sometimes it gets stuck/hangs. how can i improve the performance when copying files?
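
(This one also went unanswered. As a general starting point for slow large-file copies onto a replicated fuse mount, the usual suspects are the network between the replicas and the write-behind window; a hedged sketch, volume name, mount point and size illustrative:)

    # allow more data in flight for large sequential writes
    gluster volume set webvol performance.write-behind-window-size 4MB
    # copy with a large block size instead of cp's small default writes
    dd if=bigfile of=/mnt/gluster/bigfile bs=1M
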
19:01 haomaiwa_ joined #gluster
19:06 spalai left #gluster
19:16 shyam joined #gluster
19:32 calisto joined #gluster
19:47 cliluw joined #gluster
19:58 cuqa_ joined #gluster
20:01 haomaiwa_ joined #gluster
20:03 rafi joined #gluster
20:11 DV joined #gluster
20:21 julim joined #gluster
20:25 rafi1 joined #gluster
20:32 masterzen joined #gluster
20:38 hagarth1 joined #gluster
20:52 kxseven joined #gluster
21:01 haomaiwa_ joined #gluster
21:53 badone joined #gluster
22:01 haomaiwa_ joined #gluster
22:07 DV__ joined #gluster
22:17 shyam joined #gluster
22:24 uebera|| joined #gluster
22:38 cholcombe joined #gluster
22:50 prg3 joined #gluster
23:01 haomaiwa_ joined #gluster
23:36 dlambrig joined #gluster
