
IRC log for #gluster, 2016-12-18


All times shown according to UTC.

Time Nick Message
00:14 haomaiwang joined #gluster
00:29 virusuy joined #gluster
00:35 marlinc joined #gluster
01:01 plarsen joined #gluster
01:06 marlinc joined #gluster
01:07 haomaiwang joined #gluster
02:05 phileas joined #gluster
02:07 haomaiwang joined #gluster
02:35 derjohn_mobi joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:52 sanoj joined #gluster
02:59 Wizek joined #gluster
03:09 haomaiwang joined #gluster
04:06 webeindustry joined #gluster
04:07 webeindustry hello I have a quick question. I'm currently testing glusterfs with a replicated volume: replica 2, 2 nodes. When I reboot one node, there is the famous 42s timeout on the remaining node for the volume. Is there a way to gracefully reboot a node without this occurring? Will it be the same in a production environment with more nodes? I plan on having a 5-node cluster.
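For context, the 42-second figure comes from the client-side network.ping-timeout volume option, which defaults to 42 seconds. It can be inspected and lowered per volume; the volume name below is illustrative, and lowering it trades faster failover for a higher risk of spurious disconnects under transient network load:

    # check the current value (volume name "gv0" is illustrative)
    gluster volume get gv0 network.ping-timeout
    # lower it, e.g. to 10 seconds
    gluster volume set gv0 network.ping-timeout 10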
04:24 Gnomethrower joined #gluster
04:25 Nebraskka joined #gluster
05:03 jiffin joined #gluster
05:17 bluenemo joined #gluster
05:25 kraynor5b joined #gluster
05:48 marlinc joined #gluster
05:54 kramdoss_ joined #gluster
05:58 d0nn1e joined #gluster
06:16 Ramereth joined #gluster
06:22 haomaiwang joined #gluster
06:29 Lee1092 joined #gluster
06:36 msvbhat joined #gluster
06:54 buvanesh_kumar joined #gluster
06:56 plarsen joined #gluster
07:06 purpleidea joined #gluster
07:11 haomaiwang joined #gluster
07:14 haomaiwang joined #gluster
08:08 kramdoss_ joined #gluster
08:14 msvbhat joined #gluster
08:14 haomaiwang joined #gluster
08:59 mb_ joined #gluster
09:14 haomaiwang joined #gluster
09:54 post-factum joined #gluster
09:55 sanoj joined #gluster
10:12 kramdoss_ joined #gluster
10:14 haomaiwang joined #gluster
10:31 purpleidea joined #gluster
11:02 jri joined #gluster
11:05 [diablo] joined #gluster
11:08 BitByteNybble110 joined #gluster
11:12 sanoj joined #gluster
11:14 haomaiwang joined #gluster
11:22 msvbhat joined #gluster
12:14 haomaiwang joined #gluster
12:14 kramdoss_ joined #gluster
12:25 sanoj joined #gluster
12:47 yalu joined #gluster
12:48 kramdoss_ joined #gluster
13:08 sanoj joined #gluster
13:14 haomaiwang joined #gluster
13:40 johnmilton joined #gluster
14:14 haomaiwang joined #gluster
14:18 ahino joined #gluster
14:33 sanoj joined #gluster
14:46 B21956 joined #gluster
15:07 haomaiwang joined #gluster
15:14 haomaiwang joined #gluster
15:19 Gambit15 joined #gluster
15:29 ns32k hi all, i made some performance tests with fio (4 threads, 4GB per thread, direct=1) and observe the following: ~800 iops on a non-replicated volume, ~200 iops on a replica 3 arbiter 1 volume - is this 75% performance loss normal? the link between all 3 hosts is 2GBit (bond-rr, scp: 220MB/s), but i tested with a single 1GBit link too
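For reference, a fio invocation matching the parameters ns32k describes might look like this; the block size, I/O pattern, engine, and target directory are assumptions, not given in the log:

    # 4 jobs x 4GB each, O_DIRECT as described; bs/rw/ioengine are guesses
    fio --name=glustertest --directory=/mnt/gluster \
        --numjobs=4 --size=4g --direct=1 \
        --rw=randwrite --bs=4k --ioengine=libaio --iodepth=1 \
        --group_reporting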
15:35 post-factum ns32k: what is the latency between hosts?
15:37 ns32k post-factum: from fio output?
15:39 ns32k ping roundtrip: average 0.25 msec
15:40 ns32k fio ~200 iops: lat (msec) : 10=1.51%, 20=59.50%, 50=37.61%, 100=1.27%, 250=0.10%
15:41 ns32k fio ~800 iops: lat (usec) : 500=0.88%, 750=60.22%, 1000=23.39% - lat (msec) : 2=6.83%, 4=0.01%, 10=7.85%, 20=0.82%, 100=0.01%
15:42 colm joined #gluster
15:42 ns32k ok - i see... how to improve latency? any hints/guides/... i should read?
15:44 post-factum ns32k: no, network latency
15:46 ns32k post-factum: but why is the latency on the replicated volume so high? it's the same network as the non-replicated volume, and i run all tests with remote mounts
15:50 post-factum ns32k: what is the network latency?
15:52 ns32k post-factum: how to measure? with iperf?
15:52 gem joined #gluster
15:52 post-factum ns32k: man ping :)
15:56 ns32k post-factum: ping -U -> 0.398/0.447/0.484/0.040 ms
15:58 post-factum ns32k: no, RTT please
15:58 post-factum without -U
15:59 ns32k post-factum: normal ping -> 0.340/0.412/0.544/0.061 ms
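A quick way to collect those round-trip numbers (the peer host name is illustrative):

    # -q prints only the summary; the last line reports rtt min/avg/max/mdev
    ping -c 100 -q gluster-node2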
16:00 post-factum unbelievable
16:01 post-factum anyway, that's why IOPSes look okay
16:01 post-factum single file operation (fop) might be ~3 RTT
16:02 post-factum so, it might take more than 1 msec
16:02 ns32k post-factum: are these bad values? shall i try with another switch?
16:02 post-factum 1/0.001 ~~ 1000 IOPSes, or 800, whatever
16:02 post-factum if you use triple replication, it is /3
16:03 post-factum so, somewhere under 300 IOPSes
16:03 ns32k understand
16:03 post-factum or 200, if you like
16:03 post-factum and no, this is normal latency
16:03 post-factum for usual Ethernet, you know
16:03 post-factum if you need a lower one, use InfiniBand
16:03 ns32k :(
16:04 post-factum or you can switch to ceph, set ack replica to 1, and feel better
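Recapping post-factum's back-of-envelope arithmetic, which lines up with the measured numbers:

    RTT                 ≈ 0.4 ms
    1 fop ≈ 3 RTT       ≈ 1.2 ms
    plain volume:       1 / 0.0012 s ≈ 800 IOPS
    triple replication: ≈ 800 / 3 ≈ 270 IOPS (close to the observed ~200)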
16:04 ns32k the point: ovirt/kvm guests add another factor of 2 - the result there is ~100 iops
16:05 plarsen joined #gluster
16:05 post-factum poorly designed storage i see
16:06 ns32k yep - i see too
16:06 msvbhat joined #gluster
16:09 ns32k post-factum: ty anyway!
16:10 post-factum np
16:10 arif-ali joined #gluster
16:19 mhulsman joined #gluster
16:33 mhulsman joined #gluster
16:41 mhulsman joined #gluster
17:18 hchiramm joined #gluster
17:38 armyriad joined #gluster
17:40 webeindustry I didn't receive an answer yesterday, so I'm reposting my question.
17:40 webeindustry hello I have a quick question. I'm currently testing glusterfs with a replicated volume: replica 2, 2 nodes. When I reboot one node, there is the famous 42s timeout on the remaining node for the volume. Is there a way to gracefully reboot a node without this occurring? Will it be the same in a production environment with more nodes? I plan on having a 5-node cluster.
18:04 webeindustry @ns32k try tuned. yum install tuned -y; tuned-adm profile network-latency
18:05 webeindustry https://i.imgur.com/VTUqUm2.png
18:12 JoeJulian webeindustry: most distros don't stop the network before glusterfsd is killed. If your distro does, you'll need to find a way to make it stop glusterfsd before that. Killing glusterfsd gracefully closes the tcp connection preventing the timeout wait.
18:14 webeindustry I'm on centos 7.2 with a 4.9 kernel. So I have tried to umount the volume, then stop the glusterd service, then issue a reboot. Same effect. How do I gracefully kill glusterfsd?
18:17 ahino joined #gluster
18:20 JoeJulian systemctl enable glusterfsd
18:21 JoeJulian I thought it should have been enabled anyway, but <shrug>
18:21 JoeJulian Once it's enabled, it should stop as part of the shutdown.
18:21 webeindustry service glusterfsd stop worked.
18:21 JoeJulian systemctl
18:21 JoeJulian learn it. ;)
18:22 webeindustry Yes, coming from debian recently. Everyone told me debian is a no-no for clusters.
18:22 JoeJulian Gotta run, family's on our way to a movie. You should have the tools to figure it out now, I think.
18:22 JoeJulian Good luck. :)
18:22 webeindustry Thank you, and enjoy!
18:25 webeindustry FYI: systemctl enable glusterfsd does not prevent the timeout on reboot for CentOS 7.2 with kernel 4.9 on gluster 3.9.0-2
18:26 webeindustry you must manually run: systemctl stop glusterfsd
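Putting the thread's conclusions together, a graceful reboot on this setup looks something like the following (the mount point is illustrative):

    umount /mnt/gluster          # unmount any local client mounts first
    systemctl stop glusterfsd    # closes the brick TCP connections cleanly
    systemctl stop glusterd
    reboot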
18:33 mhulsman joined #gluster
18:52 mhulsman joined #gluster
19:24 msvbhat joined #gluster
19:37 gem joined #gluster
20:03 d0nn1e joined #gluster
20:52 webeindustry I made a simple /etc/init.d/glusterfsd script that runs 'systemctl stop glusterfsd' and exits, and symlinked it into rc0.d and rc6.d. problem solved ;)
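On a systemd distro such as CentOS 7, the same workaround could be expressed as a native unit instead of an init.d shim. This is an untested sketch (the unit name and path are illustrative): because the unit is ordered After=network.target, systemd stops it, and with it glusterfsd, before the network is torn down at shutdown.

    # /etc/systemd/system/gluster-graceful-stop.service  (hypothetical unit)
    [Unit]
    Description=Stop glusterfsd before the network goes down
    After=network.target glusterfsd.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/bin/true
    ExecStop=/usr/bin/systemctl stop glusterfsd

    [Install]
    WantedBy=multi-user.target

Enable it once with: systemctl enable gluster-graceful-stop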
21:17 mhulsman joined #gluster
21:17 mhulsman joined #gluster
21:18 mhulsman joined #gluster
21:19 mhulsman joined #gluster
21:20 mhulsman joined #gluster
21:21 mhulsman joined #gluster
21:22 mhulsman joined #gluster
21:22 mhulsman joined #gluster
21:23 arpu joined #gluster
21:23 mhulsman joined #gluster
21:24 mhulsman joined #gluster
22:08 plarsen joined #gluster
22:29 wiza joined #gluster
22:39 derjohn_mob joined #gluster
22:46 plarsen joined #gluster
22:50 PaulCuzner joined #gluster
22:50 PaulCuzner left #gluster
23:05 side_control joined #gluster
