
IRC log for #gluster, 2016-05-18


All times shown according to UTC.

Time Nick Message
00:01 haomaiwang joined #gluster
00:10 d-fence joined #gluster
00:13 kovshenin joined #gluster
00:17 davpenguin hi all
00:17 davpenguin i need help
00:17 davpenguin i've problems with some files on my gluster server
00:17 davpenguin i see the following error "fd cleanup on /var/www/loquesea"
00:17 davpenguin i have problems accessing those files from my frontend servers
00:41 ctria joined #gluster
00:42 shyam joined #gluster
00:43 MikeLupe joined #gluster
00:49 dlambrig joined #gluster
00:49 luizcpg joined #gluster
00:58 davpenguin Launching heal operation to perform index self heal on volume glstvol01 has been unsuccessful on bricks that are down. Please check if all brick processes are running
01:01 haomaiwang joined #gluster
01:05 DV joined #gluster
01:15 amye joined #gluster
01:19 nathwill joined #gluster
01:19 ws2k3 joined #gluster
01:21 EinstCrazy joined #gluster
01:28 davpenguin anyone??
01:33 DV joined #gluster
01:36 nathwill joined #gluster
01:37 EinstCrazy joined #gluster
01:38 EinstCrazy joined #gluster
01:39 ghenry joined #gluster
01:39 ghenry joined #gluster
01:40 Lee1092 joined #gluster
01:46 hagarth joined #gluster
01:46 harish joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:50 EinstCra_ joined #gluster
01:55 EinstCrazy joined #gluster
01:57 EinstCra_ joined #gluster
01:58 EinstCrazy joined #gluster
02:00 EinstCrazy joined #gluster
02:00 harish joined #gluster
02:01 haomaiwang joined #gluster
02:03 haomaiwang joined #gluster
02:05 EinstCrazy joined #gluster
02:24 EinstCrazy joined #gluster
02:31 EinstCra_ joined #gluster
02:41 vshankar joined #gluster
02:46 haomaiwang joined #gluster
03:01 haomaiwang joined #gluster
03:10 haomaiwang joined #gluster
03:22 ramteid joined #gluster
03:25 ghenry joined #gluster
03:38 hchiramm joined #gluster
03:39 hagarth joined #gluster
03:43 itisravi joined #gluster
03:47 F2Knight joined #gluster
03:52 RameshN joined #gluster
03:53 shubhendu joined #gluster
03:55 amye joined #gluster
03:58 hagarth joined #gluster
04:06 nehar joined #gluster
04:06 nehar_ joined #gluster
04:07 nbalacha joined #gluster
04:09 raghug joined #gluster
04:14 atinm joined #gluster
04:15 aspandey joined #gluster
04:16 beeradb joined #gluster
04:26 haomaiwang joined #gluster
04:43 cuqa_ joined #gluster
04:43 kdhananjay joined #gluster
04:44 prasanth joined #gluster
04:45 hagarth left #gluster
04:46 raghug joined #gluster
04:49 hchiramm joined #gluster
04:49 nbalacha joined #gluster
04:53 kenansulayman joined #gluster
04:54 atinm joined #gluster
04:55 Apeksha joined #gluster
04:56 rafi joined #gluster
04:56 sakshi joined #gluster
04:59 gem joined #gluster
05:01 haomaiwang joined #gluster
05:04 ndarshan joined #gluster
05:07 xavih_ joined #gluster
05:07 malevolent joined #gluster
05:08 nixpanic joined #gluster
05:08 nixpanic joined #gluster
05:09 tom][ joined #gluster
05:10 jiffin joined #gluster
05:10 karthik___ joined #gluster
05:11 EinstCrazy joined #gluster
05:11 Nebraskka_ joined #gluster
05:12 necrogami_ joined #gluster
05:12 EinstCra_ joined #gluster
05:12 mattmcc_ joined #gluster
05:13 haomaiwang joined #gluster
05:13 cliluw joined #gluster
05:13 luizcpg joined #gluster
05:13 sakshi joined #gluster
05:14 XpineX joined #gluster
05:14 foster joined #gluster
05:15 shubhendu joined #gluster
05:15 poornimag joined #gluster
05:15 ahino joined #gluster
05:18 haomaiwang joined #gluster
05:20 kotreshhr joined #gluster
05:23 lanning joined #gluster
05:30 harish joined #gluster
05:48 nishanth joined #gluster
05:49 hgowtham joined #gluster
05:52 rastar joined #gluster
06:09 raghug joined #gluster
06:11 atinm joined #gluster
06:13 skoduri_ joined #gluster
06:16 ppai joined #gluster
06:21 EinstCrazy joined #gluster
06:25 jtux joined #gluster
06:27 arcolife joined #gluster
06:27 skoduri joined #gluster
06:29 gowtham joined #gluster
06:32 d0nn1e joined #gluster
06:33 anil_ joined #gluster
06:36 hchiramm joined #gluster
06:44 andy-b joined #gluster
06:45 kshlm joined #gluster
06:47 nbalacha joined #gluster
06:50 andy-b joined #gluster
06:55 Saravanakmr joined #gluster
06:59 karnan joined #gluster
07:05 hagarth joined #gluster
07:06 ivan_rossi joined #gluster
07:16 robb_nl joined #gluster
07:29 deniszh joined #gluster
07:30 level7 joined #gluster
07:32 claude_ joined #gluster
07:34 ramky joined #gluster
07:39 claude__ joined #gluster
07:40 claude_ left #gluster
07:41 fsimonce joined #gluster
07:45 yoavz joined #gluster
07:55 MikeLupe joined #gluster
08:08 ahino joined #gluster
08:17 level7_ joined #gluster
08:18 rafi joined #gluster
08:22 kovshenin joined #gluster
08:25 hackman joined #gluster
08:29 RameshN joined #gluster
08:35 monotek1 joined #gluster
08:41 jvandewege_ joined #gluster
08:41 dlambrig_ joined #gluster
08:42 d4n13L_ joined #gluster
08:42 muneerse2 joined #gluster
08:43 nishanth joined #gluster
08:43 liewegas joined #gluster
08:44 haomaiwang joined #gluster
08:44 rideh- joined #gluster
08:44 syadnom_ joined #gluster
08:44 yawkat` joined #gluster
08:44 Iouns_ joined #gluster
08:45 gbox_ joined #gluster
08:45 dastar_ joined #gluster
08:45 fubada- joined #gluster
08:45 Vaizki_ joined #gluster
08:45 tessier_ joined #gluster
08:45 ueberall joined #gluster
08:46 nathwill joined #gluster
08:46 foster joined #gluster
08:46 legreffier joined #gluster
08:46 shubhendu joined #gluster
08:46 Ulrar joined #gluster
08:46 fsimonce joined #gluster
08:47 klaxa joined #gluster
08:47 cuqa_ joined #gluster
08:49 spalai joined #gluster
08:50 virusuy joined #gluster
08:50 inodb joined #gluster
08:52 scubacuda joined #gluster
08:53 billputer joined #gluster
08:54 anoopcs joined #gluster
08:54 mmckeen joined #gluster
08:55 Pintomatic joined #gluster
08:56 PotatoGim joined #gluster
08:57 wnlx joined #gluster
08:57 sc0 joined #gluster
08:58 DV__ joined #gluster
09:01 twisted` joined #gluster
09:01 k4n0 joined #gluster
09:02 d0nn1e joined #gluster
09:02 kovshenin joined #gluster
09:03 fyxim joined #gluster
09:16 nishanth joined #gluster
09:20 amye joined #gluster
09:21 mmckeen joined #gluster
09:30 jiffin1 joined #gluster
09:31 spalai joined #gluster
09:34 jiffin joined #gluster
09:40 legreffier joined #gluster
09:43 arcolife joined #gluster
09:48 RameshN joined #gluster
09:50 level7 joined #gluster
09:50 yoavz joined #gluster
09:57 n0b0dyh3r3 joined #gluster
10:18 RameshN joined #gluster
10:21 jiffin1 joined #gluster
10:22 aspandey joined #gluster
10:35 Apeksha joined #gluster
10:35 jiffin joined #gluster
10:44 RameshN joined #gluster
10:46 Slashman joined #gluster
10:53 johnmilton joined #gluster
11:00 spalai joined #gluster
11:09 Apeksha_ joined #gluster
11:11 nishanth joined #gluster
11:12 rafi joined #gluster
11:13 amye joined #gluster
11:16 KenM joined #gluster
11:19 KenM Hello, Can someone tell me (or point me to write-up) what logic gluster uses to select a server to write/replicate files to in replica 2 (of 4 servers) mode?  I'm testing this config as a newbie and am trying to understand why 2 of the servers are overloaded from a particular client.  The other 2 servers do get some of the files, but nowhere near what the other pair does.
11:21 deniszh joined #gluster
11:21 KenM in this test, all were 100g files written via dd bs=1024 .  Of ~20 files written to 1 pair, only 2 were written to the other (all from same client)
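[Editorial aside: `bs=1024` means dd issues a separate 1 KiB write() per block, over 100 million syscalls for a 100 GB file, which exaggerates per-op latency on a network filesystem. A sketch of the same benchmark with a larger block size; the mount path is a placeholder, not taken from the log:]

```shell
# Placeholder path; point MOUNT at the real Gluster fuse mountpoint.
MOUNT=${MOUNT:-/tmp}
# 1 MiB blocks cut the syscall count ~1000x versus bs=1024;
# conv=fsync makes dd flush to the server before reporting a rate.
dd if=/dev/zero of="$MOUNT/ddtest" bs=1M count=16 conv=fsync
rm -f "$MOUNT/ddtest"
```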
11:24 karthik___ joined #gluster
11:29 kshlm joined #gluster
11:30 RameshN joined #gluster
11:35 ashka` joined #gluster
11:39 julim joined #gluster
11:40 wnlx joined #gluster
11:48 jiffin KenM: basically glusterfs uses hashing for the distribution
11:49 nottc joined #gluster
01:49 jiffin AFAIK, if you have two pairs, the hash range is split between them: roughly 0-50 maps to one pair and 51-99 to the other
11:49 KenM jiffin: so there's no actual load balancing?
11:49 lezo joined #gluster
11:50 jiffin KenM: I am not sure about that
11:52 KenM My only thought was that I'd setup 4 test volumes on the less loaded servers, and I only have one volume that has bricks from all 4 so I got to wondering if the number of volumes on each server was coming into play (even though only one volume is actually started).
11:53 jiffin i don't think it depends on no. of volumes
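[Editorial aside: the hash-range placement jiffin describes can be modelled with a toy sketch. This is an illustration only — Gluster's DHT actually uses a Davies-Meyer hash over file names and stores per-directory layout ranges in extended attributes; md5 and the 0-99 range here are stand-ins:]

```python
import hashlib

def pick_replica_pair(filename, n_pairs=2):
    """Toy model: map a filename hash into one of n equal hash ranges.

    Each range corresponds to one replica pair. Placement depends only
    on the file name, not on current load, which is why a particular
    batch of names can land mostly on one pair.
    """
    h = int(hashlib.md5(filename.encode()).hexdigest(), 16) % 100
    return h * n_pairs // 100  # 0..49 -> pair 0, 50..99 -> pair 1

counts = [0, 0]
for i in range(1000):
    counts[pick_replica_pair(f"file-{i}.dat")] += 1
print(counts)  # roughly even over many names; small samples can skew badly
```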
11:54 raghug joined #gluster
11:55 skoduri joined #gluster
11:58 ira joined #gluster
11:58 hagarth joined #gluster
11:59 ndarshan joined #gluster
12:01 rastar joined #gluster
12:03 post-factum everyone joins #gluster-meeting
12:10 kotreshhr left #gluster
12:15 spalai left #gluster
12:15 shubhendu joined #gluster
12:16 RameshN joined #gluster
12:16 nottc joined #gluster
12:16 nishanth joined #gluster
12:21 hagarth joined #gluster
12:32 Slashman joined #gluster
12:39 Debloper joined #gluster
12:43 spalai joined #gluster
12:43 spalai left #gluster
12:44 nishanth joined #gluster
12:46 hi11111 joined #gluster
12:47 shaunm joined #gluster
12:50 andy-b joined #gluster
12:50 shubhendu joined #gluster
12:52 vshankar joined #gluster
13:00 nehar joined #gluster
13:06 luizcpg joined #gluster
13:10 amye joined #gluster
13:13 plarsen joined #gluster
13:13 dlambrig joined #gluster
13:13 chirino joined #gluster
13:16 nbalacha joined #gluster
13:23 haomaiwang joined #gluster
13:38 Debloper joined #gluster
13:38 Saravanakmr amye++
13:38 glusterbot Saravanakmr: amye's karma is now 5
13:39 amye Happy to halp.
13:40 jobewan joined #gluster
13:41 andy-b joined #gluster
13:41 hagarth joined #gluster
13:42 shyam joined #gluster
13:45 nishanth joined #gluster
13:47 skoduri joined #gluster
13:49 shubhendu joined #gluster
13:52 spalai joined #gluster
13:56 Slashman joined #gluster
14:04 amye joined #gluster
14:10 nishanth joined #gluster
14:12 ivan_rossi left #gluster
14:12 MikeLupe joined #gluster
14:13 MikeLupe joined #gluster
14:13 Gnomethrower joined #gluster
14:14 MikeLupe joined #gluster
14:14 harish_ joined #gluster
14:14 hi11111 if i change the permissions on a gluster mount, is that also supposed to change the permissions of the brick directory?
14:15 rwheeler joined #gluster
14:16 hagarth hi11111: yes
14:16 hi11111 hagarth: ok, thanks
14:17 MikeLupe joined #gluster
14:19 mowntan joined #gluster
14:19 mowntan joined #gluster
14:19 mowntan joined #gluster
14:21 shubhendu joined #gluster
14:30 ramky joined #gluster
14:36 DV joined #gluster
14:53 marcvw joined #gluster
14:54 arcolife joined #gluster
14:55 muneerse joined #gluster
15:00 poornimag joined #gluster
15:01 wushudoin joined #gluster
15:14 skylar joined #gluster
15:16 kpease joined #gluster
15:21 muneerse2 joined #gluster
15:32 karnan joined #gluster
15:34 marcvw left #gluster
15:35 F2Knight joined #gluster
15:38 level7_ joined #gluster
15:43 level7 joined #gluster
16:22 cholcombe ndevos, you're right.  I did mean to put in that bug ticket with glusterfs not red hat storage.  I didn't see the link when i was searching for it for some reason
16:24 robb_nl joined #gluster
16:38 Bhaskarakiran joined #gluster
16:39 F2Knight joined #gluster
16:45 rafi joined #gluster
16:47 Lee1092 joined #gluster
17:02 dlambrig I am getting a recurring smoke error that is unrelated to my patch… “IOError: [Errno 13] Permission denied: '/usr/lib/python2.6/site-packages/gluster/glupy/__init__.pyc'”
17:02 dlambrig anyone know why this would be? #14362
17:05 luizcpg joined #gluster
17:33 amye joined #gluster
17:34 nottc joined #gluster
17:34 rafi joined #gluster
17:38 ahino joined #gluster
17:51 atinm joined #gluster
17:51 robb_nl joined #gluster
17:53 rafi joined #gluster
18:06 skylar joined #gluster
18:09 kovshenin joined #gluster
18:25 plarsen joined #gluster
18:34 robb_nl joined #gluster
18:41 hagarth joined #gluster
18:44 lpabon joined #gluster
18:53 dlambrig joined #gluster
19:15 hackman joined #gluster
19:20 muneerse joined #gluster
19:28 dlambrig joined #gluster
19:35 nishanth joined #gluster
19:42 jnix joined #gluster
19:47 deniszh joined #gluster
19:51 dlambrig joined #gluster
19:57 shirwa joined #gluster
19:57 johnmilton joined #gluster
20:15 johnmilton joined #gluster
20:49 muneerse joined #gluster
20:52 luizcpg Hi everyone
20:52 luizcpg I’m facing a very weird issue with gluster fuse mount...
20:52 luizcpg I have a replica 3(with arbiter) and on gluster fuse mount on one host… the performance is pretty good …
20:52 luizcpg and
20:53 luizcpg I have exactly the same fuse mount on another host…. same config
20:53 luizcpg when copying files to this gluster fuse mount from that other host, the performance is horrible.
20:53 luizcpg any ideas what might be happening ?
20:53 luizcpg 3.7.11
20:53 luizcpg the latest one
21:01 post-factum luizcpg: what about network connectivity?
21:01 post-factum luizcpg: and some more info about tools and numbers...
21:01 luizcpg no way
21:02 luizcpg it’s exactly the same network segment.
21:02 luizcpg which kind of infos you are looking for ?
21:02 post-factum how do you copy files?
21:04 luizcpg http://pastebin.com/0Rh5tbdd
21:04 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
21:04 luizcpg gluster configs on the pastebin
21:04 luizcpg copying using cp
21:04 luizcpg I tested from host A and host B
21:05 luizcpg both mounting using fuse mount the same gluster volume
21:05 luizcpg raw_data
21:05 luizcpg host A performance is good… it’s on production and serving ~1.3TB of images and so on
21:05 luizcpg performance is good.
21:06 luizcpg host B is a backup server …
21:06 luizcpg mounting exactly the same gluster volume, same parameters
21:06 luizcpg performance is horrible.
21:06 post-factum would you mind trying iperf between host B and one of gluster node?
21:08 luizcpg I did something different now
21:08 luizcpg the “entrypoint” was configured to gluster37node1.xyz.com
21:08 luizcpg this is the same production entrypoint
21:08 luizcpg and on the backup host as well
21:08 luizcpg performance was bad
21:09 luizcpg now changed the entrypoint on the backup server to gluster37node2.xyz.com
21:09 luizcpg let’s see
21:09 post-factum i bet it won't make any difference
21:11 luizcpg horrible as well
21:11 luizcpg crazy man
21:11 luizcpg have you seen the volume config ?
21:11 luizcpg looks good for you ?
21:11 post-factum yup
21:11 post-factum i'd wonder why you have performance.write-behind disabled
21:12 luizcpg should I enable this ?
21:12 post-factum it is enabled by default, but you've disabled it. why?
21:12 luizcpg let me change
21:12 luizcpg hold on
21:14 post-factum oh no, you should really know why that happened. there might be reasons
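[Editorial aside: for reference, the change being made here would look like the following; the volume name `raw_data` is taken from the conversation, and these are administrative config commands, not something run in the log:]

```shell
# Re-enable write-behind (its default is on) and confirm the value.
gluster volume set raw_data performance.write-behind on
gluster volume get raw_data performance.write-behind
# To see the default for every option, as post-factum suggests below:
gluster volume set help
```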
21:18 luizcpg by changing this param looks faster
21:18 luizcpg [root@app1 ~]# time cp dental-release-2.11.1.tar /raw_data/encrypted_fs_ssd/mysql-backup/
21:18 luizcpg real0m16.336s
21:18 luizcpg user0m0.004s
21:18 luizcpg sys0m0.311s
21:18 luizcpg this file has 334MB
21:19 luizcpg from the backup host I was able to create the backup now
21:19 luizcpg 179M May 18 21:17 db-backup.2016.05.18_21.16.sql.gz
21:19 luizcpg not as fast as on the app1 server, but it works...
21:20 post-factum you should re-analyze your volume options as well as inspect network stack on host B
21:20 luizcpg I have no issues with the network stack… I’m 100% sure about it…
21:20 luizcpg They are two centos7 vm’s … same kernel… same everything...
21:21 luizcpg I can transfer files at high speed from the backup host to gluster
21:21 luizcpg let me show
21:21 post-factum i guess the key word here is "vm"
21:21 luizcpg why ?
21:22 post-factum because it depends on underlying hypervisor
21:24 luizcpg root@backup-tools ramdisk]# scp db-dental-backup.2016.05.18_21.16.sql.gz root@gluster37node2.xyz.com:/tmp
21:24 luizcpg The authenticity of host 'gluster37node2.xyz.com (10.1.51.13)' can't be established.
21:24 luizcpg ECDSA key fingerprint is 87:dd:44:b3:08:c7:4b:b5:23:49:85:05:17:df:51:74.
21:24 luizcpg Are you sure you want to continue connecting (yes/no)? yes
21:24 luizcpg Warning: Permanently added 'gluster37node2.xyz.com,10.1.51.13' (ECDSA) to the list of known hosts.
21:24 luizcpg root@gluster37node2.xyz.com's password:
21:24 luizcpg db-dental-backup.2016.05.18_21.16.sql.gz                                                                                                                             100%  179MB 178.6MB/s   00:01
21:24 luizcpg look this
21:25 luizcpg over scp the speed was 178.6 MB/s
21:25 luizcpg I’m using oVirt , therefore, kvm...
21:25 luizcpg oVirt 3.6.5
21:26 luizcpg look good, right ?
21:26 post-factum looks irrelevant to gluster traffic :)
21:27 luizcpg this is just to demonstrate we have fast speed between gluster entrypoint and the backup host….
21:27 luizcpg performance.quick-read: off
21:27 luizcpg performance.read-ahead: off
21:27 luizcpg performance.io-cache: off
21:27 luizcpg performance.stat-prefetch: off
21:27 luizcpg ^ should I enable these parameters ?
21:27 luizcpg Would you recommend ?
21:29 post-factum those scp numbers demonstrate *nothing* because gluster is highly dependent on RTT
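[Editorial aside: the RTT point can be put in numbers. If write-behind is off, each write op completes a network round trip before the next is issued, so latency rather than bandwidth caps throughput; scp pipelines a bulk stream and hides that. A back-of-envelope sketch with made-up block sizes and RTTs:]

```python
def sync_write_throughput_mbps(block_bytes, rtt_seconds):
    """Upper bound on throughput when every write waits out one RTT."""
    return block_bytes / rtt_seconds * 8 / 1e6

# Large synchronous writes at 1 ms RTT: bound is above line rate,
# so the link's ~941 Mbit/s becomes the limit instead.
print(sync_write_throughput_mbps(1024 * 1024, 0.001))
# Small 4 KiB writes at the same 1 ms RTT collapse to a tiny
# fraction of line rate -- latency-limited, not bandwidth-limited.
print(sync_write_throughput_mbps(4096, 0.001))
```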
21:31 luizcpg ok ….
21:31 luizcpg [root@backup-tools ramdisk]# iperf -c 10.1.51.13
21:31 luizcpg ------------------------------​------------------------------
21:31 glusterbot luizcpg: ------------------------------​----------------------------'s karma is now -3
21:31 luizcpg Client connecting to 10.1.51.13, TCP port 5001
21:31 luizcpg TCP window size: 85.0 KByte (default)
21:32 luizcpg ------------------------------​------------------------------
21:32 glusterbot luizcpg: ------------------------------​----------------------------'s karma is now -4
21:32 luizcpg [  3] local 10.1.51.49 port 38727 connected with 10.1.51.13 port 5001
21:32 luizcpg [ ID] Interval       Transfer     Bandwidth
21:32 luizcpg [  3]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec
21:32 luizcpg what about iperf?… any extra config ?
21:32 level7 joined #gluster
21:34 glusterbot spam detected: please use a pastebin
21:36 post-factum nope, looks good
21:37 post-factum so, what is the copy speed now?
21:37 luizcpg over gluster ?
21:37 post-factum yep
21:37 luizcpg let me run again with time
21:39 luizcpg http://pastebin.com/DWGMv7Cp
21:39 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
21:40 luizcpg 179M in ~46s
21:40 luizcpg pretty slow…
21:40 post-factum awfully slow
21:40 luizcpg but faster than before…
21:40 luizcpg reading from a ramdisk….
21:40 luizcpg yeah pretty bad.
21:40 luizcpg should I enable the other parameters ?
21:40 luizcpg the performance ones I mean… above.
21:41 nottc joined #gluster
21:41 post-factum i'd start with default values
21:42 luizcpg http://pastebin.com/Kim6q6hX
21:42 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
21:43 luizcpg ^ this is over the “faster” host
21:43 post-factum but really, i'd start with inspecting hypervisors
21:44 luizcpg I’m using oVirt with iscsi local with targetcli…
21:44 post-factum are gluster nodes also vms?
21:44 luizcpg I’m connecting to the baremetal directly… but attaching to this vm’s
21:44 luizcpg yes
21:45 post-factum then sorry, never used vms for serving gluster
21:45 post-factum only for clients
21:45 luizcpg What means gluster “defaults” ?
21:46 luizcpg all performance.* enabled ?
21:46 post-factum "sudo gluster volume set help" on server
21:46 post-factum it will show default values for all acceptable options
21:46 fortpedro joined #gluster
21:47 luizcpg got it
21:54 luizcpg basically all the performance.* options above are enabled by default
21:54 luizcpg I’ve just enabled everything...
21:55 ecoreply joined #gluster
22:05 luizcpg post-factum: what would you expect in terms of performance ?
22:07 luizcpg the existing gluster volume has ~2 million image files organized on different directories… the application can fetch these images pretty fast currently…
22:07 luizcpg this performance penalty seems to be happening over bigger files…
22:07 post-factum ideally, it would be your network throughput divided by replica count
22:09 luizcpg ~300Mbits/sec +-
22:09 luizcpg I have replica 3 with arbiter in this case
22:09 luizcpg the arbiter impacts performance ?
22:10 luizcpg I forgot to say… the lvm’s on each gluster nodes are sitting on top of an encrypted filestore (LUKS)
22:11 luizcpg vdb                                   252:16   0  3.7T  0 disk
22:11 luizcpg `-vdb1                                252:17   0  3.7T  0 part
22:11 luizcpg `-encryptedfs                       253:2    0  3.7T  0 crypt
22:11 luizcpg `-encryptedfs--vg-encryptedfs--lv 253:3    0  3.7T  0 lvm   /encryptedfs
22:11 glusterbot luizcpg: `-encryptedfs--vg-encryptedfs's karma is now -1
22:26 post-factum yup, arbiter also has some impact on performance
22:26 post-factum but that is connected mainly with metadata updates
22:31 luizcpg got it… regarding the VMs, I don't see any reason for a performance impact… all the underlying storage is over iSCSI
22:32 luizcpg the encryption might cause a performance penalty as well… but a VM hosting gluster is not necessarily an issue…
22:32 luizcpg what do you think ?
22:41 post-factum i think if you have same hosts and same hypervisors but still see the difference in behavior, you should try to examine everything starting from hypervisor
22:41 post-factum probably, suddenly, you have lots of TCP retransmissions… or non-optimal MTU size
22:42 post-factum or whatever
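[Editorial aside: two of the quick checks post-factum suggests can be scripted. The interface name is an assumption, and the paths are Linux-specific:]

```shell
# Which interface carries gluster traffic is an assumption here;
# fall back to loopback if it does not exist.
IFACE=${IFACE:-eth0}
[ -r "/sys/class/net/$IFACE/mtu" ] || IFACE=lo
cat "/sys/class/net/$IFACE/mtu"   # 1500 = standard frames; 9000 = jumbo
# The Tcp: rows of /proc/net/snmp include the RetransSegs counter,
# a rough indicator of retransmission trouble.
grep '^Tcp:' /proc/net/snmp
```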
23:00 luizcpg I don't have jumbo frames enabled...
23:00 luizcpg the MTU is 1500
23:01 luizcpg it could be 9000 , but not sure if it’s the root cause of the performance penalty.
23:10 dlambrig joined #gluster
23:33 gbox joined #gluster
23:48 harish_ joined #gluster
23:58 nottc joined #gluster
