
IRC log for #gluster, 2017-11-14


All times shown according to UTC.

Time Nick Message
00:04 rastar joined #gluster
00:24 msvbhat joined #gluster
00:29 timotheus1_ joined #gluster
00:29 map1541 joined #gluster
00:34 timotheus1_ joined #gluster
00:48 ws2k3 joined #gluster
01:27 bluenemo joined #gluster
01:41 jbrooks joined #gluster
02:00 ws2k3 joined #gluster
02:01 ws2k3 joined #gluster
02:02 ws2k3 joined #gluster
02:02 ws2k3 joined #gluster
02:02 gospod3 joined #gluster
02:03 ws2k3 joined #gluster
02:03 ws2k3 joined #gluster
02:21 kpease joined #gluster
02:28 shyam joined #gluster
02:38 hgichon joined #gluster
02:38 hgichon_ joined #gluster
02:41 hgichon hi guys ... is there any method for changing gluster-nfs mount port?
02:56 ilbot3 joined #gluster
02:56 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:03 gyadav joined #gluster
03:06 susant joined #gluster
03:12 nbalacha joined #gluster
03:29 kramdoss_ joined #gluster
03:42 sunnyk joined #gluster
04:05 psony joined #gluster
04:08 itisravi joined #gluster
04:15 rwheeler joined #gluster
04:20 sanoj joined #gluster
04:26 Saravanakmr joined #gluster
04:29 mdeanda i've been wanting to setup something whereby my local cluster somehow gets replicated to a remote location (my parents house for example) in order to keep an offsite backup, i can easily setup a site-to-site vpn but my concern is actually with security. what options do i have to encrypt the data that is stored offsite? i figure i can ssh via cron to the remote pc to pass along any password/key
04:30 mdeanda information whenever i want to initiate the backup but don't have any idea of what to use. am i just crazy?
04:33 atinm joined #gluster
04:37 skumar joined #gluster
04:55 azhar joined #gluster
04:57 azhar_ joined #gluster
04:59 Saravanakmr joined #gluster
05:04 rastar joined #gluster
05:04 aravindavk joined #gluster
05:10 PatNarciso_ joined #gluster
05:12 PatNarciso joined #gluster
05:15 humblec joined #gluster
05:29 rafi1 joined #gluster
05:31 ppai joined #gluster
05:31 int-0x21 Anyone have any tips for replicated volume performance
05:32 int-0x21 Im topping out atm at 400MB/sec with nvme disks and 100GB network
05:34 karthik_us joined #gluster
05:37 uebera|| joined #gluster
05:37 uebera|| joined #gluster
05:39 int-0x21 im using a zfs raidz1 3+1 nvme brick on each host
05:41 int-0x21 writing directly to the zfs volume i get bw=3831.7MB/s, iops=30652
05:42 int-0x21 and to mounted gluster volume i get bw=411529KB/s, iops=3215
05:43 int-0x21 So just about 10% of local performance
05:43 int-0x21 Using fio --name=randwrite --ioengine=libaio --iodepth=8 --rw=randwrite --bs=128k --direct=0 --size=1024M --numjobs=8 --runtime=60 --group_reporting --fallocate=none as test
05:44 int-0x21 Its replicate 3 arbiter 1
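(For reference, a volume of the kind described here can be created along these lines; the hostnames gl1/gl2/gl3, the brick paths and the volume name are placeholders, not details from this discussion.)
    gluster volume create testvol replica 3 arbiter 1 \
        gl1:/bricks/nvme/brick gl2:/bricks/nvme/brick gl3:/bricks/arbiter/brick
    gluster volume start testvol
    mount -t glusterfs gl1:/testvol /mnt/testvol    # FUSE mount that fio would then write into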
05:44 hgowtham joined #gluster
05:44 Prasad joined #gluster
05:46 apandey joined #gluster
05:46 int_0x21 joined #gluster
05:47 Prasad joined #gluster
06:03 ws2k3 joined #gluster
06:03 itisravi int-0x21: what version of gluster are you using?
06:04 ws2k3 joined #gluster
06:04 ws2k3 joined #gluster
06:05 ws2k3 joined #gluster
06:05 ws2k3 joined #gluster
06:06 ws2k3 joined #gluster
06:09 daMaestro joined #gluster
06:10 int-0x21 3.12
06:11 int-0x21 glusterfs 3.12.1 to be more precise
06:11 itisravi both clients and servers?
06:12 int-0x21 Yea, to reduce issues with the test i'm just doing a local mount on the first server to test the speed atm
06:14 int-0x21 Im a bit confused since im not so sure how to find out what aspect of the write is reducing the performance
06:15 susant joined #gluster
06:16 itisravi okay. Well the speed definitely won't match that of a direct write to the disk but you could try things like comparing the speed with a plain distribute volume, replica 3 volume etc and see how the replica 3 arbiter 1 setup performs.
06:18 int-0x21 I did test with replica 2 and there wasn't any performance difference
06:19 int-0x21 Striped i haven't tested since that wouldn't work for me in production
06:19 int-0x21 I tested striped replica so i had bricks directly on each disk but that got worse performance
06:20 itisravi striped is not supported. you need to use sharding if you want to split files.
06:20 int-0x21 Yea sharding lowered the performance a bit
06:21 int-0x21 well i haven't tested striped replica with sharding and taking zfs out of the equation
06:21 apandey_ joined #gluster
06:21 gbox int-0x21: 3.12.1 has had a few issues, although nothing performance related.  I think the problem is Gluster & ZFS don't work great together
06:22 itisravi Try `gluster v profile..` and see if you find anything unusual in the output like a FOP latency being too high (say fsync taking too long).
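(The profiling itisravi suggests looks roughly like the following; "testvol" is a placeholder volume name.)
    gluster volume profile testvol start
    # ... run the fio workload ...
    gluster volume profile testvol info    # per-FOP call counts and latencies; watch for WRITE/FSYNC outliers
    gluster volume profile testvol stop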
06:22 gbox int-0x21:  It's copy-on-write which people have discussed here being antithetical to gluster's approach
06:22 int-0x21 I guess i can make another stab at striped replica with sharding
06:23 itisravi gbox has a point, it might be worthwhile to use XFS for the bricks and see if that makes a difference.
06:24 int-0x21 Yea I'll remake it as striped replica with sharding since that's one of the few things i haven't tried and I'll see what i get on that
06:26 int-0x21 Thanks for the tips
06:27 kdhananjay joined #gluster
06:28 int-0x21 Any preferred way of doing xfs on nvme (thinking of alignment) or does it fix that?
06:28 apandey__ joined #gluster
06:31 gbox int-0x21: It seems to have good heuristics, but for testing it might not matter much
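(A commonly documented way to prepare an XFS brick for gluster is sketched below; the 512-byte inode size follows the usual upstream recommendation, and the device path is a made-up example.)
    mkfs.xfs -f -i size=512 /dev/nvme0n1p1
    mkdir -p /bricks/nvme0
    mount -o noatime,inode64 /dev/nvme0n1p1 /bricks/nvme0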
06:33 [diablo] joined #gluster
06:33 xavih joined #gluster
06:33 gbox int-0x21: I think tuning the whole system might be the best option.  Look at these: https://www.redhat.com/cms/managed-files/st-gluster-storage-supermicro-reference-architecture-f7640-v2-201705-en.pdf
06:36 gbox https://people.redhat.com/dblack/summit2017/rh-summit-2017-dblack-bturner-gluster-architecture-and-performance-20170502-final-with-101.pdf
06:39 gbox int-0x21: RH has some proprietary tuning techniques but a lot of the options are listed there.  With replica 2 arbiter 1 you get a boost on reads but not really on writes (at all)
06:40 gbox int-0x21: anyway you have a cool set up there, just keep hacking at it!
06:40 int-0x21 The arbiter is not so much for performance its so i dont end up in a nasty split brain and have to update my linkedin page
06:40 int-0x21 Yea i think i should end up with a good result
06:41 gbox int-0x21: sure, I have the same setup.  Arbiter seems to use like 1% as much space as the actual data
06:41 int-0x21 The blocks are good and honestly 500MB/sec is not bad (it far outperforms the legacy LeftHand system) but i think i should get it a bit higher, once it's in production the time for tweaking is over
06:42 int-0x21 If i get it to 30% of local im happy
06:42 int-0x21 10% i feel im missing a bit
06:42 karthik_us joined #gluster
06:42 gbox int-0x21: Yeah what distro?  I mean, in terms of XFS on SSD there are pages saying swap out the scheduler (Arch hackers!) but I am not sure I'd do that without a lot of testing
06:43 int-0x21 Running centos  7.4 on these
06:43 psony joined #gluster
06:43 gbox int-0x21:  How about this.  Compare it to kernel-nfs (or even ganesha-nfs on its own).  That's a more realistic comparison
06:45 int-0x21 Straight up zfs and nfs or iscsi i get quite a lot better performance, but yea :) performance is not worth losing the customer data
06:45 kotreshhr joined #gluster
06:48 gbox int-0x21: Ah, how about this?  Try parallel writes to gluster.  You can even mount the gluster volume multiple times to create multiple client threads.  Then push a bunch of data across those mounts
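(A rough sketch of that suggestion, with the server name, volume name and mount points made up.)
    # mount the same volume several times to get several independent client processes
    for i in 1 2 3 4; do
        mkdir -p /mnt/gv$i
        mount -t glusterfs gl1:/testvol /mnt/gv$i
    done
    # then run one fio or dd job against each mount point in parallel and add up the throughput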
06:50 int-0x21 Hmm I'll try but the end result is i'm planning on exporting this with ganesha so ganesha will pick up the volume and export it (vmware will be using it as a datastore)
06:50 gbox int-0x21: a datastore for vm disks?
06:51 int-0x21 Yea
06:52 gbox int-0x21: Hmm, you're gonna have a lot going on in that system, but at least it's all the stuff gluster is trying to get good at
06:53 gbox int-0x21: I think the JBOD approach is supposed to work well for VM datastores.  Unless you want LOTS of redundancy.
06:53 int-0x21 Yea, i probably would have gone with drbd / zfs or something similar if it was just about performance
06:53 int-0x21 But gluster has some key parts that are really really attractive
06:53 gbox int-0x21: Yeah if HA is your goal
06:54 int-0x21 The arbiter as a split brain protection is really nice, and the fact that i can present one nfs from each storage host
06:54 int-0x21 And then let vmware with nfs 4 handle those paths
06:54 int-0x21 No frustrating failover that always tends to break
06:55 int-0x21 As a list of tickboxes for reliable ha storage it just ticks the boxes, the last part for me is the performance and i don't see why i shouldn't get there ;)
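(The ganesha export mentioned above would look something like the block below; this is only a sketch of an FSAL_GLUSTER export for serving the volume over NFSv4, with the export id and volume name made up.)
    EXPORT {
        Export_Id = 1;
        Path = "/testvol";
        Pseudo = "/testvol";
        Access_Type = RW;
        Protocols = "4";
        Transports = "TCP";
        FSAL {
            Name = GLUSTER;
            Hostname = "localhost";
            Volume = "testvol";
        }
    }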
06:55 gbox int-0x21: Yeah test the hell out of it though.  That RH summit guide has some current benchmarks
06:56 mbukatov joined #gluster
06:56 int-0x21 I also have a jbod of spinners connected to it that i will use as a samba share and the fact its in the same solution is just brilliant
06:56 int-0x21 So yea tickboxes all over if you consider a midsize to enterprise need for ha storage
06:57 int-0x21 but now i noticed that it's way past time to get to work :) I'll try the distributed replica with sharding later today and get back with results
06:58 gbox I wonder does anybody run just ganesha nfs on its own?
06:58 int-0x21 I only did extremely minor tests with it, and just looking at it on its own i don't see why it should be used instead of the kernel one
06:58 gbox int-0x21: Yeah please share!
06:59 int-0x21 End result, no matter what the final solution is, i intend to put up a paper on it
06:59 gbox It offers parallel/clustering/HA
06:59 int-0x21 For this market segment there really aren't any guides
06:59 int-0x21 When it comes to ha storage
07:00 gbox it's weird that vmware doesn't have a solution of their own?
07:02 int-0x21 It has vsan that its pushing but its expensive
07:02 int-0x21 and it puts the load on the vm machines and i dont want that
07:02 int-0x21 vm hosts i mean
07:03 int-0x21 Anyway shower :) I'll test more and come back when i'm at work :)
07:04 gbox ha I gotta sleep :)
07:06 karthik_us joined #gluster
07:10 int-0x21 Night :) thanks for the help
07:16 kramdoss_ joined #gluster
07:16 jtux joined #gluster
07:18 poornima_ joined #gluster
07:20 om2 joined #gluster
07:43 rastar joined #gluster
07:50 kramdoss_ joined #gluster
07:57 cloph_away joined #gluster
08:16 jiffin1 joined #gluster
08:21 David_H__ joined #gluster
08:32 Prasad_ joined #gluster
08:34 Prasad__ joined #gluster
08:37 _KaszpiR_ joined #gluster
08:49 ivan_rossi joined #gluster
09:07 percevalbot joined #gluster
09:09 xavih joined #gluster
09:10 aravindavk joined #gluster
09:14 kramdoss_ joined #gluster
09:23 poornima_ joined #gluster
09:26 sanoj joined #gluster
09:30 ws2k3 joined #gluster
09:30 Humble joined #gluster
09:39 kramdoss_ joined #gluster
09:43 Prasad_ joined #gluster
10:06 humblec joined #gluster
10:07 ws2k3 joined #gluster
10:07 ws2k3 joined #gluster
10:08 ws2k3 joined #gluster
10:08 ppai joined #gluster
10:08 ws2k3 joined #gluster
10:09 ws2k3 joined #gluster
10:09 ws2k3 joined #gluster
10:12 Prasad__ joined #gluster
10:19 susant joined #gluster
10:21 MrAbaddon joined #gluster
10:29 Prasad__ joined #gluster
10:40 ahino joined #gluster
10:44 poornima_ joined #gluster
10:49 Prasad joined #gluster
10:51 rafi joined #gluster
11:12 rafi3 joined #gluster
11:17 nishanth joined #gluster
11:22 susant joined #gluster
11:37 kdhananjay left #gluster
11:45 rwheeler joined #gluster
11:50 shyam joined #gluster
11:56 bfoster joined #gluster
12:02 Jacob8432 joined #gluster
12:03 int-0x21 joined #gluster
12:11 MrAbaddon joined #gluster
12:16 ctria joined #gluster
12:23 karthik_us joined #gluster
12:29 ahino joined #gluster
12:33 nbalacha joined #gluster
12:42 karthik_us joined #gluster
12:53 kdhananjay joined #gluster
12:54 kdhananjay left #gluster
13:03 phlogistonjohn joined #gluster
13:05 msvbhat joined #gluster
13:06 int_0x21 Quick update from this morning, replicate stripe works better than replicate zfs
13:06 int_0x21 Now on Jobs: 8 (f=8): [W(8)] [100.0% done] [0KB/595.4MB/0KB /s] [0/4763/0 iops] [eta 00m:00s] for 128k random write
13:06 int_0x21 128k block that is (8g written data)
13:19 atinm joined #gluster
13:47 dominicpg joined #gluster
13:55 psony joined #gluster
13:57 boutcheee520 joined #gluster
13:58 nbalacha joined #gluster
14:08 atinm joined #gluster
14:19 skumar joined #gluster
14:24 shyam joined #gluster
14:28 jiffin joined #gluster
14:28 boutcheee520 joined #gluster
14:29 jkroon joined #gluster
14:40 gyadav joined #gluster
14:53 skylar1 joined #gluster
14:55 hmamtora joined #gluster
14:55 hmamtora_ joined #gluster
14:56 DV joined #gluster
15:10 manu__ joined #gluster
15:10 manu__ Hi! anyone can help me with git access?
15:14 xavih joined #gluster
15:17 sanoj joined #gluster
15:23 bfoster joined #gluster
15:34 phlogistonjohn joined #gluster
15:35 NuxRo joined #gluster
15:39 msvbhat joined #gluster
15:43 farhorizon joined #gluster
15:45 om2 joined #gluster
15:49 tacoboy joined #gluster
15:54 Prasad joined #gluster
16:01 jstrunk joined #gluster
16:03 msvbhat joined #gluster
16:04 ahino joined #gluster
16:04 wushudoin joined #gluster
16:07 ThHirsch joined #gluster
16:22 pladd joined #gluster
16:30 pladd joined #gluster
16:37 ThHirsch joined #gluster
16:38 skumar joined #gluster
16:39 bowhunter joined #gluster
16:43 baber joined #gluster
16:46 Asako joined #gluster
16:47 Asako Good morning.  I'm having some issues with geo-replication which appear to be related to ssh command failures.  The logs show an error like this: Popen: command returned error     cmd=rsync -aR0 --inplace --files-from=- --super --stats --numeric-ids --no-implied-dirs --existing rsync --sparse --bwlimit=128 --xattrs --acls . -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i/var/lib/glusterd/ge
16:47 Asako o-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-zDGrHQ/56571a5b06d0e4d43bc1c01811f71cc7.sock --compress root@gluster-srv3:/proc/22986/cwd        error=1
16:48 Asako any idea what would cause this?
16:56 gyadav joined #gluster
16:56 timotheus1_ joined #gluster
17:03 wushudoin joined #gluster
17:03 ThHirsch joined #gluster
17:07 Asako rsync: -oPasswordAuthentication=no: unknown option
17:07 Asako hmm
17:13 Asako figured it out, I had incompatible rsync options specified
17:14 Asako rsync: --sparse cannot be used with --inplace
17:14 glusterbot Asako: rsync's karma is now 0
17:14 Asako gluster should give warnings about stuff like that
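(For anyone hitting the same error: the extra rsync flags a geo-replication session passes can be inspected and changed per session, roughly as below; the volume and slave host names are placeholders.)
    gluster volume geo-replication mastervol slavehost::slavevol config rsync-options
    # drop whichever flag conflicts with the built-in --inplace, e.g. keep only:
    gluster volume geo-replication mastervol slavehost::slavevol config rsync-options "--bwlimit=128 --xattrs --acls"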
17:17 jkroon how can i start a single brick up again?  server rebooted, and glusterd started, but the bricks did not start ... volume start ... force causes all bricks to restart on new port numbers.
17:18 jkroon the other use-case is that one of the bricks is on a known faulty disk so until we can get that replaced I'd prefer to stick with running from the replica.
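(A hedged sketch of the usual sequence for the first case; "myvol" is a placeholder. `start ... force` is generally expected to spawn only bricks that are down and leave running ones alone, which is why the port change described here is surprising, as JoeJulian notes later in the log.)
    gluster volume status myvol        # shows which bricks are offline and which ports the running ones use
    gluster volume start myvol force   # the usual way to bring up bricks that failed to start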
17:56 DV joined #gluster
18:24 _KaszpiR_ joined #gluster
18:38 malevolent joined #gluster
18:41 JoeJulian "volume start ... force causes all bricks to restart on new port numbers" it does? That's new.
18:42 JoeJulian I wonder if that's a bug.
18:45 ahino joined #gluster
18:52 _dist joined #gluster
19:02 gyadav joined #gluster
19:13 shyam joined #gluster
19:23 MrAbaddon joined #gluster
19:27 ThHirsch joined #gluster
19:40 rafi1 joined #gluster
19:42 tacoboy joined #gluster
19:54 Jacob843 joined #gluster
20:04 boutcheee520 joined #gluster
20:10 anthony25 joined #gluster
20:32 wushudoin joined #gluster
20:33 cliluw joined #gluster
20:42 int-0x21 Now im getting somewhere :)  [0KB/767.9MB/0KB /s] [0/6143/0 iops] on 128k blocksize
20:43 int-0x21 random write
20:43 int-0x21 replicate 3 arbiter 1
20:44 int-0x21 I think its time to move on to the nfs part of the issue now :) Something for tomorrow :)
20:53 xavih_ joined #gluster
20:55 Gambit15 joined #gluster
21:08 anthony25 joined #gluster
21:15 major joined #gluster
21:39 jiffin joined #gluster
21:39 ThHirsch joined #gluster
21:41 skylar1 joined #gluster
21:41 bowhunter joined #gluster
21:58 jbrooks joined #gluster
21:59 mallorn1 joined #gluster
21:59 delhage_ joined #gluster
22:04 decayofmind joined #gluster
22:04 wistof joined #gluster
22:08 owlbot joined #gluster
23:14 andrws joined #gluster
23:14 major joined #gluster
23:20 cholcombe anyone using the diagnostics.dump-fs-stats?  I turned it on and it kept appending to the same file until my FS blew up haha
23:20 cholcombe i thought it overwrote the file every x seconds
23:20 cholcombe i'm running gluster 3.12
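(If the dumping needs to be stopped or checked, something along these lines should work; "myvol" is a placeholder and the option name is copied verbatim from the message above rather than verified.)
    gluster volume get myvol all | grep diagnostics       # list the diagnostics.* options currently in effect
    gluster volume reset myvol diagnostics.dump-fs-stats  # put the option back to its default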
23:21 protoporpoise joined #gluster
23:21 major joined #gluster
23:37 protoporpoise @JoeJulian - ended up giving that talk on Gluster last night - https://smcleod.net/tech/getting-started-with-gluster/
23:37 msvbhat joined #gluster
23:43 cholcombe i don't get it.  there's absolutely nothing going into this volume.  it's not even mounted yet and it's blowing up my disk with fop logs
