
IRC log for #gluster, 2016-11-18


All times shown according to UTC.

Time Nick Message
00:19 TvL2386 joined #gluster
00:25 Klas joined #gluster
00:29 Caveat4U joined #gluster
00:35 zat left #gluster
00:44 TvL2386 joined #gluster
00:55 Muthu joined #gluster
01:29 shdeng joined #gluster
01:49 prth joined #gluster
02:07 Gambit15 joined #gluster
02:18 magrawal joined #gluster
02:20 ahino joined #gluster
02:29 haomaiwang joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:09 fang64 joined #gluster
03:19 Javezim From a gluster-client our server is coming back as "Read-Only Server" this has just happened out of the blue
03:19 Javezim Anyone know what this is about?
03:19 Javezim through the client i can touch a file to create and modify it, but for example re-naming an existing file fails
03:30 Caveat4U joined #gluster
03:38 Javezim itisravi Do you know, can you remove-bricks when the bricks machine is offline
03:38 Javezim Ie. We've turned the Arbiter off
03:38 Javezim If we run that Remove-bricks command from one of the brick machines, that'll be okay?
03:38 dnorman joined #gluster
04:04 kramdoss_ joined #gluster
04:06 itisravi joined #gluster
04:16 riyas joined #gluster
04:17 atinm joined #gluster
04:17 suliba joined #gluster
04:17 buvanesh_kumar joined #gluster
04:17 jiffin joined #gluster
04:18 riyas joined #gluster
04:18 Shu6h3ndu joined #gluster
04:19 buvanesh_kumar joined #gluster
04:34 ankitraj joined #gluster
04:36 dnorman joined #gluster
04:37 sanoj joined #gluster
04:40 rafi joined #gluster
04:46 kdhananjay joined #gluster
04:46 RameshN joined #gluster
04:55 prth joined #gluster
05:07 apandey joined #gluster
05:12 prasanth joined #gluster
05:18 aravindavk joined #gluster
05:21 JoeJulian Javezim: All the management daemons that participate in the volume, except for the one being removed, need to be up.
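For context, a rough sketch of the remove-brick flow being discussed; the volume and brick names are placeholders, not taken from this log. Removing a distribute brick uses start/status/commit, while shrinking a replica set (e.g. dropping an arbiter) instead takes the reduced replica count and force:

    gluster volume remove-brick myvol server1:/bricks/b1 start    # begin migrating data off the brick
    gluster volume remove-brick myvol server1:/bricks/b1 status   # wait until it reports completed
    gluster volume remove-brick myvol server1:/bricks/b1 commit   # then finalize the removal
    # dropping an arbiter brick from a replica 3 arbiter 1 set is instead roughly:
    gluster volume remove-brick myvol replica 2 arbiter1:/bricks/b1 force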
05:21 ndarshan joined #gluster
05:33 Javezim Awesome, We're finally back to a working state
05:33 Javezim Thanks @JoeJulian itisravi and @kshlm for all your help this week
05:35 JoeJulian <cheer!>
05:36 itisravi Javezim: no problem...it would be great if you can do some testing of arbiter add-brick to see if there is really an issue with healing other than the network issues you observed.
05:37 ppai joined #gluster
05:38 JoeJulian I should submit another rfe... split-brain directories that are only split-brain because of trusted.dht should just automatically fix-layout that directory.
05:38 JoeJulian @file a bug
05:38 glusterbot JoeJulian: I do not know about 'file a bug', but I do know about these similar topics: 'fileabug'
05:38 JoeJulian pfft
05:38 JoeJulian file a bug
05:38 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
05:42 Muthu joined #gluster
05:43 JoeJulian Javezim: In case you're interested in following this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1396341
05:43 glusterbot Bug 1396341: unspecified, unspecified, ---, bugs, NEW , Split-brain directories that only differ by trusted.dht should automatically fix-layout
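A hedged sketch of how such a directory gets inspected and worked around by hand today, which is what the RFE would automate; the volume, brick and directory names are placeholders, and note that fix-layout runs volume-wide rather than per directory:

    getfattr -m . -d -e hex /bricks/b1/some/dir      # compare trusted.glusterfs.dht across the bricks
    gluster volume rebalance myvol fix-layout start  # recalculates directory layouts without migrating data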
05:44 skoduri joined #gluster
05:45 kramdoss_ joined #gluster
05:47 hgowtham joined #gluster
05:52 Saravanakmr joined #gluster
05:55 Javezim @JoeJulian Anything to get rid of split-brains I am super keen on
06:00 kotreshhr joined #gluster
06:16 nbalacha joined #gluster
06:21 msvbhat joined #gluster
06:24 derjohn_mob joined #gluster
06:25 Philambdo joined #gluster
06:29 prth joined #gluster
06:59 jtux joined #gluster
07:08 rastar joined #gluster
07:11 Klas joined #gluster
07:12 Klas joined #gluster
07:32 Muthu joined #gluster
07:33 jtux joined #gluster
07:38 riyas joined #gluster
07:44 [diablo] joined #gluster
07:48 [diablo] joined #gluster
07:49 ivan_rossi joined #gluster
07:49 ivan_rossi joined #gluster
07:50 owitsches joined #gluster
07:51 nishanth joined #gluster
07:59 owitsches joined #gluster
08:05 BuBU joined #gluster
08:06 BuBU Hi
08:06 glusterbot BuBU: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:09 BuBU I'm using latest gluster 3.8 under ArchLinux… creating and starting volumes is fine
08:09 BuBU but when trying to mount the volume locally using mount -t glusterfs localhost:/docker /var/docker
08:09 BuBU I get WARNING: getfattr not found, certain checks will be skipped..
08:10 BuBU and attr is installed !
08:10 BuBU ~> getfattr
08:10 BuBU Usage: getfattr [-hRLP] [-n name|-d] [-e en] [-m pattern] path...
08:10 BuBU any hint on that ?
08:11 BuBU I've 6 nodes and created a striped replicated volume (replica=2 and stripe=3)
08:12 nishanth_lunch joined #gluster
08:15 nishanth_lunch joined #gluster
08:17 JoeJulian BuBU: Do you have "which" installed? (It's core, so you should)
08:17 JoeJulian The mount script runs "which getfattr" and throws that warning if it errors
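Roughly the kind of check being described (a sketch, not the verbatim mount.glusterfs script): if "which" itself is missing, the test fails and the warning fires even though getfattr is installed.

    if ! which getfattr >/dev/null 2>&1; then
        echo "WARNING: getfattr not found, certain checks will be skipped.."
    fi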
08:22 abyss^ JoeJulian: what are you doing so early?;)
08:23 abyss^ (or so late in your case;))
08:23 JoeJulian abyss^: Moving some files during a maintenance window.
08:24 JoeJulian I was going to script it and do it next week while I was sleeping, but a window came up suddenly and I decided to take advantage.
08:24 abyss^ oh, and you use that time to help people intead of sleeping?;)
08:24 abyss^ ok :)
08:24 JoeJulian If I'm here waiting for stuff, I might as well. :)
08:24 abyss^ :)
08:25 abyss^ eerrr, so... ;)
08:25 JoeJulian Plus, I run Arch as well so it piqued my interest.
08:25 abyss^ ArchLinux?
08:25 JoeJulian Yep
08:26 abyss^ yeah, I used it for a long time at home but I'm too old to struggle with almost every upgrade;) I just installed something that always works ;)
08:26 BuBU JoeJulian: ok which is missing ! It's a minimal installation of arch
08:27 BuBU thx
08:27 JoeJulian You're welcome.
08:27 abyss^ and it doesn't bother me when I upgrade the system or need something ;) I prefer everything to work out of the box now;)
08:27 sbulage joined #gluster
08:28 JoeJulian I'll ask Sergej to add that dependency.
08:29 abyss^ JoeJulian: btw: ;) question:D Do you remember when I paste result of getfattr -m .  -d -e hex and I saw only one trusted.afr.saas_bookshelf-client-X= value? Should I reset that only one value?
08:30 JoeJulian Only if the other copy contradicts this one. Otherwise it shouldn't matter.
08:30 abyss^ wait I have to check what does mean contradicts lol
08:31 abyss^ JoeJulian: ok, it is in split-brain, so...
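Since the file is in split-brain, a hedged sketch of the CLI-driven resolution available in recent releases, rather than editing trusted.afr xattrs by hand; the volume name, brick and file path are placeholders, and which copy to keep is the admin's call:

    gluster volume heal saas_bookshelf split-brain source-brick server1:/bricks/b1 /path/within/volume/file
    # depending on version, policy-based variants also exist, e.g.:
    gluster volume heal saas_bookshelf split-brain bigger-file /path/within/volume/file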
08:35 devyani7 joined #gluster
08:41 flomko joined #gluster
08:42 prth joined #gluster
08:45 jri joined #gluster
08:46 fsimonce joined #gluster
08:54 flomko Hi all! I have been using Gluster for a few days and i have some trouble. My env is simple - a distributed volume (UMG) across 2 bricks (brick1 & brick2). I started removing
08:54 flomko gluster volume remove-brick UMG glerver1:/data/brick1/umg1 start
08:54 flomko and rebalancing files from brick1 to brick2 started, but in my case i have a process downloading a bunch of files to this volume. Yesterday was fine, but now all traffic goes to brick1, which is in the process of being removed
08:55 nbalacha flomko, that is the expected behaviour. As you are removing the brick, no new creates will go there
08:55 nbalacha sorry - misread
08:55 nbalacha are you saying the files are going to the removed brick?
08:55 nbalacha flomko, which version are you using?
08:56 flomko brick1 has not been removed yet
08:56 flomko [root@glerver1 ~]# gluster volume remove-brick UMG glerver1.muzis.ru:/data/brick1/umg1 status
08:56 flomko Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
08:56 flomko ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
08:56 flomko localhost             8096        36.7GB          8333             0             0          in progress        1:40:6
08:56 glusterbot flomko: -------'s karma is now -9
08:56 glusterbot flomko: ---------'s karma is now -6
08:56 glusterbot flomko: ---------'s karma is now -7
08:56 glusterbot flomko: ---------'s karma is now -8
08:56 glusterbot flomko: ---------'s karma is now -9
08:56 glusterbot flomko: ---------'s karma is now -10
08:56 glusterbot flomko: ----------'s karma is now -5
08:56 glusterbot flomko: ------------'s karma is now -3
08:56 flomko glusterd 3.8
08:57 panina joined #gluster
08:58 flomko i tried to restart the remove-brick process, it didn't help
09:01 nbalacha flomko, please don't do that
09:01 nbalacha flomko, how do you know files are going to the brick being removed?
09:02 flying joined #gluster
09:06 flomko i have a monitoring system - zabbix, and the network bitrate on that node shows it
09:06 kramdoss_ joined #gluster
09:07 flomko restarting the download is a good idea - everything goes correctly at first, but i have tried this more than once, and the problem repeats again
09:13 Slashman joined #gluster
09:14 nbalachandran_ joined #gluster
09:17 derjohn_mob joined #gluster
09:20 ashiq joined #gluster
09:21 juhaj joined #gluster
09:24 pfactum @paste
09:24 glusterbot pfactum: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
09:24 pfactum flomko, ^^
09:38 mhulsman joined #gluster
09:39 owitsches joined #gluster
09:39 nbalachandran_ flomko, do you see any messages in the client logs?
09:41 suliba joined #gluster
09:42 joex32 joined #gluster
09:44 poornima_ joined #gluster
09:47 nishanth joined #gluster
09:49 hackman joined #gluster
09:49 toredl joined #gluster
09:52 Gnomethrower joined #gluster
09:54 devyani7 joined #gluster
09:56 owitsches joined #gluster
09:57 BuBU29 joined #gluster
10:02 msvbhat joined #gluster
10:06 joex32 Hi All, i'm experience high memory usage (70%) of glusterd process. I've using glusterfs version 3.7.11. Anyone faced similar issue?
10:15 msvbhat joined #gluster
10:19 cloph high memory usage by itself isn't a bad thing - are you starving memory/does your machine swap already?
10:21 joex32 hi <cloph>, our vm has 16GB of RAM and we're using 1GB swap space. we've used up all the available swap space.
10:22 joex32 I know, the setup is not optimal. Should i request additional swap space?
10:24 cloph gluster only using 11 gig here, but the  machine has 256GB RAM, so no prob at all.. what kind of volume do you use (distributed replica here, 2x(2+1))
10:26 joex32 distributed replicated volume, 2 node each 6 bricks. i know it's not recommended to use 2 node because of possibility of split-brain :)
10:32 bkunal joined #gluster
10:40 post-factum joex32, update to latest 3.7 available
10:41 post-factum joex32, lots of memleaks fixed
10:41 post-factum joex32, incl. glusterd-related
10:41 BuBU29 is gluster on top of zfs considered production ready ? any caveats ?
10:42 BuBU29 I've 6 nodes with 7TB each I want to use with replica=2 and stripe=3
10:42 joex32 <post-factum>, thanks, we're also experiencing issue with geo-replication which is fixed in latest releases.
10:43 BuBU29 the 6 nodes are 72 cores + 256GB ram each
10:43 BuBU29 on that nodes there will be gluster + docker (using rancher)
10:44 post-factum BuBU29, by default, any other filesystem that is not recommended by your distro vendor, will eat your data
10:44 flomko <nbalachandran_> what kind of message could there be? i checked both client and server logs - but only handshake info and dir_selfheal from the client
10:45 BuBU29 post-factum: using zfs (under ArchLinux) with other projects since few years… So zfs is quite stable for me..
10:46 post-factum BuBU29, then, you are on your own, and only you can conclude, whether it is stable :)
10:46 BuBU29 but I'm quite new to gluster
10:47 post-factum BuBU29, glusterfs should work on top of any posix-compatible fs
10:47 BuBU29 the first test I did today was a mess… probably because of misconfiguration…
10:47 post-factum that is likely
10:48 BuBU29 I did a gluster volume with replica=2 and stripe=3 for 6 nodes
10:48 post-factum BuBU29, striping is deprecated, do not use it
10:48 cloph for filesystems like xfs you'd make sure that the extended attributes fit into the inode (so you'd configure it to use larger than the default inode size), not sure whether that is needed for zfs as well..
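As an illustration of cloph's point for xfs (the device is a placeholder, and whether zfs needs an equivalent is, as noted, unclear), the commonly cited format option is a 512-byte inode so gluster's extended attributes stay inline:

    mkfs.xfs -i size=512 /dev/sdX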
10:48 BuBU29 I mounted on each node something like localhost:/docker /var/docker glusterfs defaults,_netdev 0 0
10:49 BuBU29 then on the first host I created a file in /var/docker/ and rebooted one of the nodes
10:49 sbulage joined #gluster
10:49 BuBU29 first when doing simple df on other nodes it hangs for few minutes (while the server is rebooting)
10:50 BuBU29 is it something related with my mount options ?
10:50 post-factum BuBU29, everything is unrelated, just because don't use striping
10:50 glusterbot post-factum: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
10:50 post-factum glusterbot-- u dumb
10:50 glusterbot post-factum: glusterbot's karma is now 9
10:50 post-factum BuBU29, unfortunately, docs still contain striping info, but just don't use that
10:51 BuBU29 post-factum: Oh ok… so I can just use replica+distribution ? with just replica=2 ?
10:51 post-factum BuBU29, yes, and enable sharding if you need to span huge files over multiple distributed bricks
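A hedged sketch of enabling sharding on a volume; the volume name and block size are placeholders, and sharding should be decided on before real data lands on the volume:

    gluster volume set docker features.shard on
    gluster volume set docker features.shard-block-size 64MB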
10:52 post-factum BuBU29, it would be better replica 3 arbiter 1, though
10:52 post-factum but you have 6 nodes
10:53 BuBU29 actually I've 3 enclosures of 2 nodes each…
10:53 post-factum okay then
10:54 BuBU29 is there a way to force replication per enclosure
10:54 BuBU29 ?
10:54 post-factum BuBU29, yes, when you create the volume, bricks order matters
10:54 BuBU29 I mean the replica does not occur in the same enclosure
10:54 BuBU29 ok so if I have srv01-06 what should be the order ?
10:55 post-factum but I dunno how they are layered within enclosures
10:56 BuBU29 srv01+02 on enclosure 1, srv03+04 on 2 and srv05+06 on 3
10:56 BuBU29 ideally I would like the replicas to occur on 1+3+5 and 2+4+6
10:57 BuBU29 I mean no replica on the same enclosure
10:57 post-factum Each replica_count consecutive bricks in the list you give will form a replica set, with all replica sets combined into a volume-wide distribute set.
10:58 post-factum 2 3 4 5 6 1
10:59 BuBU29 ok then should be something gluster volume create docker replica=3 transport tcp srv01:/brick srv03:/brick srv05:/brick srv02:/brick srv04:/brick srv06:/brick
10:59 post-factum because (distributed (replica 2 3) (replica 4 5) (replica 6 1))
10:59 post-factum you want replica 3?
11:00 BuBU29 that was your advice no ?
11:00 post-factum no
11:00 BuBU29 BuBU29, it would be better replica 3 arbiter 1, though
11:00 post-factum yes, arbiter
11:00 post-factum but that doesn't fit well for 6 nodes
11:01 BuBU29 in the doc: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
11:02 glusterbot Title: Arbiter volumes and quorum options - Gluster Docs (at gluster.readthedocs.io)
11:02 BuBU29 seems there are 6 nodes right ?
11:02 post-factum BuBU29, you will not be able to utilize free space of arbiter brick for actual data
11:03 post-factum BuBU29, you can go with that if you want, but i doubt you want that
11:04 BuBU29 with 7TB raw space on each nodes what should be the total usable capacity ?
11:05 post-factum BuBU29, 1 disk per node?
11:05 post-factum BuBU29, 1 RAID per node?
11:05 BuBU29 no this is already hardware raid5 of 8*1TB
11:06 post-factum BuBU29, okay, so 1 virtual disk per node of 7 TB
11:06 post-factum RAID5 O_o???
11:06 post-factum you are kidding
11:06 BuBU29 raid10 is adviced ?
11:06 post-factum noone uses raid5 nowadays if he is sober and healthy
11:06 flomko joined #gluster
11:07 post-factum raid6 for your case, i guess
11:07 post-factum you will get 6 TB of usable space per node, but you will be able to survive if 2 disks are out
11:08 post-factum in this case if you have 6 nodes of 6 tb each = 36 tb common raw, and replica 2, then 18 tb of usable space
11:08 post-factum if replica 3, then 12 tb
11:08 BuBU29 indeed usually using raid6 but as I planned to use gluster with replica, was thinking raid5 was acceptable
11:09 post-factum how many space do you need, actually?
11:09 post-factum *much
11:09 post-factum because there are options
11:09 BuBU29 actually this is a development environment for our products…
11:09 post-factum ah, test lab?
11:09 BuBU29 18TB is far acceptable :)
11:09 BuBU29 and test lab yes
11:10 BuBU29 What I need is trust on this setup ! :)
11:10 post-factum do you plan to expand or shrink it somehow? new disks?
11:10 BuBU29 in comming months I will have bigger setup for my production...
11:11 BuBU29 replacing actual aging EMC VNX5300 with a gluster setup
11:11 * post-factum will be back in 1 hour
11:11 BuBU29 the lab env will not need to be expanded..
11:25 jiffin joined #gluster
11:28 devyani7 joined #gluster
11:29 blues-man joined #gluster
11:30 blues-man hello, I was wondering how to modify the ganesha export template. I thought it was the hook in /var/lib/glusterd/hooks/1/start/post/S31ganesha-start.sh, but modifying it on all ganesha nodes doesn't put my changes in /etc/ganesha/exports/export_my.conf - there are still default settings
11:33 arc0 joined #gluster
11:33 flomko joined #gluster
11:36 flomko joined #gluster
11:36 flomko left #gluster
11:36 flomko joined #gluster
11:46 Lee1092 joined #gluster
11:50 derjohn_mob joined #gluster
11:55 ju5t joined #gluster
12:01 post-factum BuBU29, so ok, go with raid6 and replica 2 with the order i've written above
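Putting that together, a sketch of what the create command could look like with the 2-3-4-5-6-1 ordering from earlier; hostnames and brick paths are placeholders:

    gluster volume create docker replica 2 transport tcp \
        srv02:/data/brick srv03:/data/brick \
        srv04:/data/brick srv05:/data/brick \
        srv06:/data/brick srv01:/data/brick
    # consecutive pairs form the replica sets - (srv02,srv03) (srv04,srv05) (srv06,srv01) -
    # so no replica pair sits inside the same 2-node enclosure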
12:09 msvbhat_ joined #gluster
12:10 Karan joined #gluster
12:13 Wizek joined #gluster
12:16 kotreshhr left #gluster
12:24 aravindavk joined #gluster
12:26 flomko joined #gluster
12:36 rafi1 joined #gluster
12:47 p7mo joined #gluster
12:54 ira joined #gluster
12:56 haomaiwang joined #gluster
13:02 BuBU29 post-factum: thx
13:02 BuBU29 post-factum: do you have any hints on the mount options ?
13:02 post-factum BuBU29, default
13:02 BuBU29 on each node I will have local mounts
13:03 BuBU29 something like: localhost:/docker /var/docker glusterfs defaults,_netdev 0 0
13:03 post-factum _netdev,noauto,x-systemd.automount
13:04 rafi1 joined #gluster
13:05 post-factum however, afaik, there are still some issues with local mounts on boot
13:06 post-factum so i'd just go with dedicated systemd unit that deps on glusterd to start
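A sketch of both variants being suggested, using the paths from BuBU29's example; the mount unit is an assumption about how one might wire the dependency, and its file name has to match the mount point (var-docker.mount for /var/docker):

    # fstab variant with the suggested options:
    localhost:/docker  /var/docker  glusterfs  defaults,_netdev,noauto,x-systemd.automount  0 0

    # or a dedicated unit, e.g. /etc/systemd/system/var-docker.mount:
    [Unit]
    Requires=glusterd.service
    After=glusterd.service network-online.target

    [Mount]
    What=localhost:/docker
    Where=/var/docker
    Type=glusterfs
    Options=defaults,_netdev

    [Install]
    WantedBy=multi-user.target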
13:06 BuBU29 post-factum: is it normal that I still have df hang while rebooting one of the boxes ?
13:07 post-factum BuBU29, no
13:08 BuBU29 this is ok after 1 minute or so...
13:08 post-factum BuBU29, you should check why glusterfsd on that box are terminated in another way than SIGTERM, and also consider adjusting timeout which is 42 secs by default
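The 42-second default being referred to is network.ping-timeout; a hedged example of lowering it (the value is illustrative, and very low values can cause spurious disconnects under load):

    gluster volume set docker network.ping-timeout 10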
13:12 BuBU29 I may need to use backupvolfile-server option ?
13:13 BuBU29 to use both replica mount on a given server
13:16 longsleep joined #gluster
13:18 ankitraj joined #gluster
13:18 nbalachandran_ joined #gluster
13:20 owitsches joined #gluster
13:22 post-factum yeah, why not
13:22 post-factum BuBU29, ^^
13:24 B219561 joined #gluster
13:30 haomaiwang joined #gluster
13:31 plarsen joined #gluster
13:38 d0nn1e joined #gluster
13:45 bluenemo joined #gluster
13:52 Longkong joined #gluster
13:52 Longkong Hi, I have a question about gluster. When I connect using nfs. What happens if the server I specify in the fstab is down? Will it disconnect?
13:53 mhulsman joined #gluster
13:56 ws2k3 Longkong yes it will disconnect
13:57 ws2k3 Longkong but it's not recommended to put nfs servers in fstab because it may cause issues booting the machine when the nfs server is down. instead you could add the nfs mount in your rc.local (debian/ubuntu), then it won't have issues booting when your nfs server is down
13:58 Plam hey ws2k3 you are here too? ^^
14:00 unclemarc joined #gluster
14:01 Longkong ws2k3: ok, thanks. So true HA is only possible with the native client?
14:02 hchiramm joined #gluster
14:04 ankitraj joined #gluster
14:09 bluenemo joined #gluster
14:14 shyam joined #gluster
14:14 derjohn_mob joined #gluster
14:16 nbalachandran_ joined #gluster
14:17 vbellur joined #gluster
14:21 johnmilton joined #gluster
14:24 MadPsy ws2k3, I've not heard about problems with nfs in fstab before - what problems would it cause except a timeout if the NFS export isn't available ?
14:27 mhulsman joined #gluster
14:35 jiffin1 joined #gluster
14:35 skylar joined #gluster
14:37 dnorman joined #gluster
14:40 msvbhat joined #gluster
14:44 nbalachandran_ joined #gluster
14:47 hchiramm joined #gluster
14:50 longsleep I have troubles mounting a root-squash enabled volume with root-squash=off mount option, mount works but root is still squashed - is that supposed to work?
14:52 squizzi joined #gluster
14:58 Caveat4U joined #gluster
15:09 mhulsman joined #gluster
15:29 nbalachandran_ joined #gluster
15:31 Caveat4U joined #gluster
15:34 Vaelatern joined #gluster
15:39 farhorizon joined #gluster
15:42 Caveat4U joined #gluster
15:42 RameshN joined #gluster
15:42 Caveat4U joined #gluster
15:45 farhoriz_ joined #gluster
15:48 annettec joined #gluster
15:52 dnorman joined #gluster
15:56 dnorman joined #gluster
15:58 shersi joined #gluster
16:01 ivan_rossi Longkong: you can have NFS HA using ganesha
16:06 farhorizon joined #gluster
16:07 wushudoin joined #gluster
16:09 farhoriz_ joined #gluster
16:12 jtux joined #gluster
16:21 nbalachandran_ joined #gluster
16:25 Caveat4U joined #gluster
16:26 BitByteNybble110 joined #gluster
16:42 farhorizon joined #gluster
16:43 shersi Hi all, What is the best way to backup glusterfs volume? I'm also using geo-replication and daily backup using rsync. The performance of rsyncing glusterfs volume  is really slow.  Does anyone know how to improve this?
16:44 JoeJulian There's been a lot of work figuring out how to make rsync more efficient for the amount of storage you can get with clustered systems. The result of that is geo-replication.
16:45 Caveat4U joined #gluster
16:46 JoeJulian At some scale, unfortunately, it becomes difficult (if not impossible) to back up. At that point all you can do is engineer for the greatest possible reliability within the available budget.
16:49 snehring JoeJulian, Georeplication uses rsync behind the scenes right?
16:49 shersi <@JoeJulian>: Thanks for the reply. i haven't reached that scale yet, currently backing up a 1TB volume using rsync and it takes around 17hrs.
16:52 JoeJulian snehring: Right.
16:53 snehring does it tunnel over ssh by default, and if so is there a way to turn that off?
16:55 snehring not suitable for every environment, but at least in my case it's a dedicated fiber line to the replication target
16:57 JoeJulian I think it does, yes. I'd have to take a look to see if there's a way to avoid that... Not sure off the top of my head. I, personally, have never had a use case for it.
16:57 dnorman joined #gluster
16:58 snehring We don't know yet if the hardware will be the bottleneck or the software, or the ssh encryption
16:58 snehring but in 'just rsync' tests we actually got improvements with rsyncd over ssh-rsync
16:58 alvinstarr joined #gluster
16:59 snehring shersi, are there lots of little files?
17:01 shersi <snehring> - the largest file is 20MB but there are a lot of small files.
17:03 JoeJulian snehring: There's a table just above https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Geo%20Replication/#starting-geo-replication which shows the ability to specify your own ssh command. You could perhaps make use of that.
17:03 glusterbot Title: Geo Replication - Gluster Docs (at gluster.readthedocs.io)
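The options table being referenced includes an ssh command setting for the geo-rep session; a hedged sketch of the shape of that call (master/slave names, the exact option spelling in your release, and the ssh flags are assumptions to check against the docs):

    gluster volume geo-replication mastervol slavehost::slavevol config ssh-command \
        'ssh -p 22 -i /var/lib/glusterd/geo-replication/secret.pem'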
17:04 snehring JoeJulian, nice that's probably exactly what I need
17:05 snehring shersi, that's probably why iirc rsync (or more likely the underlying filesystem) kinda chokes on all the stat() calls for tiny files
17:10 snehring shersi, are there many files per directory (like 10s of 1000s or more) or are they just spread out all over the place?
17:12 shersi The folder structure is /vol/content/year/months/day/hour/minute/<files>
17:12 snehring I ask because there's a mkfs.xfs tunable that supposedly helps with directories containing 100s of 1000s of files (ftype=1)
17:12 snehring may not be your issue
17:15 farhorizon joined #gluster
17:16 MidlandTroy joined #gluster
17:17 jkroon joined #gluster
17:18 derjohn_mob joined #gluster
17:27 jiffin joined #gluster
17:27 Caveat4U joined #gluster
17:35 Caveat4U joined #gluster
17:36 bluenemo joined #gluster
17:44 squizzi joined #gluster
17:51 Gambit15 joined #gluster
17:54 rastar joined #gluster
17:58 rastar_ joined #gluster
17:59 Caveat4U joined #gluster
18:01 dnorman joined #gluster
18:01 jri joined #gluster
18:06 Caveat4U joined #gluster
18:10 rastar joined #gluster
18:17 jiffin joined #gluster
18:17 vbellur joined #gluster
18:21 ivan_rossi left #gluster
18:36 msvbhat joined #gluster
19:01 kpease_ joined #gluster
19:01 dnorman joined #gluster
19:05 msvbhat joined #gluster
19:10 kpease joined #gluster
19:13 haomaiwang joined #gluster
19:21 mhulsman joined #gluster
19:26 ju5t joined #gluster
19:31 mhulsman joined #gluster
19:40 Jacob843 joined #gluster
19:50 farhoriz_ joined #gluster
19:59 kpease joined #gluster
20:01 shyam1 joined #gluster
20:02 dnorman joined #gluster
20:48 dnorman joined #gluster
20:49 ic0n joined #gluster
21:08 Caveat4U joined #gluster
21:13 ic0n joined #gluster
21:15 Caveat4U joined #gluster
21:25 kpease joined #gluster
21:29 squizzi joined #gluster
21:34 panina joined #gluster
21:52 shyam joined #gluster
22:07 annettec joined #gluster
22:08 annettec joined #gluster
22:11 msvbhat joined #gluster
22:18 squizzi joined #gluster
22:39 raghu joined #gluster
22:42 Caveat4U joined #gluster
22:45 Caveat4U_ joined #gluster
22:46 farhorizon joined #gluster
22:55 dnorman joined #gluster
23:06 Caveat4U joined #gluster
23:09 cliluw joined #gluster
23:25 hchiramm joined #gluster
23:32 Wizek joined #gluster
23:40 Wizek joined #gluster
23:42 Caveat4U joined #gluster
