
IRC log for #gluster, 2017-10-20

| Channels | #gluster index | Today | | Search | Google Search | Plain-Text | summary

All times shown according to UTC.

Time Nick Message
00:15 BitByteNybble110 joined #gluster
00:16 davidb2111 joined #gluster
00:20 masber joined #gluster
00:31 daMaestro joined #gluster
01:55 ilbot3 joined #gluster
01:55 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:14 MrAbaddon joined #gluster
02:46 MrAbaddon joined #gluster
02:57 rastar joined #gluster
03:07 shyu joined #gluster
03:09 shyam joined #gluster
03:35 kramdoss_ joined #gluster
03:57 plarsen joined #gluster
04:13 Humble joined #gluster
04:17 MrAbaddon joined #gluster
04:26 sadbox joined #gluster
04:48 sanoj joined #gluster
04:55 rouven joined #gluster
05:03 kramdoss_ joined #gluster
05:10 rouven joined #gluster
05:15 rouven joined #gluster
05:20 rouven joined #gluster
05:25 rouven joined #gluster
05:44 shortdudey123 joined #gluster
05:46 daMaestro|isBack joined #gluster
06:22 jtux joined #gluster
06:29 omie888777 joined #gluster
06:34 sage__ joined #gluster
06:35 omie888777 joined #gluster
06:45 ivan_rossi joined #gluster
07:24 bEsTiAn joined #gluster
07:35 rouven joined #gluster
07:40 fsimonce joined #gluster
08:33 bartden joined #gluster
08:34 jkroon joined #gluster
08:35 bartden Hi, what is the “fastest” method for syncing data from a gluster A to gluster B? I would like to preserve ACL rights
08:36 misc rsync would be what I would do
08:36 misc not sure if that's the fastest however
08:37 kramdoss_ joined #gluster
08:37 fsimonce joined #gluster
08:46 _KaszpiR_ joined #gluster
09:06 skumar joined #gluster
09:09 map1541 joined #gluster
09:11 anoopcs joined #gluster
09:14 ivan_rossi left #gluster
09:32 msvbhat joined #gluster
09:37 buvanesh_kumar joined #gluster
09:37 shdeng joined #gluster
10:13 mrcirca_ocean joined #gluster
10:14 mrcirca_ocean Hello, I have a problem. I try to add-brick to an existing volume but it says: volume add-brick: failed: Host gfs2.home.mrcirca.com is not in 'Peer in Cluster' state
10:15 mrcirca_ocean when i do gluster peer status , i get this Hostname: gfs2
10:15 mrcirca_ocean Uuid: 7d12896b-dacb-43e4-b989-bafb49227076
10:15 mrcirca_ocean State: Probe Sent to Peer (Connected)
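A peer stuck in "Probe Sent to Peer" usually means glusterd on the probed host never answered. A common recovery (a hedged sketch, not from the log) is to confirm glusterd is reachable on port 24007 and re-run the probe:

```shell
# On gfs2: is glusterd running and listening?
systemctl status glusterd
# On the probing node: drop the half-finished probe and retry.
gluster peer detach gfs2.home.mrcirca.com force
gluster peer probe gfs2.home.mrcirca.com
gluster peer status   # expect: State: Peer in Cluster (Connected)
```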
10:18 bartden Hi, if i have a 4 node distributed cluster and i mount a volume on a client using NFS, will all data transfer via the server that i mount, even when the data is on another node?
10:38 mattmcc joined #gluster
10:47 nbalacha joined #gluster
10:49 rouven joined #gluster
10:58 sanoj joined #gluster
11:10 msvbhat joined #gluster
11:26 Wizek_ joined #gluster
11:31 msvbhat joined #gluster
11:51 buvanesh_kumar joined #gluster
12:24 brayo joined #gluster
12:32 dkossako joined #gluster
12:34 dkossako hi there, is it possible to view any replication status? i needed to change disk in on of gluster host to new one, it is completely empty and I would like to know when it will be synced.
12:37 plarsen joined #gluster
12:46 ThHirsch joined #gluster
12:47 baber joined #gluster
12:49 Kins joined #gluster
13:04 _dist joined #gluster
13:06 nbalacha joined #gluster
13:10 tom[] major: if, in an lxc, systemd runs something and it needs to send an email via the mta on its containing os, what would you suggest? install heirloom mailx on the container to talk to the host's mta?
13:10 cloph geo status detail will show you date/state it is synced at and whether a checkpoint has been reached or not.
13:10 cloph There is no other "progress" feature than that.
13:28 john51 joined #gluster
13:31 dkossako I'm reaching timeout when trying to check geo status
13:31 dkossako gluster volume geo-replication backups status / Error : Request timed out / geo-replication command failed
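cloph's "geo status detail" refers to the geo-replication status command; the full form takes the slave URL as well (the slave host and volume names below are placeholders):

```shell
gluster volume geo-replication backups slavehost::backups status detail
```

If even that times out, glusterd itself may be unresponsive, so checking the glusterd log on the master is a reasonable next step.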
13:39 shyam joined #gluster
13:42 annhah joined #gluster
13:46 hmamtora joined #gluster
13:49 baber joined #gluster
13:49 kramdoss_ joined #gluster
13:50 side_control joined #gluster
13:55 [diablo] joined #gluster
14:02 nbalacha joined #gluster
14:04 ivan_rossi joined #gluster
14:09 skylar1 joined #gluster
14:10 vbellur joined #gluster
14:13 jstrunk joined #gluster
14:17 dr-gibson joined #gluster
14:17 dr-gibson joined #gluster
14:23 annhah left #gluster
14:24 msvbhat joined #gluster
14:35 pladd joined #gluster
14:43 baber joined #gluster
14:45 farhorizon joined #gluster
14:49 Wizek_ joined #gluster
15:03 wushudoin joined #gluster
15:05 baber joined #gluster
15:26 kpease_ joined #gluster
15:34 ivan_rossi left #gluster
15:38 kpease joined #gluster
15:38 kpease_ joined #gluster
15:43 vbellur joined #gluster
15:45 phlogistonjohn joined #gluster
15:45 vbellur joined #gluster
15:45 kpease joined #gluster
15:46 kpease_ joined #gluster
15:52 omie888777 joined #gluster
15:57 xavih joined #gluster
16:31 kramdoss_ joined #gluster
16:35 shyam joined #gluster
16:59 rouven joined #gluster
17:04 rouven joined #gluster
17:08 rouven joined #gluster
17:14 rouven joined #gluster
17:16 major tom[], yah, I would have a local 'smart relay' that all the containers dispatch mail to
17:17 major tom[], one of the things to consider is that some of the simple mailers that just hand mail off to a smart relay do not 'queue' mail, so if there is a delivery problem it can just end up being deleted
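major's "smart relay" pattern can be sketched as a Postfix null client in each container, relaying to the host's MTA (the hostname is a placeholder). Unlike the simple forward-only mailers major warns about, Postfix queues mail locally, so a delivery problem does not silently drop it:

```
# /etc/postfix/main.cf fragment (null client in the container)
relayhost = [mta.host.internal]
inet_interfaces = loopback-only
mydestination =
```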
17:18 rouven joined #gluster
17:26 jkroon joined #gluster
17:28 arif-ali joined #gluster
17:35 Abbott_ joined #gluster
17:38 Abbott_ Help! I am using gluster 3.10.6 on Ubuntu 16.04. Can I please know the name of the glusterfs samba vfs package to install on Ubuntu? I get the error "Error loading module '/usr/lib/x86_64-linux-gnu/samba/vfs/glusterfs.so': /usr/lib/x86_64-linux-gnu/samba/vfs/glusterfs.so: cannot open shared object file: No such file or directory". Any help is much appreciated. Thanks
17:50 msvbhat joined #gluster
18:15 alvinstarr1 joined #gluster
18:18 _KaszpiR_ joined #gluster
18:22 plarsen joined #gluster
18:23 rouven joined #gluster
18:36 rouven joined #gluster
18:38 Zeroadrenaline joined #gluster
18:38 Zeroadrenaline Hello
18:38 glusterbot Zeroadrenaline: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an
18:38 glusterbot answer.
18:41 Zeroadrenaline I'm looking for the best DFS solution for a high-scale GitLab cloud implementation. I'm not an expert on this matter and I'm looking for some guidance on defining the critical aspects I should take into consideration while choosing the best option.
18:41 rouven joined #gluster
18:42 Zeroadrenaline My list of goals to be met by the possible candidates includes: high availability no matter how many additions/removals of nodes are done to the cluster.
18:42 Zeroadrenaline Data persistence and availability while the cluster is increasing or decreasing its size
18:43 Zeroadrenaline and fast performance for small (hundreds of bytes) and big (hundreds of gigs) files during both read and write operations.
18:43 Zeroadrenaline That's all I've got so far. Am I missing something relevant?
18:51 farhorizon joined #gluster
18:51 rouven joined #gluster
19:04 rouven joined #gluster
19:11 farhoriz_ joined #gluster
19:17 shyam joined #gluster
19:23 dkossako joined #gluster
19:30 gbox Looking back a few days, it looked like the CentOS SIG's 3.12.1 packages crashed with tiering.  kkeithly pointed to unreleased 3.12.2 packages, but does anyone know the state of tiering on 3.12?  What version does anyone have running with tiering?
19:32 gbox Tiering seems to provide a way to keep data close to the faster app & web servers, as long as it doesn’t thrash constantly between cold & hot
19:55 shyam joined #gluster
19:56 plarsen joined #gluster
20:02 tom[] thanks major. i'll try to figure out what is a 'smart relay'
20:09 rouven joined #gluster
20:13 rouven joined #gluster
20:46 rouven_ joined #gluster
20:54 bartden joined #gluster
20:54 bartden Hi, if i have a 4 node distributed cluster and i mount a volume on a client using NFS, will all data transfer go via the server that i mount, even when the data is on another node? Or will the client download the data direct from the server containing the data?
21:07 kharloss joined #gluster
21:14 gbox bartden: I think the built-in NFS just pulls from whatever server you’ve mounted.  I think nfs-ganesha utilizes libgfapi so it would know which brick data resides on.  Are you using the built-in, legacy nfs?
21:15 bartden was trying out nfs v3 due to local filesystem cache.
21:15 bartden gbox:
21:16 bartden so no distributed read/write
21:17 bartden does nfsv4 have file system cache? I have an application writing large files with 512 bytes per write call
21:18 masber joined #gluster
21:19 rouven joined #gluster
21:21 shyam joined #gluster
21:23 rouven joined #gluster
21:49 cliluw joined #gluster
21:54 rouven joined #gluster
22:03 rouven joined #gluster
22:07 msvbhat joined #gluster
22:22 gbox bartden: Sorry, had a phone call.  I don’t see how the NFS client would know how to distribute reads/writes.  They all go to the mounted server, which then pulls/pushes data to the other gluster bricks.  Also the built-in, legacy nfs is v3 only, I believe.  nfs-ganesha is v4 with hooks for distributed reads/writes (AFAIK).  Can you use the native gluster client?
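The data path gbox describes can be made concrete with the two mount styles (server and volume names are placeholders): gluster's built-in NFS speaks v3 only and funnels all I/O through the mounted server, while the native FUSE client learns the volume layout and talks to each brick directly.

```shell
# Legacy built-in NFS (v3): all I/O goes through server1.
mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/nfs
# Native client: reads/writes go to whichever brick holds the file.
mount -t glusterfs server1:/myvol /mnt/native
```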
22:23 bartden due to small writes (512bytes) local network latency is killing me
22:23 bartden write-behind does not solve the problem gbox
22:23 gbox All the protocols (even native) will have difficulty with small files
22:24 bartden nfs uses local filesystem cache
22:24 gbox There have been some improvements, especially in recent versions.  Search for the info and tuning parameters
22:24 bartden k thx
22:24 gbox gluster native client seems to have a cache too
22:25 gbox I shutdown a client box the other day and my gluster boxes went nuts
22:26 bartden cache in the sense of write-behind window size and io-cache for read?
22:26 gbox yeah all those parameters
22:26 gbox what version do you run?
22:27 bartden 3.10
22:30 gbox yeah it seems like a lot of work has gone into smallfile performance.  Can you check the gluster blogs for release notes on the recent versions up until yours and see if they talk about small file improvements.  Come back if you find anything good!
22:30 gbox native client works great with distributed read/writes.  Even small files
22:30 bartden strange because tuning write-behind window size to 32MB and enabling flush-behind still showed write calls on wireshark every 1~2KB
22:31 bartden perhaps i’m missing an option :)
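The tuning bartden describes would look roughly like this (the volume name is a placeholder; these are the stock write-behind options in gluster 3.x):

```shell
gluster volume set myvol performance.write-behind on
gluster volume set myvol performance.flush-behind on
gluster volume set myvol performance.write-behind-window-size 32MB
# Verify what actually took effect:
gluster volume get myvol all | grep -E 'write-behind|flush-behind'
```

Write-behind aggregates on the client side, so a packet capture can still show small writes if the application forces flushes (e.g. O_SYNC or frequent fsync), which may explain the 1-2 KB writes seen on the wire.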
22:31 gbox I have a similar situation.  I’ve actually hit open file handle limits (even with very high ulimit -n)
22:31 gbox If you can tune all those parameters very high and see what happens
22:31 gbox somewhere there’s an overview of all options and their practical maximum values
22:32 bartden well in my case its one file but application writes at 512bytes
22:32 bartden hmm will check
22:32 gbox weird like a fifo
22:32 bartden bioinformatics tools, not written according to best practices :)
22:33 gbox sure none are
22:33 gbox oh I see the app adds 512B to a file iteratively?
22:33 bartden yes
22:34 gbox Hmm, sounds like keep that file in a temp space on the app server, then dump it when it’s done
22:34 bartden is indeed a possible solution we are looking into
22:34 rouven joined #gluster
22:34 gbox like /run/user/# for ramfs speed, then give it to gluster
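gbox's staging idea (write at 512-byte granularity into fast local storage, then hand the finished file to gluster in one streaming copy) can be sketched like this; all paths are assumptions:

```shell
#!/bin/sh
STAGE=/dev/shm/mytool          # RAM-backed staging dir (hypothetical)
DEST=/mnt/gluster/results      # gluster mountpoint (hypothetical)
mkdir -p "$STAGE"
# ... the application appends to $STAGE/output.dat in 512-byte chunks ...
# One large sequential copy amortizes the per-write network round-trips:
cp "$STAGE/output.dat" "$DEST/" && rm -f "$STAGE/output.dat"
```

For the 300 GB files mentioned below, a RAM-backed directory won't fit, so a local SSD path would play the same staging role.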
22:35 bartden hence the question about nfs (which uses local filesystem cache)
22:35 gbox Yeah nfs would suck at that too
22:35 bartden files can grow till 300 GB
22:35 gbox so then just a local space that exceeds 300GB.  Buy some RAM
22:36 bartden :)
22:36 gbox or an SSD
22:38 bartden i want to squeeze everything out of it to optimize cost
22:38 gbox time==$
22:38 rouven joined #gluster
22:39 gbox Tiering might work well too, but I’m actually here trying to make that work so I won’t advocate it for you
22:39 bartden :)
22:39 bartden i’m off .. thanks for the information!
22:40 gbox sure
22:40 bartden bye
22:40 bartden left #gluster
23:17 anthony25 hi
23:17 glusterbot anthony25: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
23:17 anthony25 I have 2 servers exposing glusterfs volumes
23:18 anthony25 some bricks are in replicas 2, but there are bricks that are only on one server
23:18 anthony25 I'm on debian stretch, gluster 3.8.8
23:18 anthony25 and when I do a bit of traffic, my cluster just… freezes
23:19 anthony25 I tried on 3.12.1, and it was the same
23:19 anthony25 so when I do a `gluster volume status`, I get a request timed out
23:23 anthony25 `gluster peer status` shows the other peer without any issues, it's just volumes
23:25 anthony25 I'm using zfs under the hood
23:57 pdrakeweb joined #gluster
