
IRC log for #gluster, 2017-10-19


All times shown according to UTC.

Time Nick Message
00:23 msvbhat joined #gluster
00:23 msvbhat__ joined #gluster
00:23 msvbhat_ joined #gluster
00:54 baber joined #gluster
00:55 rouven_ joined #gluster
01:00 masuberu joined #gluster
01:25 msvbhat joined #gluster
01:25 msvbhat_ joined #gluster
01:25 msvbhat__ joined #gluster
01:29 Wizek_ joined #gluster
01:54 masber joined #gluster
01:56 ilbot3 joined #gluster
01:56 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:56 Wizek_ joined #gluster
01:58 plarsen joined #gluster
02:21 omie888777 joined #gluster
02:25 msvbhat joined #gluster
02:42 blu_ joined #gluster
02:45 shyu joined #gluster
02:56 kramdoss_ joined #gluster
03:12 amye joined #gluster
03:20 pcdummy joined #gluster
03:21 DJClean joined #gluster
03:22 d4n13L joined #gluster
03:26 msvbhat_ joined #gluster
03:26 msvbhat joined #gluster
04:27 msvbhat joined #gluster
04:52 susant joined #gluster
05:14 msvbhat__ joined #gluster
05:14 msvbhat_ joined #gluster
05:14 msvbhat joined #gluster
05:15 masber joined #gluster
05:16 sage__ joined #gluster
05:17 masuberu joined #gluster
05:40 kramdoss_ joined #gluster
06:09 susant joined #gluster
06:10 kramdoss_ joined #gluster
06:12 rouven joined #gluster
06:19 msvbhat joined #gluster
06:19 msvbhat_ joined #gluster
06:19 msvbhat__ joined #gluster
06:27 rofl____ how can i see the progress of a volume heal?
06:27 rofl____ heal info just pops out gfids
06:27 rofl____ and exits
06:27 rofl____ and when i run info again it continues
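Nobody answered in-channel, but gluster does ship heal-progress subcommands beyond plain heal info. A minimal sketch, assuming a hypothetical volume named myvol; availability varies by version:

    # Counts of entries still pending heal, per brick, instead of raw gfids
    gluster volume heal myvol statistics heal-count
    # Crawl statistics for recent self-heal runs
    gluster volume heal myvol statistics
    # Newer releases also offer a condensed per-brick summary
    gluster volume heal myvol info summary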
06:27 TBlaar joined #gluster
06:30 kramdoss_ joined #gluster
06:35 bEsTiAn joined #gluster
06:38 jtux joined #gluster
06:48 rouven joined #gluster
06:49 ivan_rossi joined #gluster
06:53 rouven joined #gluster
06:56 fsimonce joined #gluster
07:12 jkroon joined #gluster
07:20 ThHirsch joined #gluster
07:24 rwheeler joined #gluster
07:50 yoavz joined #gluster
07:59 [diablo] joined #gluster
08:02 kramdoss_ joined #gluster
08:06 marbu joined #gluster
08:48 Wizek_ joined #gluster
09:05 ahino joined #gluster
09:12 map1541 joined #gluster
09:13 _KaszpiR_ joined #gluster
09:13 buvanesh_kumar joined #gluster
09:35 bartden joined #gluster
09:35 bartden Hi, any suggestions what performance tweaks I can do for small file writes?
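No one answered in-channel; for reference, a sketch of volume options commonly tuned for small-file workloads. The option names are real gluster volume options, but the volume name (myvol) and values are illustrative; measure each change against your own workload:

    # more network threads on client and bricks
    gluster volume set myvol client.event-threads 4
    gluster volume set myvol server.event-threads 4
    # aggregate small writes on the client before sending
    gluster volume set myvol performance.write-behind on
    gluster volume set myvol performance.write-behind-window-size 4MB
    # keep more inodes cached server-side
    gluster volume set myvol network.inode-lru-limit 50000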
09:38 Wizek_ joined #gluster
09:40 kramdoss_ joined #gluster
09:50 timfi joined #gluster
09:51 timfi Hello! I'm trying to trick ovirt into thinking I have a replica 3
09:52 timfi is this possible by starting with 3 nodes and just removing the last two?
10:05 susant joined #gluster
10:06 MrAbaddon joined #gluster
10:12 renout joined #gluster
10:13 Klas why would you do that?
10:13 Klas I mean, why would you use gluster as single node?
10:14 Klas what would be the benefit?
10:14 Klas (and, yeah, that would probably work)
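For completeness, a sketch of what "starting with 3 nodes and removing two" might look like; the hostnames and brick paths are hypothetical, and the replica count has to be reduced as part of the removal:

    # Shrink a replica 3 volume down to a single brick
    gluster volume remove-brick myvol replica 1 \
        node2:/bricks/b1 node3:/bricks/b1 force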
10:25 susant joined #gluster
10:44 baber joined #gluster
11:05 genunix joined #gluster
11:18 genunix Hello, has anyone seen this error? E [MSGID: 114044] [client-handshake.c:1140:client_setvolume_cbk] 0-aptly-client-1: SETVOLUME on remote-host failed [No such file or directory] We are investigating a weird issue where one client is not able to write to the volume (while others can), plus random corruptions of the aptly database that lives on that volume.
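A SETVOLUME handshake failure like this often just means the brick process the client is trying to attach to is down or was re-created; a sketch of where one might start looking, using the volume name from the message:

    # Are all brick processes of the volume online?
    gluster volume status aptly
    # Does the client's view of the volume match the servers'?
    gluster volume info aptly
    # Brick logs usually say why a handshake was refused
    less /var/log/glusterfs/bricks/<brick-path>.log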
11:22 ic0n joined #gluster
12:21 msvbhat__ joined #gluster
12:21 msvbhat_ joined #gluster
12:21 msvbhat joined #gluster
12:22 kramdoss_ joined #gluster
12:28 appno_matt joined #gluster
12:29 appno_matt G'day, folks :]
12:46 NoctreGryps G'day appno_matt
13:16 baber joined #gluster
13:17 shyam joined #gluster
13:23 shyam joined #gluster
13:28 rouven joined #gluster
13:29 pdrakeweb joined #gluster
13:33 rouven joined #gluster
13:38 rouven joined #gluster
13:47 skylar1 joined #gluster
13:52 dgandhi joined #gluster
14:06 dgandhi joined #gluster
14:09 pdrakeweb joined #gluster
14:17 hmamtora joined #gluster
14:21 anthony25 hi
14:21 glusterbot anthony25: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:22 anthony25 I have a online server with multiple gluster volumes, and I would like to do some backups on my home nas
14:22 anthony25 so tight bandwidth
14:23 anthony25 I was looking at geo-replication, but is it possible for me to only enable the replication at night?
14:23 anthony25 or would it be hackish and not work well?
14:23 cloph anthony25: you can have a cronjob that pauses and resumes the replication, but it is kinda hackish.
14:24 anthony25 cloph: ok
14:24 cloph In general geo-replication is kinda bad at catching up, so if the data changes a lot, gluster might never manage to catch up.
14:24 anthony25 hum, I see
14:24 cloph if the data is mostly stable or there are only a few files, then it will surely work.
14:24 [diablo] joined #gluster
14:24 cloph (you can also assign a bandwidth limit via rsync options to the replication)
14:25 Klas rsync seems like the best idea tbh
14:25 Klas not perfect, and is kinda slow
14:26 anthony25 ok, thanks a lot!
14:27 anthony25 I was thinking about rsync too, but yeah, if I could avoid scanning every file for each backup, it would have been neat
14:27 anthony25 but my backups would be best-effort anyway…
14:28 anthony25 (meaning that my nas could be down during a few days because I'm moving)
14:29 cloph anthony25: geo-replication can spam your log quite a bit in error-cases, so make sure that if you have it enabled, then that the slave volume is working properly (otherwise make sure to monitor free diskspace on your log-partition :-))
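A sketch of the cronjob cloph describes, plus the rsync bandwidth cap; the volume name (myvol), slave host (nas) and slave volume (backupvol) are hypothetical:

    # /etc/cron.d/georep-window: replicate only at night
    0 8  * * *  root  gluster volume geo-replication myvol nas::backupvol pause
    0 23 * * *  root  gluster volume geo-replication myvol nas::backupvol resume
    # Optionally cap the bandwidth used by the geo-rep rsync workers
    # gluster volume geo-replication myvol nas::backupvol config rsync-options "--bwlimit=5000"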
14:35 dgandhi joined #gluster
14:47 msvbhat joined #gluster
14:47 msvbhat_ joined #gluster
14:47 msvbhat__ joined #gluster
14:49 anthony25 cloph: hehe, ok, thanks for the advice
14:50 anthony25 but I think I'll go with rsync only, not geo-replication
14:53 dgandhi joined #gluster
15:11 farhorizon joined #gluster
15:13 pioto_ joined #gluster
15:29 wushudoin joined #gluster
15:49 melliott joined #gluster
15:54 jstrunk joined #gluster
16:03 msvbhat_ joined #gluster
16:03 msvbhat joined #gluster
16:03 msvbhat__ joined #gluster
16:10 HB joined #gluster
16:12 Guest54296 left #gluster
16:15 Guest54296 joined #gluster
16:16 Guest54296 Hi, is this the best place to ask for assistance in troubleshooting a gluster tiering issue or is there a forum that would be better?
16:16 kpease joined #gluster
16:17 kpease_ joined #gluster
16:21 kramdoss_ joined #gluster
16:21 melliott joined #gluster
16:27 Guest54296 left #gluster
16:27 farhorizon joined #gluster
16:35 dr-gibson joined #gluster
16:35 dr-gibson joined #gluster
16:40 ivan_rossi left #gluster
16:52 pdrakeweb joined #gluster
16:56 dgandhi joined #gluster
17:05 cliluw joined #gluster
17:06 msvbhat joined #gluster
17:07 ahino joined #gluster
17:17 _dist joined #gluster
17:27 dgandhi joined #gluster
17:52 ilbot3 joined #gluster
17:52 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
17:53 rouven joined #gluster
17:57 siel joined #gluster
18:07 vbellur1 joined #gluster
18:08 vbellur joined #gluster
18:09 vbellur joined #gluster
18:14 baber joined #gluster
18:14 vbellur joined #gluster
18:14 vbellur joined #gluster
18:15 dgandhi joined #gluster
18:18 jstrunk joined #gluster
18:26 mallorn Decreasing features.locks-revocation-max-blocked to an absurdly low number is letting our distributed-disperse set heal again.
18:26 mallorn And gluster volume heal [volume] info doesn't take 19 minutes anymore; it's only a minute or so.
18:27 guhcampos joined #gluster
18:30 slashk joined #gluster
18:31 ingard mallorn: we've seen similar long heal info cmds
18:32 ingard mallorn: how did you come about changing that setting?
18:33 rouven joined #gluster
18:35 jkroon joined #gluster
18:36 mallorn The bad thing is that it also caused whatever was waiting on locks to fail, but it was a one-time issue.
18:36 mallorn gluster volume set [volume] features.locks-revocation-max-blocked 25
18:38 rouven joined #gluster
18:40 hmamtora Folks - I have gfs 3.8 with 2 X 6 Distributed-Replicate bricks; I need to change the IP address of a few nodes without changing the hostname. Do you think this can be done without any impact? I plan to make changes for one node in the cluster at a time....
18:42 bartden joined #gluster
18:42 PatNarciso hmm.
18:42 bartden Hi, is it possible to write to gluster asynchronously?
18:43 bartden I have an application which writes 512 bytes at a time, to a file that is 300GB in total
18:43 PatNarciso hmamtora, I would imagine there would be a hiccup -- if you could make the transition slowly, i.e. both IP addresses attached to the same host, and perform the hostname changes (this is how I would do it).
18:43 bartden So i guess LAN latency is killing my performance ...
18:44 PatNarciso hmamtora, if the concern is business impacting: test this theory first using a few VPS instances.
18:45 PatNarciso bartden, I have a very similar issue.  At this time, I believe geo-replication is the only async option -- as such, it is not my solution, as I need a master-master setup... and I'm unsure geo-replication supports this.
18:46 PatNarciso bartden, I will tell you -- the moment I see a blog post regarding this, I'll be one of the first testing it.
18:46 bartden well i have a distributed setup … but by async i mean between client and server
18:47 bartden write-behind does not give any performance boost
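One caveat worth adding here: write-behind can only aggregate writes that the application does not immediately fsync or open with O_SYNC/O_DIRECT; in that case no client-side option will hide the per-write round trip. Otherwise, a sketch of the write-behind knobs, with a hypothetical volume name and an illustrative window size:

    # flush-behind: don't block close() on pending writes
    gluster volume set myvol performance.flush-behind on
    # larger write-behind buffer per file (default is 1MB)
    gluster volume set myvol performance.write-behind-window-size 4MB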
18:48 hmamtora PatNarciso: I plan to make changes for one node at a time only.
18:49 slashk joined #gluster
18:49 PatNarciso I believe there is a blog post regarding hosting websites on a gluster volume.  Some of those performance suggestions may apply to what you're doing.
18:52 ingard mallorn: but how did you know which variable to change? I'm asking because we're wondering exactly what is happening when the cmd is run and it just proceeds to print lots and lots of gfids, with no stats about ETA or anything
18:52 PatNarciso hmamtora, if I had to guess: a connection is made between servers (peers). It would need to be broken for a moment to re-connect, and at that time it would use the updated IP address.
18:57 hmamtora ok btw I did the peer probe using the DNS FQDNs that we have and not IP addresses; I also see the same (hostnames) in the /var/lib/glusterd/peers/{UUID_files}
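Given that, a sketch of how to confirm the peers really are tracked by FQDN before touching any IP; if they are, the change should in principle only require updating DNS and restarting glusterd on the affected node:

    # Peers should show their DNS names, not raw IPs
    gluster peer status
    # The peer files store hostname1=, hostname2=, ... keys
    grep '^hostname' /var/lib/glusterd/peers/*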
18:58 _KaszpiR_ joined #gluster
19:00 plarsen joined #gluster
19:00 mallorn @ingard, we had the same problem in April and I was messing around with variables out of desperation -- our heals weren't happening.  I changed the locks and suddenly the info request was much more responsive.
19:01 MrAbaddon joined #gluster
19:01 mallorn I lowered it even more and we suddenly saw dramatic changes in our self-heal that had been stuck for weeks.  Within 20 hours everything was healed.
19:02 mallorn This time we were trying to avoid it because lowering locks caused problems for clients, but now we're at the point where the gluster slowness from the self-heal process is hurting clients more than losing the ability to lock.
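For anyone repeating this experiment, a sketch of inspecting and undoing the change (volume name hypothetical; volume get requires gluster 3.8 or later):

    # Inspect the current value
    gluster volume get myvol features.locks-revocation-max-blocked
    # Return the option to its default once the heal backlog clears
    gluster volume reset myvol features.locks-revocation-max-blocked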
19:19 dgandhi joined #gluster
19:22 ingard mallorn: weird :S. How big were the brick(s) you were healing?
19:32 mallorn They're only 6TB, but we have 15 of them.
19:33 mallorn We're running a 5 x (2 + 1).
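For context, a 5 x (2 + 1) distributed-disperse volume is 15 bricks arranged in five disperse sets of three (two data plus one redundancy each). A sketch of how such a layout might be created, with hypothetical hosts and brick paths:

    # five disperse sets of three bricks, each set spanning all three nodes
    gluster volume create myvol disperse 3 redundancy 1 \
        node{1..3}:/bricks/b1 node{1..3}:/bricks/b2 node{1..3}:/bricks/b3 \
        node{1..3}:/bricks/b4 node{1..3}:/bricks/b5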
19:33 jstrunk joined #gluster
19:34 renout joined #gluster
19:35 atrius joined #gluster
19:38 pdrakeweb joined #gluster
19:39 rouven joined #gluster
19:49 NoctreGryps Hi, I'm starting a new Gluster installation and trying to make a volume dedicated only to logs to prevent problems. Is it the /etc/glusterfs/gluster-rsyslog-7.2.conf file that I need to adjust, or is there an easier location/method to accomplish this?
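No one answered in-channel; for reference, gluster writes its logs directly under /var/log/glusterfs by default, and the shipped gluster-rsyslog-7.2.conf sample only matters if you want logs routed through rsyslog. Isolating logs on their own filesystem is therefore a mount-level change rather than a gluster one, and verbosity can be tuned per volume (volume name hypothetical):

    # Default log location, no rsyslog involved
    ls /var/log/glusterfs/
    # Reduce log noise per volume
    gluster volume set myvol diagnostics.brick-log-level WARNING
    gluster volume set myvol diagnostics.client-log-level WARNING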
20:05 pdrakeweb joined #gluster
20:07 rouven joined #gluster
20:12 rouven joined #gluster
20:15 baber joined #gluster
21:02 rouven joined #gluster
21:06 rouven joined #gluster
21:15 rouven_ joined #gluster
21:26 pdrakeweb joined #gluster
21:34 baber joined #gluster
22:00 pdrakeweb joined #gluster
22:01 smohan[m] joined #gluster
22:13 gbox Hi all, I’m migrating from my initial prototype 3.7 install (works great in production!) to a separate 3.12 and want to take advantage of the improvements.  Tiering seems stable and useful for keeping data on faster bricks close to the app & web servers.  JBOD seems more viable now than keeping RAID underneath, especially if switching to 3x replication.  Both Gluster & RH docs seem thorough and up2date.  Any red flags in my approach?
22:17 rouven joined #gluster
22:17 pdrakeweb joined #gluster
22:21 rouven joined #gluster
22:23 hmamtora joined #gluster
22:30 gbox RH deprecated 2x replication in favor of 3x.  Do most people still use 2x?
22:33 gbox I have encountered instability where lots of files fail healing.  Does 3x prevent that to a greater degree?
22:34 gbox That’s been rare, and was due to unstable clients and workflows
22:35 gbox The RH docs are really good, I hope someone was paid well for those.
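One option that sits between 2x and full 3x, and was current as of this log: an arbiter volume, where the third brick holds only metadata, providing the quorum that prevents the split-brain-style heal failures gbox describes without a third full copy of the data. A sketch with hypothetical hosts:

    # The third brick is metadata-only, so it can be much smaller
    gluster volume create myvol replica 3 arbiter 1 \
        node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/arb1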
22:36 tamalsaha[m] joined #gluster
22:36 marin[m] joined #gluster
22:44 rouven_ joined #gluster
22:46 map1541 joined #gluster
23:04 vbellur joined #gluster
23:07 farhorizon joined #gluster
23:12 rouven joined #gluster
23:16 rouven joined #gluster
23:50 farhorizon joined #gluster
23:51 pdrakeweb joined #gluster
