
IRC log for #gluster, 2017-10-05


All times shown according to UTC.

Time Nick Message
00:14 rouven_ joined #gluster
01:10 omie888777 joined #gluster
01:54 ilbot3 joined #gluster
01:54 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:02 edong23_ joined #gluster
02:02 gospod2 joined #gluster
02:02 samppah joined #gluster
02:13 prasanth joined #gluster
02:54 skoduri joined #gluster
03:23 rafi joined #gluster
03:23 pioto joined #gluster
03:25 nbalacha joined #gluster
03:26 aravindavk joined #gluster
03:38 ppai joined #gluster
03:39 Gambit15 joined #gluster
03:48 atinmu joined #gluster
03:55 psony joined #gluster
04:03 apandey joined #gluster
04:05 aravindavk joined #gluster
04:10 kramdoss_ joined #gluster
04:11 kotreshhr joined #gluster
04:15 Prasad joined #gluster
04:24 dominicpg joined #gluster
04:24 PatNarciso joined #gluster
04:30 sanoj joined #gluster
04:34 jiffin joined #gluster
04:57 itisravi joined #gluster
05:02 msvbhat joined #gluster
05:02 msvbhat_ joined #gluster
05:04 karthik_us joined #gluster
05:10 psony joined #gluster
05:11 psony joined #gluster
05:12 xavih joined #gluster
05:19 aravindavk joined #gluster
05:21 jkroon joined #gluster
05:21 Shu6h3ndu joined #gluster
05:26 jiffin1 joined #gluster
05:27 p7mo joined #gluster
05:30 marbu joined #gluster
05:30 skumar joined #gluster
05:31 msvbhat joined #gluster
05:31 msvbhat_ joined #gluster
05:35 jiffin1 joined #gluster
05:41 gyadav joined #gluster
05:50 hgowtham joined #gluster
05:52 aravindavk joined #gluster
05:57 Saravanakmr joined #gluster
06:01 atinmu joined #gluster
06:06 rouven joined #gluster
06:07 skumar_ joined #gluster
06:11 poornima joined #gluster
06:12 kdhananjay joined #gluster
06:18 Wizek__ joined #gluster
06:22 sanoj joined #gluster
06:23 skoduri joined #gluster
06:26 jtux joined #gluster
06:27 mlg9000 joined #gluster
06:30 skumar__ joined #gluster
06:40 sanoj joined #gluster
06:46 rafi1 joined #gluster
07:00 arif-ali joined #gluster
07:00 ivan_rossi joined #gluster
07:03 skumar joined #gluster
07:05 aravindavk joined #gluster
07:07 apandey joined #gluster
07:09 atinmu joined #gluster
07:12 jkroon joined #gluster
07:15 mlg9000 joined #gluster
07:15 weller hi again, I have a gluster volume exported with samba/ctdb. when I fuse mount it, everything seems fine. the same share via vfs_gluster makes powerpoint freeze/hang on the first file save (~5 minutes?), but subsequent saves work fine. since a fuse-mounted gluster volume exported by path works, I assume this might be a gluster issue!? thanks in advance for hints! :)
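For reference, a minimal smb.conf sketch of the two setups weller is comparing; the share names, volume name, and paths below are hypothetical, and the second share assumes Samba was built with the vfs_glusterfs module:

    # Share backed by a FUSE mount of the gluster volume (the variant that works):
    [docs-fuse]
        path = /mnt/glustervol/docs         ; hypothetical FUSE mount point
        read only = no

    # Share going through Samba's gluster VFS module (the variant that hangs on first save):
    [docs-vfs]
        vfs objects = glusterfs
        glusterfs:volume = glustervol       ; hypothetical volume name
        glusterfs:logfile = /var/log/samba/glusterfs-docs.log
        path = /docs                        ; path relative to the volume root
        read only = no
        kernel share modes = no             ; commonly recommended with vfs_glusterfs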
07:26 msvbhat joined #gluster
07:26 msvbhat_ joined #gluster
07:27 rafi joined #gluster
07:38 rafi joined #gluster
07:44 rafi joined #gluster
07:53 _KaszpiR_ joined #gluster
07:55 jkroon joined #gluster
07:56 ThHirsch joined #gluster
08:02 _KaszpiR_ joined #gluster
08:06 arif-ali joined #gluster
08:10 fsimonce joined #gluster
08:12 Prasad joined #gluster
08:23 kotreshhr joined #gluster
08:31 arpu joined #gluster
08:43 sanoj joined #gluster
08:49 msvbhat joined #gluster
08:49 msvbhat_ joined #gluster
08:54 lcami1 joined #gluster
08:58 kotreshhr joined #gluster
09:16 jiffin1 joined #gluster
09:17 ppai joined #gluster
09:25 skoduri joined #gluster
09:26 asciiker joined #gluster
09:27 atinmu joined #gluster
09:53 [diablo] joined #gluster
10:03 atinmu joined #gluster
10:08 shyam joined #gluster
10:12 jkroon joined #gluster
10:31 ppai joined #gluster
10:34 atinmu joined #gluster
10:36 karthik_us joined #gluster
10:37 jiffin1 joined #gluster
10:46 MrAbaddon joined #gluster
10:48 sanoj joined #gluster
10:48 msvbhat joined #gluster
10:48 msvbhat_ joined #gluster
10:50 baber joined #gluster
10:52 Saravanakmr joined #gluster
10:54 skoduri joined #gluster
10:54 rwheeler joined #gluster
10:54 hgowtham joined #gluster
10:55 squeakyneb joined #gluster
10:57 nbalacha joined #gluster
11:04 shyam joined #gluster
11:08 Wizek_ joined #gluster
11:12 gyadav joined #gluster
11:41 kdhananjay joined #gluster
11:42 * weller timidly bumps
11:46 shyam joined #gluster
11:49 Prasad joined #gluster
12:04 hgowtham joined #gluster
12:05 kotreshhr joined #gluster
12:06 msvbhat joined #gluster
12:06 msvbhat_ joined #gluster
12:07 Saravanakmr joined #gluster
12:08 sanoj joined #gluster
12:14 rafi joined #gluster
12:15 kotreshhr joined #gluster
12:16 rafi1 joined #gluster
12:24 Slydder joined #gluster
12:24 Slydder hey all
12:27 Prasad joined #gluster
12:28 baber joined #gluster
12:29 poornima joined #gluster
12:31 Slydder I'm trying to update my gluster install and would like to upgrade without going down, but that seems not to be possible. After upgrading the first node, gluster will no longer start and errors out with the message below. I'm therefore scheduling a maintenance window to upgrade the second (now master) node, and hoping that once the second node is updated the first node will actually start again.
12:31 jiffin1 joined #gluster
12:32 Slydder [2017-10-05 12:32:33.865172] W [socket.c:593:__socket_rwv] 0-management: readv on 10.1.5.55:24007 failed (Connection reset by peer)
12:33 Slydder [2017-10-05 12:32:33.865454] I [MSGID: 106005] [glusterd-handler.c:6034:__glusterd_brick_rpc_notify] 0-management: Brick gfsc1b:/data/gfsc1 has disconnected from glusterd.
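For context, a rough sketch of the usual one-node-at-a-time rolling-upgrade sequence; the volume name "gfsc1" is only inferred from the brick path above, and package/service names vary by distro:

    systemctl stop glusterd
    killall glusterfs glusterfsd      # stop remaining brick and client processes on this node
    # ... upgrade the gluster packages here ...
    systemctl start glusterd
    gluster peer status               # wait until the peers show as connected again
    gluster volume heal gfsc1 info    # wait for pending heals to drain before touching the next node

Mixing sufficiently different major versions in one cluster can leave the upgraded node unable to talk to its older peer, which seems consistent with the symptom Slydder describes.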
12:35 Saravanakmr joined #gluster
12:45 skoduri joined #gluster
12:53 kotreshhr joined #gluster
12:56 lcami1 joined #gluster
12:56 amazingchazbono joined #gluster
13:00 lcami2 joined #gluster
13:11 msvbhat_ joined #gluster
13:11 msvbhat joined #gluster
13:16 flomko joined #gluster
13:16 renout joined #gluster
13:17 shyam joined #gluster
13:30 msvbhat_ joined #gluster
13:30 msvbhat joined #gluster
13:31 skylar joined #gluster
13:34 shyam joined #gluster
13:37 apandey joined #gluster
13:39 Wayke91 joined #gluster
13:42 ThHirsch joined #gluster
13:47 jiffin joined #gluster
13:58 plarsen joined #gluster
13:59 weller are there any other options for getting support with the issue? (the vfs_gluster samba share freezes some applications, the fuse-mounted export does not)
14:01 Asako good luck
14:04 Klas weller: have you verified that HA-samba works?
14:04 Klas cause that sounds like a typical stale-mount situation, which FUSE client is good at handling and the others, well, aren't
14:05 weller Klas: samba/ctdb in principle does what it is supposed to do. what do you mean with HA-samba?
14:05 psony joined #gluster
14:06 Klas samba/ctdb should indeed provide Highly Available samba
14:06 weller failover/takeover is fully functioning
14:06 Klas even while writing to the volume?
14:06 jstrunk joined #gluster
14:06 weller on the other hand: ms word does not have these problems
14:07 weller haven't checked that, will do!
14:10 skoduri joined #gluster
14:12 weller Klas: checked
14:18 Klas it worked?
14:18 Klas what authentication are you using for mounting?
14:18 weller transferring a single large file has caused no problems
14:18 weller no authentication
14:20 Klas transferring while stopping the volumes, both gracefully and non-gracefully?
14:20 weller you mean the gluster volume? I only tried ctdb disable
14:22 Klas I believe in real-world tests
14:22 Klas meaning, the things should go DOWN
14:22 Klas this was why I didn't use ssl-cert between clients and servers, cause I could break gluster with it (in 3.7.14)
14:23 weller would be worth a test. but anyways: when I save in powerpoint/excel/matlab, there is no failover.
14:25 weller forcing a takeover (ctdb disable on the active node) keeps matlab frozen, file locks are gracefully handed over
14:25 Klas strange with the difference within the office suite
14:25 weller yep, indeed.
14:26 buvanesh_kumar joined #gluster
14:27 Klas ah, I see, you are using the ctdb mechanic, that does seem sane in your case, yes
14:30 hmamtora joined #gluster
14:30 hmamtora_ joined #gluster
14:32 Klas weller: btw, you might get better support from samba support channels, your problem does seem more samba-related rather than gluster
14:33 weller might be true :/ thanks anyways
14:34 weller but don't gluster developers contribute to the vfs_gluster code? since the fuse-mounted stuff has no issues, it might indeed be gluster related
14:34 Klas no idea, only a helpful user =)
14:35 Klas I wouldn't give up on this channel either, but it might be smart to explore both routes =)
14:36 weller sure thing. up to now i got 'dDay changed to 05 Oct 2017
14:36 weller sure thing. up to now i got 'dDay changed to 05 Oct 2017' a few times ;-)
14:36 Klas hehe
14:41 xavih joined #gluster
14:42 farhorizon joined #gluster
14:59 kotreshhr left #gluster
15:00 Saravanakmr joined #gluster
15:02 kkeithley there are gluster+samba devs like obnox, gd, and jarrpa over in #samba-technical. I don't know why they aren't here as well. You could ask them. :-/
15:03 ndevos and anoopcs is here of course
15:04 omie888777 joined #gluster
15:05 kkeithley anoopcs++
15:05 glusterbot kkeithley: anoopcs's karma is now 3
15:08 weller everyone++
15:08 glusterbot weller: everyone's karma is now 1
15:08 aravindavk joined #gluster
15:11 wushudoin joined #gluster
15:17 kpease joined #gluster
15:36 vbellur joined #gluster
15:40 kramdoss_ joined #gluster
15:56 gyadav joined #gluster
16:07 skumar_ joined #gluster
16:09 plarsen joined #gluster
16:10 lcami1 joined #gluster
16:22 ivan_rossi left #gluster
16:23 asciiker left #gluster
16:27 snehring joined #gluster
16:40 jstrunk joined #gluster
16:53 WebertRLZ joined #gluster
16:54 rouven joined #gluster
16:54 msvbhat joined #gluster
16:54 msvbhat_ joined #gluster
17:09 jstrunk joined #gluster
17:15 farhorizon joined #gluster
17:24 _KaszpiR_ joined #gluster
17:32 owlbot joined #gluster
17:55 gem joined #gluster
17:57 ThHirsch joined #gluster
18:01 gem_ joined #gluster
18:04 baber joined #gluster
18:17 cliluw joined #gluster
18:18 jefarr_ joined #gluster
18:20 rouven joined #gluster
18:24 kramdoss_ joined #gluster
18:32 rastar joined #gluster
18:44 CharliePace joined #gluster
18:45 nh2 can I tell glusterfs's mount to retry mounting instead of exiting with an exit code when the target gluster server can't be reached yet?
18:49 CharliePace We're running an rsync from another server to a virtual server that has a gluster mount (where the data is being rsynced to). The mount runs over InfiniBand. The main issue is that on the client the glusterfs process's memory utilization grows non-stop and is never freed, even after the sync has stopped. This eventually leads to the virtual server crashing. Does anyone have any familiarity with this issue?
18:50 nh2 CharliePace: maybe this is some caching option that you have on, or some caching that doesn't have a way to turn off?
18:51 CharliePace I used a pretty generic setup for the mount and volume creation. I'm not immediately familiar with looking for any oddball options that may be enabled. I'll locate my mount command and send that
18:51 CharliePace It was an initial thought though
18:52 CharliePace This is the basic mount command used on the client: mount -t glusterfs -o transport=rdma 11.11.11.1:/DB34298 /data
18:53 nh2 CharliePace: do you have lots of data that goes to the rsync, or is the amount of memory growth unreasonably large, even too large to be explained by something like a lot of metadata caching?
18:53 CharliePace Only option listed for the volume is "nfs.disable: on"
18:54 nh2 CharliePace: another way to investigate in that direction might be to `rsync -v --dry-run` and see if that still happens (and how many files it would consider)
18:54 kkeithley CharliePace: what version of glusterfs?
18:54 jvargas joined #gluster
18:55 CharliePace 3.12 for the version. I'm getting info now on roughly how much data gets transferred before it crashes.
18:55 jvargas Hello. I configured a Replica 2 volume for a Wordpress app. However, it is slow, even when no writes are performed. So I think it could be due to the amount of small files that Wordpress uses.
18:57 CharliePace Around 3.6GB was transferred last time before the VM crashed
18:57 jvargas How can I tune up this volume to deliver a better response time for Wordpress? Is there a way to first use the local copy and delay the sync with other servers for later? Or something like that?
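A hedged sketch of the md-cache / negative-lookup-cache options commonly suggested for small-file PHP workloads like Wordpress; "wpvol" is a hypothetical volume name and some options only exist in newer releases, so check "gluster volume set help" on your version first:

    gluster volume set wpvol features.cache-invalidation on
    gluster volume set wpvol features.cache-invalidation-timeout 600
    gluster volume set wpvol performance.stat-prefetch on
    gluster volume set wpvol performance.cache-invalidation on
    gluster volume set wpvol performance.md-cache-timeout 600
    gluster volume set wpvol network.inode-lru-limit 200000
    gluster volume set wpvol performance.nl-cache on    # negative-lookup cache, if available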
18:57 nh2 kkeithley: is it currently possible to have `mount -t glusterfs` retry on failure? I've seen a couple of requests for that in the issue tracker but I can't really tell if it was implemented at some point
18:58 kkeithley not that I'm aware of
18:58 dxlsm joined #gluster
18:59 dxlsm Hi gluster folks. I was wondering if anyone was hanging around and had time to talk through a problem I'm seeing.
19:00 kramdoss_ joined #gluster
19:07 CharliePace nh2: we have at one point been able to complete the sync, after multiple tries, but even after it's finished the memory never comes down unless the mount is removed and remounted. We're testing the dry run now and the memory is still spiking very quickly, which seems bizarre.
19:08 lcami1 left #gluster
19:14 nh2 CharliePace: to me this sounds like metadata caching; the number of files printed by the dry run would certainly be interesting to know. Also when you've done that, the output of `strace -fwc rsync ...` might give you hints too
19:15 CharliePace sent 106043349 bytes  received 7743123 bytes  241328.68 bytes/sec
19:15 CharliePace total size is 193026833754  speedup is 1696.40 (DRY RUN)
19:17 bluenemo joined #gluster
19:19 omie888777 joined #gluster
19:20 kramdoss_ joined #gluster
19:23 CharliePace Gonna run the strace in a few. Can you think of any metadata-cache adjustments I could make that might help with this situation?
19:23 CharliePace I'm browsing through the documentation now to see if I can spot anything
19:25 blu_ joined #gluster
19:28 farhorizon joined #gluster
19:30 dxlsm So I'm just going to throw this out there.. maybe someone has some clue as to what is going on. We have a sizeable gluster deployment. The original cluster was eight nodes with sixteen 6TB bricks each. That worked great. No real problems over the course of a year, except for the occasional failed disk or node going down for other system problems. This week, we went to add eight more nodes, each with twelve 10TB bricks. As soon as we added the first set (w
19:35 msvbhat joined #gluster
19:37 jkroon joined #gluster
19:37 msvbhat_ joined #gluster
19:43 dxlsm er, forgot to start this in a tmux.. back in a sec.
19:43 aronnax joined #gluster
19:44 dxlsm joined #gluster
19:44 dxlsm better.
20:02 CharliePace Running a find over the directory of files on the mounted share also spikes the memory usage of the glusterfs process on the client, and it never seems to be freed up either
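One way to see where the client-side memory is going is a statedump of the fuse client process: send it SIGUSR1 and the dump usually lands under /var/run/gluster/, though the path can vary by version. The mount point /data is taken from the mount command above:

    pid=$(pgrep -f 'glusterfs.*/data' | head -n1)   # fuse client process for the /data mount
    kill -USR1 "$pid"                               # ask it to write a statedump
    ls /var/run/gluster/                            # look for glusterdump.<pid>.* and check the memory/pool sections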
20:06 kramdoss_ joined #gluster
20:27 kramdoss_ joined #gluster
20:39 msvbhat__ joined #gluster
20:39 primehaxor joined #gluster
20:39 msvbhat_1 joined #gluster
20:48 gem joined #gluster
20:51 skylar joined #gluster
20:54 rouven joined #gluster
20:59 CharliePace joined #gluster
21:09 jvargas joined #gluster
21:10 melliott joined #gluster
21:31 omie88877777 joined #gluster
21:35 CharliePace joined #gluster
21:38 nottesla joined #gluster
21:39 nottesla left #gluster
21:41 ogelpre left #gluster
21:47 skylar joined #gluster
22:49 msvbhat_ joined #gluster
22:49 msvbhat joined #gluster
22:54 nh2 that really sounds like metadata caching then
22:54 nh2 or a memory leak bug
23:24 jkroon joined #gluster
23:50 msvbhat joined #gluster
23:50 msvbhat_ joined #gluster
23:57 ThHirsch joined #gluster
