
IRC log for #gluster, 2017-05-22


All times shown according to UTC.

Time Nick Message
00:04 Teraii joined #gluster
00:22 baber joined #gluster
00:22 Seth_Karlo joined #gluster
00:23 Seth_Karlo joined #gluster
01:18 Seth_Kar_ joined #gluster
01:29 shdeng joined #gluster
01:33 plarsen joined #gluster
01:55 ilbot3 joined #gluster
01:55 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:00 jiffin joined #gluster
02:13 derjohn_mob joined #gluster
02:35 skoduri joined #gluster
02:37 doc|work joined #gluster
02:49 doc|work joined #gluster
02:54 kramdoss_ joined #gluster
03:06 Gambit15 joined #gluster
03:10 itisravi joined #gluster
03:18 Jacob8432 joined #gluster
04:06 riyas joined #gluster
04:12 atinm joined #gluster
04:20 nbalacha joined #gluster
04:22 ankitr joined #gluster
04:23 susant joined #gluster
04:23 buvanesh_kumar joined #gluster
04:51 Shu6h3ndu joined #gluster
04:57 skumar joined #gluster
04:58 devyani7 joined #gluster
04:58 kdhananjay joined #gluster
05:05 k0nsl joined #gluster
05:05 k0nsl joined #gluster
05:06 k0nsl joined #gluster
05:06 k0nsl joined #gluster
05:07 gyadav joined #gluster
05:10 jiffin joined #gluster
05:15 jkroon joined #gluster
05:15 apandey joined #gluster
05:21 devyani7 joined #gluster
05:24 Humble joined #gluster
05:25 Karan joined #gluster
05:31 aravindavk joined #gluster
05:32 gem_ joined #gluster
05:33 Prasad joined #gluster
05:40 ppai joined #gluster
05:45 prasanth joined #gluster
05:56 kramdoss_ joined #gluster
05:59 skoduri joined #gluster
06:00 ayaz joined #gluster
06:01 ashiq joined #gluster
06:03 rafi joined #gluster
06:04 itisravi joined #gluster
06:05 apandey_ joined #gluster
06:06 rafi joined #gluster
06:13 apandey joined #gluster
06:15 kraynor5b_ joined #gluster
06:25 hgowtham joined #gluster
06:29 sona joined #gluster
06:30 buvanesh_kumar joined #gluster
06:38 kotreshhr joined #gluster
06:38 buvanesh_kumar joined #gluster
06:43 apandey_ joined #gluster
06:44 jtux joined #gluster
06:47 itisravi_ joined #gluster
06:50 ivan_rossi joined #gluster
06:57 Peppard joined #gluster
06:59 Saravanakmr joined #gluster
07:04 Gnomethrower joined #gluster
07:11 ankitr joined #gluster
07:12 apandey__ joined #gluster
07:21 prasanth joined #gluster
07:21 kramdoss_ joined #gluster
07:25 apandey_ joined #gluster
07:28 rafi joined #gluster
07:30 apandey__ joined #gluster
07:37 fsimonce joined #gluster
07:41 Wizek_ joined #gluster
08:04 Seth_Karlo joined #gluster
08:19 mbukatov joined #gluster
08:20 Gnomethrower joined #gluster
08:23 itisravi joined #gluster
08:58 apandey joined #gluster
09:07 derjohn_mob joined #gluster
09:09 ankitr joined #gluster
09:13 rafi1 joined #gluster
09:22 msvbhat joined #gluster
09:28 MrAbaddon joined #gluster
09:34 ayaz joined #gluster
09:46 [diablo] joined #gluster
09:48 Seth_Karlo joined #gluster
09:49 Seth_Karlo joined #gluster
09:52 vinurs joined #gluster
09:53 apandey joined #gluster
10:16 nishanth joined #gluster
10:44 skumar joined #gluster
10:48 msvbhat joined #gluster
11:13 gyadav joined #gluster
11:20 mbukatov joined #gluster
11:22 Gnomethrower joined #gluster
11:37 bartden joined #gluster
11:48 _KaszpiR_ joined #gluster
11:48 Wizek_ joined #gluster
11:50 marbu joined #gluster
11:53 gyadav joined #gluster
12:07 rastar joined #gluster
12:14 Ashutto joined #gluster
12:16 Ashutto Hello. I have several errors like this "[posix-aio.c:276:posix_aio_writev_complete] 0-vol-gaz-homes-posix: writev(async) failed fd=25,offset=8192 (-22) [Invalid argument]", apparently related to an application that cyclically writes a large number of small files. Is there something I can do to mitigate my problems?
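The writev(async) EINVAL errors above come from the brick-side posix translator when Linux native AIO is in use. One possible mitigation, sketched here on the assumption that the volume is named vol-gaz-homes (taken from the "0-vol-gaz-homes-posix" prefix in the log) and that native AIO is not otherwise required, is to turn the storage.linux-aio option off so the bricks fall back to ordinary synchronous writes; the bricks may need a volume restart to pick up the change.

    # check the volume's current options (volume name assumed from the log)
    gluster volume info vol-gaz-homes
    # disable Linux native AIO; the posix translator then uses plain (non-AIO) writes
    gluster volume set vol-gaz-homes storage.linux-aio off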
12:22 gyadav joined #gluster
12:47 Saravanakmr joined #gluster
12:49 rwheeler joined #gluster
12:52 baber joined #gluster
12:56 msvbhat joined #gluster
12:58 plarsen joined #gluster
13:00 susant left #gluster
13:15 kotreshhr left #gluster
13:28 kramdoss_ joined #gluster
13:40 shyam joined #gluster
13:43 msvbhat joined #gluster
13:43 nh2 joined #gluster
13:49 skumar joined #gluster
13:52 skylar1 joined #gluster
13:54 vinurs joined #gluster
13:56 shaunm joined #gluster
14:00 Saravanakmr joined #gluster
14:08 buvanesh_kumar joined #gluster
14:11 _KaszpiR_ I've got something like this on CentOS 5 with gluster 3.7 https://pastebin.com/0V7mzCBp
14:11 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
14:11 _KaszpiR_ nothing else in the logs
14:14 jiffin joined #gluster
14:16 Karan joined #gluster
14:28 farhorizon joined #gluster
14:46 tom[] joined #gluster
14:53 nbalacha joined #gluster
14:57 gyadav joined #gluster
15:08 wushudoin joined #gluster
15:15 shaunm joined #gluster
15:15 jtux left #gluster
15:18 farhorizon joined #gluster
15:22 tom[] joined #gluster
15:25 farhoriz_ joined #gluster
15:27 farhori__ joined #gluster
15:32 m0zes joined #gluster
15:42 Humble joined #gluster
15:48 ankitr joined #gluster
15:49 skoduri joined #gluster
15:56 ankitr joined #gluster
16:03 msvbhat joined #gluster
16:04 plarsen joined #gluster
16:06 farhorizon joined #gluster
16:07 jiffin joined #gluster
16:07 kramdoss_ joined #gluster
16:15 Shu6h3ndu joined #gluster
16:19 armyriad joined #gluster
16:25 farhorizon joined #gluster
16:25 gyadav joined #gluster
16:27 jiffin1 joined #gluster
16:29 jiffin joined #gluster
16:30 ivan_rossi left #gluster
16:31 nbalacha joined #gluster
16:44 gyadav joined #gluster
16:58 jiffin joined #gluster
17:00 Saravanakmr joined #gluster
17:03 nbalacha joined #gluster
17:13 gyadav joined #gluster
17:25 Karan joined #gluster
17:48 cliluw joined #gluster
18:13 ChrisHolcombe joined #gluster
18:19 skoduri joined #gluster
18:35 baber joined #gluster
18:40 marlinc joined #gluster
18:44 Karan joined #gluster
18:52 MrAbaddon joined #gluster
18:59 bit4man joined #gluster
19:06 msvbhat joined #gluster
19:10 nh2 joined #gluster
19:14 MrAbaddon joined #gluster
19:26 bennyturns joined #gluster
19:27 Wizek_ joined #gluster
19:31 Gambit15 Hey #gluster, anyone around?
19:31 bennyturns Gambit15, hi yas
19:32 bennyturns Gambit15, whats up?
19:32 Gambit15 I've just had an odd quorum issue
19:32 bennyturns ok
19:32 bennyturns what vol type? what quorum type?
19:33 Gambit15 This particular cluster is using 2+1 (arbiter), with cluster.server-quorum-type: server & cluster.quorum-type: auto
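The two options mentioned here are ordinary volume-set options; a rough sketch of how such a configuration is applied (the volume name "vol" is a placeholder):

    gluster volume set vol cluster.server-quorum-type server   # glusterd-side quorum: bricks are stopped if server quorum is lost
    gluster volume set vol cluster.quorum-type auto            # AFR/client-side quorum: writes need a majority of the replica set to be up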
19:33 bennyturns ok
19:33 bennyturns whats happening?
19:33 Gambit15 I just pulled out 1 of the nodes, gluster complained of lost quorum, and locked the volumes
19:34 Gambit15 :/
19:34 bennyturns hmm
19:34 bennyturns you see the arbiter online?
19:34 Gambit15 I ran peer status on the remaining full node, and it was still able to communicate with the arbiter
19:35 bennyturns Gambit15, https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
19:35 glusterbot Title: Arbiter volumes and quorum options - Gluster Docs (at gluster.readthedocs.io)
19:35 bennyturns let's look at what it suggests for quorum types
19:36 Gambit15 Well, it's replica 3 arbiter 1, which should maintain quorum if any 1 of the 3 peers should fail
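For reference, a replica 3 arbiter 1 volume has two data bricks plus one arbiter brick per replica set; a minimal creation sketch with placeholder host and brick paths:

    gluster volume create vol replica 3 arbiter 1 \
        server1:/bricks/b1 server2:/bricks/b1 arbiter1:/bricks/b1
    # the last brick in each set of three (arbiter1 here) holds only metadata, no file data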
19:36 bennyturns Gambit15, what about trying a different quorum type?
19:36 bennyturns we could try:
19:36 bennyturns Option: cluster.server-quorum-ratio, value: 0 to 100
19:36 bennyturns and set it to 51%
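cluster.server-quorum-ratio is a cluster-wide setting, so it is applied to "all" rather than to a single volume; a sketch of the suggestion (51 being the percentage bennyturns proposes):

    gluster volume set all cluster.server-quorum-ratio 51
    # with three peers, 51% means at least two servers must be reachable for bricks to stay up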
19:37 bennyturns Gambit15, so you have 2 data bricks and 1 arbiter brick, correct?
19:37 bennyturns and 3 nodes
19:37 Gambit15 cluster.quorum-type: auto = 51%
19:38 bennyturns Gambit15, just to confirm:
19:38 bennyturns Gambit15, so you have 2 data bricks and 1 arbiter brick, correct?
19:38 bennyturns and 3 nodes
19:39 Gambit15 Yes. Two servers/nodes, each with a single brick, plus the arbiter which is a VM on a separate environment
19:39 Gambit15 When the arbiter has gone offline, the 2 servers have continued functioning
19:39 bennyturns hmm what is this:
19:39 bennyturns If 2 bricks are up and if one of them is the arbiter (i.e. the 3rd brick) and it blames the other up brick for a given file, then all write FOPS will fail with ENOTCONN. This is because in this scenario, the only true copy is on the brick that is down. Hence we cannot allow writes until that brick is also up. If the arbiter doesn't blame the other brick, FOPS will be allowed to proceed. 'Blaming' here is w.r.t the values of AFR changelog extended attributes.
19:40 Gambit15 This time, with one of the servers offline, gluster "lost quorum" & locked the volumes
19:41 bennyturns Gambit15, what I pasted above is at the bottom of:
19:42 bennyturns https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/#how-arbiter-works
19:42 glusterbot Title: Arbiter volumes and quorum options - Gluster Docs (at gluster.readthedocs.io)
19:42 Gambit15 There were no heal processes taking place at the time I shut down the server, nor was anything (VMs) running on it
19:42 bennyturns Gambit15, is it saying that even with the arbiter up and one node up, I/Os can still fail?
19:43 bennyturns I haven't done a lot with arbiter :/
19:43 Gambit15 In theory, the arbiter should act like, and be treated as, any other peer
19:43 bennyturns Gambit15, what are you using to write data?
19:44 Gambit15 The only difference being it only writes metadata
19:44 bennyturns Gambit15, ya sounds right to me
19:44 bennyturns Gambit15, how are you accessing the mount when you are testing IOs?
19:44 bennyturns DD?
19:44 Gambit15 libgfapi
19:45 Gambit15 Uh, that's how the volumes are mounted by the hypervisor
19:45 bennyturns so you are writing directly using libgfapi? Or are you just running dd or something in the VM?
19:45 bennyturns or are you just seeing the VMs get paused?
19:45 Gambit15 But that's irrelevant in this case, as quorum is server-side & it was glusterd that locked the volumes
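For context, a hypervisor using libgfapi opens images on the volume directly over a gluster:// URL instead of going through a FUSE mount, so FUSE mount options do not apply; a minimal qemu-based sketch (host, volume and image names are placeholders):

    qemu-img info gluster://server1/vol/images/vm01.qcow2
    # qemu talks to the volume via libgfapi; no backupvolfile-server mount option is involved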
19:46 bennyturns I want to see what the fail return code is
19:46 bennyturns in the doc it mentions:
19:46 bennyturns ENOTCONN
19:47 Gambit15 The VMs got paused & the gluster command stopped responding on the remaining nodes. The gluster logs showed it had "lost" quorum & paused the volumes
19:47 bennyturns Gambit15, feel free to open a bug
19:47 Gambit15 Writing to the mailing-list now, see what they say
19:47 bennyturns since it's a VM and only 1 file
19:48 bennyturns I think you could be seeing:
19:48 bennyturns "and it blames the other up brick for a given file, then all write FOPS will fail with ENOTCONN"
19:48 bennyturns can you look at the xattrs of the VM image file
19:48 bennyturns and see if thats the case
19:48 bennyturns "blame" is an AFR term
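The "blame" state lives in the trusted.afr.* extended attributes on each brick's copy of the file; a sketch of the check being asked for, with a placeholder brick path and image name:

    getfattr -d -m . -e hex /bricks/b1/images/vm01.qcow2
    # non-zero trusted.afr.<volname>-client-N values mean this copy records pending
    # (unsynced) operations against brick N, i.e. it "blames" that brick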
19:49 Gambit15 Well, everything's back up now. I rebooted the downed node & gluster unpaused the volumes
19:49 bennyturns Gambit15, http://lists.gluster.org/pipermail/gluster-users.old/2015-September/023488.html
19:49 glusterbot Title: [Gluster-users] [Gluster-devel] AFR arbiter volumes (at lists.gluster.org)
19:54 bennyturns Gambit15, here is a vault presentation that may answer some questions http://events.linuxfoundation.org/sites/events/files/slides/glusterfs-arbiter-VAULT-2016.pdf
19:54 bennyturns Gambit15, again it mentions:
19:54 bennyturns c) 1 brick and arbiter are up → allow writes IFF the arbiter doesn't blame the up brick
19:55 bennyturns Gambit15, I bet you are in that C scenario where the VM image file is on a brick that is being blamed
19:56 bennyturns I wonder how to resolve that?
19:56 bennyturns in a two brick replica
19:57 bennyturns one is always marked blame AFAIK?
19:58 bennyturns Gambit15, ohh one thing
19:58 bennyturns Gambit15, in your mount options
19:58 bennyturns Gambit15, are you using the backup volfile server mount option?
19:58 jkroon joined #gluster
19:59 bennyturns Gambit15, https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Clients/#mounting-volumes
19:59 glusterbot Title: Setting Up Clients - Gluster Docs (at gluster.readthedocs.io)
20:00 bennyturns backupvolfile-server=server-name
20:00 bennyturns you need that in case the node you mounted from goes down; it fetches the vol file from a different node
20:00 bennyturns Gambit15, you have that in the client mount options?
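For a FUSE client, that option is passed at mount time (or in fstab); a minimal sketch with placeholder hostnames:

    mount -t glusterfs -o backupvolfile-server=server2 server1:/vol /mnt/vol
    # if server1 is unreachable when mounting, the client fetches the volfile from server2 instead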
20:01 Gambit15 Not with libgfapi. That configuration has now been deprecated
20:02 bent|afk Gambit15, ok sounds like you know better than me GL!
20:02 Gambit15 ;)
20:02 Gambit15 Cheers anyway Ben!
20:02 baber joined #gluster
20:05 Gambit15 http://lists.gluster.org/pipermail/gluster-users.old/2015-September/023488.html
20:05 glusterbot Title: [Gluster-users] [Gluster-devel] AFR arbiter volumes (at lists.gluster.org)
20:05 Gambit15 oops
20:36 derjohn_mob joined #gluster
21:06 kraynor5b_ joined #gluster
21:06 shyam joined #gluster
21:14 gluytium joined #gluster
21:16 farhorizon joined #gluster
21:44 cliluw joined #gluster
21:57 farhorizon joined #gluster
22:20 farhorizon joined #gluster
22:24 farhorizon joined #gluster
22:34 farhorizon joined #gluster
22:41 shyam joined #gluster
22:42 moneylotion joined #gluster
23:09 farhoriz_ joined #gluster
23:54 tdasilva joined #gluster
