
IRC log for #gluster, 2016-07-15


All times shown according to UTC.

Time Nick Message
00:07 amye joined #gluster
00:25 ahino joined #gluster
00:33 amye joined #gluster
00:49 Roland- joined #gluster
00:54 kukulogy joined #gluster
00:56 kukulogy ndevos, sorry for the late reply. Yes, /sbin/mount.glusterfs is missing. Where can I find it? I tried searching Google but no luck.
00:58 dnunez joined #gluster
00:59 shdeng joined #gluster
01:04 kukulogy Another question: I have 4 servers with 55GB each. I created a striped replicated volume. When I mounted it, it shows 126G. Is that the correct size?
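On the missing mount helper above: /sbin/mount.glusterfs normally ships with the FUSE client package, so asking the package manager which package owns it is a quick check. This is a rough sketch; the package names glusterfs-client (Debian/Ubuntu) and glusterfs-fuse (RHEL/CentOS) are the usual ones, not confirmed from this log.

    # Debian/Ubuntu: find or install the package that provides the mount helper
    dpkg -S /sbin/mount.glusterfs || sudo apt-get install glusterfs-client
    # RHEL/CentOS equivalent
    yum provides "*/mount.glusterfs" && sudo yum install glusterfs-fuse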
01:17 hagarth joined #gluster
01:28 Lee1092 joined #gluster
01:46 aj__ joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:08 harish_ joined #gluster
02:22 kukulogy joined #gluster
02:25 F2Knight joined #gluster
02:37 B21956 joined #gluster
02:40 hagarth joined #gluster
03:06 auzty joined #gluster
03:25 [o__o] joined #gluster
03:32 ashiq_ joined #gluster
03:35 magrawal joined #gluster
03:42 itisravi joined #gluster
03:48 Javezim joined #gluster
03:54 F2Knight joined #gluster
03:55 kramdoss_ joined #gluster
03:56 nehar joined #gluster
04:03 armyriad joined #gluster
04:06 hchiramm joined #gluster
04:08 rastar joined #gluster
04:08 F2Knight joined #gluster
04:14 shubhendu joined #gluster
04:16 cvstealth joined #gluster
04:17 aspandey joined #gluster
04:21 msvbhat joined #gluster
04:25 nbalacha joined #gluster
04:27 ppai joined #gluster
04:32 poornimag joined #gluster
04:48 satya4ever joined #gluster
05:03 atinm joined #gluster
05:06 aravindavk joined #gluster
05:07 sakshi joined #gluster
05:07 Gnomethrower joined #gluster
05:09 prasanth joined #gluster
05:13 sanoj joined #gluster
05:24 kotreshhr joined #gluster
05:26 ramky joined #gluster
05:26 Apeksha joined #gluster
05:27 karnan joined #gluster
05:29 ndarshan joined #gluster
05:39 kshlm joined #gluster
05:43 RameshN joined #gluster
05:44 shubhendu joined #gluster
05:51 eryc joined #gluster
05:57 hgowtham joined #gluster
06:01 Muthu joined #gluster
06:02 jiffin joined #gluster
06:06 Klas hmm, when in a replica 3 with an arbiter in quorum, shutting down both a server and the arbiter disables writes
06:06 Klas but writes are not resumed on existing mounts once enough peers are back up
06:07 Klas is this correct behaviour, ie, do you need to remount it?
06:08 Manikandan joined #gluster
06:11 hackman joined #gluster
06:11 itisravi joined #gluster
06:14 jiffin itisravi: ^^
06:15 skoduri joined #gluster
06:15 itisravi Klas: No you don't need to remount.
06:16 Klas itisravi: well, I seem to need to =P
06:16 Klas but, it is an unexpected behaviour then?
06:16 Klas that's strange
06:17 aspandey joined #gluster
06:18 Klas or is this the "45 second timeout" thing?
06:18 rafi joined #gluster
06:22 shubhendu joined #gluster
06:22 kdhananjay joined #gluster
06:22 itisravi Is the mount connected to all 3 bricks?
06:23 Klas how do I check?
06:23 devyani7_ joined #gluster
06:24 itisravi you could check the fuse mount log for messages like 'connected to volname-client-x' or something like that.
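A rough way to do that check from the client side; the fuse mount log path below is an assumption — it is normally /var/log/glusterfs/<mount-point-with-dashes>.log, e.g. /var/log/glusterfs/mnt-gvol.log for a mount at /mnt/gvol.

    # show the most recent connect/disconnect messages for each brick (client-x)
    grep -E "Connected to .*-client-[0-9]+|disconnected from" /var/log/glusterfs/mnt-gvol.log | tail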
06:24 itisravi Klas: Just to be sure all 3 bricks are up right? As shown in 'gluster volume status' ?
06:24 Klas yup
06:24 itisravi k
06:24 Klas and I can mount it again, no problem
06:26 Klas yup, connected to all three
06:26 Klas then client-quorum is not met
06:26 Klas then Read-only file system galore
06:26 Klas on one client node, I can read the files, on the other, it acts stale
06:27 Klas (I can not stat it in any way)
06:28 Klas interesting, very different error messages on the other server
06:28 Klas sorry, client
06:29 Klas [2016-07-15 06:18:38.471469] E [MSGID: 114031] [client-rpc-fops.c:466:client3_3_open_cbk] 0-[volumename]-client-2: remote operation failed. Path: /date (cfaec3e9-cecf-40fc-881f-7538be0c43c4) [Transport endpoint is not connected]
06:29 Klas [2016-07-15 06:18:38.472278] E [socket.c:3147:socket_connect] 0-[volumename]-client-2: connection attempt on [IP]:49152 failed, (Connection refused)
06:29 Klas [2016-07-15 06:18:38.472371] E [socket.c:464:ssl_setup_connection] 0-[volumename]-client-2: SSL connect error
06:29 Klas [2016-07-15 06:18:38.472388] E [socket.c:2505:socket_poller] 0-[volumename]-client-2: client setup failed
06:29 hchiramm joined #gluster
06:30 itisravi yeah so that client has lost connection to the 3rd brick ([volumename]-client-2)
06:31 itisravi which also must be the arbiter brick.
06:31 Klas which is reasonable since at that point it was turned off
06:31 kukulogy question: when expanding striped replicate volumes, do we need to add a number of bricks that is multiple of the replica?
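For the expansion question: in a distributed striped-replicated volume, bricks are generally added in multiples of stripe count times replica count, i.e. a whole new distribute unit at a time. A sketch with hypothetical hosts and brick paths, assuming stripe 2 x replica 2, so four bricks per unit.

    gluster volume add-brick myvol \
        server5:/data/brick1 server6:/data/brick1 \
        server7:/data/brick1 server8:/data/brick1
    # spread existing data onto the new bricks
    gluster volume rebalance myvol start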
06:31 jtux joined #gluster
06:32 Klas notably, on the one with the read-only mount, I can read new changes to files
06:32 Klas just not write
06:33 itisravi mhmm, writes are failing with EROFS?
06:34 pur_ joined #gluster
06:34 Manikandan joined #gluster
06:35 Klas echo "Failure" >> failure
06:35 Klas -bash: failure: Read-only file system
06:35 itisravi and what do the last couple of lines of the log say?
06:35 Klas [2016-07-15 06:35:08.537948] W [fuse-bridge.c:2002:fuse_create_cbk] 0-glusterfs-fuse: 7470: /failure => -1 (Read-only file system)
06:35 ashiq joined #gluster
06:35 itisravi ok
06:35 Klas (that is the correlated one)
06:36 Klas as I said, on the other client, the mount is stale
06:36 Klas if that is the right term
06:36 Klas acts exactly as a nfs stale mount at least
06:36 Klas (meaning, ls [mountpoint] locks that terminal and it has to be killed
06:36 msvbhat joined #gluster
06:37 ppai joined #gluster
06:37 Klas )
06:37 Klas meanwhile, a new mount works wonderfully
06:38 Klas but, this still means that we can get into a state where we have to remount all the shares, which of course, is not a very nice thing (still only in lab, but will hopefully go into production in september)
06:39 karthik_ joined #gluster
06:39 itisravi I'm guessing it is some sort of a connection issue. EROFS is usually returned when quorum is not met.
06:40 itisravi There has to be a lost connection to more than one brick from the mount.
06:40 Klas there was, intentionally, but it should recover
06:40 Klas I want to test that quorum works and when and how it recovers
06:41 Klas if quorum is not met, it should be read-only, this is expected and good
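For reference, the client-side quorum behaviour being discussed here is governed by volume options like these; 'myvol' is a placeholder, and 'auto' — the usual default for replica 3 / arbiter volumes — makes the client go read-only when it cannot reach more than half of a replica set.

    gluster volume get myvol cluster.quorum-type
    gluster volume set myvol cluster.quorum-type auto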
06:41 Debloper joined #gluster
06:41 Klas but, the clients should discover that quorum is met again without remount?
06:41 Klas btw, I am using both SSL and TLS atm
06:43 kramdoss_ joined #gluster
06:43 sakshi joined #gluster
06:44 sakshi joined #gluster
06:44 jtux joined #gluster
06:44 Javezim Anyone seen weird behaviour with Gluster maxing out one core of your CPU for a few seconds, then going back to normal and repeating?
06:50 itisravi Klas: yes definitely. If the connection is re-established, you should see "Connected to volname-client-x" .
06:58 Klas hrm
06:58 Klas so, any idea what could be wrong?
06:58 Klas running 3.7.13, both on clients and servers
06:59 Klas I'm running my own compiled version btw, and might be missing something on the client side (based it off ubuntu 16.04 requirements of their package)
07:04 itisravi not sure. But if you are able to have some sort of a consistent reproducer, do raise a bug with the logs.
07:15 jwd joined #gluster
07:16 unlaudable joined #gluster
07:21 ivan_rossi joined #gluster
07:26 atalur joined #gluster
07:34 Klas hrm, now I'm even more displeased, quorum should be lost (both servers down), but I can still write to the volume
07:34 Klas (two servers down, not both: the arbiter and one server, one server still up)
07:34 Klas ah, nope
07:34 Klas it just doesn't say that it is a r/o fs
07:36 arcolife joined #gluster
07:37 archit_ joined #gluster
07:37 alvinstarr joined #gluster
07:52 kramdoss_ joined #gluster
08:02 Peppard joined #gluster
08:06 sakshi joined #gluster
08:18 fsimonce joined #gluster
08:19 deniszh joined #gluster
08:20 karthik_ joined #gluster
08:20 Klas itisravi: it seems related to SSL and/or TLS
08:21 Seth_Karlo joined #gluster
08:21 Klas I will narrow it down and post a bug report
08:21 itisravi Klas: ah okay.
08:21 itisravi Klas: that would be great.
08:21 Klas I can't reproduce it on a non-ssl volume
08:21 itisravi mhmm.
08:22 Klas I will try to post the report today (a lot of people are going on vacation, so a lot of stuff to fix around the office meeting-wise =P)
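For context on the SSL/TLS setup being tested, GlusterFS transport encryption is typically switched on per volume with the options below; the certificate locations are the conventional defaults (/etc/ssl/glusterfs.pem, glusterfs.key, glusterfs.ca) and 'myvol' and the allowed CNs are placeholders, not details taken from this log.

    gluster volume set myvol client.ssl on
    gluster volume set myvol server.ssl on
    gluster volume set myvol auth.ssl-allow 'client1.example.com,client2.example.com'
    # management-path (glusterd) encryption is enabled by creating this file on every node
    touch /var/lib/glusterd/secure-access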
08:40 aj__ joined #gluster
08:41 Gnomethrower joined #gluster
08:42 ppai joined #gluster
08:46 kramdoss_ joined #gluster
08:48 mhulsman joined #gluster
08:57 robb_nl joined #gluster
08:58 rafi1 joined #gluster
09:04 kramdoss_ joined #gluster
09:16 nbalacha joined #gluster
09:25 Guest60259 joined #gluster
09:27 armyriad joined #gluster
09:30 jiffin1 joined #gluster
09:43 Manikandan joined #gluster
09:48 itisravi joined #gluster
09:54 bluenemo joined #gluster
10:02 Klas itisravi: bug isolated and confirmed with different volumes, if SSL is on, it creates the issue, I will file the report
10:07 itisravi Klas: I suppose this is the case with any volume topology right? (not just arbiter volumes)
10:08 Klas hmm
10:08 itisravi okay
10:08 Klas I'll test it first
10:08 itisravi k
10:08 Klas I have only tried with arbiter
10:08 Klas I'll check =)
10:08 itisravi thanks :)
10:09 Klas hey, thanks for helping =)
10:09 Klas I know SSL support is partly experimental
10:11 Klas I just love that you guys are actually answering, many projects are very bad at handling non-paying customer feedback =)
10:11 misc what, you are not paying, you should have said earlier /o\
10:11 Klas haha
10:12 jiffin1 joined #gluster
10:13 devilspgd left #gluster
10:19 Klas itisravi: verified, happens in normal replica 3 as well
10:20 Klas I'm not testing non-quorum related stuff, since, well, that seems silly =P
10:20 itisravi Klas: okay, do you know the URL to raise a bug?
10:20 Klas https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
10:20 glusterbot Title: Log in to Red Hat Bugzilla (at bugzilla.redhat.com)
10:20 Klas ;)
10:20 itisravi Klas: brilliant :)
10:21 Manikandan_ joined #gluster
10:21 Klas I'm filing it under fuse
10:21 Klas seems to be correct?
10:22 Muthu joined #gluster
10:23 itisravi protocol would be better
10:24 ira joined #gluster
10:26 Klas ok then
10:27 itisravi Klas: actually 'core' would be better.
10:27 kramdoss_ joined #gluster
10:28 itisravi not a big deal, it will get moved to the appropriate component once it is RCA'd.
10:28 Klas haha
10:28 Klas not posted yet, so changed again =)
10:37 Klas itisravi: https://bugzilla.redhat.com/show_bug.cgi?id=1356942 posted
10:37 glusterbot Bug 1356942: unspecified, unspecified, ---, bugs, NEW , Problem with persistant mount on quorum fail when using SSL
10:38 Klas if I missed something, I'll happily supply it
10:38 johnmilton joined #gluster
10:39 itisravi Klas: cool
10:40 johnmilton joined #gluster
10:40 Klas I'm working for about 3-4 hours longer today, btw ;)
10:41 Klas (damn it
10:41 Klas I just realized I forgot to mention that the date is UTC+2 in part of the document
10:42 Seth_Karlo joined #gluster
10:42 msvbhat joined #gluster
10:42 johnmilton joined #gluster
10:44 johnmilton joined #gluster
10:47 johnmilton joined #gluster
10:47 johnmilton joined #gluster
10:51 johnmilton joined #gluster
11:00 Seth_Karlo joined #gluster
11:21 nishanth joined #gluster
11:22 nehar joined #gluster
11:24 Ulrar Can someone confirm that I can install a Proxmox node with 32 GB of swap and have VMs run in there? I realise it will be insanely slow; I'm considering that in case of node failures as a temporary solution
11:25 Ulrar And that's not the correct channel, my bad :)
11:26 md2k joined #gluster
11:29 Klas you should probably go to  a proxmox channel instead of a gluster channel =)
11:29 Ulrar Yeah, it's the one just below this one for me
11:29 Klas hehe
11:35 atalur_ joined #gluster
11:50 julim joined #gluster
11:57 Manikandan_ joined #gluster
12:18 atalur_ joined #gluster
12:20 plarsen joined #gluster
12:32 rafi joined #gluster
12:37 B21956 joined #gluster
12:44 guhcampos joined #gluster
12:48 ben453 joined #gluster
12:56 harish joined #gluster
13:01 nehar joined #gluster
13:06 Seth_Karlo joined #gluster
13:16 arcolife joined #gluster
13:25 kramdoss_ joined #gluster
13:28 unclemarc joined #gluster
13:29 msvbhat joined #gluster
13:32 Gnomethrower joined #gluster
13:36 ahino joined #gluster
13:38 robb_nl joined #gluster
13:42 Hamburglr joined #gluster
13:43 Hamburglr what configuration option sets the glusterfs -s/
14:08 shubhendu_ joined #gluster
14:13 Rasathus joined #gluster
14:14 Rasathus left #gluster
14:15 jwaibel joined #gluster
14:15 nbalacha joined #gluster
14:32 squizzi_ joined #gluster
14:41 plarsen joined #gluster
14:42 Wizek joined #gluster
14:43 farhorizon joined #gluster
14:50 prasanth joined #gluster
15:00 mhulsman joined #gluster
15:03 mhulsman joined #gluster
15:03 rafaels joined #gluster
15:04 wushudoin joined #gluster
15:08 bluenemo joined #gluster
15:13 jiffin joined #gluster
15:22 nishanth joined #gluster
15:35 Seth_Karlo joined #gluster
15:36 F2Knight joined #gluster
15:38 Seth_Kar_ joined #gluster
15:43 dnunez joined #gluster
15:43 shaunm joined #gluster
15:46 kpease joined #gluster
15:46 ivan_rossi left #gluster
15:47 kramdoss_ joined #gluster
15:49 Manikandan_ joined #gluster
16:00 guhcampos joined #gluster
16:01 msvbhat joined #gluster
16:03 JoeJulian Klas: I don't see the servers coming back up after 10:16:40 in those server logs - which is the time period that client log should have been trying to reconnect.
16:03 Klas hmm, wait a minute, I'll check when I rebooted them
16:05 Klas 10:16:40, thats the UTC time, right?
16:05 JoeJulian right
16:07 Klas yeah, 10:17 was when the system booted up
16:08 Klas hmm, maybe I never double-checked the secondary mount procedure here, I maybe only did that with a second server
16:08 Klas damn it
16:08 Klas or, rather, second volume
16:08 Klas I can reproduce it now if you want, and supply all new logs?
16:11 JoeJulian Sure. Truncate the logs (client, brick, and glusterd.vol.log). Bring everything up. Mount the client. Bring down two servers (note the timestamp). Bring up two servers. Logs that matter are the client, glusterd.vol.log, and the bricks. Logs that _do not_ matter are nfs.log, glustershd.log... pretty much any log on the server that's not brick or glusterd.
16:12 JoeJulian ... also... if you could tar them or something so they're individual files, that would make them easier to parse.
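A minimal sketch of that collection procedure, assuming default log locations and a hypothetical volume/mount name; the exact glusterd log file name varies slightly between versions.

    # on the servers and the client: clear the relevant logs first
    truncate -s 0 /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log
    # reproduce: mount, stop two servers (note the timestamp), start them again
    # then collect only the client, glusterd, and brick logs
    tar czf gluster-quorum-logs.tar.gz \
        /var/log/glusterfs/mnt-myvol.log \
        /var/log/glusterfs/etc-glusterfs-glusterd.vol.log \
        /var/log/glusterfs/bricks/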
16:12 Klas sure thing
16:13 Klas I've just confirmed that I could reproduce the whole thing
16:13 Klas same client has mounted the volume both RW and RO
16:13 Klas the RO can read what the RW does
16:13 Klas so I'll supply those logs
16:19 kramdoss_ joined #gluster
16:31 shyam joined #gluster
16:31 Klas crap, need to reset pw
16:32 Klas (it's in the pw vault at the office)
16:35 Klas JoeJulian: there, done
16:36 Klas I'm leaving it in current status if you want something else (at least until monday)
16:38 Seth_Karlo joined #gluster
16:39 aj__ joined #gluster
16:41 skylar joined #gluster
16:41 JoeJulian Interesting, and much easier to read, thanks.
16:41 Klas np
16:42 Klas and thanks for checking it out =)
16:42 Klas I only have a week until vacation, so I'm hoping I can give you all the info you need until then =)
16:42 JoeJulian Maybe if I have time after work, I'll see if I can reproduce it and try debugging it myself.
16:42 Klas nice =)
16:43 JoeJulian Right now I've got to get back to my $dayjob.
16:43 Klas let me know if you want the .deb packages I'm using
16:43 Klas hehe
16:43 Klas god speed then!
16:44 jiffin joined #gluster
16:45 Klas (I think I will have plenty of time to do $dayjob this weekend as well, which is where I'm working on testing gluster)
16:51 nehar joined #gluster
16:55 karnan joined #gluster
16:58 Apeksha joined #gluster
16:59 Seth_Karlo joined #gluster
17:08 B21956 joined #gluster
17:09 chirino_m joined #gluster
17:21 kramdoss_ joined #gluster
17:26 Hamburglr joined #gluster
17:28 Hamburglr is there a good way to make gluster work with lsyncd? I tried to sync the actual files rather than a mount, but if a file gets edited via an NFS mount the correct info isn't going to the inotify API
17:35 farhorizon joined #gluster
17:38 jiffin joined #gluster
17:47 post-factum inotify is not supported
17:52 JoeJulian yet
17:52 JoeJulian I understand there's work being done in the fuse kernel module to be able to support it in the future.
17:54 farhorizon joined #gluster
18:00 johnmilton joined #gluster
18:03 post-factum fsnotify?
18:05 post-factum https://github.com/libfuse/libfuse/wiki/Fsnotify-and-FUSE
18:05 glusterbot Title: Fsnotify and FUSE · libfuse/libfuse Wiki · GitHub (at github.com)
18:05 JoeJulian I think so. I haven't been paying close attention, just catching references in passing.
18:17 johnmilton joined #gluster
18:27 farhorizon joined #gluster
18:29 Manikandan_ joined #gluster
18:30 karnan joined #gluster
18:30 Hamburglr joined #gluster
18:35 jiffin1 joined #gluster
19:19 mhulsman joined #gluster
19:36 deniszh joined #gluster
19:52 Hamburglr joined #gluster
20:29 shyam joined #gluster
20:30 karnan joined #gluster
20:54 chrisg joined #gluster
20:56 mhulsman joined #gluster
21:27 mhulsman joined #gluster
21:40 deniszh joined #gluster
21:54 F2Knight joined #gluster
21:59 mtanner joined #gluster
22:01 DV_ joined #gluster
22:22 gbox Setting lookup-optimize on assumes a balanced gluster volume. Is there a check for whether the volume is balanced?
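For the question above, a rough check: lookup-optimize relies on the layout being settled, and the rebalance status output shows whether the last rebalance completed. The volume name is a placeholder.

    gluster volume rebalance myvol status
    gluster volume set myvol cluster.lookup-optimize on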
22:48 plarsen joined #gluster
22:54 deniszh joined #gluster
