
IRC log for #gluster, 2017-08-31


All times shown according to UTC.

Time Nick Message
00:00 shyam joined #gluster
00:00 vbellur joined #gluster
00:37 zcourts_ joined #gluster
01:01 ronrib joined #gluster
01:07 MrAbaddon joined #gluster
01:30 luizcpg joined #gluster
01:51 ilbot3 joined #gluster
01:51 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:57 ronrib joined #gluster
02:19 omie888777 joined #gluster
02:30 luizcpg joined #gluster
02:43 omie888777 joined #gluster
03:02 susant joined #gluster
03:02 MrAbaddon joined #gluster
03:02 ronrib joined #gluster
03:02 plarsen joined #gluster
03:09 gyadav__ joined #gluster
03:09 skoduri joined #gluster
03:44 luizcpg joined #gluster
03:44 itisravi joined #gluster
03:44 Guest9038 joined #gluster
03:49 riyas joined #gluster
03:49 luizcpg joined #gluster
03:56 msvbhat joined #gluster
04:01 atinm joined #gluster
04:10 gyadav_ joined #gluster
04:15 Prasad joined #gluster
04:17 msvbhat joined #gluster
04:20 nbalacha joined #gluster
04:30 Shu6h3ndu joined #gluster
04:34 gyadav__ joined #gluster
04:37 aravindavk_ joined #gluster
04:40 sanoj joined #gluster
04:40 ankitr joined #gluster
04:43 apandey joined #gluster
04:49 poornima joined #gluster
04:56 jiffin joined #gluster
05:00 ndarshan joined #gluster
05:03 aravindavk joined #gluster
05:06 skumar joined #gluster
05:09 kotreshhr joined #gluster
05:14 ankitr joined #gluster
05:40 prasanth joined #gluster
05:42 hgowtham joined #gluster
05:43 karthik_us joined #gluster
05:53 nbalacha joined #gluster
06:01 ankitr joined #gluster
06:02 gyadav_ joined #gluster
06:02 kdhananjay joined #gluster
06:04 sona joined #gluster
06:06 kotreshhr left #gluster
06:14 ndarshan joined #gluster
06:16 Saravanakmr joined #gluster
06:19 nbalacha joined #gluster
06:21 jtux joined #gluster
06:22 kdhananjay joined #gluster
06:22 poornima joined #gluster
06:23 atinm joined #gluster
06:33 rastar joined #gluster
06:34 Prasad joined #gluster
06:37 nbalacha joined #gluster
06:37 msvbhat joined #gluster
06:45 susant joined #gluster
06:45 Humble joined #gluster
06:52 Humble joined #gluster
06:57 armyriad joined #gluster
07:11 Saravanakmr joined #gluster
07:20 kotreshhr joined #gluster
07:22 susant joined #gluster
07:22 fsimonce joined #gluster
07:25 ivan_rossi joined #gluster
07:37 apandey_ joined #gluster
07:43 gyadav__ joined #gluster
07:45 aravindavk joined #gluster
07:48 poornima joined #gluster
07:57 mbukatov joined #gluster
07:58 ndarshan joined #gluster
08:05 skoduri joined #gluster
08:20 [o__o] joined #gluster
08:31 _KaszpiR_ joined #gluster
08:32 aravindavk joined #gluster
08:41 kdhananjay joined #gluster
08:42 jiffin joined #gluster
08:44 itisravi joined #gluster
08:48 atinm joined #gluster
08:56 aardbolreiziger joined #gluster
08:59 gyadav_ joined #gluster
09:00 Prasad joined #gluster
09:02 [fre] joined #gluster
09:05 mohan joined #gluster
09:11 MrAbaddon joined #gluster
09:11 hgowtham joined #gluster
09:12 [fre] Morning Guys and gals.
09:14 [fre] I'm building my 3rd and 4th node on RHGS.
09:14 ndarshan joined #gluster
09:14 [fre] I noticed though that "it" previously created a lock-brick...
09:14 karthik_us joined #gluster
09:15 hgowtham joined #gluster
09:15 sanoj joined #gluster
09:15 [fre] Could anyone enlighten me why it's there? And if I need to have that replicated onto my 2 new nodes too?
09:16 poornima joined #gluster
09:18 _KaszpiR_ joined #gluster
09:19 nbalacha joined #gluster
09:26 kotreshhr joined #gluster
09:26 [fre] Mister @kkeithley, having an nfs-outage-issue too. Isn't NFS one of the things you were heavily involved in? Could I ask you for some info about it?
09:37 skumar_ joined #gluster
09:38 Prasad_ joined #gluster
09:41 karthik_us joined #gluster
09:51 baojg_ joined #gluster
10:02 kotreshhr joined #gluster
10:07 msvbhat joined #gluster
10:08 TBlaar2 joined #gluster
10:08 apandey__ joined #gluster
10:09 skoduri joined #gluster
10:10 hvisage joined #gluster
10:13 susant joined #gluster
10:17 _KaszpiR_ joined #gluster
10:37 kkeithley [fre]: go ahead and ask
10:46 MrAbaddon joined #gluster
10:50 baber joined #gluster
10:51 msvbhat joined #gluster
11:06 skumar_ joined #gluster
11:20 nbalacha joined #gluster
11:26 itisravi joined #gluster
11:30 ThHirsch joined #gluster
11:35 shyam joined #gluster
11:38 nbalacha joined #gluster
11:40 Humble joined #gluster
11:48 gyadav__ joined #gluster
11:54 atinm joined #gluster
12:15 nbalacha joined #gluster
12:15 weller is there a way to tell glusterd not to wait on the second node (two-node cluster) on system boot?
12:37 [fre] kkeithley, at certain moments we lose all nfs-connectivity. 4 client nodes are using an nfs-share. all 4 of them experience an outage of about 12 minutes before they manage to reconnect.
12:38 kkeithley glusternfs or ganesha?
12:38 [fre] glusternfs
12:39 kkeithley can you ping the servers from the clients during that 12 minute period?
12:40 [fre] Yup. Found a ticket mentioning nfs slowing down on clients before dropping out. Tried setting client kernel params to limit dirty writes like they did in that ticket, but it didn't solve it...
12:42 kkeithley if you can collect a tcpdump the next time it happens.  Keep it running until the cleints reconnect. Attach it to a BZ in bugzilla.redhat.com. If you don't have a bugzilla account and don't want to create one you can send it to me and I'll open the BZ
12:43 kkeithley s/cleints/clients/
12:43 glusterbot What kkeithley meant to say was: if you can collect a tcpdump the next time it happens.  Keep it running until the clients reconnect. Attach it to a BZ in bugzilla.redhat.com. If you don't have a bugzilla account and don't want to create one you can send it to me and I'll open the BZ
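
For reference, a minimal sketch of the capture being asked for here (interface name, server address, and output path are placeholders, not from the channel):

    # run on an affected client and stop it once the clients have reconnected
    tcpdump -i eth0 -s 0 -w /var/tmp/gnfs-outage.pcap host gluster1.example.com
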
12:43 kkeithley @bugzilla
12:43 sanoj joined #gluster
12:44 kkeithley @repos
12:44 glusterbot kkeithley: See @yum, @ppa or @git repo
12:44 kkeithley @fileabug
12:44 glusterbot kkeithley: Please file a bug at http://goo.gl/UUuCq
12:46 Acinonyx joined #gluster
12:49 kkeithley FYI, in case you aren't aware, we are slowly phasing out gnfs.  Long term you should start thinking about using nfs-ganesha.
12:53 misc no one thought about calling it ganefs ?
12:54 hgowtham joined #gluster
12:57 kkeithley misc: Philippe Deniel at France's CEA originally wrote it as a 9P server (Plan9 file system). Later someone realized it could be extended to do NFS. I think Ganesha predates the NFS part.
12:57 kkeithley You'd have to ask Philippe
12:58 * misc has lots of puns
12:58 [fre] kkeithley, historically, we phased out Ganesha due to stability and configuration problems....
12:58 misc (like ganesha256)
12:58 [fre] misc: LOL
12:59 misc but that's a bit insensitive I guess
12:59 [fre] kkeithley: We may be tempted to retry ganesha in the far future, although we're trying to switch as much as possible towards fuse.
13:01 kkeithley [fre]: what version? Stability has gotten much better. WRT config, I guess I'm too close to it.  If you're ultimately migrating to FUSE then I guess it doesn't matter.
13:02 skumar joined #gluster
13:02 [fre] can't really tell the version anymore... Red Hat's from about a year (or 2?) ago...
13:02 * kkeithley doesn't get ganesha256.
13:03 [fre] We're quite in favour of the failover capability offered by Fuse.
13:03 kkeithley [fre]: okay
13:04 [fre] sadly, it's the sharing of sub-directories that forces us to use nfs.
13:04 misc kkeithley: sha256, the crypto hash
13:05 kkeithley ah....
13:08 kkeithley bad pun. very bad pun
13:09 misc yeah, I should be pun-ished for that :(
13:09 [fre] btw, do you have any idea why or how /rhgs/brick-locks/ is supposed to be used?
13:13 kkeithley misc: getting worse
13:14 kkeithley [fre]: no idea.  If you're using rhgs you know you can get help from your dedicated support person
13:15 [fre] I do like this gluster group though. Answers are often more to the point and more thought-through. ;)
13:18 baber joined #gluster
13:18 shyam joined #gluster
13:18 WebertRLZ joined #gluster
13:19 hgowtham joined #gluster
13:20 msvbhat joined #gluster
13:25 luizcpg joined #gluster
13:28 vbellur joined #gluster
13:28 ndarshan joined #gluster
13:30 skylar joined #gluster
13:31 rastar joined #gluster
13:36 kotreshhr joined #gluster
13:36 kotreshhr left #gluster
13:41 shyam joined #gluster
13:43 fidelrodriguez joined #gluster
13:44 fidelrodriguez hello everyone
13:45 misc hi
13:45 glusterbot misc: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:45 fidelrodriguez I am not sure where gluster is seeing that the hostname doesn't uniquely match the interface selected for the management bridge
13:46 fidelrodriguez its not allowing me to create ovirt ha engine
13:47 fidelrodriguez python -c 'import socket; print(socket.gethostbyaddr("10.0.1.11"));'
13:47 fidelrodriguez ('server.hostname', [], ['10.0.1.11', '172.16.0.11'])
13:47 fidelrodriguez but the gluster peer and volume are on the 172.16.0.11 bond0 interface to separate traffic of the volumes
13:48 fidelrodriguez can someone point me in the right direction
13:52 fidelrodriguez I am not sure where gluster is seeing that the hostname doesn't uniquely match
13:52 fidelrodriguez the interface selected for the management bridge
13:52 fidelrodriguez its not allowing me to create ovirt ha engine
13:52 fidelrodriguez python -c 'import socket; print(socket.gethostbyaddr("10.0.1.11"));'
13:52 fidelrodriguez ('server.hostname', [], ['10.0.1.11', '172.16.0.11'])
13:52 fidelrodriguez but the gluster peer and volume are on the 172.16.0.11 bond0 interface to
13:52 fidelrodriguez separate traffic of the volumes
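
A hedged diagnostic sketch for this kind of mismatch: compare the forward lookup of the hostname with the reverse lookup of the management address, using the same approach already shown above (the hostname and addresses are the ones from the paste):

    # forward lookup: which addresses does the hostname resolve to?
    python -c 'import socket; print(socket.gethostbyname_ex("server.hostname"))'
    # reverse lookup of the management address, as pasted above
    python -c 'import socket; print(socket.gethostbyaddr("10.0.1.11"))'
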
13:57 jiffin joined #gluster
13:58 riyas joined #gluster
14:01 ankitr joined #gluster
14:05 MrAbaddon joined #gluster
14:05 jobewan joined #gluster
14:07 jiffin joined #gluster
14:18 sona joined #gluster
14:20 buvanesh_kumar joined #gluster
14:23 jtux joined #gluster
14:24 jstrunk joined #gluster
14:26 skumar joined #gluster
14:30 vbellur joined #gluster
14:31 major joined #gluster
14:35 cloph_away joined #gluster
14:43 prasanth joined #gluster
14:53 farhorizon joined #gluster
14:55 Snowman_ joined #gluster
14:56 hosom joined #gluster
15:01 saybeano joined #gluster
15:06 wushudoin joined #gluster
15:07 mbrandeis joined #gluster
15:08 AdrianH joined #gluster
15:09 mbrandeis joined #gluster
15:11 AdrianH Hello, I have been using Gluster for a couple of years now without any problems. But today it has stopped working; none of my clients can access it. Everything seems fine on the gluster machines, but on the clients I see this in the gluster logs:
15:11 AdrianH [2017-08-31 14:49:43.494784] E [rpc-clnt.c:208:call_bail] 0-gluster-volume-client-2: bailing out frame type(GlusterFS 3.3) op(LOOKUP(27)) xid = 0x2c33ee4 sent = 2017-08-31 14:19:42.294612. timeout = 1800 for 10.0.1.23:49152
15:11 AdrianH [2017-08-31 14:49:43.498680] W [client-rpc-fops.c:2785:client3_3_lookup_cbk] 0-gluster-volume-client-2: remote operation failed: Transport endpoint is not connected. Path: /images/vdc_100000000241 (00000000-0000-0000-0000-000000000000)
15:11 AdrianH Does anybody have any ideas of what I can do/search for?
15:12 AdrianH gluster volume info, peer status all seem fine and online/connected
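
A small sketch of the connectivity check that usually follows this kind of call_bail error (volume name and brick address are taken from the log lines above; nc being installed is an assumption):

    # on a server: confirm every brick shows a PID and port and is online
    gluster volume status gluster-volume
    # on the client that logged the error: verify the brick port is reachable
    nc -zv 10.0.1.23 49152
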
15:16 omie888777 joined #gluster
15:21 fidelrodriguez exit
15:21 fidelrodriguez quit
15:28 aravindavk joined #gluster
15:30 kotreshhr joined #gluster
15:31 btspce joined #gluster
15:34 msvbhat joined #gluster
15:43 kpease joined #gluster
15:43 kotreshhr left #gluster
16:00 mbrandeis joined #gluster
16:02 baber joined #gluster
16:13 gyadav joined #gluster
16:15 bluenemo joined #gluster
16:27 vbellur joined #gluster
16:31 baber joined #gluster
16:40 riyas joined #gluster
16:49 gcavalcante8808 joined #gluster
16:49 gcavalcante8808 Hello Folks
16:49 gcavalcante8808 Morning
16:50 gcavalcante8808 I need some help with the following question: I installed gluster 3.10 on CentOS 7 with NFS-Ganesha, then I created a distributed volume with 1 brick (on the same host) and tried to init a new postgresql database, but the following error occurs:
16:51 gcavalcante8808 FATAL:  could not read directory "pg_notify": Unknown error 523
16:51 gcavalcante8808 FATAL:  could not read directory "pg_notify": Unknown error 523
16:51 gcavalcante8808 FATAL:  could not read directory "pg_notify": Unknown error 523
16:51 gcavalcante8808 FATAL:  could not read directory "pg_notify": Unknown error 523
16:52 gcavalcante8808 Are there any volume options that need to be set to host a PostgreSQL db?
16:52 gcavalcante8808 I tried every performance option, including performance.flush-behind: off and others.
16:52 gcavalcante8808 On the NFS mount side I tried noac, nolock, etc. and no luck either :(
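
For context, the kind of volume-option and mount tweaks described above look roughly like this; the values are the ones already mentioned in the conversation, not a recommended fix, and the volume name, server, and mount point are placeholders:

    gluster volume set pgvol performance.flush-behind off
    mount -t nfs -o vers=3,noac,nolock ganesha-host:/pgvol /var/lib/pgsql
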
16:54 msvbhat joined #gluster
16:55 gcavalcante8808 Thanks in advance folks
16:57 gcavalcante8808 left #gluster
16:58 baber joined #gluster
16:59 gcavalcante8808 joined #gluster
17:04 repnzscasb_ joined #gluster
17:04 btspce joined #gluster
17:05 bowhunter joined #gluster
17:10 Shu6h3ndu joined #gluster
18:15 MrAbaddon joined #gluster
18:17 gospod3 joined #gluster
18:23 dijuremo Is there a write-up on best practices for snapshots when using glusterfs as a backend for VMs, and for automating the snapshots and retention? How do snapshots affect performance, i.e. do I want to keep at most a certain number of snapshots, etc.?
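
For reference, a minimal sketch of the snapshot CLI that an automated retention scheme would wrap (volume and snapshot names are placeholders; bricks must sit on thinly provisioned LVM for volume snapshots to work):

    gluster snapshot create nightly_vmstore vmstore no-timestamp
    gluster snapshot list vmstore
    gluster snapshot delete nightly_vmstore
    # cap how many snapshots may accumulate on the volume
    gluster snapshot config vmstore snap-max-hard-limit 10
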
18:28 zcourts joined #gluster
18:34 gospod2 joined #gluster
18:51 gospod3 joined #gluster
18:53 alvinstarr joined #gluster
18:58 farhorizon joined #gluster
19:01 skoduri joined #gluster
19:19 rastar joined #gluster
19:25 gospod2 joined #gluster
19:28 glusterbot joined #gluster
19:37 glusterbot joined #gluster
19:41 aardbolreiziger joined #gluster
19:55 PatNarciso joined #gluster
19:55 btspce joined #gluster
19:56 zcourts_ joined #gluster
20:03 plarsen joined #gluster
20:05 ronrib_ joined #gluster
20:05 guhcampos joined #gluster
20:08 cholcombe joined #gluster
20:18 _KaszpiR_ joined #gluster
20:39 Peppard joined #gluster
20:43 skoduri joined #gluster
20:46 gospod2 joined #gluster
20:54 gospod2 joined #gluster
20:59 gospod3 joined #gluster
20:59 aardbolreiziger joined #gluster
21:00 farhoriz_ joined #gluster
21:02 Acinonyx joined #gluster
21:05 skoduri joined #gluster
21:07 vbellur joined #gluster
21:10 JoeJulian joined #gluster
21:43 aardbolreiziger joined #gluster
22:01 aardbolreiziger joined #gluster
22:02 aardbolreiziger joined #gluster
22:05 shyam joined #gluster
22:24 KuzuriAo joined #gluster
22:25 KuzuriAo Greetings and salutations
22:27 KuzuriAo I’m new to GlusterFS and I’ve run into an issue that I can’t seem to find the answer for searching online.  I was hoping maybe someone here might be able to shed some light on the situation.
22:31 JoeJulian I'm afraid I cannot. I do not have sufficient information to do so. ;P
22:31 KuzuriAo :)
22:32 KuzuriAo My setup is really pretty straightforward.  Two servers, 1 brick (1 x 2 = 2), type Replicate
22:33 KuzuriAo One node runs smooth as glass with low CPU utilization
22:33 KuzuriAo The second node has CPU through the roof.
22:33 KuzuriAo I’ve pretty much figured out what’s causing the CPU utilization on the second node, but I don’t know why or how to fix it.
22:34 KuzuriAo The node in question has constant streams of this in the log file: [2017-08-31 22:34:20.736158] W [MSGID: 114031] [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-www-client-0: remote operation failed. Path: (null) (00000000-0000-0000-0000-000000000000) [No data available]
22:34 KuzuriAo The other node does not
22:35 KuzuriAo The log is just flooded with these entries.
22:36 JoeJulian What log is that in?
22:37 KuzuriAo In: /var/log/glusterfs/www.log
22:38 JoeJulian so that's a client. The client is doing a ton of lookups for what appears to be a null path. Have you tried remounting?
22:38 vbellur joined #gluster
22:38 KuzuriAo I have and rebooted, but let me do it again.
22:40 KuzuriAo What’s puzzling to me is why it doesn’t happen on the other server.  This is a load-balanced WordPress install
22:40 JoeJulian I assume these are both gluster servers as well as gluster clients?
22:41 KuzuriAo Correct
22:42 KuzuriAo Dang.  I remounted it and it looked like it was going to stop, but after about 10 seconds the error message came flooding back.
22:42 KuzuriAo I tried the various heal commands but they were all unsuccessful
22:42 KuzuriAo Hence me being here. :)
22:43 KuzuriAo It drives the load up so high the web server times out, so I’ve had to pull the node out of the LB configuration and force all the traffic to the node that runs smooth as glass.
22:44 JoeJulian Are both servers running the same gluster version?
22:44 KuzuriAo They should be mirror images of each other, but let me just double check.
22:45 JoeJulian 0-www-client-0 is the first brick of the volume. You might check that brick log.
22:46 KuzuriAo Yes, they are both the same version, but it appears it’s a bit out of date.
22:47 KuzuriAo Would that be /var/log/glusterfs/www.log ?
22:47 KuzuriAo If so, that’s where I got the error message from
22:49 JoeJulian No, that would be /var/log/glusterfs/bricks/$something (depends on where the brick lives)
22:49 JoeJulian And it's the one that's listed first in "gluster volume info www"
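
In other words (a sketch using the brick path that comes up later in this conversation): the first brick listed under "gluster volume info www" maps to a brick log whose name is the brick path with the slashes turned into dashes:

    gluster volume info www | grep -A 2 Bricks
    # e.g. brick /glusterfs/bricks/brick0/www logs to:
    tail -f /var/log/glusterfs/bricks/glusterfs-bricks-brick0-www.log
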
22:50 KuzuriAo Rats, the file is totally empty
22:50 JoeJulian Well that's odd. df /var/log?
22:51 KuzuriAo 8.0G available
22:51 KuzuriAo 29% use
22:51 JoeJulian Have you changed the log-level?
22:51 KuzuriAo It’s empty on the other node too.
22:52 KuzuriAo I have not.  Is there a cli command or is it in a conf file?
22:53 JoeJulian ~pasteinfo | KuzuriAo
22:53 glusterbot KuzuriAo: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
22:54 KuzuriAo https://paste.fedoraproject.org/paste/EVsGEookqCR0oT~H0qfhyw
22:54 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
22:55 JoeJulian So.. just to be clear that I'm not somehow confusing you, you're saying that /var/lib/glusterfs/bricks/glusterfs_bricks_brick0_www.log is empty.
22:56 JoeJulian er
22:56 JoeJulian typo
22:56 KuzuriAo root@node-01:~# cat /var/log/glusterfs/bricks/glusterfs-bricks-brick0-www.log
22:56 KuzuriAo root@node-01:~#
22:56 JoeJulian oh, right - not _... <sigh>
22:56 JoeJulian what version and distro?
22:57 KuzuriAo Ubuntu 16.04 / glusterfs 3.7.6
22:57 JoeJulian Oh, wow, that is old.
22:57 JoeJulian And no longer supported by the developers.
22:58 JoeJulian We might be able to figure this out, but if you're able to you should upgrade.
22:58 KuzuriAo Is there an updated ppa?
22:58 JoeJulian @ppa
22:58 glusterbot JoeJulian: The GlusterFS Community packages for Ubuntu are available at: 3.8: https://goo.gl/MOtQs9, 3.10: https://goo.gl/15BCcp
22:58 JoeJulian <sigh>
22:58 KuzuriAo 3.8 or 3.10?
22:58 JoeJulian that's not been updated either.
22:59 JoeJulian glusterbot is my fault.
22:59 KuzuriAo I’m fine either way
22:59 JoeJulian https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.11
22:59 glusterbot Title: glusterfs-3.11 : “Gluster” team (at launchpad.net)
22:59 JoeJulian 3.11 is the "lts" version
22:59 KuzuriAo ok, sweet
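
A minimal sketch of that upgrade path on Ubuntu 16.04, assuming the community PPA linked above (run on each node in turn):

    add-apt-repository ppa:gluster/glusterfs-3.11
    apt-get update
    apt-get install glusterfs-server
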
23:04 KuzuriAo sweet, now glusterfs-server won’t start
23:07 JoeJulian 'glusterd --debug' is handy for figuring that out.
23:08 KuzuriAo Was just looking at that.
23:08 KuzuriAo It appears the old version refuses to die
23:08 JoeJulian might make sense
23:09 KuzuriAo Yeah, I can’t stop it with the normal service commands
23:10 KuzuriAo I’m a bit scared to start whacking pids
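
For reference, a way to see which old daemons are still running before deciding what to kill (a sketch; these are the standard gluster process names):

    ps ax | grep -E '[g]lusterd|[g]lusterfsd|[g]lusterfs'
    # glusterd = management daemon, glusterfsd = brick processes,
    # glusterfs = client mounts, self-heal daemon and gnfs
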
23:13 KuzuriAo ::sigh::
23:17 KuzuriAo Oh well, I whacked ‘em and restarted glusterfs-server
23:17 KuzuriAo Now it’s going through and doing a boatload of selfheal
23:17 KuzuriAo Which actually looks like it’s doing something
23:20 KuzuriAo JoeJulian: https://paste.fedoraproject.org/paste/0wzR6Igri2UzQgK5u1bHxw
23:20 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
23:21 KuzuriAo The second server is going nuts with that, but the first one is clear in the logs
23:21 KuzuriAo I see the occasional null path, but not all of them like it was before.
23:22 KuzuriAo I’m going to step out to vape for a minute.
23:35 JoeJulian Looks like you had a stuck process which is why you were probably getting that error.
23:48 KuzuriAo It’s still healing all the things
23:48 KuzuriAo Not sure if it’s stuck again
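
A quick way to see whether that self-heal is making progress rather than being stuck (volume name taken from the conversation above):

    gluster volume heal www info                      # entries still pending heal
    gluster volume heal www statistics heal-count     # per-brick pending counts; should shrink over time
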
