
IRC log for #gluster, 2017-11-13


All times shown according to UTC.

Time Nick Message
00:14 JoeJulian protoporpoise: There's a bug that should be fixed with the 3.10.8 release. I have not heard what release introduced this bug. Another user reported success doing a "find" in a loop on another mount on another node.
00:14 protoporpoise Does it result in folders not being seen by clients?
00:14 JoeJulian yes
00:14 protoporpoise if say you never run find
00:14 protoporpoise create a volume, client does a mkdir folder, then ls folder
00:15 protoporpoise if so that's a MAJOR bug?
00:15 JoeJulian seems that way to me. If I had any voice, I'd have pushed a bugfix release with just that fix.
00:16 JoeJulian one sec... I'm looking for the patch for you.
00:16 protoporpoise oh man, thank you so much!
00:16 protoporpoise this has been doing our head in
00:16 protoporpoise it's broken EVERYTHING for us
00:17 JoeJulian 3.10.8
00:17 JoeJulian gah
00:18 protoporpoise oh man, that's ages away
00:18 protoporpoise oh
00:18 protoporpoise lol
00:18 protoporpoise we're running 3.12.2
00:18 protoporpoise noticed it on 3.12.1 as well
00:18 JoeJulian 3.12.3 then
00:19 protoporpoise lol, I don't know if Niels has released RPMs for that yet
00:19 protoporpoise I'm giving a talk on Gluster tomorrow and bit embarrassed ours is broken hahahaha
00:19 protoporpoise let me check to see if 3.12.3 is out in the mirrors
00:19 JoeJulian It's not
00:20 protoporpoise damn
00:20 JoeJulian The release was scheduled for last friday.
00:21 protoporpoise dang, ok hmmm - any idea how we could hot-patch? (centos 7)
00:27 JoeJulian Found the thread: https://www.spinics.net/lists/gluster-users/msg33059.html
00:27 glusterbot Title: Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them. - Gluster Users (at www.spinics.net)
00:27 protoporpoise thats me
00:27 JoeJulian or not
00:27 protoporpoise lol
00:27 JoeJulian Nithya is one of the devs, so I was hoping.
00:27 protoporpoise I should align my usernames
00:31 JoeJulian argh... where the heck did I see that...
00:39 JoeJulian This wasn't the one, but here's something to ponder: https://bugzilla.redhat.com/show_bug.cgi?id=1368185
00:39 glusterbot Bug 1368185: urgent, unspecified, ---, nbalacha, CLOSED EOL, Directory on volume seems to be empty, but brick contains data
00:40 JoeJulian Do you ever start to wonder if an email you read was actually just a dream?
00:41 ross9000 joined #gluster
00:45 protoporpoise lol
00:45 protoporpoise yes i get that
00:45 protoporpoise often on different projects
00:45 protoporpoise I work with corosync / pacemaker and xenserver a lot so things fade into each other
00:45 protoporpoise lol
00:46 JoeJulian I think I may have mixed this up with a different bug that I felt was critical.
00:47 JoeJulian So if you wanna ,,(paste) that client log, I'd be happy to look at it. I don't have access to my email at the moment (I've been tearing apart my personal systems this weekend).
00:47 glusterbot For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
00:47 protoporpoise oh that'd be great, thank you @JoeJulian, give me a mo
00:48 protoporpoise i dont think it'll be all that helpful though
00:48 JoeJulian probably not, but <shrug>
00:52 protoporpoise we're just going to boost the log level on the client and do 'things' and then ill let you know shortly
00:52 JoeJulian +1
00:52 protoporpoise do you think ERROR level logging is enough?
00:52 protoporpoise or do you think WARNING would be more useful?
00:52 JoeJulian Nope
00:53 protoporpoise INFO?
00:53 protoporpoise lol
00:53 JoeJulian There may be an error, but the info surrounding it usually tells a little more of the story.
00:53 protoporpoise OK I'll set to debug
00:53 protoporpoise and we'll see how we go
00:53 protoporpoise lol
00:55 protoporpoise so glad I wrote a script to run commands across all volumes now lol
00:55 protoporpoise https://github.com/sammcj/scripts/blob/master/gluster_all_volumes.sh
00:55 protoporpoise lol
00:55 glusterbot Title: scripts/gluster_all_volumes.sh at master · sammcj/scripts · GitHub (at github.com)
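The linked script runs one command across every volume; a minimal generic sketch of that pattern (the LIST_CMD default and the echo action are assumptions for illustration, not the actual script's contents):

```shell
#!/bin/sh
# Run one command per gluster volume. LIST_CMD is overridable so the
# loop itself can be exercised without a gluster install; its default
# assumes the gluster CLI is on PATH.
LIST_CMD=${LIST_CMD:-"gluster volume list"}
for vol in $($LIST_CMD 2>/dev/null); do
    # Placeholder action -- substitute e.g. a "gluster volume set" call.
    echo "volume: $vol"
done
```

Overriding LIST_CMD (e.g. `LIST_CMD="echo vol1 vol2"`) lets the loop be dry-run locally.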
00:56 JoeJulian Only really need one solid example of the problem.
00:56 protoporpoise yup
00:57 protoporpoise working on it - @ross9000 is working on the client side and just setting up the log level on the mounting end
00:57 protoporpoise @JoeJulian thanks a bunch for your time btw
00:57 JoeJulian You're welcome.
01:00 protoporpoise almost there
01:02 VanDuong joined #gluster
01:03 protoporpoise OK
01:03 protoporpoise with log level set to debug on the server and client side, nothing is logged
01:04 JoeJulian nothing at all.
01:04 protoporpoise didn't even create the file :/
01:04 protoporpoise @ross9000 is looking at it
01:04 protoporpoise oh
01:04 protoporpoise it ignored the mount options apparently lol
01:04 protoporpoise give us a sec
01:04 JoeJulian Right
01:04 JoeJulian I filed a bug about that years ago.
01:06 protoporpoise lol ok INFO on the client side gives lots of logs, but debug gives nothing
01:06 protoporpoise I'll do the same on the server side
01:06 protoporpoise and we'll collect them
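For reference, the knobs being turned here are the per-volume diagnostic log levels and the fuse client's mount-time log level; a hedged sketch with placeholder volume/host names ("myvol", "gs1"):

```shell
# Raise brick- and client-side verbosity for one volume
# (volume "myvol" and server "gs1" are placeholders):
gluster volume set myvol diagnostics.brick-log-level DEBUG
gluster volume set myvol diagnostics.client-log-level DEBUG

# Or set it per-mount, at mount time, on the client:
mount -t glusterfs -o log-level=DEBUG gs1:/myvol /mnt/myvol
```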
01:08 protoporpoise @JoeJulian https://paste.fedoraproject.org/paste/9hpV1vW7qw1yYsj-XOeofQ
01:09 protoporpoise nothing on the server side logs, other than the normal [2017-11-13 01:07:13.852735] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 0-management: RPC_CLNT_PING notify failed which we see all the time
01:09 JoeJulian Yep, useless.
01:09 protoporpoise lol
01:09 protoporpoise our thoughts exactly
01:10 protoporpoise gluster sucks at logging useful information :/
01:10 JoeJulian Sometimes.
01:10 JoeJulian I mean, yeah, sure, it's noisy. But usually there's some clue somewhere.
01:10 protoporpoise yep
01:11 protoporpoise we set it to warning or error normally as otherwise it just hammers the logs etc...
01:11 JoeJulian what luck... I just encountered the same bug.
01:12 ross9000 so that’s really interesting but if i/we run while :; do find .; done in the gluster volume, most of the time it returns nothing but occasionally we get a burst of the actual folders showing up
01:12 JoeJulian huh
01:12 protoporpoise pastefail @ross9000?
01:12 protoporpoise (he's new to computers)
01:12 protoporpoise jk
01:12 ross9000 using your find suggestion from before
01:13 ross9000 it does occasionally find the directories, but not most of the time
01:14 JoeJulian I think the fact that .. is missing tells us something... not sure what, but something...
01:15 ross9000 i agree, but .. is a directory so it kind of makes sense for it not to show up
01:15 ross9000 at least it’s consistent* *except for the find in a loop, which isn’t.
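The intermittent listing ross9000 describes can be checked with a plain loop; a sketch (MNT is a placeholder — point it at a client mount of the affected volume):

```shell
#!/bin/sh
# On a healthy volume every pass prints the same directory set; with
# the listing bug most passes come back empty and only occasional
# bursts show the real folders. MNT is a placeholder mount point.
MNT=${MNT:-/mnt/gv0}
for i in 1 2 3 4 5; do
    find "$MNT" -maxdepth 1 -mindepth 1 -type d 2>/dev/null
done
```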
01:20 JoeJulian Is this a replica+arbiter volume?
01:23 protoporpoise yes
01:23 protoporpoise 2+1
01:24 protoporpoise https://paste.fedoraproject.org/paste/r91AnWlO4~1B~WhCcwmsqA
01:24 glusterbot Title: volume info - Modern Paste (at paste.fedoraproject.org)
01:25 JoeJulian hmm... I just did a heal...full, it came up with a number of entries. Let them finish healing (all small files). Still had the problem. Unmounted and mounted. and now everything's back.
01:30 protoporpoise self-healed all volumes no change
01:33 JoeJulian Did you unmount/mount?
01:33 protoporpoise yes he did
01:34 JoeJulian damn
01:34 ross9000 :(
01:35 DV joined #gluster
01:36 JoeJulian You keep mentioning all volumes. Is this happening on all volumes?
01:37 JoeJulian bbiab, dinner
01:38 ross9000 thanks yeah it is happening on all of them, enjoy dinner
01:51 crag joined #gluster
02:02 gospod3 joined #gluster
02:28 prasanth joined #gluster
02:55 ilbot3 joined #gluster
02:55 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:55 protoporpoise ohhh
02:55 protoporpoise i bet thats it
02:55 protoporpoise ~~~
02:55 protoporpoise !!!
02:58 nbalacha joined #gluster
02:59 protoporpoise https://github.com/gluster/glusterfs/issues/355
02:59 glusterbot Title: parallel-readdir = TRUE prevents directories from listing with ls, find etc... · Issue #355 · gluster/glusterfs · GitHub (at github.com)
02:59 protoporpoise *mic drop*
03:00 JoeJulian They only use issues for new features. For bugs you can file a bug here:
03:00 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
03:00 protoporpoise Oh
03:00 protoporpoise will do!
03:01 JoeJulian Sorry for the confusion.
03:01 JoeJulian Weird... I wonder what that option changes. I can't find anyplace the code actually uses that setting.
03:02 JoeJulian Ah, I see.
03:03 JoeJulian It just changes the order.
03:08 JoeJulian I'm not convinced.
03:09 JoeJulian That changes the placement of the performance.readdir-ahead translator in the graph. I have readdir-ahead turned off so that never should have happened to me.
03:10 protoporpoise its 100% reproducible for us so I'd say that's the case.
03:11 JoeJulian Well... hopefully whatever that is, since it's more reproducible than what I have, is enough to fix it for both.
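If issue #355 is indeed the culprit, the mitigation discussed in that issue is to switch the option back off (volume name is a placeholder):

```shell
# Disable the option implicated in issue #355 on the affected volume.
gluster volume set myvol performance.parallel-readdir off
# Clients may need a fresh unmount/mount for the graph change to apply.
```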
03:17 gyadav__ joined #gluster
03:17 protoporpoise completely off topic, but since I'm giving an intro talk on gluster tomorrow at a meetup... do you happen to know if there are any 'official' ruby bindings for gluster?
03:17 protoporpoise I found 'libgfapi-ruby' but it hasn't been touched since 2014
03:17 protoporpoise so guessing its dead
03:18 protoporpoise oh wow I see there is a SWIFT one
03:18 JoeJulian That's probably it. Nobody likes ruby anyway. ;)
03:18 protoporpoise we do :)
03:18 protoporpoise ((for some things))
03:18 JoeJulian I'm just being facetious.
03:18 protoporpoise { I, Know }
03:18 JoeJulian But that is probably the right one. I don't think there's been any contributions.
03:19 protoporpoise yeah dw
03:19 protoporpoise swift will do
03:19 JoeJulian cool
03:19 protoporpoise C, swift and go == good enough for me, also see the python one which is good enough for anyone that likes single cores being maxed out ;)
03:19 protoporpoise [ that, was, my, turn ]
03:21 JoeJulian hehe
03:31 psony_ joined #gluster
03:35 psony joined #gluster
03:50 atinm joined #gluster
03:51 magrawal joined #gluster
03:58 protoporpoise Was trying to figure out if I can use Redhat's logo to promote their active development / support of Gluster in some slides - looks like you're not allowed to use it for anything!
04:05 itisravi joined #gluster
04:11 sunnyk joined #gluster
04:12 JoeJulian yeah, they're kind-of picky about that. You can use the Ant though.
04:19 protoporpoise lol
04:20 protoporpoise Do you work for Redhat Joe?
04:20 JoeJulian I do not. I just hang out here and BS with y'all.
04:20 protoporpoise lol
04:21 apandey joined #gluster
04:31 07IABTM7P joined #gluster
04:39 skumar joined #gluster
04:53 kdhananjay joined #gluster
04:53 susant joined #gluster
04:56 poornima joined #gluster
04:57 sanoj joined #gluster
05:04 kramdoss_ joined #gluster
05:09 rwheeler joined #gluster
05:19 karthik_us joined #gluster
05:28 apandey_ joined #gluster
05:41 ndarshan joined #gluster
05:44 skoduri joined #gluster
05:45 om2 joined #gluster
05:46 apandey__ joined #gluster
05:49 apandey_ joined #gluster
06:00 Humble joined #gluster
06:02 hgowtham joined #gluster
06:04 sahina joined #gluster
06:32 protoporpoise right I'm off, good day all
06:35 xavih joined #gluster
06:46 sahina joined #gluster
06:51 Saravanakmr joined #gluster
06:51 TBlaar joined #gluster
06:56 atinm joined #gluster
06:56 Shu6h3ndu joined #gluster
07:01 atinm joined #gluster
07:06 TBlaar joined #gluster
07:12 rwheeler joined #gluster
07:13 skumar_ joined #gluster
07:14 mbukatov joined #gluster
07:20 susant joined #gluster
07:20 Saravanakmr joined #gluster
07:25 sahina joined #gluster
07:32 jtux joined #gluster
07:35 karthik_us joined #gluster
07:41 ThHirsch joined #gluster
07:48 Saravanakmr joined #gluster
07:48 jkroon joined #gluster
07:52 skumar__ joined #gluster
07:54 fsimonce joined #gluster
07:56 dimitris joined #gluster
08:03 ivan_rossi joined #gluster
08:31 apandey__ joined #gluster
08:31 itisravi__ joined #gluster
08:47 kramdoss_ joined #gluster
08:48 DV joined #gluster
08:52 gyadav_ joined #gluster
08:54 karthik_us joined #gluster
08:55 [diablo] joined #gluster
09:00 sanoj joined #gluster
09:04 gyadav__ joined #gluster
09:04 _KaszpiR_ joined #gluster
09:15 apandey_ joined #gluster
09:19 sahina joined #gluster
09:25 davidb2111 joined #gluster
09:25 Prasad joined #gluster
09:28 _KaszpiR_ joined #gluster
09:30 kotreshhr joined #gluster
09:31 karthik_us joined #gluster
09:37 ws2k3 joined #gluster
09:49 susant joined #gluster
09:53 poornima joined #gluster
09:57 sahina joined #gluster
10:12 Prasad_ joined #gluster
10:17 rwheeler joined #gluster
10:17 poornima joined #gluster
10:23 buvanesh_kumar joined #gluster
10:35 Humble joined #gluster
10:39 ThHirsch joined #gluster
10:41 poornima_ joined #gluster
11:42 ivan_rossi left #gluster
11:58 kotreshhr left #gluster
12:06 susant joined #gluster
12:28 rwheeler joined #gluster
12:30 Prasad__ joined #gluster
12:35 ctria joined #gluster
12:36 magrawal joined #gluster
12:37 Saravanakmr joined #gluster
12:44 nbalacha joined #gluster
12:47 Gambit15 joined #gluster
13:01 humblec joined #gluster
13:03 vbellur joined #gluster
13:04 phlogistonjohn joined #gluster
13:04 vbellur joined #gluster
13:16 om2 joined #gluster
13:20 gospod2 joined #gluster
13:31 dominicpg joined #gluster
13:50 kkeithley joined #gluster
13:56 sahina joined #gluster
13:57 DV joined #gluster
14:01 map1541 joined #gluster
14:05 pladd joined #gluster
14:07 manu1311 joined #gluster
14:09 gyadav joined #gluster
14:10 manu1311 Hello
14:10 glusterbot manu1311: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer
14:10 manu1311 Anyone had the situation where gluster volume status marks bricks offline, whereas they are actually working fine?
14:20 susant joined #gluster
14:27 jiffin1 joined #gluster
14:29 skylar1 joined #gluster
14:36 sahina joined #gluster
14:37 bowhunter joined #gluster
14:39 Asako joined #gluster
14:39 Asako Good morning.  I'm seeing some geo-replication failures due to an issue with the ssh command used by gluster.  Logs show this error.
14:39 kpease joined #gluster
14:39 Asako Popen: command returned error cmd=rsync -aR0 --inplace --files-from=- --super --stats --numeric-ids --no-implied-dirs --existing rsync --sparse --bwlimit=128 --xattrs --acls . -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-G7mnQq/56571a5b06d0e4d43bc1c01811f71cc7.sock --compress root@mdct-gluster-srv3:/proc/31730/cwd error=1
14:39 Asako it doesn't seem to like the --compress on on the end
14:42 manu1311 try --new-compress or --old-compress?
14:42 Asako from the command line it says unknown option -- -
14:43 Asako changing it to -C works fine
14:43 Asako but I can't find where to set the ssh command arguments
14:45 Asako ssh_command only sets the command, not options
14:45 manu1311 I think it is what is after -e (shouldn't it be quoted?)
14:45 Asako probably
14:46 manu1311 and the --compression should probably be the rsync option, not the ssh one.
14:46 manu1311 i.e.: I would not put it in the -e quotes.
14:46 Asako makes sense
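The word-splitting manu1311 describes can be shown without running rsync at all; in the sketch below printf stands in for rsync so each argument lands on its own line (the ssh option strings are illustrative):

```shell
#!/bin/sh
# Quoted, everything after -e stays one argument (the remote-shell
# command), and --compress is rsync's own option. Unquoted, the ssh
# string would split into separate words and --compress could end up
# attached to the wrong command.
RSH="ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -p 22"
printf '[%s]\n' rsync -e "$RSH" --compress src/ host:dst/
# prints each argument bracketed on its own line; $RSH stays a single
# bracketed argument because of the quotes.
```

With the quoting fixed, --compress belongs in rsync's own option list rather than trailing the -e string.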
14:47 hmamtora joined #gluster
14:47 Asako strange part is my secondary master works fine.  The primary goes faulty and fails over.
14:48 Asako same gluster version, same packages
14:48 manu1311 I had two twin servers, only one had a failure this morning. Now both are broken. Don't be impatient.
14:48 Asako heh
14:48 Asako luckily this isn't a production system
14:51 Asako still, it's supposed to be a POC.  Not sure I have much confidence in it.
14:58 humblec joined #gluster
14:59 phlogistonjohn joined #gluster
15:04 msvbhat joined #gluster
15:12 manu1311 Still looking for a hint about my broken gluster volume status output.
15:12 manu1311 (all bricks marked down while they are online)
15:16 map1541 joined #gluster
15:17 baber joined #gluster
15:20 hmamtora joined #gluster
15:20 hmamtora_ joined #gluster
15:26 yato_ joined #gluster
15:27 yato_ /windows splitv
15:43 farhorizon joined #gluster
15:43 kramdoss_ joined #gluster
15:44 psony joined #gluster
16:05 azhar joined #gluster
16:08 wushudoin joined #gluster
16:09 Asako also have an issue with slaves not syncing properly.
16:14 shyam joined #gluster
16:36 jkroon joined #gluster
16:37 kramdoss_ joined #gluster
16:43 timotheus1_ joined #gluster
16:56 gyadav joined #gluster
16:57 kramdoss_ joined #gluster
17:07 DV joined #gluster
17:08 msvbhat joined #gluster
17:12 Humble joined #gluster
17:19 jbrooks joined #gluster
17:21 MrAbaddon joined #gluster
17:38 tacoboy joined #gluster
17:43 int-0x21 joined #gluster
17:43 Humble joined #gluster
17:45 atinm joined #gluster
17:52 msvbhat joined #gluster
18:03 map1541 joined #gluster
18:05 DV joined #gluster
18:25 Humble joined #gluster
18:29 David_H_Smith joined #gluster
18:35 farhorizon joined #gluster
18:49 _KaszpiR_ joined #gluster
19:37 Humble joined #gluster
19:43 farhoriz_ joined #gluster
19:55 MrAbaddon joined #gluster
20:14 int-0x21 joined #gluster
20:28 [diablo] joined #gluster
21:14 wolfshappen joined #gluster
21:22 bowhunter joined #gluster
21:51 loadtheacc_ joined #gluster
21:52 shyam joined #gluster
21:52 Ulrar joined #gluster
21:52 squarebracket joined #gluster
21:53 rideh joined #gluster
21:53 thatgraemeguy joined #gluster
21:53 thatgraemeguy joined #gluster
21:56 wolfshappen joined #gluster
21:56 decayofmind joined #gluster
21:58 ws2k3 joined #gluster
21:58 ws2k3 joined #gluster
21:58 David_H__ joined #gluster
21:59 ws2k3 joined #gluster
21:59 owlbot joined #gluster
21:59 ws2k3 joined #gluster
21:59 ws2k3 joined #gluster
22:06 tamalsaha[m] joined #gluster
22:07 ws2k3 joined #gluster
22:16 farhorizon joined #gluster
22:26 David_H_Smith joined #gluster
22:33 map1541 joined #gluster
22:42 farhoriz_ joined #gluster
22:47 smohan[m] joined #gluster
22:47 marin[m] joined #gluster
23:03 humblec joined #gluster
23:26 int-0x21 joined #gluster
23:43 mikedep333 joined #gluster
