
IRC log for #gluster, 2013-03-30


All times shown according to UTC.

Time Nick Message
01:41 _pol joined #gluster
02:13 ferrel left #gluster
02:42 bala joined #gluster
02:45 yinyin joined #gluster
02:50 avati_ joined #gluster
02:50 rubbs_ joined #gluster
02:52 DWSR2 joined #gluster
02:53 ehg_ joined #gluster
02:57 _br_- joined #gluster
02:59 ultrabizweb joined #gluster
02:59 Han joined #gluster
02:59 Guest77353 joined #gluster
03:04 disarone joined #gluster
03:06 MrAbaddon joined #gluster
03:09 MrAbaddon joined #gluster
03:14 rastar joined #gluster
03:35 yinyin joined #gluster
03:59 jthorne joined #gluster
04:00 mtanner joined #gluster
04:41 yinyin joined #gluster
04:45 yinyin_ joined #gluster
04:52 mtanner_ joined #gluster
05:00 kevein joined #gluster
05:19 hagarth joined #gluster
05:39 raghug joined #gluster
06:27 lh joined #gluster
06:34 lh joined #gluster
06:34 lh joined #gluster
06:52 lh joined #gluster
06:52 lh joined #gluster
06:53 tjikkun__ joined #gluster
07:06 shawns|work joined #gluster
07:07 georgeh|workstat joined #gluster
07:31 rastar joined #gluster
07:36 lalatenduM joined #gluster
07:38 zwu joined #gluster
07:46 lh joined #gluster
07:50 piotrektt joined #gluster
07:59 ekuric joined #gluster
08:02 ricky-ticky joined #gluster
08:02 lh joined #gluster
08:14 lh joined #gluster
08:14 lh joined #gluster
08:19 ricky-ticky joined #gluster
08:33 joehoyle joined #gluster
08:42 lh joined #gluster
08:45 georgeh|workstat joined #gluster
08:45 shawns|work joined #gluster
08:51 camel1cz joined #gluster
09:02 isomorphic joined #gluster
09:09 camel1cz joined #gluster
10:59 favadi left #gluster
11:12 shapemaker joined #gluster
11:22 rotbeard joined #gluster
11:28 disarone joined #gluster
11:46 MrAbaddon joined #gluster
12:10 rastar joined #gluster
12:41 isomorphic joined #gluster
13:07 joehoyle joined #gluster
13:18 NeatBasis joined #gluster
13:35 NeatBasis joined #gluster
14:31 robo joined #gluster
15:13 disarone joined #gluster
15:32 _br_ joined #gluster
15:47 cw joined #gluster
16:08 ekuric left #gluster
16:19 lh joined #gluster
16:19 rastar joined #gluster
16:24 camel1cz joined #gluster
16:44 camel1cz joined #gluster
16:48 shylesh joined #gluster
16:49 ferrel joined #gluster
16:50 ferrel hoping to avoid doomsday :-) ... I have files on a brick that just aren't showing up in gluster?
16:50 ferrel a fresh (new) replica 2 volume where all the files were on the first brick
16:51 joehoyle joined #gluster
16:52 lh joined #gluster
16:58 rosmo ferrel try healing the brick? better backup first though
16:59 ferrel right I've looked at the heal info output and it shows the files currently being replicated over to the 2nd brick... I think the list is short by about 6 or 8 files
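The "heal info" output ferrel is reading comes from the gluster CLI; a minimal sketch of the commands involved (the volume name gv0 is a placeholder, not from the log):

```shell
# List files still pending replication, per brick (gv0 is a placeholder)
gluster volume heal gv0 info

# List files the self-heal daemon flagged as split-brain
gluster volume heal gv0 info split-brain

# Force a full crawl of the volume, which can pick up files
# the heal index missed (useful when the info list looks short)
gluster volume heal gv0 full
```

The plain `heal <vol> info` listing is driven by gluster's internal index, which is why files never registered there (e.g. written directly to the brick) may not appear; `heal <vol> full` crawls the bricks themselves.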
17:00 ferrel ... split-brain doesn't show anything, it's like there are files on the brick it just isn't seeing for some reason at all... is it safe to maybe rename them and then copy them back into a gluster mounted directory on the same machine?
17:00 rosmo i dont see why it wouldnt be safe
17:02 ferrel OK perhaps I'll try that, I just wasn't sure if it was a "big" issue messing with files directly on the Brick at all vs only through gluster
17:04 rosmo i dont think messing with files directly on bricks is a good idea
17:04 rosmo gluster is based on extended attributes (metadata) after all
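rosmo's point can be checked directly: glusterfs keeps its replication bookkeeping in `trusted.*` extended attributes on each file as stored on the brick, which is why editing brick contents behind gluster's back is risky. A sketch, with an illustrative brick path and filename:

```shell
# Dump gluster's extended attributes on a file as stored on the brick
# (/export/brick1 and the filename are illustrative, not from the log)
getfattr -m . -d -e hex /export/brick1/somefile.txt

# Typical keys include trusted.gfid (the file's cluster-wide identity)
# and trusted.afr.* changelogs, which the replicate translator uses
# to decide which copy is good and which direction to heal.
```

A file placed on the brick by hand has none of these attributes, which matches ferrel's symptom of files gluster "just isn't seeing"; copying them back in through the gluster mount lets gluster assign them properly.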
17:10 camel1cz left #gluster
17:15 lh joined #gluster
17:45 ferrel left #gluster
17:51 juhaj Can anyone help me with a problem: my glusterfs suddenly replies "Permission denied" to a member of group X, while the directory (this is the glusterfs mountpoint root) is owned by group X and says "group::rwx"
17:52 juhaj It used to work for the best part of a year (and longer, but I had problems with acl's before that)
17:55 pib1979 joined #gluster
17:58 lh joined #gluster
18:11 rosmo juhaj: selinux? facls?
18:32 joehoyle joined #gluster
18:36 juhaj Hmm, perhaps this is related? "State: Peer Rejected (Connected)"
18:36 juhaj rosmo: file acls, no selinux
18:39 rosmo might be a problem, theres an article explaining how to fix it
18:40 rosmo http://community.gluster.org/​q/how-do-i-fix-peer-rejected/
18:40 glusterbot <http://goo.gl/nWQ5b> (at community.gluster.org)
18:42 juhaj rosmo: In my setup I have just two servers, both think the other one is rejected. How do I sort THAT one out?
18:43 rosmo hmm, i guess the same, try it out
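The linked article is no longer easy to reach; its usual recipe, sketched here from memory and hedged accordingly (hostnames are placeholders, and as juhaj discovers below, on this Debian box the state directory is /etc/glusterd rather than /var/lib/glusterd):

```shell
# On the peer that shows "Peer Rejected" -- NOT on the good server.
service glusterd stop

# Clear the local cluster state but keep the node's own identity file
cd /var/lib/glusterd    # or /etc/glusterd on older Debian packages
find . -mindepth 1 ! -name glusterd.info -delete

service glusterd start

# From a GOOD peer, re-probe the rejected node (rejected-host is a
# placeholder), then restart glusterd on the rejected node once more
gluster peer probe rejected-host
```

The direction matters: the probe is issued from a healthy peer toward the wiped one, which is the mistake juhaj runs into a few lines further down.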
18:44 juhaj Which one do I treat as rejected then?
18:45 juhaj Actually, I think I know which one is the bad one: I checked one replicated brick, and one server (call it M) has no changes since December, whereas the last change ought to be earlier today
18:47 juhaj Oh darn... where is  /var/lib/glusterd on Debian? /etc/glusterd?
18:50 rosmo sorry, don't know... i guess dpkg can show you?
18:50 juhaj Ok, did what's advised, but the last step responds: "please delete all the volumes before full sync"
18:50 juhaj (Yes, it's /etc/glusterd)
18:50 rosmo double check the steps
18:50 rosmo i don't think it is.. if there was like 4 files and no dirs that wasn't it
18:51 lh joined #gluster
18:51 juhaj Ouch, I did the probes the wrong way around
18:51 juhaj What do you mean "that wasn't it"?
18:53 juhaj Hm, the "probe bad server from a good one" says: "Probe on host M port 24007 already in peer list" – I take it this is not supposed to happen?
18:54 camel1cz1 joined #gluster
18:55 camel1cz1 left #gluster
18:56 camel1cz2 joined #gluster
18:56 camel1cz2 left #gluster
18:58 rosmo juhaj nope... the var dir has vols etc subdirs
18:58 rosmo also glusterd.info file
18:58 juhaj Yes, so has /etc/glusterd
19:06 disarone joined #gluster
19:06 juhaj I can get both to say "State: Peer in Cluster (Connected)" but then the sync says "please delete all the volumes before full sync"
19:06 juhaj The FAQ says nothing about deleting all files...
19:07 juhaj Do I really need to delete them all? It's a bit inconvenient
19:07 lh joined #gluster
19:09 juhaj I do see some of the new files on the bad server now, but they are wrong size (they are all zero bytes)
19:10 rosmo i think if both are connected, you're good to go
19:11 juhaj What about the files? Both servers now think the affected volume only has one brick (on the good server)
19:11 juhaj Which is odd since some replication did occur
19:13 rosmo yeah... sounds strange
19:14 juhaj Both servers also complain that "[2013-03-30 19:13:24.691860] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:1021)"
19:14 glusterbot juhaj: That's just a spurious message which can be safely ignored.
19:14 juhaj That bot is GOOD
19:18 lh joined #gluster
19:19 juhaj Wait... why does it say the brick is distributed? It should be replicated (and only replicated)!
19:25 juhaj I decided I can recreate the other brick (although this is quite a severe bug: had the brick been huge, this would not be an option)
19:26 juhaj But how can I delete the directory housing the brick? My kernel says "Device or resource busy"
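"Device or resource busy" on a brick directory usually means it is still a mount point, or a process (often the brick's own glusterfsd) still holds it open. A sketch for tracking that down, with an illustrative brick path:

```shell
# Is anything still mounted on or below the brick path?
# (/export/brick1 is illustrative, not from the log)
mount | grep /export/brick1

# Which processes hold files open there? Either tool works.
lsof +D /export/brick1
fuser -vm /export/brick1

# Once the brick process is stopped and any mount detached,
# the directory can be removed
umount /export/brick1
rm -rf /export/brick1
```

Stopping the volume (or killing the brick's glusterfsd) before removing its directory avoids this entirely.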
19:30 lh joined #gluster
19:35 joehoyle joined #gluster
19:37 lh joined #gluster
20:06 lh joined #gluster
20:36 joehoyle joined #gluster
20:40 lh joined #gluster
20:55 rotbeard joined #gluster
21:26 lh joined #gluster
21:33 lh joined #gluster
21:33 lh joined #gluster
21:36 joehoyle joined #gluster
21:38 lh joined #gluster
21:56 joe joined #gluster
22:53 ninkotech__ joined #gluster
22:53 ninkotech joined #gluster
23:29 zyk|off joined #gluster
23:31 threesome joined #gluster
