IRC log for #gluster, 2013-05-26

All times shown according to UTC.

Time Nick Message
00:02 _ndevos joined #gluster
00:10 duerF joined #gluster
00:28 yinyin joined #gluster
00:54 tjikkun joined #gluster
00:54 tjikkun joined #gluster
01:03 joelwallis joined #gluster
01:19 hagarth joined #gluster
01:20 edong23 joined #gluster
01:33 badone_ joined #gluster
01:39 hagarth joined #gluster
02:41 hjmangalam joined #gluster
03:01 zwu joined #gluster
03:51 edong23 joined #gluster
04:13 recidive hey guys, I'm getting this on Gluster peer probe: "Probe returned with unknown errno 107"
04:15 recidive anything I should look at for fixing this problem?
04:28 mohankumar__ joined #gluster
05:22 isomorphic joined #gluster
05:49 yosafbridge joined #gluster
05:53 cicero_ joined #gluster
05:54 hagarth joined #gluster
06:56 ekuric joined #gluster
07:24 piotrektt_ joined #gluster
09:03 DEac- joined #gluster
09:04 brunoleon joined #gluster
09:33 andreask joined #gluster
09:40 rb2k joined #gluster
10:32 DEac- joined #gluster
10:49 chirino joined #gluster
10:50 edong23 joined #gluster
11:22 recidive joined #gluster
11:23 recidive hey guys, I'm getting this on Gluster peer probe: "Probe returned with unknown errno 107"
11:23 recidive anything I should look at for fixing this problem?
11:25 stickyboy recidive: Nope, sorry.  Never seen that.
11:25 stickyboy recidive: Is your DNS ok?
11:25 recidive I'm using ips
11:26 recidive I've flushed iptables
11:34 stickyboy recidive: Hmm.  And `iptables -L` shows no lingering rules?
11:34 stickyboy On both, of course?
11:34 recidive yes, no rules
11:35 stickyboy recidive: Ok, and is SELinux enforcing?  Just my naive troubleshooting :P
11:35 recidive I can ssh to the other server using the ip
11:36 recidive btw, I'm on ubuntu 12.04 on ec2
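A quick way to decode the error recidive is seeing: errno 107 on Linux is ENOTCONN ("Transport endpoint is not connected"), i.e. the probe could not establish or keep a connection to glusterd on the peer. A minimal sketch (the peer IP in the comment is hypothetical):

```shell
# errno 107 on Linux is ENOTCONN ("Transport endpoint is not connected"):
# the peer probe could not establish/keep a connection to glusterd on the peer.
python3 -c 'import errno, os; print(errno.errorcode[107], "-", os.strerror(107))'

# With connectivity in doubt, check glusterd's management port (TCP 24007)
# from the probing node (peer IP here is hypothetical):
#   nc -z -w2 10.158.53.81 24007 && echo open || echo closed/filtered
```

On EC2 in particular, security groups filter traffic before iptables does, so an empty `iptables -L` on both hosts does not by itself prove port 24007 is reachable.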
11:36 recidive I can paste the logs somewhere
11:36 recidive @paste
11:36 glusterbot recidive: For RPM based distros you can yum install fpaste, for debian and ubuntu it's dpaste. Then you can easily pipe command output to [fd] paste and it'll give you an url.
11:37 stickyboy Oh, Ubuntu... no SELinux.
11:37 recidive not by default, let me check if the security guy has setup it
11:47 recidive stickyboy: no selinux
11:47 stickyboy recidive: Hmm, ok.
11:47 recidive stickyboy: http://fpaste.org/14624/13695688/
11:47 glusterbot Title: #14624 Fedora Project Pastebin (at fpaste.org)
11:47 stickyboy Latest stable Gluster or 3.4 beta?
11:48 recidive 3.3
11:48 recidive latest from semiosis repo
11:48 stickyboy @repos
11:48 glusterbot stickyboy: See @yum, @ppa or @git repo
11:48 stickyboy @ppa
11:48 glusterbot stickyboy: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY -- and 3.4 packages are here: http://goo.gl/u33hy
11:48 stickyboy There's new repos since last week, maybe try a fresh reinstall?
11:50 ekuric recidive: check are cluster nodes able to resolve each other properly ... the best is to put them in /etc/hosts and then scp same /etc/hosts to all gluster nodes
11:51 recidive ekuric: ok, even when using the ip?
11:54 ekuric recidive: you will need mappings of IPs to gluster node names in /etc/hosts on the gluster nodes; from the fpaste it looks like a problem with dns resolution, ...
11:55 ekuric recidive: eg : http://fpaste.org/14625/69302136/
11:55 glusterbot Title: #14625 Fedora Project Pastebin (at fpaste.org)
11:56 recidive ekuric: they are there, but I'm using the ip
11:59 recidive ekuric: something like that http://fpaste.org/14626/69529136/
11:59 glusterbot Title: #14626 Fedora Project Pastebin (at fpaste.org)
11:59 recidive but not to the other server yes
11:59 recidive but it's very odd it does name resolution even when using the ip
12:00 recidive I mean there are just same server entries to hosts
12:01 recidive btw, I can mount a volume from outside the cluster
12:01 recidive using the ip
12:03 recidive I don't understand why it does name resolution if using the ip
12:04 recidive ekuric, stickyboy: should I add entries to hosts files for both servers even when using the ip?
12:08 ekuric recidive: yes, try that and restart the gluster services, you will not make anything worse. You have to have proper gluster node name resolution, eg # host ip-10-158-53-80.ec2.internal should return a proper resolution when run from a node other than ip-10-158-53-80.ec2.internal; it should say something like ... host ip-10-158-53-80.ec2.internal has ip_address
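ekuric's suggestion can be sketched as below. The IPs and EC2-style hostnames are hypothetical, and the snippet writes to a scratch copy rather than the real /etc/hosts; apply the same two lines to /etc/hosts on every node for real:

```shell
# Sketch: give every gluster node identical hostname mappings.
# Hypothetical IPs/names; writes to a scratch copy of /etc/hosts --
# append the same lines to the real /etc/hosts on each node.
HOSTS=$(mktemp)
cp /etc/hosts "$HOSTS"
cat >> "$HOSTS" <<'EOF'
10.158.53.80   ip-10-158-53-80.ec2.internal
10.158.53.81   ip-10-158-53-81.ec2.internal
EOF
grep 'ec2.internal' "$HOSTS"
```

To verify, `getent hosts ip-10-158-53-80.ec2.internal` is a better check than `host`: `getent` resolves through nsswitch (so it sees /etc/hosts), while `host` queries DNS servers only.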
12:36 chirino joined #gluster
12:39 vpshastry joined #gluster
12:41 vpshastry left #gluster
12:49 recidive ekuric: still getting the same error :(
12:52 vpshastry joined #gluster
13:01 recidive ekuric: ok, I guess I should have stuck with 3.2
13:03 recidive could this be related to ipv6 being disabled on kernel params?
13:04 ekuric recidive: haven't tried that, ... but you can try it and report back the result, and yes, if possible use the latest available gluster version
13:04 recidive I'm using the latest already
13:04 recidive I'm trying to go through the list emails for some clues
13:12 sysconfig does anybody know whether GlusterFS works on any of the many OS's with Solaris heritage; like: SmartOS, IllumOS, OMNI etc?
13:50 meunierd1 joined #gluster
14:35 Rhomber joined #gluster
14:48 portante joined #gluster
14:58 duerF joined #gluster
15:06 vpshastry joined #gluster
15:06 vpshastry left #gluster
15:07 hjmangalam1 joined #gluster
15:13 joelwallis joined #gluster
15:15 leaky joined #gluster
15:17 leaky hello all, are there problems in glusterfs 3.2.5 with xattr? I've mounted my gluster volume with -o acl user_xattr - but I still can't set security stuff
15:18 leaky example
15:18 leaky setfattr -n security.NTACL -v test2 test.txt
15:18 leaky setfattr: test.txt: Operation not supported
15:25 leaky however the same command does work on the filesystem underneath gluster...
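The namespace is the key difference here: `user.*` attributes are settable by a file's owner on most local filesystems, while `security.*`/`system.*` need privileges and backend support, and the 3.2-era gluster FUSE mount rejects them with exactly this "Operation not supported". A minimal local check of the working case (scratch file in the current directory, not a gluster mount; Python's xattr syscall wrappers are used so it runs without the attr package installed):

```shell
# Scratch file on a local filesystem (not a gluster mount):
f=$(mktemp -p .)
python3 - "$f" <<'PY'
import os, sys
path = sys.argv[1]
os.setxattr(path, b"user.demo", b"hello")          # user namespace: allowed
print(os.getxattr(path, b"user.demo").decode())    # -> hello
PY
rm -f "$f"
# The equivalent in the security namespace on the FUSE mount fails:
#   setfattr -n security.NTACL -v test2 test.txt
#   setfattr: test.txt: Operation not supported
```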
15:41 rb2k joined #gluster
15:57 koubas joined #gluster
16:04 war|chil1 joined #gluster
16:04 partner_ joined #gluster
16:20 bala1 joined #gluster
16:29 mohankumar__ hagarth: i posted a RFC to fix --remote-host use case, could you please review it?
16:30 mohankumar__ when i try to add vbellur@redhat.com as the reviewer, it's throwing some error
16:30 mohankumar__ hagarth__:  ^^
16:36 efries joined #gluster
16:38 Skunnyk joined #gluster
16:38 rb2k joined #gluster
16:41 rb2k joined #gluster
16:46 hagarth mohankumar__: can you add vijay@gluster.com, there is some problem with my redhat.com email.
16:47 mohankumar__ hagarth: sure
16:48 mohankumar__ hagarth: done, will wait for your review, based on suggestions will send next version
16:48 mohankumar__ should i add someone else also?
16:50 nightwalk leaky: I ran into the same issue. Setting attributes in the user context works, but not in system or security. I'm not using selinux or anything, so I'd say it's a kernel-imposed limitation on fuse filesystems if I had to guess
16:51 hagarth mohankumar__: will add anybody else that is relevant to this patchset.
16:51 nightwalk sysconfig: according to the wiki it does and is current. Haven't gotten around to testing it myself though
16:52 sysconfig nightwalk, didn't discover it with google. you got a link at hand, by and chance?
16:56 nightwalk sysconfig: http://gluster.org/community/documentation/index.php/Gluster_3.1:_Installing_GlusterFS_on_Solaris
16:56 glusterbot <http://goo.gl/QOnGN> (at gluster.org)
16:56 nightwalk ^ which I got to by typing 'solaris' in the search box on gluster.org itself
16:57 sysconfig that one I did find. but that's 3 years old. I might give it a go anyway. cheers!
16:58 nightwalk the process should be the same for 3.2 or 3.3 *shrug*
16:58 sysconfig oh sure, on actual Solaris. :)
16:59 nightwalk openindiana *is* 'actual Solaris' :)
17:01 sysconfig yes, but I want to run it on SmartOS, or OMNI if I have to :)  reason being that SmartOS has excellent virtualisation on board
17:02 sysconfig it's all not exactly crucial though... looking at different options/combination for an upcoming project
17:03 leaky @nightwalk - regarding setting permissions, if i was setting permissions via windows (through samba) to this gluster mountpoint - do you think it would impact that?
17:04 leaky i'm trying to troubleshoot modifying ACLs on files / folders in my test cluster...
17:05 nightwalk leaky: your problem is probably a bit more fundamental than a simple gluster issue. the security and system context are special in that access to them is highly restrictive to begin with. Your samba server will probably only be able to make use of them if it has root privileges, for example
17:05 recidive_ joined #gluster
17:05 nightwalk I *think* there's a setting that allows you to switch which context samba uses, but you have to keep in mind that using the non-restrictive 'user' context presents some potential security risks
17:06 edoceo joined #gluster
17:07 leaky interesting. at the moment the same setup is saving acls fine to non gluster mounts, so i think my samba / file system setup is ok,
17:07 edoceo I've got an odd issue.  I've upgraded from 3.0.5 to 3.2.5; and when I mount the Gluster and say `ls` I see sometimes three of the same file name (like readme.txt)
17:07 nightwalk leaky: like I said, I suspect it's a fuse limitation *shrug*
17:07 leaky ok
17:08 nightwalk leaky: so *if* you can persuade samba to use the 'user' context, it'll work. Otherwise, I dunno
17:08 leaky ah, right
17:08 leaky i'm with you now
17:08 nightwalk Stop by #samba and see if anyone there knows anything ;)
17:08 nightwalk maybe ##linux too
17:08 leaky gotta love open source!
17:09 leaky thanks for the help mate
17:09 nightwalk indeed
17:09 nightwalk good luck
17:09 edoceo And, if I look directly on my bricks I see that file is on all 6 of them; sometimes 0 bytes, sometimes the proper size
17:09 leaky i'll do some digging on sambe
17:09 leaky *samba
17:10 edoceo If I `cat` my file its contents are correct, but there are three of them showing in `ls`, which is very odd.  It's happened on 100s of files
17:10 nightwalk probably need to re-create the volume
17:11 edoceo Like volume stop, volume delete then volume create BrickA...BrickF ?
17:11 nightwalk yeah. might have to wipe the xattrs after delete if it bitches though
17:12 edoceo I don't know how to wipe xattrs
17:12 nightwalk gimme a sec and I'll see if I can track down the script I found...
17:14 edoceo Oh, I found a russian email on Gluster from 2012 that says to getfattr & setfattr - do I have to do that for every file?
17:15 nightwalk http://paste.ubuntu.com/5704193/ <--"clear_xattrs.sh", written by jdarcy (I think)
17:15 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
17:16 edoceo Ok, so I can stop my Volume and then on each of my Bricks I run that directly on their Posix FS it looks like?
17:16 nightwalk edoceo: yes, you have to remove the gluster-added xattrs from each and every file under the mount, and it takes a while if you have any amount of files at all
17:16 edoceo 11TB
17:16 edoceo :(
17:16 nightwalk ^ this is one of those new things they imposed on us in ~3.2 btw :)
17:16 edoceo I guess I better get started, thanks!
17:17 nightwalk well, don't assume you have to. just delete and recreate as usual
17:17 nightwalk *if* it bitches at you or throws a weird error, then you know what you have to do ;)
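The pasted clear_xattrs.sh itself isn't quoted in the log, but the operation it performs can be sketched as below. The helper name and the generic namespace argument are assumptions made so the sketch is safe to try locally; for a real brick you would pass `trusted` (covering trusted.gfid, trusted.glusterfs.*, trusted.afr.*) and run it as root against each brick's backend directory with the volume stopped, never against the FUSE mount:

```shell
# Remove every extended attribute in a given namespace under a directory.
# Hypothetical helper sketching what a brick xattr cleanup does; real use
# would be: clear_ns_xattrs /export/brick1 trusted  (volume stopped, as root).
clear_ns_xattrs() {   # usage: clear_ns_xattrs <dir> <namespace>
  local dir=$1 ns=$2
  find "$dir" -print0 | while IFS= read -r -d '' f; do
    python3 - "$f" "$ns" <<'PY'
import os, sys
path, prefix = sys.argv[1], sys.argv[2] + "."
for name in os.listxattr(path):      # list this inode's xattrs
    if name.startswith(prefix):      # keep only the target namespace
        os.removexattr(path, name)
PY
  done
}
```

Every inode under the path is visited, which is why this takes so long on an 11TB brick; spawning one python process per file is also slow, and a production script would batch the work, but the effect is the same.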
17:17 edoceo Delete and re-create volume?  I did that but it did not complain or throw errors; - but I am seeing triplicate files
17:18 edoceo Yesterday I upgraded from 3.0.5 to 3.2.5 - and it was acting funny like this and would hang if I did a `find`  - so I stopped and then deleted the volume
17:18 nightwalk well...you'll probably have to blow away the .glusterfs directory since that's probably where the problem indexes are stored, but other than that...
17:18 edoceo Then I re-created and it would hang on find
17:20 nightwalk did you check the logs? my money is on some sort of self-heal issue
17:22 edoceo I see lots of mentions about self heal; and self-heal completed
17:22 nightwalk then the find hanging is probably just a byproduct of the healing process
17:23 edoceo I don't think I have a '.glusterfs' directory - I cannot find
17:24 nightwalk it would be in the source directory rather than on the gluster mount, assuming your server is also your client. Otherwise, it'd be in the root export directory on the server
17:25 edoceo Isn't .glusterfs for 3.3 ?  I'm still on 3.2.5
17:47 vpshastry joined #gluster
17:47 vpshastry left #gluster
18:05 lbalbalba joined #gluster
18:05 edoceo Well, my system seems to be functioning better now, but `ls` is still slow.  Can I try clearing the attributes on just one sub-directory of my GlusterFS?
18:38 rb2k joined #gluster
18:39 glusterbot New news from newglusterbugs: [Bug 961856] [FEAT] Add Glupy, a python bindings meta xlator, to GlusterFS project <http://goo.gl/yCNTu>
18:48 recidive_ joined #gluster
19:17 kris-- joined #gluster
19:19 _BryanHm_ joined #gluster
19:19 __Bryan__ joined #gluster
19:20 _BryanHm_ left #gluster
19:21 _BryanHm_ joined #gluster
19:50 andreask joined #gluster
20:44 recidive joined #gluster
21:21 chirino joined #gluster
21:56 cyberbootje joined #gluster
22:00 flrichar joined #gluster
22:00 ccha joined #gluster
23:12 glusterbot New news from newglusterbugs: [Bug 967031] GlusterFS client crash with sig 11 and core dump after 10-15 minutes <http://goo.gl/tg7AS>
