
IRC log for #gluster, 2013-12-08


All times shown according to UTC.

Time Nick Message
00:35 mattappe_ joined #gluster
00:53 calum_ joined #gluster
00:56 mattapp__ joined #gluster
01:02 theron joined #gluster
01:09 mattapp__ joined #gluster
01:16 mattapp__ joined #gluster
01:20 johnbot11 joined #gluster
01:26 mkzero joined #gluster
01:37 DV joined #gluster
01:39 johnbot1_ joined #gluster
01:43 sarkis joined #gluster
01:47 saltsa joined #gluster
02:15 _BryanHm_ joined #gluster
02:29 mattapp__ joined #gluster
02:33 MrNaviPacho joined #gluster
03:14 psyl0n joined #gluster
03:27 johnbot11 joined #gluster
03:45 hagarth joined #gluster
04:57 sarkis joined #gluster
04:57 mattappe_ joined #gluster
05:52 gmcwhistler joined #gluster
05:54 rjoseph joined #gluster
06:20 davinder joined #gluster
07:14 sgowda joined #gluster
07:20 sarkis joined #gluster
07:58 davinder joined #gluster
08:07 jag3773 joined #gluster
08:18 geewiz joined #gluster
09:09 psyl0n joined #gluster
09:22 hchiramm_ joined #gluster
09:43 sarkis joined #gluster
09:45 rotbeard joined #gluster
09:50 hchiramm_ joined #gluster
10:11 hchiramm_ joined #gluster
11:01 psyl0n joined #gluster
11:01 psyl0n joined #gluster
11:17 hchiramm_ joined #gluster
11:35 sarkis joined #gluster
11:37 sgowda joined #gluster
11:59 sgowda left #gluster
12:42 psyl0n joined #gluster
13:02 FarbrorLeon joined #gluster
13:02 chirino joined #gluster
13:03 FarbrorLeon I want to use Glusterfs w/ replicated volumes to provide workstations with homedirs and still be able to use selinux (enforcing) on the workstations.. But when I log in I get an error message telling me "No directory...". When selinux is turned off, it works fine. How am I able to configure selinux context to work here? Any help is much appreciated! :)
13:16 samppah FarbrorLeon: have you checked if there is se boolean regarding fuse or homedir?
13:16 samppah possibly netfs too
13:17 FarbrorLeon Yes, I found this fella.. "use_fusefs_home_dirs --> 1"
13:19 FarbrorLeon stupid question. I still need to change the selinux context, right? How do I proceed?
13:33 FarbrorLeon Actually solved this one! I created a module and installed it as instructed here: http://www.gluster.org/author/alan/ and I was good to go!
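
A minimal sketch of the SELinux fix discussed above, assuming a RHEL/Fedora-style workstation and that any remaining denials land in the default audit log; the module name "glusterhome" is only an illustrative placeholder:

    # persistently enable the boolean for FUSE-backed home directories
    setsebool -P use_fusefs_home_dirs on

    # if AVC denials remain, build and install a local policy module from them
    grep denied /var/log/audit/audit.log | audit2allow -M glusterhome
    semodule -i glusterhome.pp
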
13:34 calum_ joined #gluster
13:41 ricky-ti1 joined #gluster
13:45 hchiramm_ joined #gluster
13:51 geewiz joined #gluster
14:09 bala joined #gluster
14:14 purpleidea FarbrorLeon: as an aside, some users have reported poor performance with glusterfs hosting home directories. test under load before you put it in production. ymmv
14:15 FarbrorLeon @purpleidea Actually, We had a red hat consultant on site just to make sure the performance was good enough for our needs..
14:16 FarbrorLeon That was w/ RHSS 2 and not 2.1. We have seen a huuge increase in performance when writing small files... Good thing the project got delayed I guess... :)
14:18 NeatBasis joined #gluster
14:51 neofob joined #gluster
14:55 harish joined #gluster
15:15 psyl0n joined #gluster
15:19 hybrid5121 joined #gluster
15:19 ricky-ti1 joined #gluster
15:19 bala joined #gluster
16:02 kanagaraj joined #gluster
16:05 johnbot11 joined #gluster
16:09 purpleidea FarbrorLeon: great to hear it! curious to hear about your needs, and the specific setup... how many clients? mounting with gluster fuse native client? how many servers/bricks? etc...
16:25 FarbrorLeon @purpleidea The usage is a Sun Gridengine HPC. 20 HPC nodes, ~30 workstations, 2 servers and 2 storage bricks w/ 12x600GB(5 replicated volumes) SAS each. Not a very large setup but still.. :)
16:26 FarbrorLeon ...and everything is mounted through the fuse client.
16:54 TDJACR joined #gluster
16:58 purpleidea FarbrorLeon: cool... fwiw, i have a mostly okay puppet module for sge if you're lacking on. it's currently unpublished, but better than starting from scratch if you're desperate
16:58 purpleidea s/on/one/
16:58 glusterbot What purpleidea meant to say was: FarbrorLeone: cool... fwiw, i have a mostly okay puppet module for sge if you're lacking on. it's currently unpublished, but better than starting from scratch if you're desperate
16:58 purpleidea s/on./one./
16:58 glusterbot What purpleidea meant to say was: FarbrorLeone. cool... fwiw, i have a mostly okay puppet module for sge if you're lacking on. it's currently unpublished, but better than starting from scratch if you're desperate
16:58 purpleidea glusterbot: bad glusterbot
17:02 FarbrorLeon Bad 'bot indeed! :D I'd love to take a look at it! :)
17:07 rotbeard joined #gluster
17:15 jyundt joined #gluster
17:18 jyundt Is client "failover" supported when using libgfapi in qemu/libvirt?
17:18 jyundt I'm using 3.4.1 with a 2x replicated brick
17:18 jyundt when I try to take down one gluster server, all client VMs using that brick (via libgfapi/qemu) start throwing I/O errors
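
For context, a qemu invocation using libgfapi looks roughly like the following; the server, volume and image names are placeholders. The host in the gluster:// URL is only used to fetch the volume file, after which the client connects to all bricks directly, which is why replica failover is expected to work:

    qemu-system-x86_64 -m 2048 -enable-kvm \
        -drive file=gluster://gluster1/vmvol/guest.img,if=virtio,cache=none
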
17:29 edward2 joined #gluster
17:54 samppah jyundt: it should be.. does it start showing error right away?
17:58 geewiz joined #gluster
18:03 jyundt samppah: yes I get errors as soon as I take down a server
18:04 jyundt samppah: I'm going to try to clear my logs and reproduce it
18:04 jyundt samppah: I also got similar errors when I added a brick (going from one server -> 2x replication with 2 servers / 2 bricks)
18:23 MrNaviPacho joined #gluster
18:37 samppah jyundt: there's an option network.ping-timeout which sets the timeout until a node is considered dead.. it's set to 42 seconds by default and sometimes that's too long for virtual machines and can cause i/o errors
18:37 samppah jyundt: is it possible that it could be causing issues?
18:37 samppah it holds io for 42 seconds (by default)
18:38 jyundt samppah: I'll double check, but I don't think I normally encounter this (with the fuse mount) when doing a graceful shutdown
18:38 jyundt with a hard failure, I think I've encountered the 42 second timeout
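
The ping-timeout change samppah describes is a per-volume option; a sketch, with "myvol" standing in for the actual volume name:

    # lower the timeout before a node is declared dead (seconds; default 42)
    gluster volume set myvol network.ping-timeout 10

    # the current value appears under "Options Reconfigured"
    gluster volume info myvol
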
18:50 Remco_ joined #gluster
18:50 haakon_ joined #gluster
18:50 lava_ joined #gluster
18:52 samppah jyundt: hmm.. have you checked that qemu has connected to both servers?
18:53 jyundt samppah: how do I do that? netstat?
18:54 jyundt samppah: ah , I think I see it in netstat
18:54 stickyboy_ joined #gluster
18:54 stickyboy joined #gluster
18:57 samppah jyundt: gluster vol status volName clients should also show connected clients
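
Both checks mentioned here, spelled out with a placeholder volume name:

    # list the clients connected to each brick of the volume
    gluster volume status myvol clients

    # or, from the hypervisor, list the TCP connections held by the qemu process
    netstat -tnp | grep qemu
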
18:58 hagarth joined #gluster
18:58 tomased joined #gluster
18:59 achuz joined #gluster
18:59 hybrid5121 joined #gluster
19:00 andreask joined #gluster
19:01 jyundt samppah: ok, I see my kvm host (cluster client) connecting to both gluster servers
19:02 lkoranda joined #gluster
19:02 jyundt samppah: as a sanity check, if I cleanly stop gluster, (service [glusterd|glusterfsd] stop), I should _not_ hit the default 42 second timeout, right?
19:04 samppah jyundt: that's a good question, i have to admit that i have never thought of that
19:09 jyundt samppah: alright, I'm going to poke around with this
19:09 jyundt samppah: I might send a mail to the list once I get more information
19:20 tomased joined #gluster
19:37 sarkis joined #gluster
19:41 MrNaviPacho joined #gluster
19:48 mattapp__ joined #gluster
20:39 calum_ joined #gluster
20:48 badone joined #gluster
20:55 FarbrorLeon joined #gluster
21:10 sarkis joined #gluster
21:34 theron joined #gluster
21:36 mattapp__ joined #gluster
21:37 daMaestro joined #gluster
21:37 daMaestro joined #gluster
21:37 daMaestro joined #gluster
21:46 mattapp__ joined #gluster
21:48 pravka joined #gluster
22:00 mattapp__ joined #gluster
22:04 elyograg I've been mentally struggling with the fact that a full rebalance after adding storage is going to move incredible amounts of data around - a subset that is considerably more than 50%, every time we add more replica sets.
22:05 elyograg that's a big deal when you go from 40 TB to 80TB ... but it's a whole different class of problem when you go from 320TB to 360TB.
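
One option sometimes used when that much data movement is the concern is a layout-only rebalance, which makes new files land on the new bricks without migrating existing data; a sketch with a placeholder volume name:

    # full rebalance: fix the layout and migrate existing files
    gluster volume rebalance myvol start

    # layout-only: existing data stays where it is
    gluster volume rebalance myvol fix-layout start

    # watch progress
    gluster volume rebalance myvol status
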
22:10 glusterbot New news from resolvedglusterbugs: [Bug 976750] Disabling NFS causes E level errors in nfs.log. <https://bugzilla.redhat.com/show_bug.cgi?id=976750>
22:18 glusterbot New news from newglusterbugs: [Bug 977497] gluster spamming with E [socket.c:2788:socket_connect] 0-management: connection attempt failed (Connection refused) when nfs daemon is off <https://bugzilla.redhat.com/show_bug.cgi?id=977497>
23:02 mattapp__ joined #gluster
23:21 gdubreui joined #gluster
23:22 gdubreui joined #gluster
23:22 smasha82 joined #gluster
23:23 jag3773 joined #gluster
23:25 marvinc joined #gluster
23:26 bgpepi joined #gluster
23:26 marvinc hi
23:26 glusterbot marvinc: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
23:28 smasha82 im running a 2 node gluster replica with local nfs mounts on each box - the nfs mounts mount ok and I can see all the data and write to the mount etc.. however in the etc-glusterfs-glusterd.vol.log I am constantly seeing  0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:1020)
23:29 jyundt smasha82: I think this is a known bug
23:29 smasha82 urgh fantastic
23:29 jyundt smasha82: I found a BZ for it, however, it is supposedly "fixed" in 3.4.1: https://bugzilla.redhat.com/show_bug.cgi?id=976750
23:29 glusterbot Bug 976750: low, medium, ---, vagarwal, CLOSED CURRENTRELEASE, Disabling NFS causes E level errors in nfs.log.
23:29 smasha82 fantastic - so I need to upgrade
23:29 jyundt well, I'm running 3.4.1 and I'm still getting this error message
23:30 smasha82 hrmm ok
23:32 jyundt smasha82: whoops, I might have been mistaken, this is only if your nfs is _disabled_
23:33 smasha82 i have nfs.disable set to off
23:33 jyundt ok, you might be hitting something else
23:33 jyundt sorry for the false positive
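
For anyone following along, the option being discussed can be checked and changed per volume; "myvol" is a placeholder:

    # show the volume's reconfigured options (nfs.disable among them, if set)
    gluster volume info myvol

    # serve the volume over gluster's built-in NFS again
    gluster volume set myvol nfs.disable off
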
23:37 marvinc just started using gluster and so far I am very impressed.  Is it normal for io to stop if a replica peer goes down (io stops on the healthy replica).  I have adjusted the ping timeout and that solves the issue for the most part, but was just curious if this is to be expected.
23:39 DV joined #gluster
23:44 smasha82 out of curiosity if I issue a volume reset.. that only clears any extra options i set?
23:45 smasha82 and keeps all underlying data intact yes?
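
As a note, volume reset only clears the reconfigured options back to their defaults; it does not touch the bricks or the data on them. A sketch with a placeholder volume name:

    # reset every reconfigured option on the volume to its default
    gluster volume reset myvol

    # or reset just one option
    gluster volume reset myvol network.ping-timeout
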
23:46 marvinc joined #gluster
23:46 marvinc er... disconnected
23:53 marvinc did anyone reply to my last post?  My connection was interrupted.
23:54 cyberbootje joined #gluster
23:56 psyl0n joined #gluster
