IRC log for #gluster, 2015-03-06


All times shown according to UTC.

Time Nick Message
00:06 plarsen joined #gluster
00:08 Pupeno joined #gluster
00:14 wushudoin joined #gluster
00:26 RicardoSSP joined #gluster
00:26 RicardoSSP joined #gluster
00:48 DV joined #gluster
01:11 topshare joined #gluster
01:15 hagarth joined #gluster
01:16 fandi joined #gluster
01:18 Pupeno joined #gluster
01:29 sprachgenerator joined #gluster
01:29 T3 joined #gluster
01:33 fandi joined #gluster
01:34 DV joined #gluster
01:40 rafi joined #gluster
01:44 DV_ joined #gluster
01:45 doekia joined #gluster
02:12 rjoseph joined #gluster
02:13 haomaiwa_ joined #gluster
02:20 harish joined #gluster
02:21 glusterbot News from newglusterbugs: [Bug 1198119] PNFS : CLI option to run ganesha servers on every node in trusted pool. <https://bugzilla.redhat.com/show_bug.cgi?id=1198119>
02:42 anoopcs joined #gluster
02:54 rjoseph joined #gluster
02:59 wkf joined #gluster
03:02 meghanam joined #gluster
03:06 bharata-rao joined #gluster
03:22 glusterbot News from newglusterbugs: [Bug 1199352] GlusterFS 3.7.0 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1199352>
03:23 kumar joined #gluster
03:43 nishanth joined #gluster
03:51 kripper left #gluster
04:14 huleboer joined #gluster
04:17 poornimag joined #gluster
04:18 bala joined #gluster
04:28 nbalacha joined #gluster
04:33 anoopcs joined #gluster
04:34 dgandhi joined #gluster
04:37 rafi joined #gluster
04:45 kshlm joined #gluster
04:46 schandra joined #gluster
04:47 ndarshan joined #gluster
04:53 gem joined #gluster
04:58 ppai joined #gluster
04:58 kdhananjay joined #gluster
04:59 jiffin joined #gluster
05:09 kdhananjay left #gluster
05:09 Apeksha joined #gluster
05:10 spandit joined #gluster
05:11 deepakcs joined #gluster
05:15 nangthang joined #gluster
05:17 kdhananjay joined #gluster
05:23 kdhananjay joined #gluster
05:25 rjoseph joined #gluster
05:29 atalur joined #gluster
05:33 earthrocker joined #gluster
05:37 aravindavk joined #gluster
05:56 lalatenduM joined #gluster
06:01 RameshN joined #gluster
06:04 bala joined #gluster
06:17 raghu joined #gluster
06:24 atalur joined #gluster
06:32 dusmant joined #gluster
06:34 sripathi1 joined #gluster
06:36 overclk joined #gluster
06:40 kovshenin joined #gluster
06:50 vimal joined #gluster
06:51 andreask left #gluster
07:17 jtux joined #gluster
07:28 schandra joined #gluster
07:37 doekia joined #gluster
07:51 deniszh joined #gluster
07:53 Debloper joined #gluster
07:54 [Enrico] joined #gluster
07:57 ricky-ticky joined #gluster
08:05 Philambdo joined #gluster
08:08 DV_ joined #gluster
08:10 schandra joined #gluster
08:31 T0aD joined #gluster
08:53 bala1 joined #gluster
09:06 o5k joined #gluster
09:08 anrao joined #gluster
09:11 anil joined #gluster
09:16 fsimonce joined #gluster
09:18 schandra joined #gluster
09:20 rafi joined #gluster
09:23 Slashman joined #gluster
09:33 ghenry joined #gluster
09:33 ghenry joined #gluster
09:35 gestahlt joined #gluster
09:35 gestahlt Hi!
09:35 glusterbot gestahlt: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:37 gestahlt I have an issue with glusterfs 3.5.3 and debian 7 (i tried 3.5.2 before, same issue). glusterfsd is causing very high cpu load (200-800% tells top). It is also very slow. I have just 2 nodes and an underlying XFS filesystem (maybe thats the cause?). I dont know what to do to fix this
09:37 gestahlt The network for glusterfs is also dedicated
09:38 schandra joined #gluster
09:39 gestahlt Btw: i just copied a file over that connection and i get around 60-70mb/sec
09:43 anil_ joined #gluster
09:45 kovshenin joined #gluster
09:51 gestahlt joined #gluster
09:53 gestahlt I dont know why it performs so badly
09:53 gestahlt i have 2 different setups which behave well out of the box
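A minimal way to see which file operations are driving the glusterfsd load, assuming the volume is named myvol (name is hypothetical), is the built-in profiler:

    # begin collecting per-brick fop counts and latencies
    gluster volume profile myvol start
    # run the slow workload for a while, then dump the counters
    gluster volume profile myvol info
    # stop collecting once done
    gluster volume profile myvol stop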
09:53 glusterbot News from newglusterbugs: [Bug 1199436] glfs_fini- The pending per xlartor resource frees. <https://bugzilla.redhat.com/show_bug.cgi?id=1199436>
09:53 glusterbot News from newglusterbugs: [Bug 1168897] Attempt remove-brick after node has terminated in cluster gives error: volume remove-brick commit force: failed: One or more nodes do not support the required op-version. Cluster op-version must atleast be 30600. <https://bugzilla.redhat.com/show_bug.cgi?id=1168897>
10:03 Norky joined #gluster
10:11 SOLDIERz joined #gluster
10:13 Pupeno joined #gluster
10:21 Prilly joined #gluster
10:27 gildub joined #gluster
10:34 SOLDIERz_ joined #gluster
10:35 ricky-ticky2 joined #gluster
10:39 misc ndevos: I renamed the 2 bareos VM in the gluster interface, in case you wonder who did it
10:41 kumar joined #gluster
10:41 SOLDIERz_ joined #gluster
10:43 rjoseph joined #gluster
10:44 Pupeno joined #gluster
10:44 Pupeno joined #gluster
10:45 JonathanD joined #gluster
10:55 firemanxbr joined #gluster
10:57 SOLDIERz__ joined #gluster
11:14 anrao joined #gluster
11:21 Debloper joined #gluster
11:22 LebedevRI joined #gluster
11:26 yossarianuk joined #gluster
11:28 sripathi2 joined #gluster
11:28 yossarianuk hi - simple question (I hope) how can I change the port glusterd listens on ?
11:30 yossarianuk i have tried editing /etc/glusterfs/glusterd.vol
11:30 yossarianuk and restarted glusterd
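For reference, the edit being described is normally an option line inside the management volume definition in /etc/glusterfs/glusterd.vol, followed by the glusterd restart yossarianuk already did. This is a sketch based on a stock 3.5 glusterd.vol; the listen-port option is honoured, but peers and clients still assume 24007 unless every node is changed consistently:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket
        # non-default management port; all peers and clients must expect the same value
        option transport.socket.listen-port 24008
    end-volume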
11:36 anrao joined #gluster
11:49 bharata_ joined #gluster
11:51 bharata_ joined #gluster
11:53 anil joined #gluster
11:54 RaSTar yossarianuk: sometime back you had posted a log message regarding rdma.so, are you using rdma?
11:56 bene2 joined #gluster
12:00 JonathanD joined #gluster
12:09 sankarshan joined #gluster
12:14 yossarianuk RaSTar: no i am not...
12:14 yossarianuk That was an old log - i purged all the glusterfs packages and wiped /var/log/glusterfs
12:15 SOLDIERz__ joined #gluster
12:22 RaSTar yossarianuk: ok, you can ignore the rdma.so log
12:30 ira joined #gluster
12:41 yossarianuk so now I have a connection issue (one way) with port 24007 - can I change the port used by the server (or define a peer on a different port)?
12:43 yossarianuk i.e on the server I cannot connect to I can NAT port 22 (not used by ssh) -> 24007  as port 22 is open
12:44 yossarianuk how can I define a peer to use a different port ?
12:51 anoopcs joined #gluster
12:52 chirino joined #gluster
12:57 rwheeler joined #gluster
13:16 sripathi joined #gluster
13:17 Netbulae joined #gluster
13:19 T3 joined #gluster
13:24 ricky-ticky1 joined #gluster
13:24 glusterbot News from newglusterbugs: [Bug 1065632] glusterd: glusterd peer status failed with the connection failed error evenif glusterd is running <https://bugzilla.redhat.com/show_bug.cgi?id=1065632>
13:24 T3 joined #gluster
13:27 l0uis joined #gluster
13:31 Prilly joined #gluster
13:33 sripathi joined #gluster
13:39 ppai joined #gluster
13:52 julim joined #gluster
13:57 B21956 joined #gluster
13:57 lpabon joined #gluster
13:58 vipulnayyar joined #gluster
13:59 wkf joined #gluster
13:59 dastar_ joined #gluster
14:04 DV joined #gluster
14:10 shaunm joined #gluster
14:11 theron joined #gluster
14:18 Guest12942 joined #gluster
14:24 DV joined #gluster
14:27 T3 joined #gluster
14:27 dgandhi joined #gluster
14:29 dbruhn joined #gluster
14:31 Creeture joined #gluster
14:32 jiffin joined #gluster
14:34 bala joined #gluster
14:34 sprachgenerator joined #gluster
14:35 bennyturns joined #gluster
14:37 georgeh-LT2 joined #gluster
14:49 DV joined #gluster
15:01 rwheeler joined #gluster
15:02 Creeture1 joined #gluster
15:08 deepakcs joined #gluster
15:24 glusterbot News from newglusterbugs: [Bug 1199545] mount.glusterfs uses /dev/stderr and fails if the device does not exist <https://bugzilla.redhat.com/show_bug.cgi?id=1199545>
15:34 wushudoin joined #gluster
15:38 B21956 joined #gluster
15:43 meghanam joined #gluster
15:55 ira joined #gluster
16:02 harish joined #gluster
16:02 luis_silva joined #gluster
16:06 SOLDIERz__ joined #gluster
16:14 alexkit322 joined #gluster
16:18 plarsen joined #gluster
16:19 SOLDIERz__ joined #gluster
16:24 hagarth joined #gluster
16:27 T3 joined #gluster
16:33 raz joined #gluster
16:40 ira_ joined #gluster
16:43 Rydekull joined #gluster
16:44 bala joined #gluster
16:44 Rydekull joined #gluster
16:50 vipulnayyar joined #gluster
16:51 daMaestro joined #gluster
16:53 telmich good day
16:54 telmich can I somehow get a better output from gluster volume status that can be easily parsed for monitoring?
16:54 telmich I am effectively only interested in "volume ok or not?" status
16:55 glusterbot News from newglusterbugs: [Bug 1199577] mount.glusterfs uses /dev/stderr and fails if the device does not exist <https://bugzilla.redhat.com/show_bug.cgi?id=1199577>
16:58 joshin joined #gluster
16:58 telmich can somebody explain to me, why one peer sees the other connected while the other does not see the one?
16:58 telmich https://gist.github.com/telmich/fc7f7c855dd553766f4e
17:01 daMaestro joined #gluster
17:04 Prilly joined #gluster
17:19 ira_ joined #gluster
17:20 kkeithley telmich: gluster volume status --xml  ?
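A rough sketch of reducing that to a single "anything down?" number, assuming the volume is called testvol and that the per-node <status> element (1 = online) still looks the way it did in the 3.x XML output, so verify against your own output first:

    # machine-readable status
    gluster volume status testvol --xml
    # crude check: count entries reporting offline; 0 means everything answered as online
    gluster volume status testvol --xml | grep -c '<status>0</status>'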
17:24 pdrakewe_ joined #gluster
17:29 SkylerBunny joined #gluster
17:32 SkylerBunny Greetings. Wanted to get a sense here for what the latest reasonable version for a Gluster installation in a production environment would be. I'm looking at http://www.gluster.org/community/documentation/ and wondering if its assessments are still current, and what they mean.
17:33 SkylerBunny 'GlusterFS 3.4.6 is a mature release and is suitable for production environments.', 'GlusterFS 3.5.3 is the latest stable release.', 'GlusterFS 3.6.2 is our latest release.'.
17:33 JoeJulian yep
17:34 SkylerBunny So the community would NOT suggest a jump from 3.2 (Debian wheezy's version) to 3.6 for production use. More 3.2 -> 3.4.6?
17:34 JoeJulian I'd be comfortable with 3.5.3
17:35 T3 joined #gluster
17:35 SkylerBunny Mmm. 'Stable' is stable. Latest...mmm...means what it means.
17:38 luis_silva joined #gluster
17:38 SkylerBunny I think I'm leaning 3.5 then. This is why I wanted to ask though!
17:39 JoeJulian As with anything, I'd test it yourself first. :D
17:39 SkylerBunny Oh yeah. I'm not looking for a warranty. ;) But general guidance. is good.
17:42 SkylerBunny Thank you!
17:42 JoeJulian You're welcome.
18:02 lalatenduM joined #gluster
18:06 Rapture joined #gluster
18:06 telmich what does "W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/83fb61f9e0cad11f8ebe47d21046b1ef.socket failed (Invalid argument)" mean? I get this message about once per second in my log
18:12 Gill joined #gluster
18:21 JoeJulian telmich: not entirely sure. That socket's associated with glustershd or nfs. There might be a corresponding clue in either of those files.
18:23 telmich JoeJulian: I turned off nfs, maybe that is the reason?
18:24 JoeJulian probably
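If that guess is right, the warning is just glusterd polling the socket of a daemon that is no longer running; a quick way to see which auxiliary daemons a volume is expected to have (volume name assumed):

    # lists bricks plus the per-node NFS server and self-heal daemon entries
    gluster volume status testvol
    # nfs.disable shows up under 'Options Reconfigured' once it has been set
    gluster volume info testvol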
18:32 ekuric joined #gluster
18:46 T3 joined #gluster
19:12 wushudoin| joined #gluster
19:15 chirino joined #gluster
19:18 rotbeard joined #gluster
19:40 luis_silva joined #gluster
19:51 cocotton joined #gluster
20:00 rwheeler joined #gluster
20:05 ghenry joined #gluster
20:05 ghenry joined #gluster
20:16 enstro joined #gluster
20:18 enstro need help with mounting glusterfs.....not working after update from ubuntu 14.04.1 to 14.04.2
20:23 enstro "sudo mount -o mountproto=tcp -t nfs -o vers=3 compute-node-2:/globalStorage1 /mnt/tmount" is the command i use to mount. error : "mount.nfs: requested NFS version or transport protocol is not supported"
20:23 glusterbot enstro: make sure your volume is started. If you changed nfs.disable, restarting your volume is known to work.
20:26 enstro hurray thanks....i did a "nfs.disable off" and a restart...thanks glusterbot
20:26 enstro it mounts now
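For anyone hitting the same error later, the sequence enstro describes comes down to roughly this (volume and mount-point names taken from the transcript; the stop/start is what makes the NFS change take effect):

    # re-enable the built-in gluster NFS server
    gluster volume set globalStorage1 nfs.disable off
    gluster volume stop globalStorage1
    gluster volume start globalStorage1
    # then the NFSv3 mount from the client works again
    sudo mount -t nfs -o mountproto=tcp,vers=3 compute-node-2:/globalStorage1 /mnt/tmount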
20:27 DV joined #gluster
20:29 semiosis :-D
20:30 semiosis glusterbot++
20:30 glusterbot semiosis: glusterbot's karma is now 9
20:37 vipulnayyar joined #gluster
20:40 telmich what does "Patch Set 1: Code-Review+1" mean in gerrit?
20:47 JoeJulian telmich: It means that the first patch submitted has been reviewed +1. The first +1 usually comes from the automated testing.
20:49 telmich JoeJulian: thanks!
20:57 capri joined #gluster
21:12 Rapture joined #gluster
21:17 jobewan joined #gluster
21:19 vipulnayyar joined #gluster
21:43 DV joined #gluster
21:47 marcoceppi joined #gluster
21:47 marcoceppi joined #gluster
21:48 vipulnayyar joined #gluster
21:50 badone_ joined #gluster
22:07 gem joined #gluster
22:13 dgandhi joined #gluster
22:17 anarcat joined #gluster
22:17 anarcat hi
22:17 glusterbot anarcat: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
22:17 anarcat hehe, irc 101 :P
22:18 anarcat i am looking for opinions as to how gluster handles split brain scenarios and fencing. i read http://blog.gluster.org/2013/10/why-you-dont-need-stonith/ and wasn't convinced. i expected gluster to handle the whole stack of redundancy including automated failover, but i see people setting up gluster on top of DRBD, which makes me question why i would need gluster in the first place
22:22 JoeJulian anarcat: What kind of idiot would put gluster on top of drbd?
22:22 JoeJulian Sorry, but that just doesn't make any sense.
22:23 anarcat JoeJulian: that sounds like a rhetorical question
22:23 anarcat JoeJulian: i am not sure i am aware of specific, distinct classes of idiocy, maybe you can enlighten me? :p
22:23 JoeJulian Each client connects to every server, so there's built in redundancy.
22:23 Rapture joined #gluster
22:23 JoeJulian lol. I've used that same answer before. :D
22:23 anarcat what happens when a server goes down?
22:25 JoeJulian ,,(extended attributes) are used to track change states to the downed server. When the server comes back, a self-heal daemon brings it back in to synchronization.
22:25 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
22:26 JoeJulian For split-brain/fencing there are both server-based and replica-based quorum features that can be enabled.
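Both quorum features are plain volume options; a minimal sketch of enabling them on a replicated volume named testvol (name assumed):

    # server-side quorum: glusterd stops bricks when the trusted pool loses quorum
    gluster volume set testvol cluster.server-quorum-type server
    # replica-side quorum: client writes fail once a majority of the replica set is unreachable
    gluster volume set testvol cluster.quorum-type auto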
22:40 Pupeno joined #gluster
22:41 julim joined #gluster
22:46 anarcat # mount -t glusterfs 10.0.16.38:/testvol /mnt
22:46 anarcat Mount failed. Please check the log file for more details.
22:46 anarcat i am trying to follow http://www.gluster.org/documentation/quickstart/index.html on ubuntu
22:46 anarcat the last step has at least what seems to be an error because it doesn't specify a mountpoint
22:50 JoeJulian You are correct.
22:51 JoeJulian <sigh> Always test your own instructions before publishing them...
22:51 JoeJulian The problem is usually firewall
22:57 anarcat the problem was that the volume wasn't started
22:57 JoeJulian Oops
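The missing step from the quick start, using the names from the transcript:

    # a newly created volume is inactive until it is started
    gluster volume start testvol
    # after which the native mount succeeds
    mount -t glusterfs 10.0.16.38:/testvol /mnt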
22:58 anarcat so i'm curious: the clients say "mount 10.0.16.38:/testvol" - what if that Ip goes down, or is down when that command is issued?
22:58 JoeJulian @hostnames
22:58 glusterbot JoeJulian: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
23:00 anarcat so this relies on DNS, basically?
23:00 JoeJulian Oh, and also ,,(mount server)
23:00 glusterbot (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
23:00 JoeJulian You don't have to set up dns, you can use /etc/hosts
23:00 anarcat well, that's still DNS in my mind
23:01 anarcat @rrdns
23:01 glusterbot anarcat: You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
23:01 JoeJulian There are people that don't use hostnames. They always end up wishing they had.
23:02 JoeJulian Well, that may not be true. If they didn't end up wishing they had they probably didn't come here asking for help.
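To cope with the mount server being down at mount time, the usual mitigations look like this sketch, with server1/server2 standing in for real hostnames (the backupvolfile-server option name is as spelled in the 3.x mount.glusterfs script, so check it against the installed version):

    # mount by hostname; once connected, the fuse client talks to every brick in the volfile
    mount -t glusterfs server1:/testvol /mnt
    # ask a second management peer for the volfile if the first is unreachable at mount time
    mount -t glusterfs -o backupvolfile-server=server2 server1:/testvol /mnt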
23:06 anarcat hmmm
23:06 anarcat i'm trying to add a third node (using gluster peer probe) and i get: peer probe: failed: 10.0.16.38 is already part of another cluster
23:09 JoeJulian peer probing creates a "trusted pool". You can't add a server to that pool from outside, it has to be probed from inside the pool.
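In practice that means running the probe from a node that is already a member, not from the newcomer (hostname hypothetical):

    # on any existing member of the trusted pool
    gluster peer probe new-node.example.com
    # then confirm the pool membership
    gluster peer status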
23:09 anarcat http://devopsreactions.tumblr.com/post/107492619940/deploying-a-cluster
23:10 JoeJulian Now add 3 or 4 more cars to that and you would be simulating a ceph deployment. :D
23:10 anarcat it's funny how each channel disses the other :p
23:11 JoeJulian I use both.
23:11 T3 joined #gluster
23:11 JoeJulian Note, I'm also in #ceph on OFTC.
23:11 anarcat and drbd?
23:11 anarcat i was refering to #clusterlabs
23:12 JoeJulian I only have my own anecdotes about drbd. I'm sure they must have gotten better since they destroyed all my data 6 years ago.
23:14 anarcat funny, i tried gluster 5 years ago and it (a) crashed (bug was fixed within a week though) and (b) was performing 10x worse than NFS for reads :p
23:15 xoritor joined #gluster
23:15 JoeJulian Depends on your use case.
23:15 xoritor hey JoeJulian
23:15 xoritor hows tricks
23:15 Arminder joined #gluster
23:15 JoeJulian I doubt it *actually* performed worse for nfs, it was more likely that you never actually crossed the network with nfs and were simply working from fscache.
23:16 JoeJulian xoritor: Another day another dollar.
23:16 xoritor anyone here know if running glusterfs is easy to setup in a container or not?
23:16 xoritor ceph is a right PITA
23:17 xoritor i have not yet tried glusterfs...
23:17 JoeJulian I'm not convinced that containers are all that wonderful, but if you're good at setting up the network, I suppose it would be ok.
23:18 xoritor containers have their use
23:18 anarcat JoeJulian: maybe. benches are there: https://wiki.koumbit.net/GlusterfsTesting
23:18 xoritor they are not the saviour of the universe ... that was flash gordon
23:22 bala joined #gluster
23:28 badone_ joined #gluster
23:33 badone joined #gluster
23:40 JoeJulian my god... who wrote that tutorial? It doesn't work at all!
23:45 JoeJulian anarcat: I just submitted a series of corrections to that page. Thanks for pointing it out.
23:45 anarcat no problem
23:45 anarcat those things happen all the time :)
23:46 anarcat i am trying to do a benchmark from an amazon client to two gluster servers on a replica volume, and i got: close: Transport endpoint is not connected
23:46 anarcat the benchmark didn't complete
23:46 JoeJulian I love the "sudo this && that && the_other_thing"
23:47 anarcat is that related to the caveats mentioned here about AWS? http://www.gluster.org/documentation/Getting_started_setup_aws
23:47 anarcat JoeJulian: neat
23:47 JoeJulian or the "sudo echo blargh >> file"
23:48 JoeJulian not connected? check the client log.
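The fuse client log lives on the machine that performed the mount and is named after the mount point, so for a volume mounted on /mnt:

    tail -n 50 /var/log/glusterfs/mnt.log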
23:48 misc the sudo echo seems buggy :)
23:48 JoeJulian Yeah, sudo echo isn't cool. Pseudo echo is.
23:49 misc mhh I met the koumbit people in Montreal
23:51 anarcat misc: hi :)
23:51 JoeJulian I'm referring to https://www.youtube.com/watch?v=VnejLmQGYhg ... not sure where Koumbit fits in to that...
23:52 misc anarcat: hi
23:52 anarcat [2015-03-06 23:45:04.799431] E [afr-common.c:4168:afr_notify] 0-testvol-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
23:52 JoeJulian Oh.. that link up there...
23:52 anarcat ... i guess is what happened
23:52 JoeJulian yeah, why did the server connections (or servers themselves) go away?
23:53 * JoeJulian is hoping for oom-killer.
23:53 anarcat no idea
23:53 anarcat glusterfsd is still running on the servers
23:54 JoeJulian interesting. So that means that the client couldn't communicate with those servers (either of them) for at least 42 seconds.
23:54 anarcat 42.. seconds?
23:54 anarcat like life the universe and all that?
23:54 hagarth joined #gluster
23:55 JoeJulian :D
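The 42 seconds is GlusterFS's default network.ping-timeout, after which a client gives up on a brick. A quick sketch of checking reachability from the client (IP from the transcript; the brick port is an example, the real ones are in 'gluster volume status'):

    # management port
    nc -zv 10.0.16.38 24007
    # brick ports start at 49152 on 3.4 and later
    nc -zv 10.0.16.38 49152
    # the timeout is tunable per volume (42 seconds is the default)
    gluster volume set testvol network.ping-timeout 42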
23:55 JoeJulian Oh, nice. My changes to the quick start page are already up.
23:56 anarcat there's a bunch of shit in the mnt.log
23:56 anarcat [2015-03-06 23:54:36.263452] E [rpc-clnt.c:369:saved_frames_unwind] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0x48) [0x7ffc612aa028] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xb8) [0x7ffc612a8218] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_destroy+0xe) [0x7ffc612a813e]))) 1-testvol-client-1: forced unwinding frame type(GlusterFS 3.3) op(FXATTR
23:56 glusterbot anarcat: ('s karma is now -61
23:56 glusterbot anarcat: ('s karma is now -62
23:56 glusterbot anarcat: ('s karma is now -63
23:56 anarcat OP(34)) called at 2015-03-06 23:53:16.154818 (xid=0x1522)
23:56 anarcat uh?
23:57 anarcat (++
23:57 glusterbot anarcat: ('s karma is now -62
23:57 JoeJulian Don't worry. It's just lazy regex matching.
23:57 anarcat poor open bracket
23:57 anarcat what have they done to the world
23:57 anarcat :(++
23:57 glusterbot anarcat: :('s karma is now 1
23:57 JoeJulian Someday I'll get around to making it much more complex.
23:57 anarcat eh
23:57 edong23 lol
23:57 edong23 im cracking up
23:57 JoeJulian ---
23:57 glusterbot JoeJulian: -'s karma is now -343
23:57 edong23 lol
23:58 anarcat http://paste.ubuntu.com/10553513/
23:58 anarcat is the mnt.log
