IRC log for #gluster, 2014-06-20

All times shown according to UTC.

Time Nick Message
00:00 bene3 joined #gluster
00:08 jvandewege joined #gluster
00:24 JoeJulian n0de: which distro do you use?
00:24 n0de Gentoo
00:24 JoeJulian I have no clue... is that one where you compile everything from source?
00:26 n0de Prior to upgrading to 3.4.2 I was running Glusterd via source install. Current version 3.4.2 I compiled via "portage"
00:26 n0de which is Gentoo's package manager
00:27 n0de One odd thing I am seeing on the storage nodes is when I do "gluster peer status|grep Con|wc -l"
00:27 n0de the number of peers jumps between 42, the correct number, and 40
00:27 n0de so it seems one of the clients is not happy
00:28 n0de also things like this in the gluster log:
00:28 n0de [2014-06-20 00:10:25.509864] E [rpc-clnt.c:207:call_bail] 0-management: bailing out frame type(Peer mgmt) op(--(2)) xid = 0x1359x sent = 2014-06-20 00:00:15.676668. timeout = 600
00:28 n0de [2014-06-20 00:10:19.190565] W [client-rpc-fops.c:471:client3_3_open_cbk] 0-th-tube-storage-client-22: remote operation failed: No such file or directory. Path: <gfid:cd3c041b-e027-450e-8779-fa4362fac83a> (00000000-0000-0000-0000-000000000000)
00:28 JoeJulian Not sure why they call it a package manager when it doesn't manage packaged software, but anyway...
00:28 n0de [2014-06-20 00:10:18.507073] E [socket.c:2788:socket_connect] 0-management: connection attempt failed (Connection refused)
00:28 n0de Yea, that is a whole topic on its very own :)
00:29 JoeJulian You want tag 2b789331dc933b186360fc8cbffb06289ee60ee9 from the git tree.
00:31 n0de Where do you see that?
00:31 n0de and how do I look that up?
00:33 JoeJulian git clone https://git.forge.gluster.org/glusterfs-core/glusterfs.git ; cd glusterfs ; git checkout 2b789331dc933b186360fc8cbffb06289ee60ee9
00:34 JoeJulian ... how can you use gentoo without knowing how to use git? :P
00:34 mjsmith2 joined #gluster
00:34 _polto_ joined #gluster
00:35 n0de ah, I didn't even see the last part where you said git tree, my head is all into gluster atm heh
00:35 JoeJulian hehe
00:35 n0de I read that as a uuid, or something possibly from my logs :)
00:36 n0de anyways, getting this
00:36 n0de fatal: unable to access 'https://git.forge.gluster.org/glusterfs-core/glusterfs.git/': SSL certificate problem: Invalid certificate chain
00:38 mjsmith2_ joined #gluster
00:50 JoeJulian huh... try git://forge.gluster.org/glusterfs-core/glusterfs.git
00:53 edong23 joined #gluster
00:53 Paul-C joined #gluster
00:54 n0de got it, thanks
00:54 n0de but now:
00:54 n0de git checkout 2b789331dc933b186360fc8cbffb06289ee60ee9
00:54 n0de fatal: reference is not a tree: 2b789331dc933b186360fc8cbffb06289ee60ee9
01:07 n0de Is this something I am doing wrong?
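"fatal: reference is not a tree" generally means the requested commit object is not present in the clone at all, for example because it was only ever pushed to another repository or branch. A minimal way to check, assuming the clone created above and the same commit hash:

    cd glusterfs
    # Does the object exist locally at all, and what type is it?
    git cat-file -t 2b789331dc933b186360fc8cbffb06289ee60ee9
    # Fetch every branch and tag in case the commit lives on an unfetched ref.
    git fetch --all --tags
    # Show any branches that actually contain the commit.
    git branch -a --contains 2b789331dc933b186360fc8cbffb06289ee60ee9

If the first command reports the object as missing, the commit simply is not in that repository and has to come from wherever it really lives.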
01:09 calum_ joined #gluster
01:13 bala joined #gluster
01:23 _polto_ joined #gluster
01:23 _polto_ joined #gluster
01:23 n0de JoeJulian: I am going to retire for the night, will hit you up tomorrow. Thanks for the help so far.
01:25 gildub joined #gluster
01:25 haomaiwang joined #gluster
01:41 plarsen joined #gluster
01:41 coredump joined #gluster
02:24 harish joined #gluster
02:30 edwardm61 joined #gluster
02:42 rjoseph joined #gluster
02:42 sjm joined #gluster
02:54 _polto_ joined #gluster
03:18 nileshgr joined #gluster
03:20 nileshgr I'm running glusterfs on two servers with both servers needing local access. How to mount the local share to get better performance? http://serverfault.com/a/155077/50449 seems to have done it in some way, but I'm not understanding how to go about it
03:23 sjm left #gluster
03:29 MacWinner joined #gluster
03:30 bennyturns joined #gluster
03:37 jag3773 joined #gluster
03:44 shubhendu_ joined #gluster
03:49 itisravi joined #gluster
03:52 kshlm joined #gluster
03:58 jbrooks joined #gluster
03:59 kumar joined #gluster
04:00 ppai joined #gluster
04:01 pureflex joined #gluster
04:08 _polto_ joined #gluster
04:14 nthomas joined #gluster
04:15 nileshgr left #gluster
04:17 spajus joined #gluster
04:25 saurabh joined #gluster
04:25 RameshN joined #gluster
04:32 jbd1 joined #gluster
04:32 bharata-rao joined #gluster
04:36 coredump joined #gluster
04:40 rastar joined #gluster
04:43 haomaiwang joined #gluster
04:47 kshlm joined #gluster
04:47 spandit joined #gluster
04:47 kanagaraj joined #gluster
04:47 RameshN joined #gluster
04:55 dusmant joined #gluster
05:00 kdhananjay joined #gluster
05:04 hchiramm_ joined #gluster
05:13 hagarth joined #gluster
05:25 prasanthp joined #gluster
05:27 _polto_ joined #gluster
05:27 _polto_ joined #gluster
05:28 Philambdo joined #gluster
05:33 aravindavk joined #gluster
05:33 nshaikh joined #gluster
05:37 vpshastry joined #gluster
05:41 harish joined #gluster
05:42 bala joined #gluster
05:42 * JoeJulian shakes his fist at lack of ipv6 support again.
05:43 haomaiwang joined #gluster
05:44 purpleidea JoeJulian: sounds like you're working on some patches ;)
05:44 JoeJulian I would like to, but I'm still in cleanup mode for at least another week.
05:44 purpleidea cleanup from what?
05:44 JoeJulian probably 3
05:45 JoeJulian legacy problems from before I started here.
05:45 purpleidea ah, yeah, that always sucks
05:45 JoeJulian meh, I'm used to it.
05:46 JoeJulian I kind-of get a kick out of it. People are in over their heads and I get to come in and be the expert. It's a nice ego booster.
05:46 hagarth joined #gluster
05:46 karnan joined #gluster
05:46 purpleidea is this a production cluster you're fixing ?
05:46 JoeJulian yep
05:47 purpleidea well i'm glad to hear you're hacking on something bigger than 512G now :)
05:47 JoeJulian :)
05:48 JoeJulian We're going to have to find some convention in Phoenix to have Red Hat send you to so I can show you around the datacenter. It's impressive and very innovative.
05:49 JoeJulian Or, I suppose, New Jersey. I've been told that I can go whenever I want to...
05:49 JoeJulian I haven't tested that though.
05:49 purpleidea sounds good to me! likewise for wherever i could possibly show you around. not sure what's interesting to you though
05:50 JoeJulian Maybe you could show me a magical place where every product sold has two names.
05:50 purpleidea ^ you mean redhat?
05:50 JoeJulian Canada
05:51 purpleidea oh lol
05:51 purpleidea yeah, _do_ come to montreal... it's nice now, summer, cool things, etc...
05:51 purpleidea i have a comfy couch, although i don't know if you'll 100 fit. i remember you being tall. you might have to bend your knees
05:52 purpleidea s/100/100%/
05:52 glusterbot purpleidea: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
05:52 purpleidea lol
05:52 purpleidea awesome%
05:53 rjoseph joined #gluster
05:53 purpleidea s/awesome
05:53 JoeJulian wow. you should talk to your tourist board... I can honestly say that in all my conversations with my wife about where we would like to take a vacation, the bahamas, hawaii, fiji... neither of us has ever even had Quebec cross our minds.
05:54 purpleidea ^ yeah... Montreal is awesome in the summer, Quebec city is sweet in the winter (if you snowboard/ski) ... obviously I'm biased, but it's good stuff
05:54 dusmant joined #gluster
05:55 JoeJulian I used to go up skiing every week. Then I had to work for a living and I got married.
05:56 JoeJulian @bugzilla search gluster ipv6
05:57 purpleidea JoeJulian: well, invite is open when you find time.
05:57 purpleidea lol bz search must introduce a 24h latency ;)
05:57 JoeJulian @bugzilla query gluster ipv6
05:57 glusterbot JoeJulian: Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1070685 unspecified, unspecified, ---, kparthas, NEW , glusterfs ipv6 functionality not working
05:57 glusterbot JoeJulian: Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=922801 unspecified, medium, ---, gluster-bugs, NEW , Gluster not resolving hosts with IPv6 only lookups
05:57 glusterbot JoeJulian: Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=865327 unspecified, medium, ---, gluster-bugs, CLOSED CURRENTRELEASE, glusterd keeps listening on ipv6 interfaces for volumes when using inet familly address
05:58 purpleidea ^ once ipv6 actually works, feel free to ping me, and i can patch puppet-gluster to support it too :)
05:59 * JoeJulian grumbles about being at a Chef shop
05:59 purpleidea oh :P
05:59 purpleidea you mean you're not going to use puppet-gluster :P
05:59 * JoeJulian also grumbles at the lack of idempotency.
05:59 JoeJulian If we were using puppet, I probably would.
06:00 purpleidea yeah chef has issues... the lack of a fully declarative language bothers me. kind of defeats the point
06:00 purpleidea you're welcome to port puppet-gluster to chef...
06:01 purpleidea i've had people ask me to do it (for $ no less) but i declined
06:01 JoeJulian Thanks. May I also fling myself naked and covered in jelly onto a fire ant nest?
06:01 purpleidea take photos
06:01 samppah :O
06:01 saurabh joined #gluster
06:01 JoeJulian There's a guy here pushing for saltstack. I'd be much more tempted.
06:02 JoeJulian Heya samppah
06:02 samppah howdy JoeJulian
06:03 samppah JoeJulian: may I ask what kind of stuff you are workin with now? :)
06:03 samppah cloud stuff I guess?
06:03 JoeJulian yep, we're competing with the other cloud providers using OCP hardware.
06:04 samppah are you using Gluster?
06:04 JoeJulian That's why they hired me.
06:04 samppah that hardware sure looks interesting.. I wish it was available in Finland too
06:04 samppah JoeJulian: well that's cool!
06:05 JoeJulian Each of those trays in the picture, 15 x 4TB drives, is one brick in RAID-6. 4 bricks per server, 8 servers per rack. Each module can hold 15 racks.
06:06 eseyman joined #gluster
06:06 samppah sweet
06:06 purpleidea JoeJulian: so do you have a full 28PB "module" to experiment on?
06:07 samppah how much useable disk space you have?
06:07 JoeJulian not yet. Probably not 'till the end of the year.
06:07 purpleidea samppah: do the math!
06:07 samppah purpleidea: too lazy now :)
06:07 purpleidea i can do it, hang on
06:07 samppah hehe
06:07 purpleidea JoeJulian will verify
06:08 JoeJulian I'm exhausted. I'll just agree with whatever you put up here. I've done the math in channel before...
06:08 purpleidea 12.480 PB with r == 2
06:09 purpleidea err 12.1875PB actually
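For reference, those figures fall out of arithmetic like the following (a sketch based on the tray/rack numbers JoeJulian gave above and purpleidea's assumption of replica 2; the second figure looks like the same capacity counted in binary units):

    echo $(( (15 - 2) * 4 ))          # 52 TB usable per 15-drive RAID-6 brick
    echo $(( 52 * 4 * 8 ))            # 1664 TB per rack (4 bricks x 8 servers)
    echo $(( 52 * 4 * 8 * 15 ))       # 24960 TB per 15-rack module before replication
    echo $(( 52 * 4 * 8 * 15 / 2 ))   # 12480 TB, i.e. about 12.48 PB usable at replica 2
    # 12480 / 1024 = 12.1875, so the corrected "12.1875PB" is presumably the same number in PiB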
06:09 samppah :P
06:10 purpleidea JoeJulian: well, ping me when you get your hardware, maybe i can put you in touch with performance people testing gluster at scale... hint: puppet-gluster might be useful for your tests
06:11 JoeJulian I've got hardware and we want to have performance specs.
06:11 LebedevRI joined #gluster
06:12 JoeJulian Two racks in Phoenix and another in NJ.
06:13 purpleidea are you guys using RHEL, or RHS, or CentOS ?
06:13 samppah JoeJulian: it would be nice to hear more about your setup but I don't want to bother you now if you are exhausted.. especially about performance so i can convince my boss about Gluster :)
06:13 JoeJulian ubuntu
06:13 purpleidea yeah fair enough. get some sleep! i gotta get back to hacking
06:13 JoeJulian samppah: I'll be doing up whitepapers before the end of the year.
06:14 JoeJulian Probably the end of July in fact.
06:15 samppah JoeJulian: sounds great :)
06:16 samppah oh well, it's Midsummer's Eve and stores are closing early, got to go to do some shopping before that
06:16 samppah good night JoeJulian
06:17 JoeJulian Gnight
06:18 ramteid joined #gluster
06:18 sh_t joined #gluster
06:20 jtux joined #gluster
06:23 haomaiwa_ joined #gluster
06:28 weykent joined #gluster
06:36 keytab joined #gluster
06:36 glusterbot New news from newglusterbugs: [Bug 1070685] glusterfs ipv6 functionality not working <https://bugzilla.redhat.com/show_bug.cgi?id=1070685>
06:46 raghu joined #gluster
06:49 ekuric joined #gluster
06:57 aravindavk joined #gluster
07:00 rjoseph joined #gluster
07:02 ricky-ti1 joined #gluster
07:03 hchiramm_ joined #gluster
07:10 ktosiek joined #gluster
07:17 dusmant joined #gluster
07:19 hagarth joined #gluster
07:27 _polto_ joined #gluster
07:34 stickyboy Ok, I parallelized my rsync (to my failed brick) and it's going much faster now.
07:34 stickyboy Pure rsync is slow as hell with large bricks.
07:35 andreask joined #gluster
07:36 glusterbot New news from newglusterbugs: [Bug 1111490] Dist-geo-rep : geo-rep xsync crawl takes too much time to sync meta data changes. <https://bugzilla.redhat.com/show_bug.cgi?id=1111490>
07:40 stickyboy Need to write a blog post and post to mailing list so people can find it in the future.
07:44 capri stickyboy, is it possible to rsync a failed brick?
07:49 samppah stickyboy: how did you parallelize it?
07:49 stickyboy samppah: https://gist.github.com/alanorth/e91daa44ebe4e60bf3dd
07:49 glusterbot Title: sync_brick.sh (at gist.github.com)
07:49 stickyboy capri: Yah, gluster's self-heal is slow when your entire brick needs healing :)
07:50 stickyboy So I shut down glusterd on the node with the dead brick
07:50 pureflex joined #gluster
07:50 stickyboy Then rsync the data manually (minus the .glusterfs dir), then self-heal will heal it when I bring it back up
07:51 stickyboy Basically, if you have millions+ files... rsync has to map them before it starts, and then it moves linearly in a single thread.
07:52 stickyboy So if you use find, then pipe to rsync with multiple rsync invocations, it goes much faster.
07:52 stickyboy Because you do part of rsync's job for it.
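The approach described above boils down to something like the following sketch. It is not the exact script from the gist linked earlier; the brick paths, batch size and worker count are illustrative assumptions.

    SRC=/bricks/brick1                  # surviving copy of the data
    DST=rebuilt-node:/bricks/brick1     # freshly formatted brick on the repaired node
    cd "$SRC" || exit 1
    # Build the file list ourselves, skip .glusterfs, and hand batches of paths
    # to several rsync workers running in parallel.
    find . -path ./.glusterfs -prune -o \( -type f -o -type l \) -print0 |
      xargs -0 -n 200 -P 8 sh -c 'rsync -a --relative "$@" '"$DST"'/' _
    # --relative recreates the directory structure under the destination;
    # gluster's self-heal is then left to rebuild the .glusterfs metadata.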
08:06 ghenry joined #gluster
08:10 RameshN joined #gluster
08:16 liquidat joined #gluster
08:18 [o__o] joined #gluster
08:21 ctria joined #gluster
08:32 Slashman joined #gluster
08:35 harish joined #gluster
08:37 nshaikh joined #gluster
08:49 Hoggins joined #gluster
08:51 Hoggins hello folks, I have two servers that I would like to use as bricks for GlusterFS (replication). They are connected to each other via 1Gbps links, but with 40ms latency (continental links). Do you think such a configuration is sustainable?
09:02 vimal joined #gluster
09:02 Ark joined #gluster
09:06 jtux joined #gluster
09:10 TvL2386 joined #gluster
09:11 lalatenduM joined #gluster
09:17 Norky Hoggins, that depends very much on the amount of change on the file systems
09:18 Norky normally over such a connection, people might run GlusterFS geo-replication, which at present involves only one-way synchronisation (based on rsync)
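For reference, geo-replication of that sort is driven from the master side with commands along these lines (a sketch of the 3.5-era CLI; the volume and host names are placeholders, and it assumes the passwordless-SSH and key-distribution prerequisites from the admin guide are already in place):

    # One-way, rsync-based replication from the local volume to a remote slave volume.
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status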
09:19 shubhendu_ joined #gluster
09:20 fraggeln_ do you need to do some optimizations on the client side when dealing with a shitload of small files?
09:21 fraggeln_ rsync of 130GB took like 17h
09:21 fraggeln joined #gluster
09:21 Norky ahh, looks like he left
09:24 dusmant joined #gluster
09:26 _polto_ joined #gluster
09:31 liquidat joined #gluster
09:37 pkoro joined #gluster
09:48 prasanth_ joined #gluster
09:50 jcsp joined #gluster
09:51 pureflex joined #gluster
09:56 andreask joined #gluster
09:57 nishanth joined #gluster
09:58 kanagaraj joined #gluster
10:08 qdk joined #gluster
10:14 shubhendu_ joined #gluster
10:15 ccha3 is it possible to force-set performance.cache-size while a client is connected?
10:16 ccha3 volume set: failed: One or more connected clients cannot support the feature being set. These clients need to be upgraded or disconnected before running this command again
10:18 dusmant joined #gluster
10:18 prasanthp joined #gluster
10:34 RameshN joined #gluster
10:36 deepakcs joined #gluster
10:53 JonathanD joined #gluster
11:02 _polto_ joined #gluster
11:02 _polto_ joined #gluster
11:12 vimal joined #gluster
11:21 Pupeno_ joined #gluster
11:22 B21956 joined #gluster
11:33 stickyboy fraggeln: Yah, I'm dealing with rsyncing a failed brick right now... had to parallelize rsync using find / xargs... now I get 1TB / hour or so over 10GbE copper. :D
11:34 stickyboy fraggeln: https://gist.github.com/alanorth/e91daa44ebe4e60bf3dd
11:34 glusterbot Title: Borrowed and adapted from here: https://wiki.ncsa.illinois.edu/display/~wglick/Parallel+Rsync (at gist.github.com)
11:37 glusterbot New news from newglusterbugs: [Bug 1111563] chgrp fails in SMB mount with vfs_glusterfs plugin <https://bugzilla.redhat.com/show_bug.cgi?id=1111563>
11:42 calum_ joined #gluster
11:42 edward1 joined #gluster
11:44 samppah stickyboy: thanks, that looks great.. i'll have to test this myself too :)
11:51 hagarth joined #gluster
11:52 pureflex joined #gluster
12:09 stickyboy samppah: :D
12:13 itisravi joined #gluster
12:16 vimal joined #gluster
12:22 dusmant joined #gluster
12:27 tom[] joined #gluster
12:28 mjsmith2 joined #gluster
12:28 tom[] hi
12:28 tom[] why does the getting started guide format bricks with xfs?
12:28 glusterbot tom[]: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:29 tom[] and hello to you too glusterbot
12:32 lava joined #gluster
12:34 diegows joined #gluster
12:38 theron joined #gluster
12:42 fim joined #gluster
12:43 fim hello all. I'm playing a bit with glusterfs these days and now I'm trying to evaluate the recovery options in case of failures.
12:44 fim am i right to assume that in replica N volumes, if a brick fails you have to remove N bricks in order to replace the faulty one?
12:45 fim I'm using 3.5.0, if that makes any difference
12:45 jph98 joined #gluster
12:53 firemanxbr joined #gluster
13:02 sjm joined #gluster
13:03 chirino joined #gluster
13:08 haomaiwang joined #gluster
13:09 kanagaraj joined #gluster
13:10 haomaiw__ joined #gluster
13:15 Ark joined #gluster
13:19 vimal joined #gluster
13:19 japuzzo joined #gluster
13:26 vimal joined #gluster
13:33 deeville joined #gluster
13:34 tdasilva joined #gluster
13:35 bennyturns joined #gluster
13:37 7F1AAGKBM joined #gluster
13:42 lmickh joined #gluster
13:47 theron joined #gluster
13:49 theron_ joined #gluster
13:51 dusmant joined #gluster
13:53 pureflex joined #gluster
13:57 theron joined #gluster
13:58 Norky tom[], XFS is the recommended brick filesystem for GlusterFS
14:02 kiwikrisp fim: No, you just have to replace the brick that failed and then allow the volume to heal. The number of replicas is just the number of copies of that brick that you have, each brick holding its own copy of the same data.
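In practice, swapping a dead brick for a new one and letting self-heal repopulate it looks roughly like this (a sketch for the 3.4/3.5 CLI; the volume, host and path names are placeholders):

    # Point the volume at the replacement brick; "commit force" skips data
    # migration because the old brick is already gone.
    gluster volume replace-brick myvol oldserver:/bricks/b1 newserver:/bricks/b1 commit force
    # Then let self-heal copy the data back onto the new brick and watch progress.
    gluster volume heal myvol full
    gluster volume heal myvol info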
14:03 tom[] Norky: others said the same in list emails that google found. i'll take it as gospel and go with it
14:04 lalatenduM joined #gluster
14:05 andreask joined #gluster
14:09 fim kiwikrisp: that's what I was hoping for but it wouldn't let me remove/add or replace a brick from a non-existing server
14:12 wushudoin joined #gluster
14:16 Norky tom[], you don't have to use XFS, I think people have used both ZFS and ext4, however if you're starting out, just use XFS :)
14:23 theron_ joined #gluster
14:26 simplycycling joined #gluster
14:29 simplycycling Morning folks...I'm trying to get a node going (that I've inherited from a previous admin), and gluster isn't coming back. I've read about a bug on Ubuntu 12.04 (which this node is) where it doesn't come back on reboot...what is the simplest, quickest way of remounting it?
14:30 daMaestro joined #gluster
14:34 Norky this machine is a glusterfs client?
14:34 simplycycling Yes
14:34 Norky need a bit more detail
14:34 simplycycling Sure, what do you need? fstab info?
14:34 Norky you're mounting using the FUSE native protocol, or NFS?
14:35 simplycycling I'm not sure...I'm fairly new here, and the admin who set this up is no longer with us.
14:35 simplycycling And unfortunately, there's not a lot of documentation on it in our wiki
14:35 Norky what's the line from the fstab?
14:35 deeville my 2-node replicated setup doesn't seem to be load-balancing
14:36 deeville one node is at 150% load..the other one is pretty much 0%
14:36 simplycycling admin1:/api /srv glusterfs defaults,_netdev 0 0
14:36 deeville all clients use glusterfs native client for mounting
14:36 deeville is there something I'm missing?
14:37 Norky simplycycling, that's native (FUSE-based) then
14:37 Norky what happens when you try to mount it manually?
14:38 Norky also, can other clients mount the volume?
14:38 simplycycling Let me check what the output was
14:38 simplycycling Ok, so I got this output:
14:39 simplycycling # mount admin1:/api /srv
14:39 simplycycling mount: wrong fs type, bad option, bad superblock on admin1:/api, missing codepage or helper program, or other error (for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program) In some cases useful info is found in syslog - try dmesg | tail  or so
14:39 simplycycling and when I tailed dmesg I got:
14:39 Norky specify the filesystem type with -t
14:39 simplycycling [26493730.006606] init: wait-for-state (mounting-glusterfsstatic-network-up) main process (1360) terminated with status 100
14:39 simplycycling [26493730.007967] init: mounting-glusterfs main process (1351) terminated with status 1
14:39 Norky otherwise "mount" will assume it's NFS
14:39 simplycycling Ah, so mount -t glusterfs?
14:40 Norky (this is fairly basic Unix sysadmin stuff)
14:40 Norky yes
14:40 hagarth joined #gluster
14:41 Norky most modern Linux distros will guess/work out what a filesystem is, but not in all cases, so you will need to explicitly set the FS type
14:41 Norky add the -v option too
14:41 Norky mount -v -t glusterfs admin1:/api /srv
14:42 simplycycling # mount -v -t glusterfs admin1:/api /srv
14:42 simplycycling extra arguments at end (ignored)
14:42 simplycycling Mount failed. Please check the log file for more details.
14:42 simplycycling checking the log
14:42 Norky hmm, curious
14:43 jag3773 joined #gluster
14:44 Norky ahh, mount/glsuterfs doesn't support -v
14:44 Norky ahh, mount.glusterfs doesn't support -v
14:44 simplycycling hmm...there is no relevant info in the log.
14:44 simplycycling Btw - when I ran it without the -v, I got the help output.
14:45 Norky "the log".... which log exactly?
14:45 simplycycling By the log, I actually looked at a couple - glusterfs, and syslog
14:45 Norky yeah, /var/log/glusterfs/glusterfs.log (by default) will be the most useful
14:47 simplycycling Here's the last couple of lines - it looks like it hasn't logged anything since the node was shut down
14:47 simplycycling [2014-06-20 14:41:50.104502] W [glusterfsd.c:838:cleanup_and_exit] (-->/usr/lib/libgfrpc.so.0(rpc_transport_notify+0x27) [0x7f53569945b7] (-->/usr/lib/libgfrpc.so.0(rpc_clnt_notify+0x114) [0x7f5356998694] (-->/usr/sbin/glusterfs(+0xd666) [0x7f535704a666]))) 0-: received signum (1), shutting down
14:47 simplycycling [2014-06-20 14:41:50.104580] I [fuse-bridge.c:4655:fini] 0-fuse: Unmounting '/srv'.
14:48 Norky you do have space on /var, right?
14:49 Norky df /var
14:49 simplycycling Yes, plenty
14:49 simplycycling Ok, I fat fingered it the first time I did the mount -t
14:50 Norky mount -t glusterfs admin1:/api /srv
14:50 simplycycling This time, I simply got # mount -t glusterfs admin1:/api /srv
14:50 simplycycling Mount failed. Please check the log file for more details.
14:50 simplycycling And the logs show a connection error
14:50 simplycycling [2014-06-20 14:48:21.442460] I [glusterfsd.c:1670:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.3.2
14:50 simplycycling [2014-06-20 14:48:21.452754] E [socket.c:1715:socket_connect_finish] 0-glusterfs: connection to  failed (Connection refused)
14:50 simplycycling [2014-06-20 14:48:21.452888] E [glusterfsd-mgmt.c:1778:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: Transport endpoint is not connected
14:50 simplycycling [2014-06-20 14:48:21.452921] I [glusterfsd-mgmt.c:1781:mgmt_rpc_notify] 0-glusterfsd-mgmt: -1 connect attempts left
14:50 simplycycling those look like the relevant lines
14:50 Norky yes
14:51 Norky portscan one of the gluster servers, e.g. nmap admin1
14:51 Norky it should show some open ports on 49xxx
14:52 Norky and what happens with other clients?
14:52 simplycycling PORT     STATE SERVICE
14:52 simplycycling 22/tcp   open  ssh
14:52 simplycycling 25/tcp   open  smtp
14:52 simplycycling 80/tcp   open  http
14:52 simplycycling 111/tcp  open  rpcbind
14:52 Norky easy with the pasting - use a pastebin service :)
14:52 simplycycling sorry
14:52 Norky no worries
14:53 simplycycling Yeah, it's not showing anything above 5666
14:53 pureflex joined #gluster
14:53 Norky you might need to tell nmap to explicitly scan the higher ports, I dunno its default behaviour on Ubuntu
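For example, glusterd listens on 24007, older releases allocate brick ports just above that, and 3.4 onwards starts the brick daemons at 49152, so an explicit port list keeps the scan meaningful (a sketch; widen the ranges to match the number of bricks):

    nmap -p 24007-24100,49152-49251 admin1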
14:54 simplycycling one moment
14:54 Norky firstly though, what happens with other clients?
14:56 plarsen joined #gluster
15:03 theron joined #gluster
15:05 elico joined #gluster
15:06 simplycycling Sheesh, that was frustrating - nmap refused to scan the higher ports from that node, I had to do it locally. Ok, so the other clients seem to be fine, and the only high port showing open is 56882/tcp open  unknown
15:09 Norky other clients are fine? Are you able to try un-mounting and remounting one of the other clients
15:10 Norky also, what does the server say about the status of the volume? "gluster volume status api"
15:10 zaitcev joined #gluster
15:12 ndk joined #gluster
15:14 simplycycling Meh...boss wants me on something else. Thanks for the help, hopefully I'll be able to get back to this later...
15:15 bala joined #gluster
15:16 ekuric left #gluster
15:24 halfinhalfout joined #gluster
15:27 bala joined #gluster
15:30 kiwikrisp fim: How many nodes are you replicating? Remember the order in which you attach the bricks is important as it dictates the replica groups. If you're testing brick failure then you should be able to replace the failed brick and keep moving. What's the error message you're getting when you try to remove|add|replace?
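The ordering point matters because bricks are grouped into replica sets in the order they are listed at creation time, e.g. (placeholder names):

    # With replica 2, consecutive bricks pair up: s1/s2 form one replica set and
    # s3/s4 form another, so each pair should live on different servers.
    gluster volume create myvol replica 2 \
        s1:/bricks/b1 s2:/bricks/b1 s3:/bricks/b1 s4:/bricks/b1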
15:31 sjm left #gluster
15:33 Pupeno joined #gluster
15:36 jbrooks joined #gluster
16:01 Slashman joined #gluster
16:01 vpshastry joined #gluster
16:02 jag3773 joined #gluster
16:10 zerick joined #gluster
16:14 coredump joined #gluster
16:42 _polto_ joined #gluster
16:42 jason___ joined #gluster
16:45 jobewan joined #gluster
16:46 Matthaeus joined #gluster
16:49 Mo_ joined #gluster
16:52 Matthaeus joined #gluster
16:53 plarsen joined #gluster
16:53 MacWinner joined #gluster
16:53 jobewan joined #gluster
16:55 jobewan joined #gluster
17:01 bala joined #gluster
17:08 glusterbot New news from newglusterbugs: [Bug 1111670] continuous log entries failed to get inode size <https://bugzilla.redhat.com/show_bug.cgi?id=1111670>
17:27 dtrainor joined #gluster
17:29 bennyturns joined #gluster
17:35 cmtime left #gluster
17:40 prasanthp joined #gluster
17:49 dtrainor joined #gluster
18:19 dtrainor joined #gluster
18:30 lmickh joined #gluster
18:34 pureflex joined #gluster
18:38 Matthaeus joined #gluster
18:52 Slashman joined #gluster
19:12 lmickh_ joined #gluster
19:12 rwheeler joined #gluster
19:27 sjm joined #gluster
19:44 jason____ joined #gluster
20:02 mjsmith2 joined #gluster
20:13 lmickh joined #gluster
20:23 SFLimey joined #gluster
20:34 pureflex joined #gluster
20:35 chirino joined #gluster
20:39 Matthaeus joined #gluster
20:45 _polto_ joined #gluster
20:45 _polto_ joined #gluster
20:53 CROS__ joined #gluster
20:57 edward1 joined #gluster
21:20 [ilin] left #gluster
21:24 dtrainor joined #gluster
21:32 ron-slc joined #gluster
21:39 rotbeard joined #gluster
21:45 jason__ joined #gluster
21:50 CROS__ left #gluster
21:52 jcsp1 joined #gluster
22:02 diegows joined #gluster
22:13 roost joined #gluster
22:29 jag3773 joined #gluster
22:33 zerick joined #gluster
22:35 pureflex joined #gluster
23:03 theron joined #gluster
23:10 vpshastry joined #gluster
23:17 Ark joined #gluster
23:25 sjm joined #gluster
23:48 Ark joined #gluster
23:48 vpshastry joined #gluster
