
IRC log for #gluster, 2014-02-13


All times shown according to UTC.

Time Nick Message
00:13 failshell joined #gluster
00:33 vpshastry joined #gluster
00:34 tokik joined #gluster
00:42 failshell joined #gluster
01:35 B21956 joined #gluster
01:42 failshell joined #gluster
02:00 jporterfield joined #gluster
02:10 ThatGraemeGuy_ joined #gluster
02:12 samppah_ joined #gluster
02:12 klaas_ joined #gluster
02:12 edong23_ joined #gluster
02:13 johnmark_ joined #gluster
02:13 micu1 joined #gluster
02:14 haomaiw__ joined #gluster
02:14 ccha2 joined #gluster
02:14 ccha2 joined #gluster
02:14 Philambdo1 joined #gluster
02:14 marbu joined #gluster
02:14 xymox joined #gluster
02:15 davinder joined #gluster
02:16 B219561 joined #gluster
02:16 jporterfield_ joined #gluster
02:17 gmcwhist_ joined #gluster
02:18 asku1 joined #gluster
02:19 __NiC joined #gluster
02:19 pixelgremlins joined #gluster
02:22 qdk_ joined #gluster
02:22 fidevo joined #gluster
02:23 solid_liq joined #gluster
02:23 cp0k joined #gluster
02:24 tg2 joined #gluster
02:25 smellis_ joined #gluster
02:26 slappers joined #gluster
02:26 REdOG joined #gluster
02:26 solid_liq joined #gluster
02:29 davinder2 joined #gluster
02:29 asku joined #gluster
02:32 smellis joined #gluster
02:36 xymox joined #gluster
02:37 Humble joined #gluster
02:37 fidevo joined #gluster
02:38 davinder joined #gluster
02:38 jporterfield joined #gluster
02:39 georgeh|workstat joined #gluster
02:40 tg2 joined #gluster
02:40 FrodeS joined #gluster
02:41 recidive joined #gluster
02:41 davinder2 joined #gluster
02:41 a2 joined #gluster
02:44 micu1 joined #gluster
02:44 REdOG_ joined #gluster
02:44 tru_tru joined #gluster
02:44 ulimit_ joined #gluster
02:44 natgeorg joined #gluster
02:44 natgeorg joined #gluster
02:44 edong23_ joined #gluster
02:44 samppah joined #gluster
02:45 xavih_ joined #gluster
02:45 twx_ joined #gluster
02:45 l0uis joined #gluster
02:46 gmcwhistler joined #gluster
02:46 harish joined #gluster
02:46 smellis joined #gluster
02:47 Philambdo joined #gluster
02:47 JonnyNomad_ joined #gluster
02:47 GabrieleV_ joined #gluster
02:47 johnmark joined #gluster
02:49 ultrabizweb_ joined #gluster
02:49 bfoster_ joined #gluster
02:51 wcchandler joined #gluster
02:51 failshell joined #gluster
02:52 failshell joined #gluster
02:53 ujjain joined #gluster
02:56 wgao joined #gluster
02:58 ThatGraemeGuy joined #gluster
02:58 XpineX joined #gluster
02:59 lyang0 joined #gluster
03:00 tokik joined #gluster
03:03 ira joined #gluster
03:04 bennyturns joined #gluster
03:04 solid_li1 joined #gluster
03:09 dbruhn joined #gluster
03:11 bharata-rao joined #gluster
03:19 jporterfield joined #gluster
03:21 gdubreui joined #gluster
03:28 shubhendu joined #gluster
03:29 haomaiwa_ joined #gluster
03:29 jmarley joined #gluster
03:29 codex joined #gluster
03:29 _NiC joined #gluster
03:30 marbu joined #gluster
03:30 Humble joined #gluster
03:30 fidevo joined #gluster
03:30 a2 joined #gluster
03:30 harish joined #gluster
03:30 bfoster_ joined #gluster
03:30 shubhendu joined #gluster
03:37 hchiramm_ joined #gluster
03:39 shylesh joined #gluster
03:46 hchiramm__ joined #gluster
03:48 davinder joined #gluster
03:55 jporterfield joined #gluster
03:56 itisravi joined #gluster
04:00 saurabh joined #gluster
04:15 spandit joined #gluster
04:15 jporterfield joined #gluster
04:18 sahina joined #gluster
04:23 pixelgremlins joined #gluster
04:27 Elico joined #gluster
04:36 vpshastry joined #gluster
04:37 kanagaraj joined #gluster
04:38 bala joined #gluster
04:39 ccha2 joined #gluster
04:50 pixelgremlins_ba joined #gluster
04:57 kdhananjay joined #gluster
05:05 sticky_afk joined #gluster
05:05 stickyboy joined #gluster
05:06 edong23 joined #gluster
05:07 nrdb joined #gluster
05:10 ndarshan joined #gluster
05:11 ppai joined #gluster
05:16 jporterfield joined #gluster
05:22 nrdb if I was to have multiple bricks per server with a replicate setup, is there a way to tell which bricks are pairs?
05:23 CheRi joined #gluster
05:25 ajha joined #gluster
05:25 sahina kkeithley, ping
05:26 aravindavk joined #gluster
05:29 mohankumar__ joined #gluster
05:33 hagarth joined #gluster
05:47 rastar joined #gluster
05:49 jporterfield joined #gluster
05:50 shyam joined #gluster
05:56 prasanth joined #gluster
05:58 raghu` joined #gluster
05:59 surabhi joined #gluster
06:06 mohankumar__ joined #gluster
06:11 bulde joined #gluster
06:17 dusmant joined #gluster
06:17 lalatenduM joined #gluster
06:20 hchiramm__ joined #gluster
06:26 nshaikh joined #gluster
06:27 aurigus joined #gluster
06:27 aurigus joined #gluster
06:29 badone__ joined #gluster
06:30 sputnik13net joined #gluster
06:35 mohankumar__ joined #gluster
06:37 jporterfield joined #gluster
06:39 rjoseph joined #gluster
06:47 kevein joined #gluster
06:49 overclk joined #gluster
06:56 jporterfield joined #gluster
06:57 vimal joined #gluster
06:59 glusterbot New news from newglusterbugs: [Bug 914641] Rebalance Stop Command does not give proper message <https://bugzilla.redhat.com/show_bug.cgi?id=914641>
07:13 lalatenduM joined #gluster
07:14 ktosiek joined #gluster
07:17 kanagaraj joined #gluster
07:17 JoeJulian ~brick-order | nrdb
07:17 glusterbot nrdb: I do not know about 'brick-order', but I do know about these similar topics: 'brick order'
07:17 JoeJulian ~brick order | nrdb
07:17 glusterbot nrdb: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
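For reference, a minimal sketch of the pairing rule glusterbot describes, using the same illustrative hostnames and paths as the factoid; consecutive bricks on the command line form each replica set:

    # (server1, server2) become one replica pair, (server3, server4) the next
    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 \
        server3:/data/brick1 server4:/data/brick1
    # confirm which bricks ended up grouped together
    gluster volume info myvol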
07:19 mohankumar__ joined #gluster
07:29 rossi_ joined #gluster
07:30 dusmant joined #gluster
07:32 tokik joined #gluster
07:34 spandit joined #gluster
07:48 ctria joined #gluster
07:54 psyl0n joined #gluster
07:58 mohankumar__ joined #gluster
07:58 FilipeCifali joined #gluster
08:00 Philambdo joined #gluster
08:00 kris joined #gluster
08:06 eseyman joined #gluster
08:10 itisravi joined #gluster
08:13 eastz0r joined #gluster
08:13 shubhendu joined #gluster
08:14 pixelgremlins_ba joined #gluster
08:16 gmcwhistler joined #gluster
08:16 johnmark joined #gluster
08:16 psyl0n joined #gluster
08:17 rjoseph joined #gluster
08:18 mohankumar__ joined #gluster
08:20 RobertLaptop joined #gluster
08:23 eastz0r joined #gluster
08:23 sputnik13net joined #gluster
08:24 Nev___ joined #gluster
08:26 mohankumar__ joined #gluster
08:26 msvbhat joined #gluster
08:30 kshlm joined #gluster
08:32 surabhi joined #gluster
08:33 mohankumar__ joined #gluster
08:34 eastz0r joined #gluster
08:38 mohankumar__ joined #gluster
08:43 mgebbe joined #gluster
08:52 ndarshan joined #gluster
08:54 kris joined #gluster
08:55 bulde joined #gluster
08:55 RameshN joined #gluster
09:01 jporterfield joined #gluster
09:06 ngoswami joined #gluster
09:06 mohankumar__ joined #gluster
09:10 andreask joined #gluster
09:11 tryggvil joined #gluster
09:12 liquidat joined #gluster
09:12 StarBeast joined #gluster
09:13 mohankumar__ joined #gluster
09:13 ndarshan joined #gluster
09:16 shyam joined #gluster
09:22 shubhendu joined #gluster
09:24 dusmant joined #gluster
09:24 sahina joined #gluster
09:25 qdk_ joined #gluster
09:29 nshaikh joined #gluster
09:31 vpshastry1 joined #gluster
09:35 vpshastry2 joined #gluster
09:56 Slash_ joined #gluster
10:01 sputnik13net joined #gluster
10:03 shubhendu joined #gluster
10:03 sahina joined #gluster
10:08 dusmant joined #gluster
10:13 baoboa joined #gluster
10:13 ira joined #gluster
10:14 pixelgremlins joined #gluster
10:20 jmarley joined #gluster
10:20 jmarley joined #gluster
10:26 harish joined #gluster
10:30 glusterbot New news from newglusterbugs: [Bug 1041109] structure needs cleaning <https://bugzilla.redhat.com/show_bug.cgi?id=1041109>
10:42 sahina joined #gluster
10:52 shylesh joined #gluster
10:55 kris joined #gluster
11:00 diegows joined #gluster
11:02 ppai joined #gluster
11:03 jporterfield joined #gluster
11:03 sahina joined #gluster
11:04 vpshastry1 joined #gluster
11:06 Nev___ joined #gluster
11:28 ndarshan joined #gluster
11:33 tryggvil joined #gluster
11:33 bfoster joined #gluster
11:44 bandini_onlinux joined #gluster
11:48 bandini_onlinux Hi, just want to clarify a few things. Firstly, to create a six node distributed (replicated) volume with a two-way mirror (i.e. gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6) - would the 5th and 6th bricks mirror each other?
11:49 tryggvil joined #gluster
11:51 kkeithley1 joined #gluster
11:55 Amanda joined #gluster
11:56 JonathanD joined #gluster
11:58 itisravi_ joined #gluster
12:02 mohankumar__ joined #gluster
12:07 SteveCooling HELO. We're having problems with stat() operations on some files on a GlusterFS volume mounted on a 32-bit client. Looks like inode numbers are too big. I've seen some info about that regarding NFS. Is it possible to "flick a switch" to make this work for our native GlusterFS client?
12:07 SteveCooling And if so, what drawbacks does it have?
12:09 edward1 joined #gluster
12:11 social SteveCooling: I think yes if it's nfs
12:11 SteveCooling natvie glusterfs mount
12:11 SteveCooling *native
12:12 social SteveCooling: nfs.enable-ino32 is for nfs dunno about native client
12:12 ppai joined #gluster
12:13 social http://joejulian.name/blog/broken-32bit-apps-on-glusterfs/ ugly :/
12:13 glusterbot Title: Broken 32bit apps on GlusterFS (at joejulian.name)
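As a hedged aside: the NFS-only switch social mentions is an ordinary volume option, so it would look roughly like the line below (volume name illustrative). It does not change anything for the native FUSE client, which is what the linked blog post works around.

    # 32-bit-safe inode numbers for Gluster's built-in NFS server only
    gluster volume set myvol nfs.enable-ino32 on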
12:13 dusmant joined #gluster
12:17 CheRi joined #gluster
12:23 mohankumar__ joined #gluster
12:25 andreask joined #gluster
12:31 pdrakeweb joined #gluster
12:32 Nev___ joined #gluster
12:33 eseyman joined #gluster
12:35 prasanth joined #gluster
12:35 DV joined #gluster
12:46 recidive joined #gluster
12:53 shubhendu joined #gluster
12:54 edward2 joined #gluster
12:57 rastar joined #gluster
13:01 shubhendu_ joined #gluster
13:02 glusterbot New news from newglusterbugs: [Bug 1064863] Gluster CLI to enable parent gfid feature for a volume <https://bugzilla.redhat.com/show_bug.cgi?id=1064863>
13:02 dusmant joined #gluster
13:04 sprachgenerator joined #gluster
13:08 DV joined #gluster
13:19 social kkeithley_: do you have 2min? just poke about how to resubmit patch and question about already merged ones with whitespace errors
13:20 rastar joined #gluster
13:21 kkeithley_ social: fire when ready
13:22 kkeithley_ to resubmit after you've changed a patch do (at top of source tree): `git add $path-to-change/foo.c; git commit -a --amend; ./rfc.sh`
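Written out as a sequence, the resubmission flow kkeithley_ describes might look like this (the file path is illustrative; rfc.sh is the review-submission script at the top of the glusterfs source tree):

    # fold the requested change into the existing commit, then resend to Gerrit
    git add xlators/some/dir/foo.c
    git commit -a --amend      # keeps the same Change-Id, so Gerrit adds a new patch set
    ./rfc.sh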
13:23 social ah so just amend
13:23 social ok and for the ones already merged? should I send fixes or just leave it be?
13:26 kkeithley_ If it was merged with white-space then I'd leave it be. But I want to know how you achieved it, they never let my patches go through with white-space. ;-) I guess my Jedi powers aren't as strong as I thought they were.
13:27 social dunno I remember that I did different patches for branches so maybe the whitespace error wasn't there
13:27 vpshastry joined #gluster
13:27 social I have to check
13:28 nrdb JoeJulian, thanks
13:29 vpshastry left #gluster
13:33 glusterbot New news from newglusterbugs: [Bug 1058227] AFR: glusterfs client invoked oom-killer <https://bugzilla.redhat.com/show_bug.cgi?id=1058227> || [Bug 1022535] Default context for GlusterFS /run sockets is wrong <https://bugzilla.redhat.com/show_bug.cgi?id=1022535>
13:36 ajha joined #gluster
13:36 social kkeithley_: it's there with whitespace errors ^_^
13:39 kkeithley_ okay, no big deal. They may get cleaned up the next time someone fixes something in the file
13:40 plarsen joined #gluster
13:49 shubhendu_ joined #gluster
13:50 hagarth joined #gluster
13:51 japuzzo joined #gluster
13:57 mohankumar__ joined #gluster
14:00 B21956 joined #gluster
14:04 nshaikh joined #gluster
14:06 burn420 joined #gluster
14:06 psyl0n joined #gluster
14:06 burn420 I have one brick that dropped out; how do I get it back online?
14:07 burn420 -----------------------------------------------------------------------------
14:07 burn420 Brick gluster1:/export/cluster1    49152    Y    1842
14:07 burn420 Brick gluster2:/export/cluster1    49152    Y    1404
14:07 burn420 Brick gluster3:/export/cluster1    N/A      N    N/A
14:07 burn420 Brick gluster4:/export/cluster1    49152    Y    1481
14:07 burn420 gluster3 is offline
14:07 burn420 and I keep seeing this in the logs on all nodes
14:07 burn420 [2014-02-13 14:08:49.916792] W [socket.c:514:__socket_rwv] 0-home-client-2: readv failed (No data available)
14:07 burn420 [2014-02-13 14:08:49.916839] I [client.c:2097:client_rpc_notify] 0-home-client-2: disconnected
14:08 burn420 gluster2 seems to be fine...
14:08 burn420 how do I get gluster3 back online?
14:08 burn420 its a distributed replicated volume
14:10 sahina joined #gluster
14:11 kanagaraj joined #gluster
14:18 shylesh joined #gluster
14:21 johnmilton joined #gluster
14:25 rwheeler joined #gluster
14:29 shubhendu_ joined #gluster
14:29 dbruhn joined #gluster
14:30 circ-user-BX87z joined #gluster
14:31 dbruhn joined #gluster
14:32 sroy joined #gluster
14:34 theron joined #gluster
14:36 ndevos burn420: if you have figured out why that brick exited, and having fixed that, you should be able to start the process with 'gluster volume start $VOLUME force' (that should start all missing processes for the $VOLUME)
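Concretely, the recovery ndevos describes would look something like this (using the $VOLUME placeholder from his message); force only launches missing brick processes, it does not alter the volume layout:

    # see which brick processes are offline
    gluster volume status $VOLUME
    # start any brick (and nfs/self-heal) processes that are not running
    gluster volume start $VOLUME force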
14:37 burn420 yeah I saw that there must be some other underlying issue
14:38 burn420 I keep seeing that readv failed (No data available)
14:38 burn420 on all nodes
14:38 Derek_ good day everyone
14:38 burn420 don't know why its doing that
14:39 ndevos burn420: maybe firewall, or selinux?
14:40 Derek_ I've inherited a gluster 3.2.6 setup with geo-replication between two remote sites
14:40 burn420 I tried the force and got this: volume start: home: failed: Volume id mismatch for brick gluster1:/export/cluster1. Expected volume id 9e0ffc91-9d46-477a-b8eb-dfd3b7d65765, volume id 91a78f1b-f644-4865-b329-887f6663bed2 found
14:40 hybrid512 joined #gluster
14:40 lalatenduM joined #gluster
14:41 Derek_ I'm showing the replication status as OK and I can see traffic going back and forth (over ssh) between master and slave
14:41 ndevos burn420: sounds like you used that for a different brick?
14:42 Derek_ but, no files are actually getting pushed to the slave
14:43 Derek_ any smoking guns I can look at?
14:43 burn420 I set it up last year; it may have failed when I originally set it up, that's the only thing I can think of
14:43 hchiramm_ joined #gluster
14:43 burn420 just noticed the one brick offline which I guess it has been offline since december lol
14:44 burn420 that is not even the node that is offline!
14:44 burn420 great
14:45 ndevos burn420: do you have the glusterd process running on gluster3?
14:45 burn420 yes glusterd (pid  1186) is running...
14:46 ndevos burn420: the volume-id is a xattr on the directory of the bricks, you can verify that with 'getfattr -m. -ehex -d /export/cluster1', the volume-id should be the same on all bricks
14:47 ndevos burn420: also, sometimes users have a volume-id set on any parent directory of the brick (no idea how that happens), you should check that as well
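A quick way to do that comparison is to run the same getfattr on every server and eyeball the trusted.glusterfs.volume-id line (brick path taken from this conversation):

    getfattr -m . -e hex -d /export/cluster1 | grep volume-id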
14:50 vipulnayyar joined #gluster
14:50 failshell joined #gluster
14:51 Derek_ good day everyone
14:51 bugs_ joined #gluster
14:51 burn420 gluster1 is the only one I see a volume-id on: trusted.glusterfs.volume-id=0x91a78f1bf6444865b329887f6663bed2
14:51 failshell joined #gluster
14:51 burn420 the rest don't have trusted..... volume id
14:52 burn420 the rest look similar to this [root@gluster2 ~]# getfattr -m. -ehex -d /export/cluster1
14:52 burn420 getfattr: Removing leading '/' from absolute path names
14:52 burn420 # file: export/cluster1
14:52 burn420 trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
14:52 burn420 trusted.glusterfs.quota.dirty=0x3000
14:52 burn420 trusted.glusterfs.quota.size=0x0000003dfb035200
14:53 ndevos thats strange, I dont think I've seen that before... maybe someone else has an idea about that
14:53 burn420 well I use quotas
14:53 burn420 I don't know if it has to do with that
14:54 burn420 # file: export/cluster1
14:54 burn420 trusted.afr.home-client-2=0x000000000000000000000251
14:54 burn420 trusted.afr.home-client-3=0x000000000000000000000000
14:54 burn420 trusted.glusterfs.dht=0x00000001000000007fffffffffffffff
14:54 burn420 trusted.glusterfs.quota.dirty=0x3000
14:54 burn420 trusted.glusterfs.quota.size=0x0000001c64a09a00
14:54 burn420 [root@gluster2 ~]# getfattr -m. -ehex -d /export/cluster1
14:54 burn420 getfattr: Removing leading '/' from absolute path names
14:54 burn420 # file: export/cluster1
14:54 burn420 trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
14:54 burn420 trusted.glusterfs.quota.dirty=0x3000
14:54 burn420 trusted.glusterfs.quota.size=0x0000003dfb035200
14:54 burn420 [root@gluster1 ~]# getfattr -m. -ehex -d /export/cluster1
14:55 burn420 getfattr: Removing leading '/' from absolute path names
14:55 burn420 # file: export/cluster1
14:55 burn420 trusted.glusterfs.dht=0x0000000100000000000000007ffffffe
14:55 burn420 trusted.glusterfs.quota.dirty=0x3000
14:55 burn420 trusted.glusterfs.quota.size=0x0000003df63cd600
14:55 burn420 trusted.glusterfs.volume-id=0x91a78f1bf6444865b329887f6663bed2
14:55 ndevos @paste
14:55 glusterbot ndevos: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
14:55 kdhananjay joined #gluster
14:55 burn420 ?
14:56 burn420 should I not be pasting?
14:56 ndevos well, it's not advised to be pasting something in the channel, use something like fpaste and provide the url :)
14:57 burn420 http://fpaste.org/76904/
14:57 glusterbot Title: #76904 Fedora Project Pastebin (at fpaste.org)
14:59 burn420 so if you run the force do you run it on the node with the problem ?
14:59 burn420 let me try that on gluster3
14:59 ndevos no, it does not matter where you run it
14:59 burn420 ah I figured
14:59 sarkis joined #gluster
15:00 burn420 I don't want to have to rebuild the whole thing
15:00 burn420 lol
15:00 burn420 I ran it on 3 and got this
15:00 burn420 volume start: home: failed: Failed to get extended attribute trusted.glusterfs.volume-id for brick dir /export/cluster1. Reason : No data available
15:02 burn420 no data on gluster1
15:02 diegows joined #gluster
15:02 burn420 guess I was wrong there is data
15:03 ndevos its strange that the volume-id is not set on the other bricks... and that the volume-id is different from what is expected...
15:03 burn420 yeah, if I look in the directory above, which is /export, it says something about selinux, but I disabled that on all servers and there is no firewall
15:03 ndevos normally, the volume-id is set in /var/lib/glusterd/vols/$VOLUME/info
15:04 ndevos that volume-id (prepended with '0x', and removing the '-') should be set as an xattr
15:05 ndevos I'd try to set the correct volume-id manually, cross-fingers and do the start-force again
15:05 ndevos but, I have no idea if that would work, or if it breaks things more....
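For completeness: the manual repair being hinted at here, and described in the bug report linked later in this log, is normally a setfattr of the id from /var/lib/glusterd/vols/$VOLUME/info with the dashes removed and 0x prepended. A sketch only, using the id that turns up later in this conversation; back up the brick's xattrs first:

    # volume-id in the info file: 9e0ffc91-9d46-477a-b8eb-dfd3b7d65765
    setfattr -n trusted.glusterfs.volume-id \
             -v 0x9e0ffc919d46477ab8ebdfd3b7d65765 /export/cluster1
    gluster volume start $VOLUME force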
15:05 burn420 it is in info and it is the same on all three
15:05 burn420 let me check 3 now
15:07 burn420 yeah all 4 exactly the same in that info file
15:11 burn420 where would that other id be stored any idea ?
15:12 burn420 in /var/lib/glusterd/vols/ there is only one volume which is home
15:12 burn420 maybe its on another box let me look
15:14 burn420 they all show the same thing except gluster1, which says 2 ids...
15:14 * burn420 kicks the wall!
15:14 burn420 ouch
15:15 Derek_ good day all... who do we talk to for troubleshooting issues?
15:18 burn420 you should probably tell people what problem you are having that might be a good start..
15:18 Derek_ yes I did... no responses
15:18 Derek_ I didn't know if there was a queue or something
15:19 burn420 I think its a bug @ndevos https://bugzilla.redhat.com/show_bug.cgi?id=991084
15:19 glusterbot Bug 991084: high, unspecified, ---, vbellur, NEW , No way to start a failed brick when replaced the location with empty folder
15:19 burn420 I am running 3.4.0
15:20 burn420 oh @Derek_ did not see it....
15:20 Derek_ I can repost them if you'd like
15:20 P0w3r3d joined #gluster
15:21 Derek_ I've inherited a gluster 3.2.6 setup with geo-replication between two remote sites
15:21 burn420 I was just looking
15:21 burn420 whats wrong with it ?
15:21 Derek_ ok, thanks. :)
15:21 burn420 oh file not getting pushed to slave
15:22 Derek_ correct.  but geo-replication says everything is ok
15:22 burn420 does gluster volume status show all nodes as online? It seems like it must be much different from my setup; I did not know there were master and slave in any setup
15:22 burn420 but I use distributed replicated volume
15:22 burn420 not sure if it is at all the same
15:22 burn420 mine seems to be broke... its replicating but a node is offline and having another issue which seems to be a bug
15:23 Derek_ ah, sorry to hear that
15:23 burn420 is what it is....
15:23 ndevos burn420: yeah, that looks pretty similar
15:24 Derek_ I have two servers at each site... then the master at site A is trying to geo-replicate to site B
15:24 burn420 I wonder if I upgrade if it will fix itself or if I should run that fix they put on the bug
15:24 Derek_ was just wondering if there's any command I can run to actually show it's working? :)
15:24 burn420 says JoeJulian on irc came up with the fix!
15:24 burn420 that guy here hmmmm'
15:24 burn420 damn sure is lol
15:25 burn420 I guess I will try the fix on one of the nodes... hopefully that fixes the id issue
15:26 burn420 woohoo trusted.glusterfs.volume-id=0x9e0ffc919d46477ab8ebdfd3b7d65765
15:27 burn420 appears to fix it on the nodes that had no id
15:28 burn420 whats the log say @Derek_
15:28 burn420 ?
15:28 burn420 geo log
15:28 burn420 I think its in like /var/log/glusterd/ some wheres
15:28 burn420 in there
15:29 Derek_ I didn't see anything nasty in the logs... just some warnings about connecting to localhost:xxxx
15:29 Derek_ which appears to be something that will be fixed in the next version > 3.2.6
15:29 Derek_ let me double check the logs again though...
15:30 Derek_ this might take a few minutes
15:30 ndevos burn420: nice!
15:30 burn420 yeah gonna try force in a sec
15:30 burn420 don't know if I need to do it on gluster1 also
15:31 burn420 volume start: home: success
15:31 burn420 I had to do it on gluster1 also
15:31 burn420 it's fixed! woohooo Thanks for your help @ndevos
15:31 sprachgenerator joined #gluster
15:32 Derek_ congrats
15:34 burn420 thanks
15:34 burn420 @ndevos do you know if I can just run yum upgrade to upgrade it from 3.4.0 to 3.4.2, and do I need to stop the volume or anything?
15:34 burn420 or anyone know for that matter, running CentOS 6.5
15:39 spiekey joined #gluster
15:39 spiekey Hello!
15:39 glusterbot spiekey: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:40 spiekey i mounted a glusterfs and df tells me its 10GB big. Who sets the size here?
15:40 spiekey the Gluster?
15:40 spiekey gluster help does not say anything about a size
15:41 fidevo joined #gluster
15:41 ndevos burn420: you should be able to just update the packages and restart all the processes (service glusterd stop; killall glusterfsd; service glusterd start), or reboot if you do a kernel update anyway
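On CentOS 6 that would translate into roughly the following, run one server at a time so its replica partner keeps serving (treat this as a sketch rather than an official procedure):

    service glusterd stop
    killall glusterfsd            # stop the brick processes glusterd left running
    yum update "glusterfs*"
    service glusterd start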
15:41 burn420 thanks appreciated...
15:41 lpabon joined #gluster
15:42 spiekey gluster volume quota HeldeleClusterVoume list --> cannot list the limits, quota is disabled
15:42 ndevos spiekey: 'df' returns the filesystem statistics from the underlaying filesystem that is used for the bricks - that is one reason why it is recommended to have a filesystem per brick (and nothing else on it)
15:43 spiekey holy…sorry
15:43 spiekey i kept reading the wrong line! Of course!
15:43 spiekey its only 9GB big! I feel stupid! Sorrry!!
15:44 spiekey thank you
15:44 ndevos :)
15:45 jobewan joined #gluster
15:46 hchiramm__ joined #gluster
15:48 jbrooks joined #gluster
15:48 tryggvil joined #gluster
15:50 psyl0n joined #gluster
15:50 psyl0n joined #gluster
15:52 tryggvil joined #gluster
15:57 tryggvil joined #gluster
16:07 Elico joined #gluster
16:13 ProT-0-TypE joined #gluster
16:18 psyl0n joined #gluster
16:19 tdasilva joined #gluster
16:21 davinder joined #gluster
16:24 mohankumar__ joined #gluster
16:26 vpshastry joined #gluster
16:32 wralej joined #gluster
16:32 mohankumar__ joined #gluster
16:42 rfortier joined #gluster
16:48 social kkeithley_: another stupid one, https://bugzilla.redhat.com/show_bug.cgi?id=1057846 < I guess we should deffinetly grab this one?
16:48 glusterbot Bug 1057846: urgent, urgent, ---, pkarampu, ASSIGNED , Data loss in replicate self-heal
16:55 Faed joined #gluster
16:56 Faed Hi. Has anyone used SSD for caching access to glusterfs bricks?
16:59 zerick joined #gluster
17:02 xymox joined #gluster
17:03 cp0k joined #gluster
17:09 xymox joined #gluster
17:10 eseyman joined #gluster
17:13 KyleG joined #gluster
17:13 KyleG joined #gluster
17:16 rossi_ joined #gluster
17:17 lyang0 joined #gluster
17:19 mohankumar__ joined #gluster
17:26 kkeithley_ social: yup, it's already in the tracker BZ https://bugzilla.redhat.com/show_bug.cgi?id=1060259
17:26 glusterbot Bug 1060259: unspecified, unspecified, ---, kkeithle, NEW , 3.4.3 tracker
17:26 mohankumar__ joined #gluster
17:32 Mo__ joined #gluster
17:33 JMWbot joined #gluster
17:33 JMWbot I am JMWbot, I try to help remind johnmark about his todo list.
17:33 JMWbot Use: JMWbot: @remind <msg> and I will remind johnmark when I see him.
17:33 JMWbot /msg JMWbot @remind <msg> and I will remind johnmark _privately_ when I see him.
17:33 JMWbot The @list command will list all queued reminders for johnmark.
17:33 JMWbot The @about command will tell you about JMWbot.
17:34 DV joined #gluster
17:39 SFLimey joined #gluster
17:43 davinder joined #gluster
17:44 _Bryan_ joined #gluster
17:46 mohankumar__ joined #gluster
17:54 zaitcev joined #gluster
17:56 kmai007 joined #gluster
17:56 kmai007 when a fuse client disconnects from a brick, does it ever reestablish that connection again, and is it logged anywhere?
17:57 kmai007 this is what is in my FUSE client log, and I haven't seen it reestablish a connection.
17:57 kmai007 Feb 13 02:23:45 omhq17b6 GlusterFS[4530]: [2014-02-13 08:23:45.686756] C [client-handshake.c:127:rpc_client_ping_timer_expired] 0-devstatic-client-1: server 69.58.224.72:49153 has not responded in the last 42 seconds, disconnecting.
17:58 kmai007 though the storage is UP, and I can ping its IP address from the client
17:58 mohankumar__ joined #gluster
18:03 kmai007 this happened all within the last 5 mins at 11:54AM central time http://fpaste.org/76991/31455713/
18:03 glusterbot Title: #76991 Fedora Project Pastebin (at fpaste.org)
18:13 Staples84 joined #gluster
18:13 burn420 joined #gluster
18:21 kris1 joined #gluster
18:32 sprachgenerator joined #gluster
18:37 rotbeard joined #gluster
18:39 P0w3r3d joined #gluster
18:40 mohankumar__ joined #gluster
18:46 JoeJulian kmai007: Yes, it tries every 3 seconds. It's logged in the client log /var/log/glusterfs/mount-point.log
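Concretely, those reconnect attempts can be watched in the client log, which is named after the mount point (the path below is illustrative; kmai007's setup forwards to syslog, so the same messages may land there instead):

    # for a volume fuse-mounted at /mnt/myvol
    tail -f /var/log/glusterfs/mnt-myvol.log | grep -Ei 'connect|disconnect'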
18:47 sprachgenerator joined #gluster
18:47 JoeJulian kmai007: which version is that? It's not current.
18:47 JoeJulian Oh, wait... nevermind. It is.
18:49 JoeJulian kmai007: You do realize that the last entry in that paste is 20 seconds prior to the date you included, right?
18:51 REdOG joined #gluster
18:53 sputnik13net joined #gluster
18:58 kmai007 JoeJulian: Thanks for that briefing
19:01 kmai007 JoeJulian: The version i'm using is glusterfs 3.4.1 built on Oct 28 2013 11:12:17, Do you recommend updating?
19:09 mohankumar__ joined #gluster
19:13 JoeJulian kmai007: Usually, but nothing that should affect your query.
19:16 mohankumar__ joined #gluster
19:19 REdOG a few of my volumes have constant errors in the log [2014-02-12 22:47:21.199292] E [afr-open.c:273:afr_openfd_fix_open_cbk] 0-form-replicate-0: Failed to open /form.img on subvolume form-client-0
19:20 REdOG the replicated data seems to be fine ...
19:20 REdOG should I worry or what is this error from?
19:21 REdOG im getting them at pretty high rates
19:21 REdOG and those logs are HUGE now
19:27 mohankumar__ joined #gluster
19:35 psyl0n joined #gluster
19:38 mohankumar__ joined #gluster
19:39 tdasilva joined #gluster
19:43 rossi_ joined #gluster
19:43 lyang0 joined #gluster
19:45 kris1 joined #gluster
19:54 _dist joined #gluster
19:56 gdubreui joined #gluster
19:58 semiosis @seen jayunit100
19:58 glusterbot semiosis: jayunit100 was last seen in #gluster 2 weeks, 0 days, 2 hours, 55 minutes, and 39 seconds ago: <Jayunit100> hi purple idea, I'm trying to get it running on vagrant-gluster-puppet on VBox .  will let you know.
20:05 _dist semiosis: I'm having an issue https://dpaste.de/XHYW where it looks like my libgfapi client isn't switching over to the other node when one goes down. Didn't notice it before because of a quirk in the way I was testing.
20:05 glusterbot Title: dpaste.de: Snippet #257266 (at dpaste.de)
20:05 _dist Maybe I have something set up wrong?
20:05 semiosis [2014-02-13 19:58:32.638575] E [socket.c:2157:socket_connect_finish] 0-management: connection to 192.168.50.1:24007 failed (Connection refused)
20:06 _dist Right, but it should switch to the other node, 192.168.50.2
20:06 semiosis conn refused usually indicates... 1. glusterd is not running on the host, or 2. iptables is rejecting port 24007 on the host, or 3. another host has the IP of the host you want to reach (ip conflict)
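Each of those three causes can be checked in a minute or two; a rough sketch from the client and server side (the IP is the one in the paste above):

    # 1. is glusterd running and listening on 24007?
    service glusterd status              # on 192.168.50.1
    telnet 192.168.50.1 24007            # from the client
    # 2. is iptables rejecting the port?
    iptables -L -n | grep 24007          # on 192.168.50.1
    # 3. is another host answering for that IP?
    arp -n 192.168.50.1                  # from the client; check the MAC is the expected one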
20:07 _dist Right, I shut the machine down with shutdown -r -time 0 (on purpose)
20:07 semiosis ah
20:07 semiosis the client should always be connected to all bricks, there's no "switching over"
20:08 semiosis normally this would be shown in the beginning of the client log, when it first attempts to connect to all the bricks
20:08 semiosis can you truncate that log file & restart?
20:08 semiosis it should say all the bricks it tries to connect to and whether those conns are OK or failed
20:08 _dist if they fail, does it "correct" later down the road should they become available ?
20:09 semiosis it continues trying to reach the down brick
20:09 semiosis should even be doing dns lookups in case the ip for that hostname changes
20:09 _dist right, they are by hostname, let me read through the log in detail
20:11 tryggvil joined #gluster
20:11 _dist semiosis: should it specifically call out each IP or hostname in the log? can I search or grep for it ?
20:11 semiosis worth a try i suppose... i think it will have the whole brick address, server:/path
20:11 semiosis though not sure
20:12 _dist is there anywhere I can look using gluster tools to see what connections that specific use (a specific libgfapi) has?
20:14 semiosis netstat -anp is what i use
20:15 psyl0n joined #gluster
20:15 psyl0n joined #gluster
20:17 wrale is there any RAM per brick rule of thumb?  My bricks are 3TB SATA, and I'd like to have two replicas.. Six nodes will hold two bricks per...
20:17 semiosis enough so that ,,(joe's performance metric) is obtained
20:17 glusterbot nobody complains.
20:19 semiosis wrale: having free ram available will help with page caching, which might benefit a read-heavy workload
20:19 semiosis hard to say how much you should have though
20:19 wrale i'm hoping to use cgroups to limit ram, which is why i ask.. the servers have 256GB per, but ovirt will be on the same server doing its thing.. swap will be disabled..
20:20 wrale right on.. thanks
20:20 _dist semiosis: Looks like it's got all the connections it would need, https://dpaste.de/h4MR I wouldn't expect one node .1 or .2 here to make it fail would you ?
20:20 glusterbot Title: dpaste.de: Snippet #257267 (at dpaste.de)
20:21 kris1 joined #gluster
20:21 wrale i think i'll start with 16GB
20:22 wrale (per server)
20:23 _dist semiosis: Wait, I'm wrong, this is a more accurate view for just the one vm: https://dpaste.de/rOMZ. Looks like there is only 1 connection from .2 (the hypervisor) to .1 @ 24007, not .2 -> .2:24007; that means libgfapi isn't connected to both, right?
20:23 glusterbot Title: dpaste.de: Snippet #257268 (at dpaste.de)
20:27 dbruhn grrr, Redhat....
20:28 kkeithley_ dbruhn: ???
20:28 _dist semiosis: This is the qemu line that attaches the drive "-drive file=gluster://melchior-gluster/gvms1/vms/win.qcow2,if=none,id=drive-virtio0,format=qcow2,aio=native,cache=none"
20:29 semiosis _dist: idk qemu :(
20:29 dbruhn They just called me and told me they are forcing me to upgrade from self support workstation to self support server for all of my systems.
20:29 * kkeithley_ doesn't know what that entails, or even really what it means.
20:31 dbruhn I run 6.5 workstation on all of my gluster servers, with self support. Means I only get the workstation repo's and updates. It's $179 per server a year. Server is different repo's, and they charge $349 per system.
20:31 kkeithley_ ouch
20:32 KyleG That's redhat pricing for gluster support?
20:32 dbruhn no, that's just for Redhat OS
20:32 KyleG oo
20:32 dbruhn the worst part is by doing that it breaks my current repo's and I have no idea what I actually need to do to adjust and get on the new repos
20:32 dbruhn just a pain in the ass
20:33 dbruhn I should just cancel my subscriptions and adjust to the cent repos
20:34 _dist semiosis: ok, but that netstat "rOMZ", does it look to you that I'm missing a connection from .2 --> .2:24007 ?
20:36 semiosis _dist: afaict gluster clients only maintain a single connection to a glusterd (24007) -- i guess that should switch over!
20:36 semiosis not too sure about this
20:36 semiosis never ran into a problem with it
20:36 semiosis my clients survive server failures OK
20:37 _dist semiosis: ok, so it does switch over. The log appears to indicate it isn't trying to, was that your take as well?
20:37 semiosis idk what to say
20:37 semiosis maybe this is a reason to use rr-dns
20:38 _dist no problem, better than pretending you know the answer! :)
20:38 semiosis maybe it's just waiting for that server to reconnect
20:38 semiosis guesses
20:38 _dist yeah that's where I'm at
20:38 _dist I wouldn't want to use rr-dns, doesn't make sense with gluster. Maybe that network timeout thing joe julian had on his blog is the reason, but I waited more than 42 seconds and it still didn't note in the log that it "connected to .2"
20:39 _dist I'll need to test this specific problem a bunch to get to the bottom of it. I think this is (hopefully) my last pre-golive issue
20:39 semiosis rr-dns for the mount server address only, not for brick server addrs
20:39 semiosis ,,(rrdns)
20:39 glusterbot You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
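The gist of the tutorial: give every server an A record under one round-robin name and use that name only as the mount (volfile) server; once the volfile is fetched, the client talks to every brick directly. A minimal sketch with illustrative names, using the volume from this conversation:

    # DNS: gluster.example.com -> 192.168.50.1
    #      gluster.example.com -> 192.168.50.2
    # fuse mount via the round-robin name; it is only used to fetch the volfile
    mount -t glusterfs gluster.example.com:/gvms1 /mnt/gvms1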
20:39 semiosis maybe JoeJulian can clear this up
20:40 _dist semiosis: sure, but my understanding was that wouldn't be necessary. Yeah I assume he would know the answer, that article appears to be for the fuse mount
20:43 REdOG I have errors like this http://pastie.org/pastes/8466585 in my nfs.log
20:43 glusterbot Title: #8466585 - Pastie (at pastie.org)
20:43 REdOG is that anything to be worried about?
20:51 _dist semiosis: thanks for the rr suggestion, to move forward I'm going to do that for now, I'm just surprised libgfapi doesn't do that on its own
20:51 semiosis yw.  please let me know if that solves it
20:55 mohankumar__ joined #gluster
20:56 lawrie joined #gluster
21:01 JoeJulian _dist: just got back. Yes, my rrdns usage is via fuse. I haven't switched anything to gfapi yet.
21:01 kmai007 JoeJulian: you replied to Gluster-users Digest, Vol 70, Issue 12, how do i "Check the extended attributes to see where it's pointing."
21:01 JoeJulian @extended attributes
21:01 glusterbot JoeJulian: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
21:02 JoeJulian ... although you probably want to omit "-e hex"
21:02 kmai007 thank you
21:02 JoeJulian Unless you're just THAT good at reading hex into ascii.
21:02 kmai007 nope not i
21:02 _dist JoeJulian: libgfapi needs it though? I asume in a replicate volume it would just pick a new path when one goes down
21:02 _dist assumed*
21:03 JoeJulian @mount server
21:03 glusterbot JoeJulian: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
21:03 bennyturns joined #gluster
21:03 _dist JoeJulian: well, that's not what's happening :)
21:03 JoeJulian So if you specify a server that isn't listening, how would gfapi know?
21:04 JoeJulian be warned, I only scanned the scrollback so I might be missing something there.
21:04 _dist JoeJulian: It's already running let's say, then the gluster node it's currently feeding from powers down
21:05 _dist JoeJulian: In this case, I got a log error about connection refused, and the VM is now going to crash, or remount ro
21:05 JoeJulian Ok, so replicated volume (symmetrical server1 and server2), fd's connected to, for instance, server1:/brick1. You shut down server1 the image should continue uninterrupted from server2:/brick1.
21:06 Cenbe We are storing VM images on a RAID 5 array and are considering using Gluster for this. Is it good or bad to have the bricks on RAID? (i.e. is it just redundant)?
21:06 _dist JoeJulian: yeah that's what I expected, This is the qemu line that attaches the drive "-drive file=gluster://melchior-gluster/gvms1/vms/win.qcow2,if=none,id=drive-virtio0,format=qcow2,aio=native,cache=none"
21:07 _dist JoeJulian: but what's actually happening is when that gluster brick goes down, vm is done
21:07 JoeJulian Cenbe: As with everything, that depends. It depends on drive speed, network speed, number of clients, number of servers, phase of the moon....
21:08 JoeJulian Cenbe: Ok, maybe not that last one... your fault tolerance, ...
21:08 Cenbe heh
21:08 JoeJulian _dist: I'm not even sure what gfapi does for a client log...
21:09 _dist JoeJulian: if you have a minute https://dpaste.de/XHYW https://dpaste.de/rOMZ
21:09 glusterbot Title: dpaste.de: Snippet #257266 (at dpaste.de)
21:10 JoeJulian Cenbe: One advantage to having a raid behind gluster is that if a drive fails, you can replace it without needing to repair the volume, resulting in decreased load on your servers during that (hopefully limited) period.
21:11 JoeJulian _dist: which server is 192.168.50.1?
21:11 kmai007 JoeJulian: looking from brick replicate-0, the file xattr for the T bit; looks like trusted.glusterfs.dht.linkto="devstatic-replicate-1"
21:11 kmai007 should I rebalance?
21:12 _dist JoeJulian: .1 is not the local server, .2 is. I say local because both are also hypervisors. KVM is running on .2 for that netstat
21:12 JoeJulian kmai007: depends on how many indirections there are and whether or not that matters to your use case. I haven't rebalanced in 3 years.
21:13 JoeJulian _dist: which one is melchior?
21:13 Cenbe thanx JoeJulian
21:13 kmai007 in the directory, it looks like its 50/50 between replicate 0 and 1
21:13 _dist JoeJulian: melchior = 192.168.50.1, balthasar = 192.168.50.2
21:14 JoeJulian _dist: Wow... I'd call that a bug if that's working the way it looks like it is.
21:14 JoeJulian And a very surprising one at that.
21:14 zerick joined #gluster
21:15 JoeJulian _dist: Can you go back to the disconnection in that paste? I'm curious what happens immediately after disconnect.
21:15 semiosis JoeJulian: gfapi makes a log pretty much like a fuse client. https://github.com/gluster/glusterfs/blob/master/api/src/glfs.h#L193
21:15 glusterbot Title: glusterfs/api/src/glfs.h at master · gluster/glusterfs · GitHub (at github.com)
21:15 JoeJulian Thanks
21:15 semiosis JoeJulian: how you call that function is up to the client implementation of course
21:16 _dist JoeJulian: sure, what extra info do you want ?
21:16 _dist JoeJulian: After the disconnect it just says "[2014-02-13 20:00:00.013803] W [socket.c:514:__socket_rwv] 0-management: readv failed (No data available)" over and over again
21:16 JoeJulian semiosis: I had a (probably worthless) thought... Using your work - it should be fairly simple to put elasticsearch data on a gluster volume using libgfapi, right?
21:17 _dist JoeJulian: I felt the same way, surprised. If it a bug, and not a mistake on my part, it's an obvious one. I assume you might have a kvm setup you can test on to verify?
21:17 semiosis JoeJulian: i had the same thought.  the answer is: any program that runs on the JVM and uses the new java file api can use glusterfs without modification
21:18 semiosis JoeJulian: a java nio2 storage gateway seems like a good idea for ES
21:20 semiosis JoeJulian: i've already made lots of progress on a java nio2 backed file input plugin for logstash... :)
21:20 jobewan joined #gluster
21:22 MacWinner joined #gluster
21:25 recidive joined #gluster
21:26 _dist JoeJulian: do you think you'll have time to confirm this behaviour? Our running gluster is 3.4.2 latest, the only other point of failure I could imagine (in setup) is the version of gluster that was used to compile the qemu.
21:26 JoeJulian _dist: working on it...
21:26 _dist JoeJulian: that's awesome, let me know if I can help in any way.
21:26 Matthaeus joined #gluster
21:27 JoeJulian I could use a sandwich...
21:28 _dist I could mail you one I suppose, overnight delivery it _might_ be ok
21:28 JoeJulian hehe
21:29 dbruhn just get his address and have Jimmie Johns deliver it ;)
21:29 JoeJulian lol
21:33 kmai007 wow somebody sent me a sandwich
21:33 kmai007 thanks guys
21:39 kmai007 can somebody tell me when dht link pointers show up, and is it appropriate to keep them?  Is it imperative that I run 'rebalance' to address them?  I don't believe there is an issue from my customers, but I don't want it to burn me in the long run....
21:40 semiosis kmai007: if you're not adding bricks, then they probably came from renames
21:40 JoeJulian It is appropriate to keep them and it is not imperative to rebalance.
21:41 JoeJulian kmai007: You can learn a bit more about dht at http://joejulian.name/blog/dht-misses-are-expensive/
21:41 glusterbot Title: DHT misses are expensive (at joejulian.name)
21:41 kmai007 how would I address them?  any docs is much appreciated
21:41 kmai007 thank you
21:43 kmai007 what my users conjure up when you provide them with a NAS
21:44 kmai007 i think what is happening is they are using SVN ontop of samba, on top of a gluster fuse mount
21:45 kmai007 files are being written and then renamed....ugh!!!!!
21:47 diegows joined #gluster
21:50 kmai007 seriously JoeJulian  let me send you a JJ sandwich, you've been great this past year
21:53 JoeJulian Thanks, they don't deliver here.
21:54 zapotah joined #gluster
21:54 zapotah joined #gluster
21:54 semiosis rly?  no drone sandwich deliveries?
21:54 semiosis thought it was all drones all the time up there
21:54 JoeJulian they don't deliver to either of my offices... no wonder they're "really fast".
22:03 jag3773 joined #gluster
22:06 Cenbe Gluster admin guide on docs page is for 3.3, is it OK to use for 3.4?
22:06 tryggvil joined #gluster
22:07 failshel_ joined #gluster
22:15 JoeJulian No, you will be severely flogged...
22:15 JoeJulian ... well, ok... I guess you may... But only you.
22:23 quique joined #gluster
22:28 quique gluster 3.4.2: I have a volume created via cli with: gluster volume create testvol1 replica 2 transport tcp gluster1.domain.com:/mnt/gluster1/testvol1 gluster2.domain.com:/mnt/gluster1/testvol1; if I wanted to change something in the volfile do i do it in /etc/glusterfs/glusterd.vol or in one of the files in /var/lib/glusterd/vols/testvol1?
22:30 mohankumar__ joined #gluster
22:37 kris joined #gluster
22:39 _dist JoeJulian: Thanks to a suggestion from Matthaeus, I have a band-aid solution to my problem of using localhost. However, if the problem is persistent it will mean a) I should probably wait until newly upped hosts are "healed" before migrating to them and b) current kvm connections will never recalc better routes
22:40 kris joined #gluster
22:41 JoeJulian quique: preferably you use the cli to make modifications.
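The usual pattern is to let the CLI regenerate and distribute the volfiles instead of editing anything under /var/lib/glusterd by hand; for example (the option shown is only illustrative, not the SSL paths quique is after):

    # glusterd rewrites the volfiles on every peer after a set
    gluster volume set testvol1 nfs.disable on
    # review the options currently applied
    gluster volume info testvol1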
22:42 quique quique: i would like to change the value of DEFAULT_CERT_PATH, DEFAULT_CA_PATH, and DEFAULT_KEY_PATH on a per volume basis, it can be done via the glusterfs cli, but I'm not sure how, do you know?
22:43 quique JoeJulian^
22:43 JoeJulian Damn. I feel like I'm falling behind the curve here... I haven't played with that yet..
22:46 qdk_ joined #gluster
22:49 REdOG E [posix.c:1850:posix_open] 0-v7-posix: open on /awz0/v7_65G/brick6/v7.img: Invalid argument ?
22:52 tru_tru joined #gluster
22:57 sarkis joined #gluster
22:58 Matthaeus1 joined #gluster
22:59 mohankumar__ joined #gluster
22:59 Matthaeus joined #gluster
23:10 theron joined #gluster
23:16 khushildep joined #gluster
23:18 failshell joined #gluster
23:26 johnbot11 joined #gluster
23:36 REdOG my brick logs are noisy
23:38 JoeJulian REdOG: Looks like your brick filesystem isn't being very posix maybe.
23:38 tokik joined #gluster
23:38 REdOG hmm
23:39 REdOG its zfs
23:39 _dist REdOG: did you use zvol instead of dataset?
23:39 REdOG yes
23:40 REdOG is that why?
23:41 diegows joined #gluster
23:42 kris joined #gluster
23:42 _dist no, that'd make it ok
23:43 _dist what filesystem did you put on your zvol?
23:43 REdOG xfs iirc
23:43 _dist afaik xfs is pretty posix compliant :)
23:44 gdubreui joined #gluster
23:45 JoeJulian Yep, should be...
23:47 _dist JoeJulian: I'm doing more research and I'm not 100% yet on libgfapi not behaving for downed nodes, is 24007 the _only_ port it uses ?
23:47 JoeJulian _dist: definitely not. ,,(ports)
23:47 glusterbot _dist: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
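In firewall terms that factoid works out to roughly the rules below for a 3.4 server (the brick range grows with the number of bricks on the host, so widen 49152:49160 as needed; a sketch, not a hardened ruleset):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (+rdma)
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT   # brick ports, 3.4 and later
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # portmapper
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT          # NFS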
23:48 _dist JoeJulian: I'm trying to find which one it uses, or is it random in a specific range?
23:48 JoeJulian gluster volume status
23:48 _dist the brick's port?
23:49 JoeJulian but it can change when the server is rebooted.
23:49 _dist or the NFS server port
23:49 JoeJulian brick port
23:49 _dist Ok cool, I'm going to test this in an isolated setup with no other known issues. I really find it hard to believe that libgfapi would misbehave on this most important of functions
23:51 hagarth joined #gluster
23:53 JoeJulian Yep, finally got a valid test. killed each server and healed in between. read/write tests continued without a hiccup.
23:55 _dist JoeJulian: Ok, that's great to hear. I'll verify on my system as well; like I said, it could be that the qemu is compiled with an old version of the api (I'm using proxmox)
23:56 _dist qemu 1.7.0 though
23:57 JoeJulian Shouldn't matter. It's a library and the api hasn't changed. If an implementation has, it'll be in the current library.
23:57 mohankumar__ joined #gluster
23:57 pdrakeweb joined #gluster
23:58 JoeJulian 1.7? I'm running 1.4.2
