
IRC log for #gluster, 2014-05-05


All times shown according to UTC.

Time Nick Message
00:00 jonnyno5 for example
00:00 elico try the "cat ..." example that I gave you..
00:11 yinyin_ joined #gluster
00:35 failshell joined #gluster
00:48 theron joined #gluster
01:17 bala joined #gluster
01:19 sprachgenerator joined #gluster
01:23 lyang0 joined #gluster
01:50 bala joined #gluster
01:54 harish joined #gluster
01:59 sprachgenerator joined #gluster
02:33 EWDurbin left #gluster
02:36 mjsmith2 joined #gluster
02:40 bharata-rao joined #gluster
02:56 hchiramm_ joined #gluster
02:58 hagarth joined #gluster
02:59 kumar joined #gluster
03:10 chirino joined #gluster
03:23 Paul-C joined #gluster
03:24 Paul-C joined #gluster
03:26 chirino joined #gluster
03:29 rjoseph joined #gluster
03:29 kanagaraj joined #gluster
03:33 RameshN joined #gluster
03:40 shubhendu joined #gluster
03:42 lalatenduM joined #gluster
03:42 mjsmith2 joined #gluster
03:54 itisravi joined #gluster
04:13 edong23 joined #gluster
04:17 davinder joined #gluster
04:25 ndarshan joined #gluster
04:26 ppai joined #gluster
04:31 haomaiwa_ joined #gluster
04:33 haomaiwa_ joined #gluster
04:39 atinmu joined #gluster
04:44 hagarth joined #gluster
04:54 kasturi joined #gluster
04:54 deepakcs joined #gluster
04:55 ravindran1 joined #gluster
04:58 TvL2386 joined #gluster
05:01 kdhananjay joined #gluster
05:01 vpshastry joined #gluster
05:02 mjsmith2 joined #gluster
05:06 nishanth joined #gluster
05:10 rastar joined #gluster
05:11 shubhendu joined #gluster
05:11 root joined #gluster
05:12 zerick joined #gluster
05:15 nshaikh joined #gluster
05:20 prasanthp joined #gluster
05:22 haomaiw__ joined #gluster
05:26 haomaiwa_ joined #gluster
05:30 yinyin joined #gluster
05:33 Philambdo joined #gluster
05:35 dusmant joined #gluster
05:37 Paul-C left #gluster
05:40 mjsmith2 joined #gluster
05:48 lalatenduM joined #gluster
05:50 ramteid joined #gluster
05:54 surabhi joined #gluster
05:54 rjoseph joined #gluster
06:00 kanagaraj joined #gluster
06:02 mjsmith2 joined #gluster
06:04 davinder joined #gluster
06:06 atinmu joined #gluster
06:11 bala1 joined #gluster
06:18 dusmant joined #gluster
06:19 rjoseph joined #gluster
06:20 social joined #gluster
06:21 kanagaraj joined #gluster
06:24 bala1 joined #gluster
06:31 atinmu joined #gluster
06:32 glusterbot New news from newglusterbugs: [Bug 1094119] Remove replace-brick with data migration support from gluster cli <https://bugzilla.redhat.com/show_bug.cgi?id=1094119>
06:33 rahulcs joined #gluster
06:42 NCommander joined #gluster
06:45 ctria joined #gluster
06:51 ekuric joined #gluster
06:54 ravindran1 joined #gluster
06:55 eseyman joined #gluster
06:57 rahulcs joined #gluster
07:02 mjsmith2 joined #gluster
07:05 ravindran1 joined #gluster
07:08 n0de joined #gluster
07:14 hybrid512 joined #gluster
07:15 keytab joined #gluster
07:18 ravindran1 joined #gluster
07:22 ravindran1 joined #gluster
07:23 psharma joined #gluster
07:37 ninkotech joined #gluster
07:37 rahulcs_ joined #gluster
07:43 edward joined #gluster
07:43 andreask joined #gluster
07:45 yinyin joined #gluster
07:49 fsimonce joined #gluster
08:02 liquidat joined #gluster
08:02 mjsmith2 joined #gluster
08:04 mjsmith2_ joined #gluster
08:13 ravindran1 joined #gluster
08:13 joncolasgomez joined #gluster
08:14 joncolasgomez hello
08:14 glusterbot joncolasgomez: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:14 joncolasgomez i am spanish
08:14 joncolasgomez please can somebody help me with a question?
08:22 scuttle_ joined #gluster
08:22 MrAbaddon joined #gluster
08:24 ktosiek joined #gluster
08:33 Slashman joined #gluster
08:37 joncolasgomez for a webserver environment, what is better for client reads: NFS or the GLUSTER native client?
08:41 ninkotech joined #gluster
08:41 ninkotech_ joined #gluster
08:45 ngoswami joined #gluster
08:47 jcsp joined #gluster
08:49 Chewi joined #gluster
08:55 rjoseph joined #gluster
08:57 ninkotech_ joined #gluster
08:57 ninkotech joined #gluster
08:57 ProT-0-TypE joined #gluster
09:02 rahulcs joined #gluster
09:02 mjsmith2 joined #gluster
09:06 saravanakumar1 joined #gluster
09:10 davinder joined #gluster
09:10 ghenry joined #gluster
09:10 ghenry joined #gluster
09:21 rjoseph joined #gluster
09:28 jonybravo30 joined #gluster
09:29 jonybravo30 hello
09:29 glusterbot jonybravo30: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:29 jonybravo30 i need help optimizing reads on a client-side mounted vol
09:30 harish joined #gluster
09:30 jonybravo30 i'd appreciate some advice
09:30 jonybravo30 many thanks
09:30 jonybravo30 what client is better gluster native client?
09:30 jonybravo30 or NFS
09:34 rahulcs joined #gluster
09:35 bharata-rao joined #gluster
09:35 tryggvil joined #gluster
09:44 rahulcs_ joined #gluster
09:47 Debolaz jonybravo30: In what way do you need to optimize it? What is the use case?
09:50 Debolaz jonybravo30: Please reply in the channel, others might be able to help.
09:54 ProT-O-TypE joined #gluster
09:54 cyberbootje hi, i've been running gluster for a while and for some reason one of the two storage servers went into RO mode and one file (a VM disk) is now in split brain, how do i recover from that?
09:55 Debolaz cyberbootje: Have you tried to follow any of the split-brain tutorials available through Google?
09:55 samppah_ @splitbrain
09:55 glusterbot samppah_: I do not know about 'splitbrain', but I do know about these similar topics: 'split-brain'
09:56 samppah_ @split-brain
09:56 glusterbot samppah_: (#1) To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/, or (#2) For additional information, see this older article http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
09:57 meghanam joined #gluster
09:58 cyberbootje Debolaz: Well, i want to be sure that if i follow a tutorial it's the correct way, i don't want extra troubles
09:59 Debolaz Supposedly, there's this thing called quorum which prevents split-brain from happening in the first place, though it seems to be severely underdocumented. :P
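
(For reference, a minimal sketch of the commands behind the split-brain and quorum points above; the volume name "myvol" is a placeholder, and cluster.quorum-type is presumably the under-documented quorum option Debolaz means.)

    # list the entries gluster currently considers split-brain
    gluster volume heal myvol info split-brain
    # client-side quorum: with this set, writes fail when a majority of
    # replica bricks is unreachable, instead of letting copies diverge
    gluster volume set myvol cluster.quorum-type auto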
10:00 coredumb hi folks
10:00 coredumb i seem to have broken my glusterfs volumes
10:00 coredumb by upgrading to 3.5
10:00 coredumb all gluster commands are unresponsive now
10:01 coredumb as i didn't see any upgrade doc in the release note, i just stopped services then updated packages and restarted them
10:01 coredumb seems it wasn't sufficient :D
10:02 coredumb there's only two nodes in distrib+replicate
10:03 coredumb seems like if i stop one service the other can work after a service restart...
10:03 liquidat joined #gluster
10:05 coredumb as long as i start the second one gluster commands are frozen on both :(
10:08 kanagaraj joined #gluster
10:12 samppah_ coredumb: anything in log files?
10:13 coredumb nothing that i understand :)
10:18 coredumb samppah_: http://pastie.org/9142064
10:18 glusterbot Title: #9142064 - Pastie (at pastie.org)
10:19 coredumb http://pastie.org/9142065
10:19 glusterbot Title: #9142065 - Pastie (at pastie.org)
10:19 aravindavk joined #gluster
10:20 coredumb http://pastie.org/9142075
10:20 glusterbot Title: #9142075 - Pastie (at pastie.org)
10:21 coredumb this is with gfstest-02 glusterd off indeed
10:21 dusmant joined #gluster
10:24 hagarth joined #gluster
10:26 coredumb samppah_: finally when i start glusterd on second node http://pastie.org/9142082
10:26 glusterbot Title: #9142082 - Pastie (at pastie.org)
10:27 nishanth joined #gluster
10:27 coredumb any idea ?
10:32 samppah_ coredumb: from what version did you upgrade?
10:33 glusterbot New news from newglusterbugs: [Bug 1094226] Changelog: Add timeout in changelog barrier. <https://bugzilla.redhat.com/show_bug.cgi?id=1094226>
10:34 jonybravo30 hello DEbolaz
10:34 jonybravo30 what can i do with my problem?
10:34 saravanakumar joined #gluster
10:34 harish joined #gluster
10:34 jonybravo30 i need to tune up gluster read performance on clients
10:35 Debolaz jonybravo30: Explain the performance problems you are experiencing and your use case.
10:36 jonybravo30 clients mounted on the nodes work fine
10:36 jonybravo30 but clients mounted on other machines (apaches)
10:36 jonybravo30 have slow read performance
10:36 jonybravo30 apache waits for gluster
10:37 jonybravo30 i have a bottleneck on the gluster client side
10:37 jonybravo30 any advice
10:37 jonybravo30 ?
10:38 jonybravo30 i am thinking about cache options but i don't know if that will be my solution
10:38 qdk joined #gluster
10:40 Debolaz jonybravo30: Is the responsiveness slow in general, or is it something specific like PHP applications?
10:40 jonybravo30 in general
10:41 jonybravo30 the php application is a wordpress
10:41 jonybravo30 already tested on the gluster nodes with /var/www mounted
10:41 jonybravo30 i only have the problem on machines that are not nodes
10:41 jonybravo30 and that use the gluster native client
10:42 Chewi jonybravo30: I'm no wordpress expert but do you really need to mirror all those PHP files? I would have thought those were easily replaceable. maybe just file uploads and config files?
10:47 Debolaz jonybravo30: It might be that you have a slow network, for instance if you are using virtual servers from a low-end provider. You might want to consider hosting the gluster nodes on the same server as apache is running. But PHP specifically will have rather bad performance on GlusterFS no matter how you tune GlusterFS.
10:47 jonybravo30 yes i need
10:47 jonybravo30 i use amazon EC2
10:47 Debolaz This is because PHP will access every single source file on every request, unless tuned otherwise.
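
(As an aside: a hedged sketch of the read-side knobs usually reached for in this situation, with a hypothetical volume "wwwvol". These are 3.4/3.5-era option and mount-option names, but whether any amount of caching rescues a PHP workload at high latency is exactly what Debolaz is questioning.)

    # server side: size the io-cache / quick-read caches
    gluster volume set wwwvol performance.cache-size 256MB
    gluster volume set wwwvol performance.cache-refresh-timeout 10
    # client side: let the FUSE mount cache attributes and entries longer
    mount -t glusterfs -o attribute-timeout=30,entry-timeout=30 node1:/wwwvol /var/www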
10:47 jonybravo30 network works perfectly
10:48 Debolaz Amazon EC2 can be very slow.
10:48 jonybravo30 i checked it with iperf
10:48 Debolaz That it works doesn't mean that latency is low and bandwidth is high.
10:48 Debolaz Especially latency might be an issue.
10:52 jonybravo30 569 Mbits/sec
10:52 jonybravo30 between machines
10:52 jonybravo30 tested with iperf just now
10:52 Chewi jonybravo30: that does not indicate latency
10:52 Debolaz jonybravo30: And what's the latency like?
10:53 jonybravo30 time=1.60 ms
10:54 Debolaz jonybravo30: For comparison, that's 5 times more than between my virtual servers on linode.
10:55 jonybravo30 what can i do then
10:55 jonybravo30 some advice
10:55 jonybravo30 thanks a lot for support
10:56 saravanakumar1 joined #gluster
10:56 Debolaz jonybravo30: All file access is going to be at least as slow as the slowest latency in the system. For a single static file, that is not going to matter much. For a PHP script, that's going to basically murder every request.
10:57 leo_ joined #gluster
10:57 jonybravo30 apache goes crazy
10:57 jonybravo30 it begins to respawn processes
10:57 Debolaz jonybravo30: What is the request time for a single static .txt file?
10:57 jonybravo30 due to waiting on gluster
10:58 jonybravo30 what size? for example
10:58 jonybravo30 ?
10:58 Debolaz Just a "hello world".
10:58 jonybravo30 ok
10:58 jonybravo30 wait please
10:58 edward joined #gluster
10:59 jonybravo30 time cat helloworld.txt
10:59 jonybravo30 "Hello world"
10:59 jonybravo30 real    0m0.005s
10:59 jonybravo30 user    0m0.000s
10:59 jonybravo30 sys     0m0.000s
11:01 nishanth joined #gluster
11:02 Debolaz jonybravo30: But through apache?
11:02 rahulcs joined #gluster
11:02 tdasilva joined #gluster
11:02 jonybravo30 i don't understand
11:03 jonybravo30 i cat'ed it from apache
11:03 jonybravo30 from the apache machine with the gluster native client mounted
11:03 dusmant joined #gluster
11:03 jonybravo30 not from a node
11:05 Debolaz jonybravo30: You say apache is having problems serving files stored on glusterfs. Does it have any problems serving the txt file?
11:06 hagarth joined #gluster
11:07 jonybravo30 apache has problems when it has to serve content to 60 clients more or less
11:07 B21956 joined #gluster
11:07 jonybravo30 then apache is waiting on gluster
11:07 jonybravo30 sorry for my english
11:08 Debolaz jonybravo30: Can you reproduce this problem with a simple txt, or does it only happen with PHP?
11:08 jonybravo30 i am trying to reproduce it now
11:15 ppai joined #gluster
11:21 haomaiwang joined #gluster
11:21 kanagaraj joined #gluster
11:25 ira joined #gluster
11:28 davinder joined #gluster
11:42 coredumb samppah_: i upgraded from 3.4
11:55 coredumb samppah_: wait seems it's not locked anymore :O
11:55 ppai joined #gluster
11:56 coredumb samppah_: oh yeah because second node is still down
11:56 coredumb indeed
11:56 coredumb -_-
12:03 diegows joined #gluster
12:03 glusterbot New news from newglusterbugs: [Bug 1091961] libgfchangelog: API to consume historical changelog. <https://bugzilla.redhat.com/show_bug.cgi?id=1091961>
12:03 rahulcs joined #gluster
12:04 coredumb samppah_: "volume start: gvol2: failed: Failed to get extended attribute trusted.glusterfs.volume-id for brick dir /brick3. Reason : No data available" - seems it has something to do with this
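
(For context, that volume-id error usually means the brick filesystem wasn't mounted when glusterd started, or the extended attribute was lost; a hedged way to check, using the path from coredumb's message:)

    # inspect the volume-id xattr on the brick root
    getfattr -n trusted.glusterfs.volume-id -e hex /brick3
    # if the brick is simply not mounted, mount it and restart glusterd;
    # otherwise the id can be read from a surviving brick and restored with setfattr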
12:13 kumar joined #gluster
12:17 itisravi joined #gluster
12:19 leo_ joined #gluster
12:19 ctria joined #gluster
12:20 jmarley joined #gluster
12:20 jmarley joined #gluster
12:30 Slashman joined #gluster
12:43 aixsyd joined #gluster
12:43 aixsyd Heya guys! 3.5! Woo!
12:44 Chewi aixsyd: I think that "woo" moment has kinda passed already :P
12:44 aixsyd Chewi: I just heard about final release
12:45 aixsyd so I'm all woo!
12:45 aixsyd I am curious about upgrading though - if I have VMs running on a gluster volume that I want to upgrade - should I shut down all VMs before upgrading?
12:46 aixsyd I assume the volume has to stop and start again at some point
12:47 chirino joined #gluster
12:47 rahulcs joined #gluster
12:49 cvdyoung Good morning, if my gluster systems are using an infiniband interface, and I attempt to mount the volume to a system without IB, it fails trying to connect to that IB network.  Is there a way to get around using the IB on servers without that type of interface?
12:49 sroy_ joined #gluster
12:51 warci joined #gluster
12:52 warci howdy all, i have a problem: when i remove a brick from a distributed volume, not everything gets moved over to the remaining volume
12:53 warci i saw another person have this issue in some irc logs
12:54 warci so: distributed volume with 1 brick. I add another one and then i "remove-brick start" the original brick
12:54 aixsyd cvdyoung: what interface DO you want it on?
12:55 rahulcs joined #gluster
12:55 aixsyd warci: i could be wrong, as I've never run a distributed volume, but you may need to rebalance
12:55 warci aha...
12:55 warci i'll give it a go
12:56 warci the weird thing is, i say "remove-brick", so i can follow the status
12:56 aixsyd warci: just be sure to read up on it before you do it
12:56 warci and at one time it says: completed
12:56 warci ok, checking...
12:57 ackjewt I've two nodes running as replicated, connected back-to-back. When i try to mount the volume from a client using glusterfs-fuse, it tries to connect to the backend interfaces and as a result: No route to host....
12:57 aixsyd cvdyoung: I run IB and bonded GBE, and I have the opposite of you - I DONT want gluster running on GBE, only IB. its pretty simple.
12:58 aixsyd ackjewt: which backend interfaces?
12:59 jag3773 joined #gluster
13:00 leo_ joined #gluster
13:00 ackjewt aixsyd: both servers have 4 frontend interfaces connected to the storage network, and one 10GbE connected back-to-back, used for replication between the nodes
13:01 aixsyd I assume your clients have a defined route in your hosts file, right?
13:01 ackjewt i'm mounting via the frontend interface from the client and i got this in the logfile on the client: socket_connect_finish] 0-video_vol1-client-0: connection to backend-ip-addr:24007 failed (No route to host)
13:01 aixsyd if you ping the server you want to connect to from your client, does it resolve the correct front end IP?
13:02 ackjewt aixsyd: yes, it does. I only get the no route to host when i try to mount with the glusterfs client
13:02 aixsyd are you connecting by name or IP?
13:02 jrcresawn joined #gluster
13:02 ackjewt IP
13:02 aixsyd wow, really? hm
13:05 aixsyd ackjewt: can you fpaste.org the output of "gluster volume status <VOL>"
13:05 ackjewt yes. Client (172.25.71.231) connects to GlusterFS-node using native client on (172.25.101.15) and Client gets No route to host (172.25.102.11:24007) in log, which is the backend interface
13:05 ackjewt aixsyd: yes, wait.
13:05 primechuck joined #gluster
13:06 aixsyd do it from BOTH nodes, too
13:06 cvdyoung joined #gluster
13:07 rahulcs joined #gluster
13:08 plarsen joined #gluster
13:08 ackjewt aixsyd: http://fpaste.org/99205/13992953/
13:08 glusterbot Title: #99205 Fedora Project Pastebin (at fpaste.org)
13:09 ackjewt Those addresses belong to the backend interfaces
13:09 jdarcy joined #gluster
13:09 primechuck joined #gluster
13:10 aixsyd ackjewt: Hm. and just to reconfirm, you CAN ping BOTH nodes from your client via IP
13:11 ackjewt yes, but not the backend addresses, because those are connected back-to-back between the nodes
13:11 cvdyoung Yeah, both IPs are responding to pings, and lookup.  It's weird, because in my fstab I tried telling it exactly which IP I wanted to mount from, but it still tries the IB network in the log
13:11 ackjewt and only reachable from the two nodes
13:11 ackjewt and not the client
13:12 aixsyd ackjewt  cvdyoung  what version of Gluster are you both running?
13:13 ackjewt glusterfs 3.4.3 built on Apr  3 2014 16:02:46
13:13 aixsyd cvdyoung: I'm wondering how the heck your client knows ANYTHING about a foreign network that it can't even ping..
13:13 ackjewt aixsyd: probably gets the addresses from the gluster nodes?
13:13 ackjewt when it tries to mount
13:14 cvdyoung It is pinging it.  I have IB on my gluster storage systems, but not on my client.  When I try and mount to the client by the IP that it should be using, it tries the IB network inside of /var/log/glusterfs/mnt-gfs.log
13:14 aixsyd ackjewt: nope. I run my nodes on a 172.99.5.X IB backend network, but a bonded 10.0.0.X GBE network. All my clients connect via the 10.0.0.X GBE network, and none have a clue about 172.99.5.X
13:14 cvdyoung I am using gluster 3.5
13:14 aixsyd *but a bonded front end
13:15 ackjewt aixsyd: http://fpaste.org/99207/13992956/
13:15 ackjewt that's the client
13:15 cvdyoung Here's what it's telling me:
13:15 cvdyoung E [socket.c:2158:socket_connect_finish] 0-home-client-0: connection to <INFINIBAND_NETWORK>:24007 failed (Connection timed out)
13:15 aixsyd ackjewt:  172.25.101.X is NOT the backend network,  172.25.102.X IS?
13:16 ackjewt yes. 101.0/24 = frontend. 102.0/24 = backend
13:16 aixsyd the two of you seem to have very similar issues here
13:16 ackjewt indeed
13:16 aixsyd one times out, one has no route.
13:17 aixsyd ackjewt: are you able to change the backend IP's freely? for example, try a 10 network, or a 192 network - something drastically different?
13:17 ackjewt Well, yes.
13:17 cvdyoung I agree.  For some reason, a server without IB is trying to mount the volume thru that network.  The gluster storage system is using IB and that's what the peer status shows.  Maybe that's how it's exported?
13:17 ackjewt I can give it a shot
13:18 aixsyd cvdyoung: like I said - I have mine in nearly the same config - and no issue.
13:18 aixsyd it only knows that I tell the client to know, aka, the front end GBE network
13:18 cvdyoung When you run peer status, do you have more than your IB interface in it?
13:19 aixsyd one sec
13:19 ackjewt aixsyd: and you can't reach the backend (172.99)?
13:19 ackjewt from the clients
13:19 aixsyd ackjewt: correct.
13:20 aixsyd cvdyoung: nope. the only peer known is the other's IB interface
13:20 rahulcs joined #gluster
13:21 ackjewt Well, if you use the "native" gluster client in replicated mode, doesn't the client write to both nodes at the same time?
13:21 aixsyd yes.
13:21 aixsyd both nodes need a front end interface
13:21 aixsyd and your client needs to know both nodes via that interface
13:21 ackjewt And, my other gluster node doesn't know the other nodes frontend address
13:22 ackjewt only backend
13:22 aixsyd doesnt need to if youve peer probed by name
13:22 aixsyd just define the names/addresses in your hosts file
13:23 ackjewt True
13:24 aixsyd http://fpaste.org/99212/92962241/
13:24 glusterbot Title: #99212 Fedora Project Pastebin (at fpaste.org)
13:24 aixsyd thats what my hosts look like
13:25 aixsyd that may illustrate better. I've then peer probed by name, not IP, and hosts takes care of resolution
13:25 aixsyd on the client, however, i connected via the first nodes IP (10.0.0.220), and it writes to both :)
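
(A plausible reconstruction of the split-horizon /etc/hosts layout aixsyd describes; the fpaste content isn't preserved, so the exact IB addresses here are assumptions, though "at-gfs01" is the node name he confirms at 14:05.)

    # on the gluster nodes: peer names resolve to the IB back-end
    172.99.5.220  at-gfs01
    172.99.5.221  at-gfs02

    # on the clients: the SAME names resolve to the bonded GbE front-end
    10.0.0.220    at-gfs01
    10.0.0.221    at-gfs02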
13:25 ackjewt Thanks. I'll try to do this instead. :)
13:26 aixsyd does it make sense?
13:26 ackjewt yes
13:26 aixsyd ^_^
13:26 ackjewt In some way :)
13:26 aixsyd ;)
13:27 aixsyd cvdyoung: does that help you at all?
13:27 cvdyoung Going to try it now
13:27 aixsyd sweeet.
13:27 japuzzo joined #gluster
13:29 aixsyd JoeJulian: JustinClift - thoughts on 3.5's snapshots and compression as far as running VMs?
13:29 aixsyd Says its great for that - but any idea on benchmarks?
13:32 theron joined #gluster
13:33 glusterbot New news from newglusterbugs: [Bug 1094328] poor fio rand read performance <https://bugzilla.redhat.com/show_bug.cgi?id=1094328>
13:34 theron joined #gluster
13:35 msp3k1 left #gluster
13:36 hagarth joined #gluster
13:37 warci aixsyd: you're right,  when i first fix-layout and rebalance, the remove-brick is executed correctly
13:37 aixsyd ^_^
13:37 aixsyd one down!
13:37 warci is this a known issue or intentional?
13:38 cvdyoung I'm still not able to mount a volume to a server without IB, even though I am calling it directly by IP.  The server I am trying to mount to does not have IB, but the gluster storage system does.
13:38 cvdyoung mount -t glusterfs 10.214.251.1:/home /mnt/gfs
13:38 cvdyoung E [socket.c:2158:socket_connect_finish] 0-home-client-0: connection to 10.200.70.1:24007 failed (Connection timed out)
13:38 cvdyoung E [afr-common.c:3919:afr_notify] 0-home-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
13:38 warci in any case: many many thanks for your help, i was really scratching my head
13:39 aixsyd warci: this is intentional
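
(The sequence warci landed on, sketched with placeholder volume and brick names:)

    gluster volume rebalance distvol fix-layout start
    gluster volume rebalance distvol start       # wait for 'status' to report completed
    gluster volume remove-brick distvol server1:/brick1 start
    gluster volume remove-brick distvol server1:/brick1 status
    gluster volume remove-brick distvol server1:/brick1 commit   # only once status shows completed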
13:39 aixsyd cvdyoung: now wait a sec - "The server I am trying to mount to does not have IB"
13:39 harish_ joined #gluster
13:39 cvdyoung correct.  There's no infiniband network on the client side
13:40 aixsyd why are you trying to mount a server thats NOT the gluster node?
13:40 cvdyoung I want to mount the volume to a client
13:40 aixsyd so you mean the client doesnt have IB
13:40 aixsyd but both nodes do.
13:41 cvdyoung yes, but both of my gluster storage servers do.  They also have two 1G connections, and 10.214. is one of those
13:41 aixsyd and in this scenario, theres only a client, and two nodes - nothing else
13:41 cvdyoung correct
13:41 aixsyd can you do a layout like this? http://fpaste.org/99212/92962241/ feel free to censor w/e you wish
13:41 glusterbot Title: #99212 Fedora Project Pastebin (at fpaste.org)
13:42 mjsmith2 joined #gluster
13:43 ctria joined #gluster
13:43 brimstone joined #gluster
13:44 rahulcs joined #gluster
13:45 cvdyoung let me give that a try on my test system
13:46 cvdyoung I appreciate the help on this one, it's weird.
13:46 aixsyd ^_^ very weird.
13:49 aixsyd brb smoke break :)
13:50 cvdyoung When you do a gluster volume info, do you see more than your IB network for your bricks?  I tried the mount to a server with only a 1G nic, and it failed with a timeout like before trying to go back to the IB network.
13:50 cvdyoung 2014-05-05 13:47:29.883716] W [fuse-bridge.c:3371:fuse_statfs_cbk] 0-glusterfs-fuse: 2: ERR => -1 (Transport endpoint is not connected)
13:55 scuttle_ joined #gluster
13:55 cvdyoung If I am using IB in my peer probe, do I need to do anything inside of the volume to define a route for another network?
13:57 sprachgenerator joined #gluster
13:58 aixsyd cvdyoung: nope, no need to define anything in the volume
13:58 aixsyd I'm still wondering how your client even knows about that other network. if theres nothing in hosts and nothing in DNS to point to that other network... theres NO WAY it could know about that IB network
13:59 dbruhn joined #gluster
14:04 aixsyd cvdyoung: http://fpaste.org/99225/29865313/
14:04 glusterbot Title: #99225 Fedora Project Pastebin (at fpaste.org)
14:05 cvdyoung so at-gfs01 is your IB interface?
14:05 aixsyd as defined by the hosts file, yes
14:05 gmcwhistler joined #gluster
14:08 aixsyd cvdyoung: just to confirm, you dont have two listings in any of your hosts files for the same node, right? like youre not defining node 1 with BOTH networks, right?
14:08 cvdyoung correct, there's only one.  I will post it for you to see
14:09 aixsyd kk
14:09 aixsyd and nothing in any DNS server, either?
14:10 Philambdo joined #gluster
14:10 cvdyoung http://fpaste.org/99226/29904113/
14:10 glusterbot Title: #99226 Fedora Project Pastebin (at fpaste.org)
14:12 aixsyd cvdyoung: this is concerning
14:12 aixsyd Transport-type: tcp,rdma
14:13 aixsyd I highly do not recommend rdma
14:13 aixsyd most here would not, either
14:13 cvdyoung it doesn't work anyway, we call the transport=tcp for now.  Hopefully rdma becomes available soon :)
14:13 dbruhn what version are you trying to use RDMA in
14:13 dbruhn and what is the purpose of your use?
14:14 cvdyoung not using rdma, just made it available for future use
14:14 dbruhn I have been doing testing between RDMA and TCP for a while
14:14 aixsyd cvdyoung: i'd still explicitly state tcp and not even mention rdma
14:14 dbruhn on sequential writes I have been seeing better performance from TCP
14:19 aixsyd cvdyoung: now, can I see your /etc/hosts for all three nodes?
14:20 cvdyoung I've also found that if I use a 1G nic for peer probe, that when I mount to servers over infiniband, I am only getting 1G speeds.
14:21 aixsyd between nodes? or from client to servers?
14:21 aixsyd from client to server makes sense, but not between nodes
14:21 cvdyoung between my storage nodes.  If I create the volume using the 1G nic, I get 1G speeds to clients.... even mounting the volume over IB.
14:22 wushudoin joined #gluster
14:22 aixsyd cvdyoung: any reason why you peer probed the FQDN? and not the short name?
14:25 aixsyd i'm wondering if that is messing things up
14:26 aixsyd aw
14:27 cvdyoung joined #gluster
14:27 aixsyd ohai!
14:27 aixsyd see my last two messages?
14:28 cvdyoung hi, yes.  All 3 systems have the same in /etc/hosts.  I just copied it and pasted it in.
14:29 cvdyoung let me upload it for you.
14:29 aixsyd kk
14:29 aixsyd cvdyoung: any reason why you peer probed the FQDN? and not the short name?
14:30 cvdyoung no reason, I thought I should.  Is there a reason to only use the short?
14:30 aixsyd just seeing anything .com makes me think DNS may be involved in some way
14:30 cvdyoung http://fpaste.org/99229/99300221/
14:30 glusterbot Title: #99229 Fedora Project Pastebin (at fpaste.org)
14:31 aixsyd I'd also add in a host entry on both nodes for the client
14:31 cvdyoung I've tried the mount to a server with no IB, using the IP, and using the name.  Both try to go back to the IB network because that's how the volume was built.. IMHO
14:33 cvdyoung I think that my IB network is my first route out, so it uses that
14:33 aixsyd and the final thing to do then would be from client, ping both nodes, make sure it resolves the GBE IP, then ping the client from both nodes making sure it resolves, and then ping the opposite node from eachother and be sure THEY all resolve correctly
14:33 aixsyd wait - if your IB network is fully contained - how is it a route?
14:33 aixsyd or WHY is it a route out - and out to where?
14:34 ackjewt aixsyd: I can confirm that your fix worked for me.
14:34 aixsyd ackjewt: WOO! Two down!
14:34 ackjewt :) thanks
14:34 aixsyd $79.99 for the first hour, $89.99 the second ;)
14:36 dusmant joined #gluster
14:36 cvdyoung I have some servers with IB, those I want to mount via IB. Others will need a route to the volume
14:36 failshell joined #gluster
14:36 cvdyoung the IB isn't just for my storage systems, we use it on other server types also, but not all
14:37 ctria joined #gluster
14:37 aixsyd Hmmm...
14:38 aixsyd the IP network, though.. if theres a NIC connected to a switch on both nodes, and the client is on the same network/switch as the servers, theres no routes needed
14:39 cvdyoung My storage systems (glusterfs-server) have 3 networks on them.  The IB is what the volumes are built to use.  I think that my route to servers without IB capabilities needs to be better defined, but I'm not sure if it needs it at the OS level or the volume?
14:41 aixsyd i assure you its not at the volume level
14:41 cvdyoung ok, then I need to look at the routing inside the OS.  I'll update if I find the culprit  :)
14:41 cvdyoung Thank you!
14:47 aixsyd :)
14:49 rahulcs joined #gluster
14:50 LoudNois_ joined #gluster
14:57 sjoeboo joined #gluster
14:57 kmai007 joined #gluster
14:58 rahulcs joined #gluster
14:59 lalatenduM @ports
14:59 glusterbot lalatenduM: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
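
(As an illustration only: iptables rules matching glusterbot's port list for a 3.4+ server. The brick range 49152-49160 is an assumption; widen it to cover one port per brick on the host.)

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT            # glusterd (+ rdma)
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT            # bricks, 3.4+
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT            # gluster NFS + NLM
    iptables -A INPUT -p tcp -m multiport --dports 111,2049 -j ACCEPT # rpcbind + NFS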
15:12 shubhendu joined #gluster
15:17 rahulcs joined #gluster
15:26 daMaestro joined #gluster
15:29 Matthaeus joined #gluster
15:30 haomaiwa_ joined #gluster
15:34 lmickh joined #gluster
15:38 haomaiwang joined #gluster
15:48 vpshastry joined #gluster
15:49 lkoranda joined #gluster
15:50 jobewan joined #gluster
15:51 johnmark who's heading to Atlanta for OpenStack Summit?
15:52 johnmark see http://osstorage-hack.eventbrite.com/
15:52 glusterbot Title: OpenStack Storage Hackathon Registration, Atlanta - Eventbrite (at osstorage-hack.eventbrite.com)
15:55 johnbot joined #gluster
15:55 * ndevos is not
16:04 nhm joined #gluster
16:09 cvdyoung Hi, is it possible to export a gluster volume over IB as well as gigE?  Do I need to create an IB volume and a gigE volume, or can I set an option on the volume itself?
16:09 cvdyoung http://gluster.org/pipermail/gluster-users/2010-February/026814.html
16:09 glusterbot Title: [Gluster-users] gigabit to "infiniband storage network" (at gluster.org)
16:11 jbd1 joined #gluster
16:13 JonnyNomad left #gluster
16:14 Mo__ joined #gluster
16:16 maduser joined #gluster
16:20 maduser joined #gluster
16:21 andreask joined #gluster
16:23 systemonkey joined #gluster
16:23 maduser joined #gluster
16:30 maduser joined #gluster
16:31 arya joined #gluster
16:32 zerick joined #gluster
16:35 basso joined #gluster
16:35 SFLimey joined #gluster
16:39 kanagaraj joined #gluster
16:40 jcsp joined #gluster
16:40 SFLimey joined #gluster
16:41 kaptk2 joined #gluster
16:42 brimstone i'm looking for hints to tune a cluster spread out across multiple data centers
16:42 Matthaeus What's the latency between data centers?
16:42 brimstone a du across ~641MB of data takes ~5 minutes
16:42 brimstone 100~200ms or so
16:43 Matthaeus That's not going to end well for you.
16:43 brimstone it's saying 47ms now, but it's a dirty liar
16:43 Matthaeus Each gluster write requires several round trips between nodes.
16:43 Matthaeus If you're replicating.
16:44 maduser joined #gluster
16:44 Matthaeus That's going to end up being almost a second per write, which is going to be a huge problem.
16:44 brimstone the indent is to have this same directory R/W in every location
16:44 brimstone yeah, it's not ideal
16:44 Matthaeus It's a great idea, but gluster solves the wrong two sides of the triangle to be the right solution here.
16:45 Matthaeus Gluster does durability and consistency at the cost of performance.
16:46 zaitcev joined #gluster
16:49 vpshastry left #gluster
16:49 jbd1 brimstone: if you're stuck with this cluster configuration, the fastest route to improving performance is to reduce latency between the datacenters
16:49 John_HPC joined #gluster
16:50 brimstone this is still POC so i'm not stuck with gluster, but I am stuck with latency between datacenters
16:51 jbd1 brimstone: it's going to be complicated as long as you need to write from all datacenters simultaneously
16:51 brimstone yeah, that's what i'm afraid of
16:52 Matthaeus brimstone:  think very, very carefully about which two sides of the triangle you need.
16:53 brimstone durability, consistency and performance?
16:53 Matthaeus Yup
16:53 Matthaeus With a network file system, you only get two.
16:53 Matthaeus At most.
16:53 brimstone performance and consistency are obvious, what's durability?
16:53 Matthaeus One node goes down, do you still have the data that was on that node?
16:54 Matthaeus If yes, then durable.
16:54 brimstone gotcha
16:54 Matthaeus Durability is usually the one you always care about.
16:54 Matthaeus But if it's data that's easily regenerated and just being cached for performance purposes, then you're good.
16:55 brimstone i noticed that it seemed quick to copy data in the new volume, but it took hours to copy ~600MB between nodes
16:55 brimstone immediately rerunning the rsync also took a really long time
16:56 brimstone it never seemed like the nodes were using all of the available bandwidth either, i guess just latency problems?
16:57 jbd1 brimstone: yes, latency.  If you're using rsync, use the --inplace argument to improve things too
16:57 jbd1 brimstone: also, read up on glusterfs geo-replication.  It may be useful to you.
16:57 brimstone but that's one way right?
16:58 jbd1 brimstone: yes, you have a master and slaves
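
(A rough sketch of the 3.5-era geo-replication setup jbd1 points at; host and volume names are placeholders, and the syntax changed between 3.4 and 3.5, so check the docs for your version.)

    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status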
17:00 jbd1 brimstone: there's also Sync at http://www.theronconrey.com/sync-gluster/
17:00 glusterbot Title: BitTorrent Sync as Geo-Replication for Gluster (at www.theronconrey.com)
17:01 brimstone the closed source nature of BitTorrent Sync bothers me
17:01 SFLimey joined #gluster
17:01 tdasilva left #gluster
17:01 tycho_ joined #gluster
17:01 theron joined #gluster
17:02 jbd1 brimstone: understood
17:05 lmickh joined #gluster
17:08 kmai007 brimstone: i tried reading up on rsync --inplace, and I found a RTFM before you try
17:08 kmai007 lol had to wiki that
17:08 kmai007 but that being said, in the example --inplace was used on a follow-up run, after an initial rsync had already been done
17:09 kmai007 http://www.linux.com/community/blogs/131-business-or-qenterprise/421384
17:09 glusterbot Title: HOWTO - Using rsync to move a mountain of data | Linux.com (at www.linux.com)
17:09 kmai007 i was told it would be ideal, b/c renames make the DHT go krazy
17:11 maduser joined #gluster
17:11 brimstone oh, --inplace is a neat option
17:11 brimstone got a big WARNING next to it in the man page too
17:12 brimstone kmai007: yup, found the same thing
17:12 kmai007 i want to use it, but dang, WARNINGS scare me
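
(A sketch of the suggestion, with hypothetical paths. --inplace rewrites destination files directly instead of writing a temp file and renaming it into place, which avoids the rename churn kmai007 says makes DHT "go krazy".)

    rsync -av --inplace --progress /data/source/ /mnt/glustervol/dest/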
17:14 gmcwhistler joined #gluster
17:21 jhp joined #gluster
17:23 partner joined #gluster
17:24 nshaikh left #gluster
17:28 marcoceppi_ joined #gluster
17:29 davinder joined #gluster
17:31 andreask joined #gluster
17:31 davinder joined #gluster
17:34 andreask joined #gluster
17:38 edward1 joined #gluster
17:39 lpabon joined #gluster
17:41 vpshastry joined #gluster
17:42 failshell joined #gluster
17:47 vpshastry left #gluster
17:48 chirino joined #gluster
17:54 ctrianta joined #gluster
17:58 johnbot I went to go upgrade a Ubuntu 13.04 machine with Gluster 3.5 by using semiosis's ppa but there doesn't appear to be anything built for raring. Any idea why?
17:59 semiosis because raring is no longer supported
17:59 semiosis therefore I am not allowed to publish any packages (launchpad rejects my uploads)
17:59 johnbot thanks semiosis that makes a lot of sense
17:59 semiosis yw
18:02 jcsp joined #gluster
18:10 cvdyoung Hi,
18:11 cvdyoung I was having an issue mounting a volume that was built with infiniband and exported over it to servers without IB.  If I map the name of the IB interface to the IP of the network I want to use inside /etc/hosts, it works.
18:13 andreask joined #gluster
18:17 cvdyoung How do I set a volume up to be exported to two or more different networks?
18:20 kanagaraj joined #gluster
18:21 zerick joined #gluster
18:27 semiosis cvdyoung: glusterfs binds to all IPv4 interfaces/addresses
18:29 cvdyoung I have my glusterfs volume built using my ib network on 10.200.X.X and a client with no IB tries to connect to it even though I mount -t glusterfs 10.214.X.X:/VOLNAME.  But if I set the name of the IB interface to be the IP of the 10.214.X.X inside the client /etc/hosts it works.
18:29 theron joined #gluster
18:30 Chewi joined #gluster
18:32 semiosis cvdyoung: ,,(mount server)
18:32 glusterbot cvdyoung: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
18:33 semiosis so the client fetches the volume info (brick addresses) from the server given on the mount line, then connects directly to the bricks at the addresses configured in the volume
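
(Related sketch: since the mount server is only consulted for the volfile, a fallback server can be named at mount time. Option spelling as in the 3.4/3.5 mount.glusterfs script; hostnames and paths are placeholders.)

    mount -t glusterfs -o backupvolfile-server=at-gfs02 at-gfs01:/myvol /mnt/gv0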
18:38 Debolaz2_ joined #gluster
18:38 semiosis [2014-05-05 18:30:10.925062] E [glusterd-utils.c:332:glusterd_lock] 0-management: Unable to get lock for uuid: 692a94a9-359b-4ab8-9130-8f57112bc1db, lock held by: 692a94a9-359b-4ab8-9130-8f57112bc1db
18:39 semiosis said by 692a94a9-359b-4ab8-9130-8f57112bc1db
19:01 rturk joined #gluster
19:03 * rturk waves to #gluster
19:03 cvdyoung ah ok, so when the client attempts to connect directly to the brick at the address that's configured in the volume, that's when it fails because it doesn't have that interface type.  Is there any way around that, besides setting the IP to the one used in the configured volume?
19:07 B21956 joined #gluster
19:09 semiosis split horizon dns?
19:14 zerick joined #gluster
19:16 johnmark rturk-away: howdy!
19:17 MeatMuppet joined #gluster
19:17 Amanda o/ johnmark
19:17 svennd joined #gluster
19:18 svennd Hey, when using geo-replication, when the master fails, will the mount still be accessible? (will the slave become master?)
19:20 semiosis svennd: doubt it
19:21 svennd damn, giving up 50% write speed is a heavy one to swallow (2 replicas / 2 storage servers)
19:26 MeatMuppet Hey folks.  We're stuck in a huge heal process for an openstack nova volume.  There is a handful of files that keep getting healed over and over.
19:26 MeatMuppet Is this because of lots of block changing?
19:26 jbd1 MeatMuppet: it's probably because the blocks are changing, yes
19:27 MeatMuppet would the diff heal algorithm help here and can it be changed on the fly?
19:27 semiosis probably not, and yes
19:27 semiosis i think diff is default
19:28 kmai007 joined #gluster
19:29 ProT-0-TypE joined #gluster
19:31 MeatMuppet hmm, ok.  thx.
19:33 recidive joined #gluster
19:35 andreask it uses some kind of heuristic by default to do a full sync for small files and diff for big ones
19:35 glusterbot New news from newglusterbugs: [Bug 1094478] Bad macro in changelog-misc.h <https://bugzilla.redhat.com/show_bug.cgi?id=1094478>
19:35 MeatMuppet This was a brick replacement.  I thought the algorithm defaulted to full for that?
19:44 WillHunting joined #gluster
19:44 WillHunting hello
19:44 glusterbot WillHunting: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:45 WillHunting I am trying to connect to a gluster node from a client with NFS but i have this problem
19:45 WillHunting mount.nfs: mount to NFS server
19:45 semiosis WillHunting: ,,(nfs)
19:45 glusterbot WillHunting: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
19:45 WillHunting failed: timed out, giving up
19:46 WillHunting mount -t nfs -o vers=3,nolock,soft,intr x.xx.xx.xx:/data/bricktest1/wwwtest /var/www
19:46 semiosis missing -o tcp
19:46 jruggiero left #gluster
19:47 maduser joined #gluster
19:48 WillHunting bad position
19:48 semiosis mount -t nfs -o tcp,vers=3,nolock,soft,intr x.xx.xx.xx:/data/bricktest1/wwwtest /var/www
19:48 WillHunting ah ok sorry
19:48 WillHunting I try
19:49 WillHunting out:
19:49 WillHunting mount.nfs: requested NFS version or transport protocol is not supported
19:49 glusterbot WillHunting: make sure your volume is started. If you changed nfs.disable, restarting your volume is known to work.
19:50 WillHunting i have this
19:50 WillHunting performance.stat-prefetch: on
19:50 WillHunting performance.io-cache: on
19:50 WillHunting performance.quick-read: on
19:50 WillHunting performance.io-thread-count: 32
19:50 WillHunting performance.cache-refresh-timeout: 60
19:50 WillHunting performance.cache-size: 256MB
19:50 WillHunting nfs.disable: off
19:50 WillHunting nfs.enable-ino32: on
19:50 WillHunting diagnostics.brick-log-level: WARNING
19:50 semiosis WillHunting: please use a ,,(paste) site
19:50 glusterbot WillHunting: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
19:50 WillHunting diagnostics.client-log-level: WARNING
19:50 WillHunting in the volume options
19:51 WillHunting is it right?
19:51 semiosis who knows... i leave my vols with default opts
19:51 semiosis bbiab
19:52 kmai007 make it more verbose when you mount
19:52 kmai007 mount -vvv -t nfs <blah blah blah>
19:52 WillHunting ok thanks i try
19:52 kmai007 then paste your output to fpaste.org
19:52 WillHunting oki
19:56 WillHunting http://fpaste.org/99330/19744139/
19:56 glusterbot Title: #99330 Fedora Project Pastebin (at fpaste.org)
19:57 kmai007 is it like this for any client that mounts this volume?
19:57 kmai007 do you have other volumes? do they behave the same way?
19:58 WillHunting i have only this volume now
19:59 WillHunting i have all ports opened for now
20:00 maduser joined #gluster
20:01 WillHunting rpcinfo http://fpaste.org/99331/32001713/
20:01 glusterbot Title: #99331 Fedora Project Pastebin (at fpaste.org)
20:01 ThatGraemeGuy joined #gluster
20:04 kmai007 are you using rhel, centos, fedora?
20:05 kmai007 and what kernel release?
20:06 kmai007 have you tried recycling portmapper? i see portmap query failed: RPC: Program not registered
20:06 WillHunting ubuntu 12.04.3 LTS
20:06 kmai007 when you try to mount it as NFS, check your system logs, it might give you some insight...
20:06 WillHunting 3.2.0-54
20:07 kmai007 if you don't have portmapper in your version, try recycling rpcbind
20:07 SpeeR joined #gluster
20:07 WillHunting what do you means with recycling?
20:08 WillHunting restart?
20:17 SpeeR I have a question about the new encryption, it says it doesn't work on nfs mounts. Does this include VMDK's in a vmware datastore?
20:18 semiosis WillHunting: sorry i didnt catch this earlier, but gluster-nfs only supports mounting the top/root of the volume.  so this will not work: mount -t nfs -o tcp,vers=3,nolock,soft,intr x.xx.xx.xx:/data/bricktest1/wwwtest /var/www
20:18 semiosis WillHunting: please ,,(pasteinfo)
20:18 glusterbot WillHunting: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
20:22 tdasilva joined #gluster
20:23 semiosis SpeeR: afaik vmw uses an nfs mount, so....
20:23 WillHunting ok
20:24 SpeeR yeh, I thought as much, but thought I would ask, since vmdk acts like a physical disk
20:25 WillHunting http://ur1.ca/h9hdb
20:25 glusterbot Title: #99346 Fedora Project Pastebin (at ur1.ca)
20:26 mgarcesMZ joined #gluster
20:28 semiosis WillHunting: your volume name is "wwwtest" so your nfs mount command should be something like this: mount -t nfs -o tcp,vers=3,nolock,soft,intr x.xx.xx.xx:/wwwtest /var/www
20:34 WillHunting same output protocol is not supported
20:34 WillHunting portmap query failed: RPC
20:38 mgarcesMZ left #gluster
20:41 gmcwhistler joined #gluster
20:42 recidive joined #gluster
20:44 ProT-0-TypE joined #gluster
20:58 theron joined #gluster
20:59 mjsmith2 joined #gluster
21:06 Matthaeus1 joined #gluster
21:11 badone joined #gluster
21:11 MrAbaddon joined #gluster
21:17 failshel_ joined #gluster
21:18 JoeJulian WillHunting: http://gluster.org/pipermail/gluster-users/2010-November/005692.html
21:18 glusterbot Title: [Gluster-users] Cannot mount NFS (FIXED) (at gluster.org)
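
(A minimal checklist distilled from the @nfs factoid and JoeJulian's link; the volume name comes from WillHunting's paste, and the service names assume the Ubuntu he is running.)

    gluster volume status wwwtest       # the 'NFS Server on localhost' line must show Online = Y
    rpcinfo -p | grep -w nfs            # gluster-nfs must be registered with rpcbind/portmap
    service nfs-kernel-server stop      # the kernel NFS server conflicts with gluster-nfs
    service glusterfs-server restart    # re-registers gluster-nfs with rpcbind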
21:21 edward2 joined #gluster
21:24 semiosis for the record the glusterfs-server package in ubuntu *does* depend on nfs-common
21:25 semiosis that ML post is from 2010
21:33 _Bryan_ joined #gluster
21:33 JoeJulian Figured...
21:45 kmai007 is item 18. not valid any more?  http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting
21:45 glusterbot Title: Basic Gluster Troubleshooting - GlusterDocumentation (at www.gluster.org)
21:46 JoeJulian afaict it should still work.
21:47 kmai007 unrecognized word: --remote-host='omdx14f0' (position 1)
21:47 JoeJulian Ah, but it's wrong.
21:48 kmai007 couldn't find it in the --help flags
21:48 JoeJulian what version?
21:48 kmai007 3.4.2
21:49 JoeJulian Fixed the invalid syntax. Try it without the word "volume"
21:49 kmai007 ah ha!
21:49 kmai007 thanks glusterbot
21:50 JoeJulian Interestingly in 3.5 it gives the much more appropriate "unrecognized word: peer"
21:50 kmai007 so here is the circle of life
21:50 kmai007 i run 'gluster peer status' on a gluster server
21:51 kmai007 if they all show 'Connected', shouldn't i TRUST that return?
21:51 kmai007 to run the command via remote-host, gives me the same output....
21:51 JoeJulian "State: Peer in Cluster (Connected)", yes.
21:52 JoeJulian Ah, but running it on a remote host does not give the same output.
21:52 JoeJulian It includes a server that was absent from the local host.
21:52 kmai007 agreed, its like playing peek-a-boo
21:53 kmai007 show me everyone but myself that is calling that cmd
21:53 kmai007 but, i like many ways to skin a cat
21:53 kmai007 now imma run this b/c i can
21:54 kmai007 oh its only good with global queries, like info, status, cannot narrow it down to specific volume :-(
21:55 kmai007 nevermind
21:55 kmai007 you can
21:55 kmai007 had it in the wrong place
21:55 JoeJulian :)
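
(The corrected syntax from that exchange, for the record; the hostname comes from kmai007's error message and the volume name is a placeholder.)

    gluster --remote-host=omdx14f0 peer status           # works once the word 'volume' is dropped
    gluster --remote-host=omdx14f0 volume status myvol   # volume-scoped queries work too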
21:55 kmai007 well i think i've learned enough for today
21:56 kmai007 have a good night community
21:57 cvdyoung Is there a file/db/command that can tell me which networks my volume is exported to or aware of?  I have three different networks on my gluster storage servers, and I want to verify that my volume is binding to them all properly.  Thank you!
21:58 mjsmith2 joined #gluster
22:03 Matthaeus joined #gluster
22:06 jbd1 cvdyoung: in linux, you can say sudo netstat -lnp
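
(A sketch of what that shows on a gluster server; PIDs and the brick port are illustrative. A 0.0.0.0 local address confirms the daemons are bound to every IPv4 interface, which is semiosis's point at 18:27.)

    $ sudo netstat -lnp | grep gluster
    tcp   0   0 0.0.0.0:24007   0.0.0.0:*   LISTEN   1234/glusterd
    tcp   0   0 0.0.0.0:49152   0.0.0.0:*   LISTEN   1250/glusterfsd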
22:29 recidive joined #gluster
22:45 badone joined #gluster
22:50 edong23_ joined #gluster
23:02 naveed joined #gluster
23:02 johnbot Hi. Is renaming volumes still supported in gluster 3.5? The man page shows the command gluster rename oldvol newvol but it doesn't work and shows the error "unrecognized word: rename (position 1)"
23:03 johnbot This bugzilla appears to reference the same problem. https://bugzilla.redhat.com/show_bug.cgi?id=841010
23:03 glusterbot Bug 841010: medium, unspecified, ---, divya, VERIFIED , man page references "gluster volume rename" but cli component reports error
23:04 arya joined #gluster
23:05 arya joined #gluster
23:10 mjsmith2 joined #gluster
23:17 johnbot If I want to rename the file system AND rename is deprecated, will stopping the current file system, removing each brick and creating a new file system work?
23:20 edong23_ joined #gluster
23:28 WillHunting how to activate NFS on nodes?
23:29 WillHunting appear Not online in gluster volume status
23:29 WillHunting new instalation
23:30 edward1 joined #gluster
23:36 WillHunting please ??
23:44 anotheral joined #gluster
23:46 anotheral are the gluster man pages out of date for 3.4+?
23:47 anotheral for example, this seems to contradict what's in the man page regarding brick migration:
23:47 anotheral http://supercolony.gluster.org/pipermail/gluster-users/2012-October/034473.html
23:47 glusterbot Title: [Gluster-users] 'replace-brick' - why we plan to deprecate (at supercolony.gluster.org)
23:47 brimstone left #gluster
