
IRC log for #gluster, 2016-05-16


All times shown according to UTC.

Time Nick Message
00:01 haomaiwang joined #gluster
00:07 hagarth joined #gluster
01:04 level7 joined #gluster
01:22 haomaiwang joined #gluster
01:33 d0nn1e joined #gluster
01:33 Vaelatern joined #gluster
01:37 sakshi joined #gluster
01:42 luizcpg joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:48 EinstCrazy joined #gluster
01:58 sakshi joined #gluster
02:01 haomaiwang joined #gluster
02:04 EinstCrazy joined #gluster
02:25 Lee1092 joined #gluster
02:28 EinstCrazy joined #gluster
02:36 EinstCrazy joined #gluster
02:40 EinstCra_ joined #gluster
02:49 RameshN joined #gluster
03:01 haomaiwang joined #gluster
03:01 harish joined #gluster
03:04 luizcpg joined #gluster
03:08 kshlm joined #gluster
03:10 vshankar joined #gluster
03:20 luizcpg joined #gluster
03:35 nbalacha joined #gluster
03:39 itisravi joined #gluster
03:47 atinm joined #gluster
04:01 haomaiwang joined #gluster
04:02 nehar joined #gluster
04:03 luizcpg joined #gluster
04:06 hgowtham joined #gluster
04:10 jobewan joined #gluster
04:18 mowntan joined #gluster
04:19 nehar joined #gluster
04:21 gem joined #gluster
04:22 mowntan joined #gluster
04:26 prasanth joined #gluster
04:40 Apeksha joined #gluster
04:44 rafi joined #gluster
04:45 poornimag joined #gluster
04:48 ndarshan joined #gluster
04:50 shubhendu joined #gluster
04:52 kotreshhr joined #gluster
04:55 shubhendu joined #gluster
04:58 mchangir joined #gluster
05:01 haomaiwang joined #gluster
05:03 karnan joined #gluster
05:06 beeradb joined #gluster
05:10 timotheus1 joined #gluster
05:12 aspandey joined #gluster
05:12 DV joined #gluster
05:12 kdhananjay joined #gluster
05:23 nishanth joined #gluster
05:25 haomaiwang joined #gluster
05:25 hagarth joined #gluster
05:34 hchiramm joined #gluster
05:35 atalur joined #gluster
05:43 JesperA joined #gluster
05:45 atinm joined #gluster
05:46 purpleidea joined #gluster
05:47 skoduri joined #gluster
05:48 aravindavk joined #gluster
05:51 Bhaskarakiran joined #gluster
05:55 ramky joined #gluster
05:56 spalai joined #gluster
05:57 level7_ joined #gluster
06:00 aravindavk joined #gluster
06:01 haomaiwang joined #gluster
06:14 ashiq joined #gluster
06:21 sac joined #gluster
06:28 tessier Every few days one of my bricks just goes offline. No idea why. The logs make no sense to me. :( Could it be because I have only two bricks in each volume and have the following set:
06:28 tessier cluster.quorum-type                     none
06:28 tessier cluster.quorum-count                    (null)
06:29 tessier I've never really understood quorum and forget why I set it this way. I think someone once told me that unless I had 3 brick servers in a volume I needed that. But can't gluster clients participate in quorum also?
06:29 anil joined #gluster
06:33 tessier The more I read the more it looks like quorum is indeed the issue. And it looks like I need at least 3 brick servers to make this work. I always thought I was being extra careful by doing mirroring instead of RAID 5. Now I have to do 3 way replication? Ouch.
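For reference, a minimal sketch of the quorum options under discussion, using a hypothetical volume name "myvol". With only two bricks per replica set, client-side quorum set to "auto" only keeps writes going while the first brick is up, and server-side quorum really needs a third trusted peer (even one holding no bricks) to break ties:

    gluster volume set myvol cluster.quorum-type auto           # client-side quorum
    gluster volume set myvol cluster.server-quorum-type server  # server-side quorum
    gluster volume get myvol cluster.quorum-type                # verify (newer 3.7 releases)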
06:36 raghug joined #gluster
06:36 karthik___ joined #gluster
06:41 skoduri_ joined #gluster
06:41 atinm joined #gluster
06:42 mchangir joined #gluster
06:47 nbalacha joined #gluster
06:48 karnan joined #gluster
06:52 skoduri joined #gluster
06:52 atalur joined #gluster
07:01 haomaiwang joined #gluster
07:06 raghug joined #gluster
07:09 hackman joined #gluster
07:10 rastar joined #gluster
07:12 pur joined #gluster
07:16 aravindavk joined #gluster
07:17 mbukatov joined #gluster
07:17 raghug joined #gluster
07:29 mchangir joined #gluster
07:36 nbalacha joined #gluster
07:37 fsimonce joined #gluster
07:37 unforgiven512 joined #gluster
07:37 unforgiven512 joined #gluster
07:44 gem joined #gluster
07:49 ctria joined #gluster
07:52 skoduri joined #gluster
07:57 ctria joined #gluster
08:01 haomaiwang joined #gluster
08:09 rafi joined #gluster
08:12 eKKiM_ joined #gluster
08:19 muneerse joined #gluster
08:22 sakshi joined #gluster
08:23 gem joined #gluster
08:30 MikeLupe joined #gluster
08:35 vshankar joined #gluster
08:39 jiffin joined #gluster
08:43 jiffin1 joined #gluster
08:44 jiffin joined #gluster
08:47 nishanth joined #gluster
08:48 jiffin1 joined #gluster
08:48 k4n0 joined #gluster
08:55 jiffin1 joined #gluster
08:57 atinm joined #gluster
09:01 haomaiwang joined #gluster
09:02 level7 joined #gluster
09:08 JesperA joined #gluster
09:15 tyler274 o/
09:15 tyler274 when probing a new server I get peer probe: success. Host railgun.dabney.moe port 24007 already in peer list
09:15 tyler274 "peer probe: success. Host railgun.dabney.moe port 24007 already in peer list"
09:16 tyler274 but it doesn't get added to the peer list
09:16 spalai left #gluster
09:16 tyler274 I'm using it to replace an existing peer, and would like to transfer the bricks from the old peer to this one, but I can't if it won't be allowed into the cluster
09:28 hagarth tyler274: make sure that the UUID for glusterd in the host being probed is different from that of all other nodes in the cluster
09:30 kovshenin joined #gluster
09:32 natarej joined #gluster
09:42 tyler274 @hagarth ok.
09:42 tyler274 @hagarth If I want to reseed a volume with data stored locally on one of the nodes, can I just recreate the volume with the same bricks
09:43 tyler274 as in, the bricks from the old volume are still there
09:43 tyler274 old volume no longer exists of course
09:47 nbalacha joined #gluster
09:48 atinm joined #gluster
09:58 rafi joined #gluster
10:00 tyler274 found out it works
10:00 tyler274 thanks for help
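For reference, a minimal sketch of the UUID check hagarth suggests, assuming default paths. Duplicate glusterd UUIDs usually come from cloning an already-configured node:

    cat /var/lib/glusterd/glusterd.info   # on the node being probed: shows its glusterd UUID
    gluster peer status                   # on an existing node: lists the other peers' UUIDs
    # If the UUIDs collide, stop glusterd on the new node, remove glusterd.info,
    # and restart glusterd so a fresh UUID is generated (only safe on a node
    # that is not yet part of the pool).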
10:01 haomaiwang joined #gluster
10:01 kshlm joined #gluster
10:02 natarej_ joined #gluster
10:08 skoduri joined #gluster
10:14 natarej joined #gluster
10:25 rafi joined #gluster
10:30 arcolife joined #gluster
10:35 ashiq joined #gluster
10:56 rastar joined #gluster
10:58 Biopandemic joined #gluster
11:01 Intensity joined #gluster
11:01 haomaiwang joined #gluster
11:13 nishanth joined #gluster
11:17 hi11111 joined #gluster
11:19 kshlm joined #gluster
11:21 hgowtham joined #gluster
11:26 luizcpg joined #gluster
11:26 johnmilton joined #gluster
11:28 n0b0dyh3r3 joined #gluster
11:29 n0b0dyh3r3 joined #gluster
11:37 DV joined #gluster
11:37 harish joined #gluster
11:37 shersi2 joined #gluster
11:39 rafi joined #gluster
11:43 shersi2 Hi Everyone, I'm using the fuse client to mount a GlusterFS replicated volume. One of the clients cannot access the volume and it hangs.
11:44 lh joined #gluster
11:44 shersi2 Error on the server:  0-repvol1-client-5: disconnected from alf_vol1-client-0. Client process will keep trying to connect to glusterd until brick's port is available.
11:45 chirino_m joined #gluster
11:46 shersi2 When I try to run the self-heal command on the server, I get the following error: "Launching heal operation to perform index self heal on volume alf_vol1 has been unsuccessful on bricks that are down."
11:46 shersi2 Could this be caused by split-brain? Please help
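For reference, a minimal sketch of how the brick and heal state can be checked here, using the volume name from the error above:

    gluster volume status alf_vol1                  # shows which bricks and ports are actually online
    gluster volume heal alf_vol1 info               # entries still pending heal
    gluster volume heal alf_vol1 info split-brain   # entries in split-brain, if any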
11:48 johnmilton joined #gluster
12:01 haomaiwang joined #gluster
12:02 leucos joined #gluster
12:04 rafi joined #gluster
12:19 karnan joined #gluster
12:22 hampus joined #gluster
12:24 hampus left #gluster
12:30 shaunm joined #gluster
12:35 nbalacha joined #gluster
12:36 ashiq joined #gluster
12:36 ashiq joined #gluster
12:38 dlambrig joined #gluster
12:40 julim joined #gluster
12:42 stealthrecon joined #gluster
12:44 shersi joined #gluster
12:45 atinm joined #gluster
12:45 unclemarc joined #gluster
12:47 AdStar joined #gluster
12:48 AdStar Hi all, I have a question: is it possible to set up GlusterFS with just a single node (expecting to bring a second node online in a week's time)?
12:50 kotreshhr left #gluster
12:54 shersi joined #gluster
12:57 beeradb joined #gluster
12:57 F2Knight joined #gluster
12:57 nishanth joined #gluster
13:03 skoduri joined #gluster
13:05 jwd joined #gluster
13:08 plarsen joined #gluster
13:08 squizzi joined #gluster
13:09 squizzi joined #gluster
13:11 karnan joined #gluster
13:20 JesperA joined #gluster
13:24 DV joined #gluster
13:30 JoeJulian AdStar: yes
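For reference, a minimal sketch of the single-node setup being asked about, with hypothetical host and brick names. A plain one-brick volume works on its own and can be grown into a replica pair later:

    gluster volume create gv0 node1:/data/brick1/gv0   # add "force" if the brick sits on the root filesystem
    gluster volume start gv0
    # once the second node is installed and probed:
    gluster peer probe node2
    gluster volume add-brick gv0 replica 2 node2:/data/brick1/gv0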
13:33 mpietersen joined #gluster
13:33 shyam joined #gluster
13:38 d0nn1e joined #gluster
13:39 mpietersen joined #gluster
13:42 mpietersen joined #gluster
13:43 mpietersen joined #gluster
13:43 rwheeler joined #gluster
13:44 hackman joined #gluster
13:54 rafi joined #gluster
14:03 stealthrecon I have four servers in a distributed/replicated set up: "gluster volume create volume1 replica 2 transport tcp fscluster1.domain.local:/data/brick1/gv0 fscluster2.domain.local:/data/brick1/gv0 fscluster3.domain.local:/data/brick1/gv0 fscluster4.domain.local:/data/brick1/gv0"  I have setup rrdns using fscluster.domain.local pointing to all four servers.  When I connect from a client using glusterfs my connection only writes to
14:03 stealthrecon All 4 volumes are online and all four servers are connected.  Is there a good way for me to troubleshoot this?  I am assuming I have something configured wrong, but I am not sure what.
14:07 JoeJulian stealthrecon: IRC has max line lengths and you were cut off after " When I connect from a client using glusterfs my connection only writes to"
14:07 JoeJulian Assuming it's only writing to one, it's probably iptables related. It usually is.
14:08 stealthrecon Ahh.  I forgot about that limit.
14:08 stealthrecon It writes to two of the four.
14:08 stealthrecon Always the first two only.
14:08 hagarth joined #gluster
14:08 stealthrecon I can write directly to any of them.
14:08 stealthrecon FW is turned off on all four servers.
14:11 stealthrecon More than anything I was wanting to see if I could get pointed in the best direction to troubleshoot this.
14:11 JoeJulian dht works using a hash of the first N bytes of the filename (I don't remember how many). If your files all start with the same long prefix, that could do it.
14:12 stealthrecon I have been testing by using sudo dd if=/dev/zero of=a1.txt bs=100k count=1000
14:12 JoeJulian Always the same filename, a1.txt?
14:13 stealthrecon I use a different file name each time.
14:13 stealthrecon :-)
14:13 JoeJulian See https://joejulian.name/blog/dht-misses-are-expensive/
14:13 glusterbot Title: DHT misses are expensive (at joejulian.name)
14:14 JoeJulian I would check the trusted.glusterfs.dht ,,(extended attribute) on the directory you're writing to on all four servers.
14:14 glusterbot To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}
14:15 JoeJulian It would be odd that they wouldn't be right, unless your bricks were previously part of a 2 brick volume.
14:15 stealthrecon Actually, they were originally part of a 2-brick volume.
14:16 JoeJulian Aha
14:16 JoeJulian So just rebalance and it should fix that.
14:16 JoeJulian Or, minimally, rebalance fix-layout.
14:16 stealthrecon I shut it down and deleted it, then recreated it for four servers instead of two.
14:17 JoeJulian If you didn't wipe the extended attributes (and who would) they still have the dht masks for a 2 brick volume.
14:18 chirino joined #gluster
14:18 JoeJulian To prove that, create a new directory and try your test within that.
14:18 arcolife joined #gluster
14:19 archit_ joined #gluster
14:20 stealthrecon That  worked.  The directory just wrote to all four.
14:22 stealthrecon I am still working to get my head wrapped around gluster, but I really like the concept.  I want our company to use it.  Thank you for the help.
14:22 JoeJulian You're welcome
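For reference, a minimal sketch of the layout check and rebalance discussed above, using the brick path and volume name from the conversation:

    getfattr -m . -d -e hex /data/brick1/gv0           # on each server: inspect trusted.glusterfs.dht
    gluster volume rebalance volume1 fix-layout start  # rewrite directory layouts to cover all four bricks
    gluster volume rebalance volume1 status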
14:26 jobewan joined #gluster
14:31 rnowling joined #gluster
14:31 nbalacha joined #gluster
14:37 rafi joined #gluster
14:38 ivan_rossi joined #gluster
14:53 rafi joined #gluster
14:53 DV__ joined #gluster
14:53 Pupeno joined #gluster
14:56 kotreshhr joined #gluster
15:03 julim joined #gluster
15:04 shyam1 joined #gluster
15:09 wushudoin joined #gluster
15:10 squizzi joined #gluster
15:11 Pupeno joined #gluster
15:18 nathwill joined #gluster
15:19 spalai joined #gluster
15:22 rafi joined #gluster
15:26 jotun left #gluster
15:39 crimson69666 joined #gluster
15:41 crimson69666 Hi all!  Setting up a lab with InfiniBand. Need to know if Gluster can do direct RDMA (without IPoIB).  Using GlusterFS 3.7.11 on CentOS 7.2.1511.
15:45 kotreshhr joined #gluster
15:45 JoeJulian "<kkeithley> gluster uses the RDMA connection manager to make connections. And/or uses the TCP connection to exchange rdma info." ... which doesn't really answer your question...
15:46 crimson69666 Yeah, I was searching/reading the logs!  And it is unclear how the connection manager can "reach" the peer over RDMA directly.
15:46 crimson69666 ... or is it even achievable in my setup!
15:48 crimson69666 Infiniband is up and running, ping works.  Gluster is up too and peer probe with ip works flawlessly.  Want to use RDMA to bypass CPU IRQ!
15:49 kpease joined #gluster
15:50 crimson69666 (I meant IB Ping)
15:53 JoeJulian Sorry, I have no idea. I know that it should work if you have rdma capability and IPoIB configured. No idea if you do not have IPoIB configured.
15:53 JoeJulian In fact, I *think* I remember that you can use RDMA even if your IP network was on a different network, but I could be completely wrong.
15:53 crimson69666 So maybe it could use IPoIB to get infos and use direct IB afterward?
15:53 JoeJulian Right
15:54 JoeJulian That's my understanding.
15:54 JoeJulian There's been a lot of work done since I understood though.
15:54 crimson69666 This is cutting edge, I know.  And unless you're a developer, it's not easy to find info.
15:55 crimson69666 Documentation for cutting-edge features is generally rough; I understand that coding for them is a higher priority!
15:55 kpease joined #gluster
15:56 crimson69666 So, I'll go with IPoIB and will try to see if Gluster uses direct RDMA once connected!
15:56 shaunm joined #gluster
15:59 crimson69666 ... first test will be without IPoIB (peer probe done with IP) and use transport=rdma, just to see what peers will do!
15:59 JoeJulian If you figure it out, can you please blog about it to help out the rest of the community?
16:00 crimson69666 Yeah for sure!  I'm here to get/give help!  I'll report back results on my tests.
16:01 crimson69666 Thanks for your replies.
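For reference, a minimal sketch of an RDMA-transport volume, with hypothetical host and brick names. The transport is chosen at volume-create time and selected again at mount time:

    gluster volume create gvrdma transport rdma ib1:/data/brick1/gvrdma ib2:/data/brick1/gvrdma
    gluster volume start gvrdma
    mount -t glusterfs -o transport=rdma ib1:/gvrdma /mnt/glusterrdma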
16:06 julim joined #gluster
16:08 haomaiwang joined #gluster
16:09 raghug joined #gluster
16:18 skylar joined #gluster
16:21 shyam joined #gluster
16:30 rafi joined #gluster
16:32 spalai joined #gluster
16:34 kotreshhr left #gluster
16:34 julim joined #gluster
16:37 shubhendu joined #gluster
16:45 vshankar joined #gluster
16:47 haomaiwang joined #gluster
16:48 haomaiwang joined #gluster
16:49 haomaiwang joined #gluster
16:49 rwheeler joined #gluster
16:50 haomaiwang joined #gluster
16:50 ivan_rossi left #gluster
16:51 haomaiwang joined #gluster
16:52 haomaiwang joined #gluster
16:53 haomaiwang joined #gluster
16:54 haomaiwang joined #gluster
16:55 haomaiwang joined #gluster
16:56 haomaiwang joined #gluster
16:57 haomaiwang joined #gluster
16:58 haomaiwang joined #gluster
16:59 haomaiwang joined #gluster
17:00 haomaiwang joined #gluster
17:01 haomaiwang joined #gluster
17:02 crimson69666 Result of Gluster brick created without IPoIB with transport RDMA:  [rdma.c:1294:gf_rdma_cm_event_handler] 0-gv1-client-0: cma event RDMA_CM_EVENT_ADDR_ERROR, error -19 (me: peer:)  (RDMA Address Resolution Failed)
17:02 haomaiwang joined #gluster
17:03 haomaiwang joined #gluster
17:03 rafi joined #gluster
17:03 crimson69666 (Message above taken from /var/glusterfs/mnt-glusterrdma.log, where /mnt/glusterrdma is the mount point on the client)
17:04 crimson69666 So, it seems that transport=rdma needs IPoIB
17:04 haomaiwang joined #gluster
17:14 hgowtham joined #gluster
17:22 Pupeno joined #gluster
17:22 jwd joined #gluster
17:23 crimson69666 joined #gluster
17:23 crimson69666 Sorry about having mixed "brick" and "volume" nomenclature
17:31 shubhendu joined #gluster
17:31 rafi joined #gluster
17:32 raghug joined #gluster
17:32 Pupeno joined #gluster
17:35 skoduri joined #gluster
17:35 rafi joined #gluster
17:36 Pupeno_ joined #gluster
17:39 raghug_ joined #gluster
17:43 haomaiwang joined #gluster
17:49 karnan joined #gluster
17:50 rafi joined #gluster
17:52 raghug joined #gluster
17:54 kotreshhr1 joined #gluster
17:56 kotreshhr1 left #gluster
17:58 nishanth joined #gluster
18:01 crimson69666 Test RDMA:  reprobed with IPoIB.  Created volume using transport rdma.  Copying a 128G VM over GlusterFS: CPU usage 50% on AMD Athlon II 605e (low power quad core).  So, it doesn't seem to use RDMA directly.
18:08 julim joined #gluster
18:09 kpease joined #gluster
18:10 crimson69666 Oh, actually maybe not!!!  Top shows: %Cpu(s) 9.7 u, 0.3 sy, 0.0 ni, 67.3 id, 0.0 wa, 0.0 si, 0.0 st !
18:11 crimson69666 http://stackoverflow.com/questions/26004507/what-do-top-cpu-abbreviations-mean
18:11 glusterbot Title: linux - What do top %cpu abbreviations mean? - Stack Overflow (at stackoverflow.com)
18:14 crimson69666 Transfer done from SSD to a GlusterFS volume on 500G 7200 RPM disks at about 100 MBps.
18:17 crimson69666 "sy" stayed avg 10% (sy = system cpu time (or) % CPU time spent in kernel space)
18:18 hackman joined #gluster
18:21 Pupeno joined #gluster
18:21 plarsen joined #gluster
18:28 chirino_m joined #gluster
18:43 vshankar joined #gluster
18:44 dlambrig left #gluster
18:51 level7 joined #gluster
18:56 shubhendu joined #gluster
18:58 alvinstarr joined #gluster
19:13 ic0n joined #gluster
19:14 shubhendu joined #gluster
19:19 alvinstarr I am seeing httpd lockups on the client side but just about all the searches I do come back with very old problem descriptions.
19:21 alvinstarr is disabling io-cacheing the way to make this problem go away?
19:25 dlambrig joined #gluster
19:25 kpease joined #gluster
19:27 kpease joined #gluster
19:32 chirino joined #gluster
19:37 chirino joined #gluster
19:49 level7 joined #gluster
20:12 DV_ joined #gluster
20:23 haomaiwang joined #gluster
20:26 R0ok_ joined #gluster
20:27 jgrimmett joined #gluster
20:27 jgrimmett hello once again all...
20:28 jgrimmett I had rdma working... mounted a new disk with xfs... created a gluster volume and now rdma doesn't want to work... tcp works... but rdma does not
20:29 jgrimmett [2016-05-16 15:16:54.757059] W [MSGID: 103071] [rdma.c:1294:gf_rdma_cm_event_handler] 0-gv0-client-0: cma event RDMA_CM_EVENT_REJECTED, error 8 (me:10.2.15.204:65534 peer:10.2.15.202:24008)
20:29 jgrimmett anyone seen that before?
20:38 post-factum error 8? how informative...
20:38 post-factum looks like windows bsod :(
20:40 jgrimmett its in centos 6.7
20:41 post-factum i mean, you have to find out what "error 8" means first
20:42 jgrimmett yeah....been trying... no luck
21:12 jlp1 joined #gluster
21:15 jgrimmett anyone that might have a clue about:  [2016-05-16 15:16:54.757059] W [MSGID: 103071] [rdma.c:1294:gf_rdma_cm_event_handler] 0-gv0-client-0: cma event RDMA_CM_EVENT_REJECTED, error 8 (me:10.2.15.204:65534 peer:10.2.15.202:24008)
21:15 jgrimmett id really appreciate any direction
21:15 jgrimmett thanks
22:07 DV_ joined #gluster
22:19 stealthrecon joined #gluster
22:41 johnmilton joined #gluster
22:48 crashmag joined #gluster
22:50 plarsen joined #gluster
22:54 jgrimmett anyone online know anything about rdma with gluster?
22:58 johnmilton joined #gluster
23:05 rnowling joined #gluster
23:08 JoeJulian Well, I'm going to guess that error 8 is not "ENOEXEC 8 /* Exec format error */"
23:10 jgrimmett not that i can tell in the logs... seems to be something with gluster's config/rdma
23:10 jgrimmett rdma was tested and is 100% good
23:10 jgrimmett but gluster doesnt want to connect using rdma
23:15 JoeJulian Well, let's see what rdma.c shows... what version is this?
23:18 JoeJulian Ok, so clearly the CM event is rejected by 10.2.15.202:24008. Which distro is this? Is glusterd listening on 24008 on that server?
23:18 jgrimmett tcp connect successfully...but rdma does not
23:18 jgrimmett getting you version info
23:19 JoeJulian tcp uses 24007
23:20 jgrimmett [root@cb-las-p1c1h3 ~]# rpm -qa | grep -i gluster centos-release-gluster37-1.0-4.el6.centos.noarch glusterfs-3.7.11-2.el6.x86_64 glusterfs-fuse-3.7.11-2.el6.x86_64 glusterfs-cli-3.7.11-2.el6.x86_64 glusterfs-rdma-3.7.11-2.el6.x86_64 glusterfs-libs-3.7.11-2.el6.x86_64 glusterfs-client-xlators-3.7.11-2.el6.x86_64 glusterfs-api-3.7.11-2.el6.x86_64 glusterfs-server-3.7.11-2.el6.x86_64 [root@cb-las-p1c1h3 ~]#
23:20 jgrimmett thats the client
23:21 jgrimmett this is the server:
23:21 jgrimmett [root@cb-las-p1c1ps1gfs1 ~]# rpm -qa | grep -i gluster glusterfs-libs-3.7.11-2.el6.x86_64 glusterfs-3.7.11-2.el6.x86_64 glusterfs-api-3.7.11-2.el6.x86_64 glusterfs-server-3.7.11-2.el6.x86_64 glusterfs-client-xlators-3.7.11-2.el6.x86_64 glusterfs-fuse-3.7.11-2.el6.x86_64 glusterfs-cli-3.7.11-2.el6.x86_64 [root@cb-las-p1c1ps1gfs1 ~]#
23:21 JoeJulian I guessed as much.
23:21 JoeJulian You should diff those and see what's missing. ;)
23:22 shyam joined #gluster
23:23 JoeJulian as an aside... rpm -qa 'glusterfs*'
23:23 JoeJulian ... because I'm lazy.
23:23 jgrimmett glusterfs-rdma was missing
23:23 jgrimmett im rebooting the server
23:24 jgrimmett i'll test
23:24 JoeJulian And that's why the server wasn't listening on 24008
23:24 jgrimmett feel kinda stupid
23:24 JoeJulian Meh. You get so many details spinning around in your head after a while, it happens to all of us.
23:25 JoeJulian Only took me 10 years to get to where I can know all this stuff off the top of my head.
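For reference, a minimal sketch of the package comparison JoeJulian suggests, with hypothetical host names:

    ssh client1 "rpm -qa 'glusterfs*' | sort" > client-pkgs.txt
    ssh server1 "rpm -qa 'glusterfs*' | sort" > server-pkgs.txt
    diff client-pkgs.txt server-pkgs.txt
    yum install glusterfs-rdma && service glusterd restart   # on the host missing the rdma transport (CentOS 6)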
23:25 nathwill joined #gluster
23:30 jgrimmett hoping this works
23:30 jgrimmett give me a few more mins...still booting
23:33 dlambrig joined #gluster
23:33 jgrimmett @JoeJulian <virtual high five>
23:34 jgrimmett i disabled iptables... then was able to connect
23:34 jgrimmett i need to add firewall exceptions
23:34 JoeJulian Nice
23:34 JoeJulian @ports
23:34 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up. All ports must be reachable by both servers and clients. Additionally it will listen on 38465-38468/tcp for NFS. NFS also depends on rpcbind/portmap ports 111 and 2049.
23:34 jgrimmett nice
23:35 jgrimmett i'll get those put in...try to find an iptables gluster example
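For reference, a minimal sketch of iptables rules covering the ports glusterbot lists above, assuming the stock CentOS 6 iptables service (widen the brick range to match your brick count):

    iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (24008 for rdma)
    iptables -I INPUT -p tcp --dport 49152:49251 -j ACCEPT   # brick (glusterfsd) ports
    iptables -I INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS
    iptables -I INPUT -p tcp --dport 2049 -j ACCEPT          # NFS
    iptables -I INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    iptables -I INPUT -p udp --dport 111 -j ACCEPT
    service iptables save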
