
IRC log for #gluster, 2014-09-25


All times shown according to UTC.

Time Nick Message
00:16 johnmark rturk|afk: around?
00:17 JoeJulian yo johnmark
00:21 wgao_ joined #gluster
00:21 bala joined #gluster
00:23 lyang0 joined #gluster
00:42 gildub joined #gluster
01:03 jmarley joined #gluster
01:38 semiosis joined #gluster
01:39 Alssi_ joined #gluster
01:45 semiosis joined #gluster
01:45 semiosis_ joined #gluster
01:47 semiosis_ joined #gluster
01:48 semiosis joined #gluster
01:57 harish_ joined #gluster
02:21 tha_dok joined #gluster
02:23 tha_dok Greetings, I was hoping someone might know how to break glusterfs replication (for testing)?
02:23 tha_dok e.g. what particular port(s) should I block?
02:26 huleboer joined #gluster
02:36 suliba joined #gluster
02:36 Lee- i broke mine just by rebooting one of the servers lol
02:37 Lee- and by broke I mean once it came back online it didn't see the other brick
02:46 JoeJulian Lee-: Sounds like you didn't save your iptables after you opened up the ,,(ports)
02:46 glusterbot Lee-: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
02:46 JoeJulian tha_dok: ^
02:48 Lee- I had no iptables rules
02:48 tha_dok thanks, I believe I have all those blocked...
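A minimal sketch of how the ports glusterbot lists above could be blocked to simulate a replication break for testing, assuming iptables on one replica and the post-3.4.0 default brick port range; the exact ports depend on how many bricks the volume has:

    # Block glusterd management traffic and the first few brick ports
    # on one replica (49152 and up on 3.4.0+ installs).
    iptables -I INPUT -p tcp --dport 24007:24008 -j DROP
    iptables -I INPUT -p tcp --dport 49152:49160 -j DROP

    # Remove the test rules afterwards.
    iptables -D INPUT -p tcp --dport 24007:24008 -j DROP
    iptables -D INPUT -p tcp --dport 49152:49160 -j DROP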
02:49 JoeJulian Lee-: ip address change? hostname not resolvable? selinux? ... that's about it.
02:49 JoeJulian I've never had a problem with a client reconnecting after rebooting a server.
02:49 Lee- ip address was a LAN IP that was static. hostname was set in /etc/hosts and I wasn't using selinux, just a fresh install of debian 7 + the gluster server packages.
02:50 JoeJulian I presume, since this is all past tense, you got it working again?
02:50 Lee- the client could connect, but when I would run gluster volume status it would list the other brick as not connected or something along those lines. I'd have to find my pastebins of the output
02:50 JoeJulian Oh, I think I remember that.
02:51 Lee- No, unfortunately I had to ditch gluster and go back to NFS. It happened like 6 hours after I launched gluster into production. I had no problems at all during my tests (in which I tried breaking gluster). Everything was fine during my testing, but about 6 hours after I put it into production I rebooted one of the servers and things went south :\
02:51 tha_dok Hrmm, ok I have almost all of those blocked. I am not blocking 38468, but now when I try to cd to the mount point I get "endpoint not connected" so I can't touch the data...
02:51 Lee- It was about 2 weeks ago. You were in some training.
02:51 harish_ joined #gluster
02:52 tha_dok essentially all I am trying to do is create a "file split-brain" so I can get accustomed to repairing it
02:52 JoeJulian tha_dok: ,,(paste info)
02:52 glusterbot tha_dok: I do not know about 'paste info', but I do know about these similar topics: 'pasteinfo'
02:53 JoeJulian @meh
02:53 glusterbot JoeJulian: I'm not happy about it either
02:53 JoeJulian tha_dok: ,,(pasteinfo)
02:53 glusterbot tha_dok: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
02:54 haomaiwa_ joined #gluster
02:55 tha_dok JoeJulian: http://fpaste.org/136306/13720141/
02:55 glusterbot Title: #136306 Fedora Project Pastebin (at fpaste.org)
02:56 tha_dok purely a test cluster, I just want to purposefully create a file split-brain for testing purposes
02:57 fubada joined #gluster
03:03 PeterA1 joined #gluster
03:05 jiku joined #gluster
03:05 haomaiw__ joined #gluster
03:06 haomaiwang joined #gluster
03:08 haomaiw__ joined #gluster
03:10 haomaiwang joined #gluster
03:10 JoeJulian tha_dok: And one of these is your client?
03:13 tha_dok they're both clients, I don't need to share the data out, but simply keep the data between these two (apollo & zeus) in sync
03:15 tomased joined #gluster
03:16 bharata-rao joined #gluster
03:18 tha_dok regardless though, the current goal is simply to create a split-brain
03:18 JoeJulian Sure, I understand. Can each server ping themselves and the other by hostname?
03:19 tha_dok yes
03:21 JoeJulian hmm, so the question remains, why can't the client connect to its own server?
03:23 tha_dok ok, so I am now blocking everything on one host, so replication has to be broken now...
03:24 tha_dok but volume info has not changed, ahh volume status has...
03:25 haomai___ joined #gluster
03:25 tha_dok what is the preferred method to check the replication links status?
03:27 haomaiwa_ joined #gluster
03:27 JoeJulian gluster volume heal $vol info
03:32 nbalachandran joined #gluster
03:33 tha_dok ahh, thank you..
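As a concrete sketch of the check JoeJulian suggests, assuming a replicated volume named testvol (the name is purely illustrative):

    # List files the self-heal daemon still needs to heal, per brick.
    gluster volume heal testvol info

    # Peer and brick connectivity, for confirming the "broken" side is down.
    gluster peer status
    gluster volume status testvol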
03:39 haomaiw__ joined #gluster
03:50 RameshN joined #gluster
03:55 overclk joined #gluster
04:06 shubhendu joined #gluster
04:12 hagarth joined #gluster
04:23 kanagaraj joined #gluster
04:26 haomaiwa_ joined #gluster
04:31 RameshN joined #gluster
04:36 _pol joined #gluster
04:39 rafi1 joined #gluster
04:40 anoopcs joined #gluster
04:40 karnan joined #gluster
04:43 spandit joined #gluster
04:49 dusmantkp_ joined #gluster
04:53 aravindavk joined #gluster
04:54 smohan joined #gluster
04:55 nishanth joined #gluster
04:58 ramteid joined #gluster
05:01 ndarshan joined #gluster
05:02 fubada joined #gluster
05:05 smohan joined #gluster
05:08 kdhananjay joined #gluster
05:16 plarsen joined #gluster
05:27 hagarth joined #gluster
05:31 kshlm joined #gluster
05:31 kshlm joined #gluster
05:35 frb joined #gluster
05:37 overclk joined #gluster
05:40 deepakcs joined #gluster
05:47 nshaikh joined #gluster
05:48 lezo__ joined #gluster
05:51 ppai joined #gluster
05:52 pkoro joined #gluster
05:53 raghu joined #gluster
05:53 _pol joined #gluster
06:03 _pol joined #gluster
06:06 lalatenduM joined #gluster
06:08 Guest42780 joined #gluster
06:11 Philambdo joined #gluster
06:11 atalur joined #gluster
06:12 spandit joined #gluster
06:12 RaSTar joined #gluster
06:14 kdhananjay joined #gluster
06:20 kumar joined #gluster
06:25 dusmantkp_ joined #gluster
06:26 dusmant joined #gluster
06:30 bala joined #gluster
06:39 jiffin joined #gluster
06:46 ekuric joined #gluster
06:59 saurabh joined #gluster
07:04 Fen1 joined #gluster
07:07 RaSTar joined #gluster
07:15 glusterbot New news from newglusterbugs: [Bug 1079709] Possible error on Gluster documentation (PDF, Introduction to Gluster Architecture, v3.1) <https://bugzilla.redhat.com/show_bug.cgi?id=1079709>
07:16 fubada joined #gluster
07:19 ricky-ti1 joined #gluster
07:21 coredumb Hello folks, i have a weird behaviour with gluster client on EL5
07:21 coredumb when using mount -t glusterfs xxxxx:/vol /mnt
07:21 coredumb it fails
07:22 coredumb but if i do glusterfs -s xxxxxx -N --volfile-id=vol /mnt/
07:22 coredumb it works correctly ....
07:23 deepakcs joined #gluster
07:23 fsimonce joined #gluster
07:24 dmachi joined #gluster
07:24 coredumb https://bpaste.net/show/5febed974bff here's the log i get from the mount command... any idea ?
07:24 glusterbot Title: show at bpaste (at bpaste.net)
07:36 ndevos coredumb: the /sbin/mount.glusterfs does a stat on the root of the volume, and if it does not return inode=1 it errors-out
07:36 coredumb how should i confirm that ?
07:36 ndevos coredumb: when you mount it manually, can you check the inode of the root of the volume with the stat command?
07:37 coredumb ndevos: Device: 20h/32d Inode: 1
07:37 ndevos oh, and /sbin/mount.glusterfs is a shell script, you can also try to run it like: sh -x /sbin/mount.glusterfs xxxxx:/vol /mnt
07:37 ndevos hmm, that looks ok, so it is something else :)
07:38 coredumb yeah seems like it fails on stat
07:38 ndevos coredumb: you can also 'mount -o log-level=DEBUG -t glusterfs ...' to get more verbose logs
07:38 coredumb # stat -c %i /mnt/
07:38 coredumb 1
07:39 ndevos sounds like a timing issue of some kind, does the stat from the script return an error?
07:39 coredumb nope
07:41 coredumb ndevos: right
07:42 coredumb if i add sleep 1 just before the inode check it works
07:42 coredumb -_-
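For reference, the inode check coredumb is working around looks roughly like the fragment below; this is a paraphrased sketch of what /sbin/mount.glusterfs does, not the exact script, and the sleep is the workaround being described:

    # After starting the glusterfs client, the mount helper stats the
    # mount point and expects the root of the volume to be inode 1.
    # sleep 1   # <-- workaround: give the mount a moment to settle
    inode=$(stat -c %i "$mount_point" 2>/dev/null)
    if [ "${inode:-0}" -ne 1 ]; then
        echo "Mount failed. Please check the log file for more details."
        umount "$mount_point" >/dev/null 2>&1
        exit 1
    fi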
07:50 ndevos coredumb: hmm, how strange - and no useful error from stat?
07:52 coredumb nope
07:53 ndevos coredumb: and in the logs when mounting with log-level=DEBUG or log-level=TRACE ?
07:55 coredumb ndevos: haven't tried yet
07:56 eryc joined #gluster
07:56 eryc joined #gluster
07:57 bharata-rao joined #gluster
07:58 Slydder joined #gluster
08:03 glusterbot New news from resolvedglusterbugs: [Bug 831653] Gluster seeding urandom <https://bugzilla.redhat.com/show_bug.cgi?id=831653>
08:03 Slydder hey all.
08:04 Slydder I just noticed in gfs 3.5 that I cannot mount with noatime anymore. has this been removed for a reason? is noatime off by default now? anyone know what's the deal with that?
08:15 glusterbot New news from newglusterbugs: [Bug 859248] Mount fails when the Gluster Server has an IPv4 and IPv6 address <https://bugzilla.redhat.com/show_bug.cgi?id=859248> || [Bug 762766] rename() is not atomic <https://bugzilla.redhat.com/show_bug.cgi?id=762766> || [Bug 763890] checksum computation needs to be changed <https://bugzilla.redhat.com/show_bug.cgi?id=763890>
08:15 coredumb ndevos: nothing more concerning an error with log-level=DEBUG
08:15 ekuric joined #gluster
08:21 hybrid512 joined #gluster
08:21 RameshN joined #gluster
08:23 ndevos coredumb: care to try with log-level=TRACE?
08:32 liquidat joined #gluster
08:33 glusterbot New news from resolvedglusterbugs: [Bug 846737] file /usr/share/doc/xxx is not owned by any package <https://bugzilla.redhat.com/show_bug.cgi?id=846737>
08:45 glusterbot New news from newglusterbugs: [Bug 1146413] Symlink mtime changes when rebalancing <https://bugzilla.redhat.com/show_bug.cgi?id=1146413>
08:46 rjester joined #gluster
09:02 rjester Hi all. I am currently upgrading my gluster environment from 3.0.8 to 3.4.2. The upgrade goes through fine, but I am getting input/output errors when I try to list the contents of a mounted gluster volume. The error is "No gfid present". I have looked through my directory on the gluster server with getfattr and I see no "trusted.gfid" on any file. Is this a new feature, the gfid? And do I need to take extra steps during the upgrade to fix this?
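A hedged sketch of how the gfid xattr can be inspected directly on a brick (the path is an illustrative assumption). trusted.gfid was introduced after the 3.0.x series, so files created under 3.0.8 will not carry it until a newer version assigns one; the exact upgrade steps for a jump that large are worth confirming on the mailing list:

    # Dump all trusted.* xattrs for a file directly on the brick, in hex;
    # a healthy 3.4 brick shows trusted.gfid among them.
    getfattr -m . -d -e hex /export/brick1/path/to/file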
09:03 Philambdo joined #gluster
09:05 jmarley joined #gluster
09:13 johndescs joined #gluster
09:14 johndescs hello all
09:15 glusterbot New news from newglusterbugs: [Bug 1146426] glusterfs-server and the regression tests require the 'killall' command <https://bugzilla.redhat.com/show_bug.cgi?id=1146426>
09:15 vimal joined #gluster
09:15 johndescs semiosis: (or anyone ^^), I'm suffering from both the localhost gluster mount problem and the upstart wait-for-state.conf bug on ubuntu 14.04
09:16 johndescs http://danpisarski.com/blog/2014/07/11/mounting-a-glusterfs-mountpoint-on-bootup-in-ubuntu-14-dot-04/ and https://bugs.launchpad.net/ubuntu/+source/glusterfs/+bug/876648
09:16 glusterbot Title: Mounting a GlusterFS Mountpoint On Bootup in Ubuntu 14.04 - Discreet Cosine Transform (at danpisarski.com)
09:16 johndescs you said we should ask here for any problem so here I am :)
09:19 calum_ joined #gluster
09:20 saurabh joined #gluster
09:20 suliba joined #gluster
09:33 ndarshan joined #gluster
09:35 Slydder johndescs: and what is your problem?
09:37 spandit joined #gluster
09:39 johndescs Slydder: it ends up with the server not coming back up after a reboot; OK, the workarounds are working, but I think it's best for this to be fixed in the official packages…
09:42 Slydder johndescs: it would seem that those are quite a bit older. which version of gfs are you running?
09:43 LebedevRI joined #gluster
09:44 johndescs Slydder: 3.4.2-1ubuntu1, but the problematic files are the same in the PPA
09:45 johndescs (and here : https://github.com/gluster/glusterfs/blob/master/extras/Ubuntu/mounting-glusterfs.conf)
09:45 glusterbot Title: glusterfs/mounting-glusterfs.conf at master · gluster/glusterfs · GitHub (at github.com)
09:45 Slydder hmmmm. I run 3.5 but of course not on a ubu box. debian only. and have no such problems. good thing is those bugs have already been reported.
09:46 milka joined #gluster
09:47 johndescs if I only could put debian everywhere … and yes, but the "localhost" one is marked as fixed
09:48 Slydder it most likely is fixed but somehow not in the ppa and/or version you are using.
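One commonly suggested workaround for this class of boot-time mount hang (an assumption here, not necessarily what the linked post does) is to keep the gluster mount out of the critical boot path and retry it once glusterd is up; hostnames and paths below are illustrative:

    # /etc/fstab -- nobootwait keeps mountall from blocking boot on Ubuntu,
    # _netdev marks the mount as network-dependent.
    server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,nobootwait  0  0

    # /etc/rc.local (or an upstart task started after glusterfs-server):
    # retry the mount once the daemon is available.
    mount /mnt/gluster || true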
09:56 diegows joined #gluster
09:58 fubada joined #gluster
09:59 haomaiwang joined #gluster
10:06 fubada joined #gluster
10:07 haomaiw__ joined #gluster
10:12 _Bryan_ joined #gluster
10:16 fubada joined #gluster
10:20 social joined #gluster
10:24 Slashman joined #gluster
10:45 glusterbot New news from newglusterbugs: [Bug 1146492] mount hangs for rdma type transport if the network is busy. <https://bugzilla.redhat.com/show_bug.cgi?id=1146492>
10:45 kkeithley1 joined #gluster
10:47 dusmant joined #gluster
10:51 Guest42780 joined #gluster
10:55 hagarth joined #gluster
11:02 ppai joined #gluster
11:11 Philambdo1 joined #gluster
11:12 shireesh joined #gluster
11:13 smohan joined #gluster
11:13 DV joined #gluster
11:24 Arrfab joined #gluster
11:27 milka joined #gluster
11:29 shireesh left #gluster
11:31 shireesh joined #gluster
11:31 shireesh left #gluster
11:37 chirino joined #gluster
11:37 rjester joined #gluster
11:40 dusmant joined #gluster
11:45 glusterbot New news from newglusterbugs: [Bug 763746] We need an easy way to alter client configs without breaking DVM <https://bugzilla.redhat.com/show_bug.cgi?id=763746> || [Bug 1146519] Wrong summary in GlusterFs spec file <https://bugzilla.redhat.com/show_bug.cgi?id=1146519>
11:47 hchiramm_call joined #gluster
11:49 B21956 joined #gluster
11:50 B219561 joined #gluster
11:55 julim joined #gluster
11:56 Fen1 joined #gluster
12:01 bennyturns joined #gluster
12:04 Slashman_ joined #gluster
12:07 sputnik13 joined #gluster
12:16 glusterbot New news from newglusterbugs: [Bug 1146523] glusterfs.spec.in — synch minor diffs with fedora dist-git glusterfs.spec <https://bugzilla.redhat.com/show_bug.cgi?id=1146523> || [Bug 1146524] glusterfs.spec.in — synch minor diffs with fedora dist-git glusterfs.spec <https://bugzilla.redhat.com/show_bug.cgi?id=1146524>
12:19 bene2 joined #gluster
12:29 Shakkan joined #gluster
12:29 Shakkan hi
12:29 glusterbot Shakkan: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:35 fubada joined #gluster
12:40 Shakkan I need to implement a file storage server that will receive/serve files ranging from 1MB to 1GB. I am wondering if Gluster is a good candidate?
13:00 Zordrak_ joined #gluster
13:04 sputnik13 joined #gluster
13:04 theron joined #gluster
13:05 theron joined #gluster
13:09 mojibake joined #gluster
13:11 harish_ joined #gluster
13:17 dusmant joined #gluster
13:41 spiekey joined #gluster
13:41 spiekey Hello!
13:41 glusterbot spiekey: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:43 spiekey i have a total bandwidth of 1GBit between two replicated nodes. When it does its self-healing, it only uses 100Mbit and not the full 1GBit of bandwidth which is available
13:43 spiekey is the self healing speed somehow limited?
13:44 bene2 you might not want to use up ALL your network bandwidth to self-heal, otherwise your app. is locked out.  Careful what you wish for ;-)
13:45 diegows joined #gluster
13:46 spiekey bene2: yes, that's true, but i have lots left
13:46 bene2 what is your file size distribution?
13:46 spiekey what speed does the self-heal-damon take?
13:47 msmith_ joined #gluster
13:48 jmarley joined #gluster
13:48 spiekey bene2: http://fpaste.org/136407/52890141/
13:48 glusterbot Title: #136407 Fedora Project Pastebin (at fpaste.org)
13:48 lmickh joined #gluster
13:48 B21956 joined #gluster
13:55 bene2 spiekey, you have a 210-GB file in the mix; 100 Mbit/sec ~= 10 MB/s = 1 GB per 100 sec, so this will take about 21000 sec ~= 6 hours for just that one file.    Sure, it could heal faster.   But why is it healing in the first place?  Does this happen often?  try command "gluster v heal your-volume help" and get some status on self-healing.
13:56 sputnik13 joined #gluster
13:57 spiekey bene2: i just added a nagios check_glusterfs check for monitoring, so i triggered that on purpose.
13:57 spiekey so it copies the full file? not just the missing increments?
14:02 dan4linux joined #gluster
14:05 bene2 yes
14:07 bennyturns joined #gluster
14:16 clutchk joined #gluster
14:17 Fen1 joined #gluster
14:18 Arrfab hi guys, was searching for some tunable parameters for gluster volumes. any good pointer? I see an IO difference when comparing local performance (writing directly to the brick) with writing through gluster to the volume (same client, one of the gluster nodes itself)
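Arrfab's question goes unanswered in the log; for reference, per-volume tuning is done with gluster volume set. The options below exist in the 3.4/3.5 series, but the values are illustrative assumptions, not recommendations, and their effect depends heavily on the workload:

    # Inspect current (non-default) options on a volume.
    gluster volume info myvol

    # Example tunables -- values are placeholders.
    gluster volume set myvol performance.cache-size 256MB
    gluster volume set myvol performance.io-thread-count 32
    gluster volume set myvol performance.write-behind-window-size 1MB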
14:28 B21956 joined #gluster
14:30 xavih joined #gluster
14:41 sprachgenerator joined #gluster
14:49 liquidat joined #gluster
14:50 coredump joined #gluster
14:53 B21956 joined #gluster
14:54 fubada joined #gluster
14:55 deepakcs joined #gluster
14:56 bennyturns joined #gluster
15:00 spiekey did i run into split brain here? http://fpaste.org/136432/11657195/
15:00 glusterbot Title: #136432 Fedora Project Pastebin (at fpaste.org)
15:02 Andreas-IPO joined #gluster
15:04 daMaestro joined #gluster
15:15 siXy left #gluster
15:18 plarsen joined #gluster
15:19 plarsen joined #gluster
15:21 nishanth joined #gluster
15:23 sputnik13 joined #gluster
15:29 jobewan joined #gluster
15:29 dtrainor joined #gluster
15:29 lmickh joined #gluster
15:29 hagarth joined #gluster
15:37 jbrooks joined #gluster
15:41 _pol joined #gluster
15:52 xavih joined #gluster
15:53 rwheeler joined #gluster
15:56 diegows joined #gluster
16:00 pkoro joined #gluster
16:00 lanning joined #gluster
16:03 PeterA joined #gluster
16:03 PeterA how do we deal with healfailed entries?
16:05 nishanth joined #gluster
16:05 spiekey left #gluster
16:09 PeterA anyone can help??
16:12 PeterA 15:58:37.250479] E [afr-self-heal-common.c:1615:afr_sh_common_lookup_cbk] 0-sas03-replicate-0: Conflicting entries for /RdB2C_20140917.dat
16:15 mbukatov joined #gluster
16:21 PeterA http://pastie.org/9594260
16:21 glusterbot Title: #9594260 - Pastie (at pastie.org)
16:21 PeterA seems like i have a file that conflict on both brick
16:22 PeterA i bounced glusterfs-server on both servers and now looks like this:
16:22 PeterA http://pastie.org/9594265
16:22 glusterbot Title: #9594265 - Pastie (at pastie.org)
16:23 ndevos PeterA: healing fails if files or dirs are in a ,,(split-brain)
16:23 glusterbot PeterA: (#1) To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/, or (#2) For additional information, see this older article http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
16:23 PeterA it doesn't show as split-brain
16:23 PeterA but the glustershd keep throwing the conflicting entries
16:23 PeterA on that particular file
16:24 ndevos hmm, I would not know about that, sorry
16:24 PeterA how do we deal with heal-failed normally?
16:25 xavih_ joined #gluster
16:39 hagarth PeterA: are the contents of both files the same?
16:39 PeterA nope
16:39 PeterA i just delete on copy on one of the brick with an older date
16:40 PeterA and restarting the volume now
16:40 hagarth do you happen to know which file has the right content?
16:40 PeterA i mean restarting the service
16:40 PeterA i do not....
16:40 PeterA actually i do :P
16:40 PeterA i checked the md5 on the file
16:41 hagarth if you happen to know, do retain the good copy and move out the stale copy out of the brick
16:41 PeterA ya i just did
16:41 PeterA then i did a volume heal
16:41 PeterA and the client can see the file
16:41 PeterA but the heal-failed entries still exist
16:41 PeterA so i am restarting glusterfs-server now
16:42 hagarth PeterA: the next crawl of gluster self-heal daemon should set it right, crawl happens once every 10 minutes IIRC
16:42 PeterA oh
16:42 PeterA i should have waited
16:44 xavih joined #gluster
16:45 PeterA how do we manually kick off the crawl?
16:49 hagarth gluster volume heal <volname> should do
16:49 PeterA that's what i did....
16:50 PeterA it healed the file after i removed the other copy
16:50 PeterA but the heal info heal-failed showed this http://pastie.org/9594265#11
16:50 glusterbot Title: #9594265 - Pastie (at pastie.org)
16:50 PeterA the file became the gfid
16:50 PeterA and stays there
16:50 PeterA until i bounced the glusterfs-server
16:51 PeterA1 joined #gluster
16:54 PeterA1 oh it seems like the file was healed but the directory was not
16:55 PeterA1 so after i removed the file
16:55 PeterA1 and issue volume heal
16:55 PeterA1 the file was healed and gluster client can see the files
16:55 PeterA1 but the directory show up in the info heal-failed
16:55 PeterA1 wonder if there's any other way to heal those entries without bouncing the service
16:57 hagarth PeterA1: would be a very appropriate question for Pranith, probably you can send an email on gluster-users and based on the response, we can add the procedure to documentation/wiki.
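The manual repair PeterA describes above, reduced to a sketch; the brick path is an assumption (the file and volume names come from his pastes), and this only applies when you are sure which replica holds the good data:

    # 1. Compare the copies on each brick and decide which one is good.
    md5sum /export/brick1/RdB2C_20140917.dat    # on server A
    md5sum /export/brick1/RdB2C_20140917.dat    # on server B

    # 2. On the server holding the stale copy, move it out of the brick.
    #    (For true split-brain cases on 3.4+, the hard link under the
    #    brick's .glusterfs directory may also need removing.)
    mv /export/brick1/RdB2C_20140917.dat /root/stale-RdB2C_20140917.dat

    # 3. Trigger a heal and watch the pending/failed lists.
    gluster volume heal sas03
    gluster volume heal sas03 info
    gluster volume heal sas03 info heal-failed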
17:02 ricky-ticky1 joined #gluster
17:07 fubada joined #gluster
17:24 diegows joined #gluster
17:25 an joined #gluster
17:31 aulait joined #gluster
17:39 PeterA1 ya just sent :)
17:39 PeterA1 awaiting moderator approval
17:44 PeterA1 nvm got an email changed...
17:47 ekuric joined #gluster
17:53 plarsen joined #gluster
18:33 yoavz joined #gluster
18:51 an joined #gluster
18:58 lalatenduM joined #gluster
19:09 chirino joined #gluster
19:12 virusuy joined #gluster
19:14 fubada joined #gluster
19:16 xavih joined #gluster
19:24 dmachi1 joined #gluster
19:35 fattaneh1 joined #gluster
19:57 coredump joined #gluster
20:00 hchiramm joined #gluster
20:20 fattaneh1 left #gluster
20:22 Pupeno joined #gluster
20:39 xavih joined #gluster
20:48 xavih joined #gluster
20:52 fubada joined #gluster
20:57 refrainblue joined #gluster
20:59 refrainblue has anyone gotten glusterfs to successfully mount while running uek?
21:02 jmarley joined #gluster
21:03 pkoro joined #gluster
21:11 adamdrew joined #gluster
21:14 adamdrew trying to set up glusterfs 3.5.2 on centos 7. Getting "Connection failed" when attempting peer probe. glusterd is running on both nodes, and bound to the right IP. Nothing obvious in the logs. Logs and cmd output > http://paste.fedoraproject.org/136574/ - help very much appreciated :)
21:14 glusterbot Title: #136574 Fedora Project Pastebin (at paste.fedoraproject.org)
21:18 semiosis adamdrew: iptables?  selinux?
21:19 semiosis name resolution?
21:23 adamdrew iptables is off, but I'm checking selinux. that could be the problem. name resolution is working.
21:26 Philambdo joined #gluster
21:29 theron_ joined #gluster
21:30 semiosis those are the three most common problems when peering
21:31 adamdrew thanks. it was selinux. I'd run "setenforce 0" as I would have in previous rhel, but that doesn't seem to do the trick on 7
21:31 adamdrew had to edit the config and reboot
21:31 adamdrew peering works now
21:31 adamdrew thanks!
21:31 semiosis great!
21:31 semiosis yw
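A sketch of the SELinux checks adamdrew describes, for CentOS 7; setenforce 0 only switches to permissive at runtime, while the persistent setting lives in /etc/selinux/config (he reports needing the config change plus a reboot):

    # Current mode and runtime switch to permissive.
    getenforce
    setenforce 0

    # Persistent change for subsequent boots.
    sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

    # Worth checking firewalld too; on CentOS 7 it can be active even when
    # the iptables rules look empty.
    systemctl status firewalld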
21:34 xavih joined #gluster
21:40 XpineX_ joined #gluster
21:46 m0zes joined #gluster
21:53 cyberbootje joined #gluster
22:09 adamdrew OK, got another noob question. I can't access my gluster volume over NFS. gluster volume status shows Port N/A and Online N for NFS Server on both nodes. Attempting to mount from a client results in a mount error. I am specifying tcp as the proto and vers 3.
22:09 adamdrew Is there something I need to do to enable NFS?
22:15 cyberbootje joined #gluster
22:25 msmith_ joined #gluster
22:30 zerick joined #gluster
22:55 sprachgenerator joined #gluster
23:08 cyberbootje joined #gluster
23:09 fubada joined #gluster
23:20 gildub joined #gluster
23:28 msmith_ joined #gluster
23:31 semiosis adamdrew: ,,(nfs)
23:31 glusterbot adamdrew: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
23:32 adamdrew rpcbind is probably it. let me bring my lab back up and try that out.
23:36 semiosis afk for the day.  good luck
23:39 cyberbootje joined #gluster
23:39 adamdrew thanks!
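Putting glusterbot's NFS notes into a concrete sketch; the hostname and volume name are assumptions, and this uses Gluster's built-in NFSv3 server rather than the kernel NFS server:

    # On the gluster servers: rpcbind must be running and the kernel NFS
    # server must not be claiming the NFS ports.
    systemctl start rpcbind
    systemctl stop nfs-server
    systemctl disable nfs-server

    # On the client: mount over NFSv3/TCP.
    mount -t nfs -o vers=3,proto=tcp,mountproto=tcp server1:/myvol /mnt/gluster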
23:49 sprachgenerator joined #gluster
23:55 dtrainor joined #gluster
23:57 plarsen joined #gluster
