IRC log for #gluster, 2014-06-09

All times shown according to UTC.

Time Nick Message
00:22 theron joined #gluster
00:42 jbd1 joined #gluster
00:57 diegows joined #gluster
01:04 koodough joined #gluster
01:04 hflai joined #gluster
01:30 harish joined #gluster
01:54 sjm joined #gluster
01:57 theron joined #gluster
02:02 jcsp1 joined #gluster
02:13 elico joined #gluster
02:57 bharata-rao joined #gluster
03:10 Guest4610 joined #gluster
03:13 dusmant joined #gluster
03:39 kshlm joined #gluster
03:43 itisravi joined #gluster
03:49 DV_ joined #gluster
03:52 kumar joined #gluster
03:53 RameshN joined #gluster
03:57 davinder6 joined #gluster
03:58 kanagaraj joined #gluster
03:59 RameshN joined #gluster
04:09 ppai joined #gluster
04:19 haomaiwa_ joined #gluster
04:24 haomaiwang joined #gluster
04:24 vpshastry joined #gluster
04:25 ndarshan joined #gluster
04:33 haomaiwa_ joined #gluster
04:38 Matthaeus joined #gluster
04:38 kdhananjay joined #gluster
04:47 davinder7 joined #gluster
04:49 MacWinner joined #gluster
04:53 psharma joined #gluster
05:07 saurabh joined #gluster
05:08 kaushal_ joined #gluster
05:09 aravindavk joined #gluster
05:12 hagarth joined #gluster
05:13 Matthaeus joined #gluster
05:13 ngoswami joined #gluster
05:14 nbalachandran joined #gluster
05:23 dusmant joined #gluster
05:27 rastar joined #gluster
05:27 hagarth joined #gluster
05:27 kanagaraj joined #gluster
05:29 hchiramm_ joined #gluster
05:38 kaushal_ joined #gluster
05:45 kaushal_ joined #gluster
05:50 dusmant joined #gluster
05:52 aravindavk joined #gluster
06:01 mbukatov joined #gluster
06:05 deepakcs joined #gluster
06:07 XpineX joined #gluster
06:09 raghu joined #gluster
06:13 lalatenduM joined #gluster
06:19 kaushal_ joined #gluster
06:24 vimal joined #gluster
06:26 aravindavk joined #gluster
06:26 kshlm joined #gluster
06:26 kshlm joined #gluster
06:28 dusmant joined #gluster
06:29 ricky-ti1 joined #gluster
06:32 LebedevRI joined #gluster
06:36 velladecin joined #gluster
06:43 ekuric joined #gluster
06:50 nshaikh joined #gluster
06:59 eseyman joined #gluster
07:08 ctria joined #gluster
07:10 Matthaeus joined #gluster
07:16 hagarth joined #gluster
07:17 kshlm joined #gluster
07:29 kshlm joined #gluster
07:33 dusmant joined #gluster
07:41 eseyman joined #gluster
07:42 Pupeno joined #gluster
07:45 fsimonce joined #gluster
07:45 ubungu joined #gluster
07:46 ubungu hello everybody
07:49 keytab joined #gluster
07:49 DV_ joined #gluster
07:52 mbukatov joined #gluster
08:05 GabrieleV joined #gluster
08:07 gmcwhistler joined #gluster
08:08 Norky joined #gluster
08:08 delhage joined #gluster
08:09 hflai joined #gluster
08:13 stickyboy joined #gluster
08:27 jag3773 joined #gluster
08:28 ktosiek joined #gluster
08:32 radez_g0n3 joined #gluster
08:34 dusmant joined #gluster
08:34 spandit joined #gluster
08:39 nbalachandran joined #gluster
08:41 vikumar joined #gluster
08:46 ricky-ti1 joined #gluster
08:50 ramteid joined #gluster
08:52 harish joined #gluster
08:54 ProT-0-TypE joined #gluster
08:56 calum_ joined #gluster
09:11 jcsp joined #gluster
09:14 hagarth joined #gluster
09:15 juhaj joined #gluster
09:15 rwheeler joined #gluster
09:17 fsimonce joined #gluster
09:21 kdhananjay joined #gluster
09:23 rastar joined #gluster
09:24 kshlm joined #gluster
09:29 kaushal_ joined #gluster
09:32 kdhananjay joined #gluster
09:33 davinder8 joined #gluster
09:38 hagarth1 joined #gluster
09:46 Pupeno joined #gluster
09:49 ctria joined #gluster
09:51 bharata-rao joined #gluster
09:53 kshlm joined #gluster
09:55 meghanam joined #gluster
09:55 meghanam_ joined #gluster
09:57 Pupeno_ joined #gluster
10:03 kshlm joined #gluster
10:03 dusmant joined #gluster
10:08 kaushal_ joined #gluster
10:27 cyberbootje anyone tried glusterFS with ZFS?
10:30 edward1 joined #gluster
10:41 haomaiwa_ joined #gluster
10:45 haomai___ joined #gluster
10:53 nshaikh joined #gluster
10:57 harish joined #gluster
10:57 shyam joined #gluster
11:01 ngoswami joined #gluster
11:17 bfoster joined #gluster
11:17 pdrakeweb joined #gluster
11:17 ppai joined #gluster
11:20 diegows joined #gluster
11:22 morse joined #gluster
11:26 haomaiwa_ joined #gluster
11:31 [o__o] joined #gluster
11:46 dusmant joined #gluster
12:03 ramteid joined #gluster
12:04 B21956 joined #gluster
12:07 ppai joined #gluster
12:15 glusterbot New news from newglusterbugs: [Bug 1094815] [FEAT]: User Serviceable Snapshot <https://bugzilla.redhat.com/show_bug.cgi?id=1094815>
12:20 primechuck joined #gluster
12:30 ninkotech joined #gluster
12:30 ninkotech_ joined #gluster
12:34 hybrid512 joined #gluster
12:35 zero_ark joined #gluster
12:39 theron joined #gluster
12:40 chirino joined #gluster
12:40 yosafbridge joined #gluster
12:46 theron_ joined #gluster
12:52 sroy_ joined #gluster
12:52 firemanxbr joined #gluster
12:53 bennyturns joined #gluster
12:53 sjm joined #gluster
13:00 brad_mssw joined #gluster
13:06 ctria joined #gluster
13:06 atrius joined #gluster
13:10 deeville joined #gluster
13:11 japuzzo joined #gluster
13:15 glusterbot New news from newglusterbugs: [Bug 1101647] gluster volume heal volname statistics heal-count not giving desired output. <https://bugzilla.redhat.com/show_bug.cgi?id=1101647>
13:15 sulky joined #gluster
13:20 plarsen joined #gluster
13:25 Guest4610 joined #gluster
13:26 bharata-rao joined #gluster
13:52 deeville is it possible to use the kernel nfs / nfsd alongside glusterfs? The reason why I ask is I want to use one of the gluster server nodes to serve iSCSI-mounted non-gluster volumes of a SAN
13:52 deeville or am I asking for trouble? :)
13:54 bennyturns joined #gluster
13:55 bene2 joined #gluster
14:01 brad_mssw is there ever an issue using 'localhost' as the address to connect to a gluster cluster when quorum is _enabled_ via fuse or libgfapi?
14:02 brad_mssw vs using a virtual ip address that floats with something like keepalived
14:06 daMaestro joined #gluster
14:07 davinder8 joined #gluster
14:12 tdasilva joined #gluster
14:13 jag3773 joined #gluster
14:14 dbruhn joined #gluster
14:17 wushudoin joined #gluster
14:20 saurabh joined #gluster
14:22 jskinner_ joined #gluster
14:23 coredump joined #gluster
14:29 dbruhn_ joined #gluster
14:31 rwheeler joined #gluster
14:35 shyam joined #gluster
14:35 kdhananjay joined #gluster
14:37 jdarcy joined #gluster
14:37 jdarcy join #gluster-meeting
14:37 jdarcy Oops.
14:40 plarsen joined #gluster
14:45 jag3773 joined #gluster
14:48 gmcwhist_ joined #gluster
14:49 lpabon joined #gluster
14:49 lmickh joined #gluster
14:50 jbrooks joined #gluster
14:50 jmarley joined #gluster
14:53 haomaiwa_ joined #gluster
14:56 bennyturns joined #gluster
15:15 jbd1 joined #gluster
15:16 jag3773 joined #gluster
15:35 sputnik13 joined #gluster
15:50 _dist joined #gluster
15:56 Pupeno joined #gluster
16:00 MacWinner joined #gluster
16:02 vpshastry joined #gluster
16:08 mortuar joined #gluster
16:18 bennyturns joined #gluster
16:20 Mo_ joined #gluster
16:33 zaitcev joined #gluster
16:34 jobewan joined #gluster
16:40 hagarth joined #gluster
16:54 MacWinner joined #gluster
16:59 SFLimey joined #gluster
17:02 zero_ark joined #gluster
17:03 kanagaraj joined #gluster
17:03 Matthaeus joined #gluster
17:03 John_HPC joined #gluster
17:06 rwheeler joined #gluster
17:11 sputnik13 joined #gluster
17:26 sputnik13 joined #gluster
17:27 sputnik13 joined #gluster
17:39 cfeller joined #gluster
17:40 chirino_m joined #gluster
17:43 d-fence joined #gluster
17:43 zero_ark joined #gluster
17:43 Matthaeus joined #gluster
17:57 JoeJulian deeville: it should be "possible" but I haven't seen anybody figure out how to do it successfully.
17:58 JoeJulian brad_mssw: Feel free to use localhost when mounting the volume on a server. If the client is not part of the trusted peer group and running glusterd, then you can't do that of course.
18:07 deeville JoeJulian, thanks for the reply. It's probably similar to running 2 instances of the nfs server daemon on different ports, I'm assuming, which I've never done. Right now I'm testing whether, if I create a new folder where the gluster volumes are, I can mount that new folder by NFS…probably futile, but at least I'll test the Gluster mount via NFS
18:17 John_HPC My glusterfsd isn't starting on reboot; it's failing on this line: [ $GLUSTERFSD_CONFIG -a -f $GLUSTERFSD_CONFIG ] || exit 6. What sets GLUSTERFSD_CONFIG?
18:23 JoeJulian John_HPC: thats just for legacy volumes.
18:23 JoeJulian pre 3.1
18:27 John_HPC hmm
18:27 John_HPC strange
18:27 John_HPC so why are my mount points not starting automatically
18:27 JoeJulian A more general answer to the actual question you asked: environment variables for init/systemd services are set in their correspondingly named file under /etc/sysconfig
18:28 JoeJulian Did you specify _netdev in the fstab mount options?
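A minimal sketch of the fstab entry being discussed, assuming a volume named glustervol01 mounted from the local server at /mnt/glustervol01 (both names are placeholders):

    # /etc/fstab -- FUSE mount of the local gluster volume; _netdev defers the
    # mount until the network is up so it does not race glusterd at boot
    localhost:/glustervol01  /mnt/glustervol01  glusterfs  defaults,_netdev  0 0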
18:28 kmai007 joined #gluster
18:29 John_HPC Yes, but the mount points aren't even available.
18:29 John_HPC root      4661  0.1  0.1 133724 14176 ?        Ssl  14:26   0:00 /usr/sbin/glusterd --pid-file=/var/run/glusterd.pid
18:29 John_HPC that is the only thing running, I don't have glusterfsd running at all.
18:30 JoeJulian Just for reference: ,,(glossary)
18:30 John_HPC I can start it manually with "gluster volume start glustervol01"
18:30 glusterbot reference: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
18:30 glusterbot A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
18:30 JoeJulian @processes
18:30 glusterbot JoeJulian: The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal).
18:30 JoeJulian So what's missing are your bricks.
18:30 John_HPC Correct
18:30 JoeJulian ~pasteinfo | John_HPC
18:30 glusterbot John_HPC: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
18:31 John_HPC http://fpaste.org/108437/33870814/
18:31 glusterbot Title: #108437 Fedora Project Pastebin (at fpaste.org)
18:32 JoeJulian John_HPC: start your volume, ie. gluster volume start glustervol01
18:32 John_HPC volume start: glustervol01: success
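A quick way to confirm the bricks actually came up after the start, sketched here with the same volume name; the status output should show one online brick daemon per brick:

    gluster volume status glustervol01   # each brick should show Online: Y with a port
    ps ax | grep glusterfsd              # one brick export daemon per brick on this server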
18:33 _dist I've got two glusterfs servers both in a replicate volume. I want to add a third, I've heard stories about performance issues when doing this, people say to use rsync etc instead first, any truth to that?
18:34 SFLimey joined #gluster
18:34 kmai007 _dist: yo man, you ever use use-readdirp=no ?
18:34 JoeJulian _dist: I don't see why it would matter. You still have the same IO across the same channels.
18:34 _dist JoeJulian: that's what I thought, no different than a heal from my perspective, except I have two read sources instead of 1
18:34 JoeJulian _dist: I've never actually tested that though.
18:34 _dist kmai007: no, never heard of that, never used it (yet)
18:35 JoeJulian kmai007: Why?
18:35 kmai007 https://bugzilla.redhat.com/show_bug.cgi?id=1041109 i've been seeing stale file handle errors
18:35 glusterbot Bug 1041109: urgent, unspecified, ---, csaba, NEW , structure needs cleaning
18:35 JoeJulian Ah
18:35 kmai007 this appears to address the logging of the errors
18:36 kmai007 but i'm not sure if its corrected in 3.4.4 so i'd have to undo everything to adjust for it...
18:36 kmai007 i'm on 3.4.3 now
18:36 kmai007 sorry 3.4.2-1
18:37 _dist I'm 3.4.2 01/29/2014 from debian. Never seen that problem
18:38 JoeJulian Ah, so it's disabled to avoid a kernel bug. If your kernel no longer has that bug then you should be able to use readdirp again for faster directory access.
18:39 JoeJulian It would be nice if Susant had mentioned which kernel release had the fix.
18:39 kmai007 exactly
18:40 kmai007 i didn't see what version, but i'm using rhel
18:40 kmai007 2.6.32-431.3.1.el6.x86_64
18:40 kmai007 and i'm not sure its addressed there b/c it didn't stop logging until i mounted the fuse mount with use-readdirp=no
18:40 kmai007 so my guess is no
18:41 kmai007 i think John the ticket originator expressed the same experience
18:41 _dist when does readdirplus enhance performance, and by how much (if it was working correctly)?
18:41 JoeJulian Did you check dmesg on your servers to make sure it's not just passing an xfs error through to your client?
18:41 deeville is the gluster nfs server enabled by default? or do I have to enable it for every volume? I can't seem to mount via nfs, I've opened 2049, 111 (TCP/UDP), 38456:38467 in iptables
18:42 deeville I've tried using mountproto nfs vers 3
18:42 brad_mssw I've got 3 gluster nodes, but only want replica 2 ... does that mean I need 2 bricks on each node?  if so, I'm guessing order matters when specifying the bricks in a volume?
18:42 JoeJulian _dist: whenever you access a directory listing that's more than some buffer size (I don't know what size that is though).
18:42 JoeJulian ~nfs | deeville
18:42 glusterbot deeville: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
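A minimal example of such a client-side mount, assuming a server named server1 exporting a volume named gluster-volume0 (both placeholders); gluster's built-in NFS server only speaks NFSv3 over TCP:

    mount -t nfs -o vers=3,tcp server1:/gluster-volume0 /mnt/gluster-volume0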
18:43 _dist JoeJulian: so directories with large amounts of files. We don't have performance issues there, but faster is always better.
18:43 JoeJulian +1
18:45 kmai007 JoeJulian: no dmesg errors reported on gluster nodes/ clients
18:46 kmai007 deeville: you should mount with -vvv verboseness
18:48 glusterbot New news from newglusterbugs: [Bug 1041109] structure needs cleaning <https://bugzilla.redhat.com/show_bug.cgi?id=1041109>
18:49 edward1 joined #gluster
18:50 Matthaeus joined #gluster
18:51 deeville kmai007, I get this: mount.nfs: prog 100003, trying vers=3, prot=6
18:51 deeville mount.nfs: portmap query retrying: RPC: Program not registered
18:51 deeville mount.nfs: prog 100003, trying vers=3, prot=17
18:51 deeville mount.nfs: portmap query failed: RPC: Program not registered
18:51 deeville mount.nfs: requested NFS version or transport protocol is not supported
18:51 glusterbot deeville: make sure your volume is started. If you changed nfs.disable, restarting your volume is known to work.
18:52 deeville is nfs.disable=on by default?
18:54 semiosis deeville: it's off by default
18:55 deeville semiosis, there doesn't seem to be something like glusterfs volume list/show <option>. Is there a file that show all the options set?
18:55 semiosis @options
18:55 glusterbot semiosis: see the old 3.2 options page here http://goo.gl/dPFAf or run 'gluster volume set help' on glusterfs 3.3 and newer
18:56 semiosis hmm
18:56 semiosis that could be updated
18:56 semiosis @undocumented options
18:56 glusterbot semiosis: Undocumented options for 3.4: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
18:56 semiosis @forget options
18:56 glusterbot semiosis: The operation succeeded.
18:57 semiosis @learn options as See config options and their defaults with 'gluster volume set help' and see also this page about undocumented options: http://goo.gl/mIAe4E
18:57 glusterbot semiosis: The operation succeeded.
18:57 semiosis @options
18:57 glusterbot semiosis: See config options and their defaults with 'gluster volume set help' and see also this page about undocumented options: http://goo.gl/mIAe4E
18:58 kmai007 so on the client if you type "showmount -e <gluster_node>" do you see any exported volume names?
19:01 brad_mssw I've got 3 gluster nodes, but only want replica 2 ... does that mean I need 2 bricks on each node?  does order matter for proper distribution to ensure the replica isn't on the same machine?  meaning I should do   n1/brk1 n2/brk1 n3/brk1 n1/brk2 n2/brk2 n3/brk2 ... rather than  n1/brk1 n1/brk2 n2/brk1 n2/brk2 n3/brk1 n3/brk2 ?
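For reference, bricks are grouped into replica sets in the order they are listed, two at a time for replica 2, so the ordering does matter. A sketch of a chained layout over three nodes, with hypothetical hostnames and brick paths, where every pair lands on two different machines:

    # replica pairs: (n1:b1, n2:b1)  (n2:b2, n3:b1)  (n3:b2, n1:b2)
    gluster volume create myvol replica 2 \
        n1:/bricks/b1 n2:/bricks/b1 \
        n2:/bricks/b2 n3:/bricks/b1 \
        n3:/bricks/b2 n1:/bricks/b2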
19:03 deeville kmai007, one sec, just rebooted the gluster nodes, can't seem to get glusterfsd to start
19:04 deeville if I change -t nfs to -t glusterfs the mounting works
19:04 deeville I'll try show mount in a sec
19:04 kmai007 ok, typically i've found in order to get glusterfsd started again, i've 'service glusterd restart' will bring it all back
19:06 John_HPC JoeJulian: I am not sure if this is related to my bricks not automatically starting. http://fpaste.org/108446/02340465/    having a few errors: "
19:06 glusterbot Title: #108446 Fedora Project Pastebin (at fpaste.org)
19:06 John_HPC http://fpaste.org/108446/02340465/
19:06 John_HPC cra[
19:06 John_HPC crap
19:06 John_HPC can't copy/paste today
19:06 John_HPC " Server and Client lk-version numbers are not same, reopening the fds" and "lookup failed on index dir on glustervol01-client-12 - (Stale NFS file handle)"
19:06 glusterbot John_HPC: This is normal behavior and can safely be ignored.
19:07 John_HPC ok
19:07 kmai007 deeville: maybe also on the client you can go through these steps: Solved by: /etc/init.d/rpcbind start;  /etc/init.d/nfslock start;  chkconfig rpcbind on;
19:07 kmai007 but maybe a reboot will fix your problem
19:13 calum_ joined #gluster
19:23 chirino joined #gluster
19:28 deeville kmai007, showmount -e <gluster_node> actually shows the gluster volume
19:29 deeville let me try to mount again
19:30 kmai007 deeville: that is a good sign
19:31 JoeJulian John_HPC: your bricks didn't start automatically because your volume wasn't started.
19:31 JoeJulian Hence my instructions to start your volume...
19:31 deeville kmai007, glusterd is on CentOS, but I'm testing the nfs mount on Ubuntu, those client Nfs commands are bit different, e.g. rpcbind, nfslock, let me try on a centOS client
19:32 John_HPC Correct. Trying to figure out why my volume didn't start. I did shut it down before rebooting. Does it remember the state it was in?
19:32 deeville kmai007, but yah still can't on the Ubuntu client
19:32 JoeJulian It does
19:32 John_HPC holy crap
19:32 John_HPC lol
19:32 John_HPC I missed that then. That makes sense
19:32 kmai007 deeville: http://fpaste.org/ your cmd and output please with the -vvv
19:32 John_HPC Thanks Joe
19:32 glusterbot Title: New paste Fedora Project Pastebin (at fpaste.org)
19:34 deeville kmai007, thanks: http://ur1.ca/hhn8d
19:34 glusterbot Title: #108454 Fedora Project Pastebin (at ur1.ca)
19:34 JoeJulian You're welcome John_HPC
19:35 kmai007 mount.nfs: portmap query failed: RPC: Remote system error - No route to host
19:35 kmai007 i think you're missing some additional ports
19:35 kmai007 @ports
19:35 glusterbot kmai007: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
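A rough iptables sketch covering the ports glusterbot lists for a 3.4 server; the brick port range and rule placement are assumptions, so widen the 49152 range to match the number of bricks on the host:

    iptables -A INPUT -p tcp -m multiport --dports 111,2049,24007,24008 -j ACCEPT
    iptables -A INPUT -p udp --dport 111 -j ACCEPT             # portmapper also listens on UDP
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT     # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT     # one port per brick on 3.4+
    service iptables save                                      # EL-style persistence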
19:37 kmai007 deeville: i'm not good with iptables, but you may find something there
19:37 kmai007 are you able to mount the volume as NFS from another client?
19:39 deeville agh upon reboot 2049 closed
19:39 deeville ok now that it's open..the error is slightly different
19:39 deeville http://ur1.ca/hhn9l
19:39 glusterbot Title: #108457 Fedora Project Pastebin (at ur1.ca)
19:40 kmai007 this is mount.nfs: mount(2): Input/output error, doesn't help
19:41 _dist so just on a side note, it took one brick of an 833G replicate volume of VM data about 4 days to heal after a 10 minute outage. Nothing (that I can see) in the logs, I'm wondering if anyone can guess why? Or if that's a normal amount of time?
19:42 kmai007 deeville: on your client did you try  to restart your rpcbind ?
19:42 kmai007 or portmapper
19:43 deeville kmai007, i just did and still didn't work in ubuntu
19:43 deeville going to try now in CentOS
19:43 kmai007 what if you just turn off iptables as your last resort
19:44 deeville kmai007, just tried, same error
19:44 kmai007 fpaste rpcinfo -p
19:45 deeville kmai007, on the gluster node? or client?
19:45 kmai007 client
19:45 deeville kmai007, same error on CentOS actually
19:46 kmai007 on the storage server, do you have a a gluster nfs pid ?
19:47 deeville kmai007, rpcinfo -p http://ur1.ca/hhnb4
19:47 glusterbot Title: #108461 Fedora Project Pastebin (at ur1.ca)
19:49 deeville on the storage server, i ran as root 'ps aux | grep gluster' and there is one….http://ur1.ca/hhnbr
19:49 glusterbot Title: #108462 Fedora Project Pastebin (at ur1.ca)
19:49 kmai007 in that output you should tail that nfs log
19:50 deeville kmai007, smart, let me see
19:50 kmai007 and try to mount it on the client and see what clues are revealed
19:56 chirino_m joined #gluster
19:57 deeville kmai007, couple of errors, i think when I restarted glusterd/glusterfsd I caused a split-brain which is preventing nfs from mounting…but before that there's hmmmm…it's complaining about a split-brain on "/"
19:58 kmai007 can someone tell me why gNFS mount appears quicker then the gluster.fuse mount ?
19:58 kmai007 i don't recall restarting glusterd causing a split-brain
19:58 kmai007 @split-brain
19:58 glusterbot kmai007: (#1) To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/, or (#2) For additional information, see this older article http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
19:59 deeville kmai007, I don't understand a split-brain on "/"
19:59 deeville usually it's a file which I can manually delete
19:59 kmai007 the gfid is not the same across your storage at /
19:59 kmai007 hopefully glusterbot procedure can help you fix it
20:00 kmai007 i'm speaking out loud, but is the reason gNFS appears faster to the client the local caching?  Please correct me if i'm wrong....
20:01 JoeJulian yep
20:03 JoeJulian deeville: check to make sure your bricks are all actually mounted.
20:06 deeville JoeJulian, they are mounted on the storage servers and on clients that use -t glusterfs
20:06 JoeJulian deeville: You mounted the bricks on both?!?! That's odd.
20:06 * semiosis smells a SAN
20:07 deeville JoeJulian, but the split-brain is still there: http://ur1.ca/hhnfm
20:07 glusterbot Title: #108470 Fedora Project Pastebin (at ur1.ca)
20:08 JoeJulian typically I just delete trusted.afr.* attributes from the brick roots to cure that.
20:09 deeville JoeJulian, the storage servers are running glusterd, the brick is at /store0/gluster-brick0, and I mounted use the glusterfs-client on /mnt/gluster-volume0
20:09 deeville JoeJulian, or is that not best-practice?
20:10 JoeJulian That's right. You just said you mounted the brick on the servers and the client.
20:11 _dist JoeJulian: One last question about the replicate. If I do an add-brick, is there a way to cancel it if things go nuts? Maybe just stop the glusterfs-server on the new brick?
20:12 JoeJulian I haven't checked how upstart handles that, but I would expect it to just stop glusterd. You would have to kill the brick ,,(process).
20:12 glusterbot The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal).
20:12 _dist ok, I don't think there'll be issues, just being careful. Thanks!
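For reference, a sketch of the add-brick being discussed, growing a two-way replica to three-way; the hostname, volume name and brick path are placeholders, and the new brick's glusterfsd on server3 is the process to kill if you decide to back out:

    gluster volume add-brick myvol replica 3 server3:/bricks/b1   # raise replica count, add third copy
    gluster volume heal myvol full                                # push existing data onto the new brick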
20:13 kmai007 _dist: when u go through with this, please hook me up with the process doc. LOL
20:13 deeville JoeJulian, hmmm ok..I don't get it lol. Is that ok? Just to clear it up..Ok, I have 2 gluster nodes that replicate gluster-volume0/gluster-brick0, on the same gluster nodes I mounted gluster-volume0 via mount -t glusterfs……..I have "client" machines that mount gluster-volume0 using the glusterfs-client
20:13 andreask joined #gluster
20:13 kmai007 just in case i have to go down that road
20:13 mortuar joined #gluster
20:13 JoeJulian ~glossary | deeville
20:13 glusterbot deeville: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
20:14 deeville JoeJulian, thanks :), in that case, the servers are also clients
20:14 JoeJulian That's not uncommon. :D
20:15 kmai007 so you're doing a local mount via NFS protocol
20:15 kmai007 of the gluster vol.
20:15 kmai007 seems easy
20:15 deeville JoeJulian, haha yah i was doing that when my servers were also clients and also KVM hosts :)
20:15 deeville kmai007, nope..it's through the glusterfs protocol
20:15 JoeJulian Check the ,,(extended attributes) of the brick roots. If there are trusted.afr.* attributes, remove them with setfattr -x trusted.afr.blah
20:15 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
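A sketch of that cleanup on the brick root deeville mentioned earlier (/store0/gluster-brick0); the exact trusted.afr.* attribute names differ per volume, so list them first rather than trusting the ones shown here:

    # on each server, inspect the brick root's extended attributes
    getfattr -m . -d -e hex /store0/gluster-brick0
    # remove whichever trusted.afr.* pending attributes show up (names below are examples)
    setfattr -x trusted.afr.gluster-volume0-client-0 /store0/gluster-brick0
    setfattr -x trusted.afr.gluster-volume0-client-1 /store0/gluster-brick0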
20:15 deeville I'm testing the NFS protocol from other clients
20:15 kmai007 ok i understand now
20:16 deeville JoeJulian, I'll do that..it's just scary that it's on "/"
20:17 JoeJulian No. / is the brick root. In your case /store0/gluster-brick0
20:17 kmai007 deeville: i think once you do that, go to a client and stat the /gluster.mount
20:17 kmai007 i think that will start the HEAL
20:17 JoeJulian Actually, it won't even need healed after that.
20:17 kmai007 cool beans JoeJulian
20:18 JoeJulian Be aware, those log entries will not go away. There just won't be any new entries in heal info split-brain
20:18 bchilds joined #gluster
20:18 kmai007 hence the only way to get 'rid' of the info split-brain output is to restart glusterd
20:19 bchilds who maintains the gluster build server?
20:19 JoeJulian gluster-infr
20:19 JoeJulian gluster-infra
20:19 bchilds i’m wanting to sign a subproject during the build and want to know if i need my own key or should share
20:19 kmai007 JoeJulian: ok so i wrote in the newsletter about when there is a new vol file, when do clients become aware of it?  i've observed where some clients don't react until 15mins later
20:19 JoeJulian #gluster-infra
20:20 JoeJulian kmai007: through "gluster volume set" it should be immediate. If you modify the vol file directly, you'll need to HUP the client.
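A small sketch of the two cases JoeJulian describes, with the volume name and option as placeholders:

    # CLI change: the updated volfile is pushed to connected clients right away
    gluster volume set myvol performance.cache-size 256MB
    # hand-edited volfile: make the FUSE client re-read it (pidof may also match other
    # glusterfs processes such as the NFS server or self-heal daemon, so pick the right pid)
    kill -HUP $(pidof glusterfs)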
20:21 bchilds #gluster-infra on freenode? it’s empty
20:21 JoeJulian gah
20:21 kmai007 i thought so too, but i've seen it where its not the case, not sure b/c of the congestion of management/file network on the same IP?
20:21 JoeJulian then it must just be the gluster-infra mailing list...
20:21 JoeJulian gluster-infra@gluster.org
20:22 kmai007 i was thinking if i make a CLI change, for 100% accountability i'd try to unmount/remount my clients to force the vol file
20:22 JoeJulian kmai007: You certainly /can/ but the feature was designed in so you wouldn't have to.
20:23 kmai007 JoeJulian: http://review.gluster.org/#/c/7531/ how will i know which version this is released in?
20:23 glusterbot Title: Gerrit Code Review (at review.gluster.org)
20:24 kmai007 i sent a reply to prasanth to the gluster mailing list
20:24 JoeJulian 3.1.0
20:24 JoeJulian oh
20:24 JoeJulian that...
20:25 bene2 joined #gluster
20:26 deeville JoeJulian, ok i think I did
20:27 deeville JoeJulian, had to go back to my notes, this getfattr/setfattr stuff is confusing
20:27 deeville kmai007, thanks for the tip kmai007 I'll restart the daemon or reboot the server
20:27 kmai007 deeville: good luck
20:27 kmai007 np
20:28 deeville kmai007, and then I can try mounting via nfs again :) back to the original problem lol
20:28 kmai007 hahaha....1 step at a time
20:42 Matthaeus joined #gluster
20:43 deeville kmai007, JoeJulian  ok happy to report that after fixing the weird split-brain on / and restarting glusterd on both nodes to clear the split-brain logs, I was able to mount via gNFS in both Ubuntu and CentOS
20:43 kmai007 makes sense
20:44 kmai007 good job
20:44 JoeJulian congrats
20:44 deeville kmai007, makes sense that I can't nfs mount if there's a split-brain but I can glusterfs mount?
20:44 kmai007 that will be a JoeJulian question
20:45 kmai007 but my assumption is that FUSE has no restrictions
20:45 JoeJulian It's an NFS thing and I try to avoid knowing anything about nfs.
20:45 deeville lol
20:45 kmai007 i think it has to do with the FSID
20:46 deeville well the main goal that I was trying to accomplish here is how to run kernel NFS alongside GlusterFS, but first I wanted to try gNFS hahahahaha
20:46 JoeJulian There might be a clue in the nfs.log
20:46 Matthaeus joined #gluster
20:47 deeville JoeJulian, during the split-brain, and nfs mount tries, it was complaining about I/O error
20:49 deeville but whatever it works now…don't want to use gNFS anyway, unless I need to mount in Windows,
20:49 deeville or is there a glusterfs-client for windows
20:50 JoeJulian Windows can mount gluster nfs
20:50 kmai007 so deeville you're going to mount a gluster volume on a server via fuse, then re-export from that client as NFS ?
20:50 kmai007 seems like alot of layers
20:51 deeville no..the storage will be separate from the gluster
20:51 JoeJulian I think he should reshare the glusterfs-fuse-knfs layer via samba... glusterfs-fuse-knfs-samba
20:52 JoeJulian @ports
20:52 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
20:52 deeville kmai007, the gluster-volume will stay as it is…due to lack of additional hardware, I'd like to mount a SAN via iscsi, that server it via kernel NFS if I can
20:52 wushudoin left #gluster
20:53 deeville *then server it via kernel NFS
20:53 deeville JoeJulian, thanks, I'll keep that in mind if I need to accommodate Windows clients
20:53 kmai007 gotcha
20:54 deeville kmai007, my experiment was 1) get gNFS working, 2) create a folder, test, in /store0/ (where gluster-volume0 is located), and see if gNFS will let me NFS-mount 'test' :)
20:55 deeville I'm not hopeful
20:55 Pupeno_ joined #gluster
20:56 bgpepi joined #gluster
20:59 jruggiero joined #gluster
21:00 deeville as a separate issue, does anyone know if the mount option backupvol-server works in AutoFS (as opposed to /etc/fstab)?
21:07 JoeJulian as long as it passes it to mount.glusterfs
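A sketch of an autofs direct-map entry that passes the option through to mount.glusterfs; the option spelling used here (backupvolfile-server) and the host and volume names are assumptions to verify against your mount.glusterfs version:

    # /etc/auto.master
    /-    /etc/auto.gluster
    # /etc/auto.gluster
    /mnt/gluster-volume0  -fstype=glusterfs,backupvolfile-server=server2  server1:/gluster-volume0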
21:17 * JoeJulian still hates ubuntu
21:22 Matthaeus joined #gluster
21:34 jruggiero joined #gluster
21:38 jruggier1 joined #gluster
21:38 badone joined #gluster
21:40 Slashman joined #gluster
21:45 jruggiero left #gluster
21:48 Pupeno joined #gluster
22:02 Matthaeus joined #gluster
22:19 sputnik13 joined #gluster
22:22 bgpepi joined #gluster
22:28 lpabon joined #gluster
22:31 Matthaeus joined #gluster
22:33 edward1 joined #gluster
22:43 dtrainor joined #gluster
23:11 fidevo joined #gluster
23:29 dtrainor joined #gluster
23:30 dtrainor Hi.  I had a brick fail.  I'm going to rma the drive, so I'll have a fresh one, but the only examples I see about actually replacing a brick/drive involve the use of more than one system and the 'replace-brick' command.  What do I do when I physically have to replace a brick on the same Gluster system?  I'm only using one.
23:32 dtrainor I thiiiiink I can just put a new drive back in there with the same label, same mount point, xfs formatted filesystem, and a rebalance will take care of it?
23:32 dtrainor I'm using a distributed-replicate model doing 2x2.  For this volume, anyway.
23:45 ctria joined #gluster
23:47 FooBar dtrainor: you can replace a brick with itself... as long as you have replicas
23:47 FooBar (you need to specify a force flag though)
23:48 dtrainor oh?  hrm.  so replace-brick <volname> server1:<brick> server1:<brick> ?
23:48 dtrainor will putting a new blank brick in the old brick's spot conflict with this or is that how it's supposed to work by design?
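For reference, a sketch of the in-place replacement FooBar and dtrainor outline above, with hostname, volume and brick path as placeholders; it assumes the fresh disk is already formatted and mounted at the old path, and some releases insist the destination path differ from the source (for example a new subdirectory on the replacement disk), so treat the exact form as something to verify:

    gluster volume replace-brick myvol server1:/bricks/b1 server1:/bricks/b1 commit force
    gluster volume heal myvol full    # let self-heal repopulate the empty brick from its replica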
23:51 theron joined #gluster
