
IRC log for #gluster, 2013-11-13


All times shown according to UTC.

Time Nick Message
00:00 spechal joined #gluster
00:02 tyl0r joined #gluster
00:21 chirino joined #gluster
00:31 davidbierce joined #gluster
00:34 davidbierce joined #gluster
00:40 B21956 joined #gluster
00:57 asias joined #gluster
00:58 ninkotech joined #gluster
01:06 ninkotech joined #gluster
01:10 glusterbot New news from resolvedglusterbugs: [Bug 839950] libgfapi: API access to gluster volumes (a.p.k.a libglusterfsclient) <http://goo.gl/YEXSd>
01:21 ninkotech joined #gluster
01:27 hagarth joined #gluster
01:34 bennyturns joined #gluster
01:35 DV__ joined #gluster
01:42 failshell joined #gluster
01:51 ninkotech joined #gluster
02:20 bennyturns joined #gluster
02:22 ninkotech__ joined #gluster
02:44 chirino joined #gluster
02:46 vpshastry joined #gluster
02:57 bulde joined #gluster
03:01 jag3773 joined #gluster
03:03 kshlm joined #gluster
03:05 bharata-rao joined #gluster
03:16 shubhendu joined #gluster
03:18 vpshastry left #gluster
03:20 sprachgenerator joined #gluster
03:30 sgowda joined #gluster
03:33 kanagaraj joined #gluster
03:35 recidive joined #gluster
03:37 recidive joined #gluster
03:38 wgao joined #gluster
03:45 shylesh joined #gluster
03:50 rjoseph joined #gluster
03:54 RameshN joined #gluster
04:08 Glusted joined #gluster
04:08 Glusted hello
04:08 glusterbot Glusted: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
04:09 Glusted I'm having a problem with split-brain files
04:09 Glusted and heal-failed files.
04:09 Glusted Can anyone help please? (I already followed JoeJulian's guide: http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/, it's not sufficient)
04:09 glusterbot <http://goo.gl/T8dTud> (at joejulian.name)
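
For readers hitting the same thing, a minimal sketch of the usual diagnosis and manual fix, assuming GlusterFS 3.3+ and placeholder volume/brick names ("myvol", /export/brick1); it roughly follows the linked post:

    gluster volume heal myvol info split-brain     # entries AFR refuses to heal on its own
    gluster volume heal myvol info heal-failed     # entries the self-heal daemon gave up on
    # on the brick holding the copy you want to discard:
    getfattr -n trusted.gfid -e hex /export/brick1/path/to/file
    rm /export/brick1/path/to/file
    rm /export/brick1/.glusterfs/aa/bb/aabb...     # the hard link named after the gfid (aa/bb = its first two byte pairs)
    # then stat the file through a client mount (or run "gluster volume heal myvol") to trigger a fresh copy
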
04:14 lalatenduM joined #gluster
04:35 ababu joined #gluster
04:35 recidive joined #gluster
04:37 Rio_S2 joined #gluster
04:40 vpshastry joined #gluster
04:43 mohankumar joined #gluster
04:45 mohankumar joined #gluster
04:58 ppai joined #gluster
05:03 dusmant joined #gluster
05:08 raghu joined #gluster
05:20 CheRi joined #gluster
05:24 spandit joined #gluster
05:28 saurabh joined #gluster
05:28 harish_ joined #gluster
05:30 bala joined #gluster
05:32 aravindavk joined #gluster
05:37 mohankumar__ joined #gluster
05:45 bulde joined #gluster
05:45 hagarth joined #gluster
06:10 DV__ joined #gluster
06:11 eryc joined #gluster
06:11 eryc joined #gluster
06:13 gluslog joined #gluster
06:14 meghanam joined #gluster
06:14 meghanam_ joined #gluster
06:16 vshankar joined #gluster
06:17 nshaikh joined #gluster
06:21 ngoswami joined #gluster
06:24 vimal joined #gluster
06:29 itisravi joined #gluster
06:31 bala joined #gluster
06:43 rjoseph joined #gluster
06:54 anonymus joined #gluster
06:54 anonymus hi guys
06:55 anonymus tell me please, is there a kind of configuration file where I would see the list of gluster volumes?
07:04 asias joined #gluster
07:09 XpineX Don't know about a configuration file, but the command: gluster volume info should provide some information
07:10 anonymus cat /etc/glusterfs/glusterd.vol
07:10 anonymus XpineX: thank you. I've found already
07:11 anonymus cat /var/lib/glusterd/vols/gv0/gv0.server2.data-gv0-brick1.vol
07:11 anonymus here it is
07:12 anonymus maybe
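
For reference, the two places mentioned above, as a quick sketch assuming a recent (3.3+) install with default paths:

    gluster volume list                 # just the volume names
    gluster volume info                 # bricks, options and status per volume
    ls /var/lib/glusterd/vols/          # one directory per volume; the generated .vol files (as shown above) live here
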
07:24 rastar joined #gluster
07:25 jtux joined #gluster
07:28 rjoseph joined #gluster
07:37 ctria joined #gluster
07:38 mbukatov joined #gluster
07:43 ricky-ticky joined #gluster
07:57 mgebbe_ joined #gluster
07:58 mgebbe_ joined #gluster
08:05 eseyman joined #gluster
08:06 getup joined #gluster
08:09 onny1 joined #gluster
08:28 franc joined #gluster
08:28 franc joined #gluster
08:35 hybrid5121 joined #gluster
08:40 andreask joined #gluster
08:53 dusmant joined #gluster
08:54 aravindavk joined #gluster
08:55 harish_ joined #gluster
08:55 ricky-ticky joined #gluster
08:55 shubhendu joined #gluster
08:59 lalatenduM joined #gluster
09:03 getup hi, is there a way to find out if there is a split brain situation on version 3.2 other than parsing the log files?
09:08 getup- joined #gluster
09:08 getup left #gluster
09:10 glusterbot New news from newglusterbugs: [Bug 808073] numerous entries of "OPEN (null) (--) ==> -1 (No such file or directory)" in brick logs when an add-brick operation is performed <http://goo.gl/zQN2F>
09:20 andreask joined #gluster
09:32 mgebbe joined #gluster
09:33 mgebbe_ joined #gluster
09:38 klaxa|work1 joined #gluster
09:41 dneary joined #gluster
09:57 dasfda joined #gluster
10:00 VerboEse joined #gluster
10:06 ngoswami joined #gluster
10:08 aravindavk joined #gluster
10:08 ndarshan joined #gluster
10:09 shubhendu joined #gluster
10:09 dusmant joined #gluster
10:22 RameshN joined #gluster
10:26 klaxa|work1 i think i've asked this before, but does the self-heal daemon in glusterfs 3.3.2 log when it's done synchronizing a file? specifically for files that are being accessed while being healed
10:27 onny1 hm I don't know, I really would like to help you out with this one :(
10:27 psharma joined #gluster
10:28 klaxa|work1 in glusterfs 3.2.7 it showed up in the logs, it doesn't show in the glustershd.log though
10:29 shylesh joined #gluster
10:40 mbukatov joined #gluster
11:07 ricky-ticky joined #gluster
11:12 onny1 klaxa|work1: do you have an answer to that question already?
11:16 eseyman joined #gluster
11:20 lalatenduM joined #gluster
11:28 klaxa|work1 left #gluster
11:48 Shri joined #gluster
11:49 ababu joined #gluster
11:54 lalatenduM joined #gluster
11:59 rcheleguini joined #gluster
12:18 ppai joined #gluster
12:19 ngoswami joined #gluster
12:23 getup- joined #gluster
12:43 recidive joined #gluster
12:47 vpshastry left #gluster
12:51 CheRi joined #gluster
13:00 mohankumar__ joined #gluster
13:02 lalatenduM joined #gluster
13:03 marbu joined #gluster
13:04 pravka joined #gluster
13:13 ppai joined #gluster
13:15 ndarshan joined #gluster
13:16 piap joined #gluster
13:19 piap joined #gluster
13:20 piap need help with a 3-replica gluster setup used by libvirt with "file=gluster://"
13:21 piap the 3 servers are: store-1, store-2, store-3
13:21 piap used with qemu + libvirt (file="gluster://")
13:22 piap it works when used with store-1
13:22 piap but doesn't work with the other stores (2 and 3)
13:22 piap here the error message : qemu-system-x86_64: -drive file=gluster://store/KVM/SERVERS/lfst-avirus.dac-ne.aviation.img,if=none,id=drive-virtio-disk0,format=raw: could not open disk image gluster://store/KVM/SERVERS/lfst-avirus.dac-ne.aviation.img: Transport endpoint is not connected
13:22 piap any idea ?
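
A hedged sketch of the URI format the qemu gluster driver expects, plus connectivity checks worth running; the host, volume and image names are taken from the messages above, the rest is illustrative:

    # gluster[+tcp]://<any-reachable-server>[:24007]/<volume>/<path inside the volume>
    qemu-system-x86_64 -drive file=gluster://store-2:24007/KVM/SERVERS/lfst-avirus.dac-ne.aviation.img,if=none,id=drive-virtio-disk0,format=raw ...
    # on the failing hosts, check that the volume is known/started and glusterd is reachable:
    gluster volume status KVM
    nc -vz store-2 24007        # management port; brick ports (49152+ on 3.4) must be reachable too
    # if qemu runs unprivileged, the volume may also need: gluster volume set KVM server.allow-insecure on
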
13:29 recidive joined #gluster
13:31 kanagaraj joined #gluster
13:39 ira joined #gluster
13:40 badone joined #gluster
13:41 B21956 joined #gluster
13:42 davidbierce joined #gluster
13:54 hagarth joined #gluster
13:56 mbukatov joined #gluster
13:59 plarsen joined #gluster
14:04 jskinner_ joined #gluster
14:05 bennyturns joined #gluster
14:11 bala joined #gluster
14:12 rastar joined #gluster
14:15 DV joined #gluster
14:18 vpshastry joined #gluster
14:18 vpshastry left #gluster
14:21 ndarshan joined #gluster
14:22 khushildep joined #gluster
14:27 andreask joined #gluster
14:39 ricky-ticky joined #gluster
14:42 dbruhn joined #gluster
14:45 spechal Are there any docs about replication internals and how it works?  I've found it is MUCH quicker to copy a virtual disk and attach it to a new virtual machine than it is to make the VM and heal to it.  My question is, how does Gluster know what to replicate?  If a new file is written after I copied the disk, does it eventually get replicated?  Is there a journal?
14:45 Shri joined #gluster
14:49 Remco spechal: How much data are you talking about?
14:49 spechal 750GB
14:49 Remco Since replication is very much limited by network bandwidth
14:49 spechal yeah, it took over a day to heal, which is why I want to go the disk copy route
14:51 Remco Someone else can probably explain how replication works better, but from what I know the client takes care of replication
14:51 Remco The client can also be the NFS server
15:20 jbautista|brb joined #gluster
15:27 X3NQ joined #gluster
15:29 bugs_ joined #gluster
15:40 lalatenduM joined #gluster
15:46 zaitcev joined #gluster
15:47 recidive joined #gluster
15:47 daMaestro joined #gluster
15:51 spechal I had a mount that disappeared, and now when I go to remount it I get a message as if I am using the command wrong: mount.glusterfs 10.2.181.230:/gluster /gluster
15:51 spechal Usage:  mount.glusterfs <volumeserver>:<volumeid/volumeport> -o <options> <mountpoint>
15:51 spechal Anyone know what typically causes this?
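
In case it helps others: that usage message usually just means the mount helper could not parse its arguments (a missing mount point directory is a common cause). A minimal sketch, with the log options purely as a debugging aid:

    mkdir -p /gluster
    mount -t glusterfs 10.2.181.230:/gluster /gluster
    # if it still fails, ask the mount helper for a verbose log:
    mount -t glusterfs -o log-level=DEBUG,log-file=/var/log/glusterfs/gluster-mount.log 10.2.181.230:/gluster /gluster
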
15:55 Rio_S2 joined #gluster
16:00 neofob joined #gluster
16:03 tqrst joined #gluster
16:08 tqrst I noticed a bunch of "/some/path - disk layout missing" "mismatching layouts for /some/path" in my client logs. Figured it was time to run fix-layout, so I did "gluster volume rebalance $volname fix-layout start", which sat there for ~2 minutes and exited with 146. One server's gluster{d,fs,fsd} processes all segfaulted at some point in between. Then I started seeing bunch of "readv failed (No data available)" for many different clients.
16:08 tqrst and now gluster volume status on the server I started this on just hangs, whereas it errors out with "Another transaction is in progress" on other servers.
16:09 tqrst semiosis: long time no s{ee,egfault} :O
16:10 tqrst this is 3.4.0 on scientificlinux 6.4, kernel 2.6.32-358.14.1.el6.x86_64
16:10 JoeJulian I'd restart all glusterd.
16:10 JoeJulian Ah, that's why.
16:10 JoeJulian 3.4.0
16:11 tqrst known bug?
16:11 JoeJulian Yep
16:11 JoeJulian Fixed in 3.4.1
16:12 * tqrst wonders why the baseurl of his /etc/yum.repos.d/glusterfs-epel.repo has a hardcoded version number in it
16:13 tqrst interesting (http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.1/RHEL/glusterfs-epel.repo6)
16:13 glusterbot <http://goo.gl/XlhZVr> (at download.gluster.org)
16:13 JoeJulian kkeithley: ^
16:13 JoeJulian I replaced 3.4.1 with LATEST
16:14 kkeithley okay
16:14 glusterbot New news from resolvedglusterbugs: [Bug 927422] GlusterFileSystem.getDefaultBlockSize() is hardcoded <http://goo.gl/pKFu9d>
16:15 JoeJulian That is to say, I did that on my own .repo file.
16:15 tqrst ah neat, there is also a 6.x now so I can remove my hardcoded releasever
16:15 jag3773 joined #gluster
16:17 LoudNoises joined #gluster
16:18 tqrst just in case: are the 3.4.1 and 3.4.0 clients/servers compatible?
16:19 kkeithley they should be
16:19 tqrst guessing the answer is yes given that it's just a bugfix release
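
A sketch of the upgrade itself, assuming the yum repo discussed above; the exact baseurl layout on download.gluster.org may differ, so treat the sed as illustrative:

    sed -i 's|3.4/3.4\.[0-9]*|3.4/LATEST|' /etc/yum.repos.d/glusterfs-epel.repo   # or hand-edit the baseurl
    yum clean metadata
    yum update glusterfs glusterfs-server glusterfs-fuse
    service glusterd restart        # one server at a time; 3.4.0 and 3.4.1 interoperate
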
16:23 dneary left #gluster
16:26 failshell joined #gluster
16:28 spechal Are there any docs about replication internals and how it works?  I've found it is MUCH quicker to copy a virtual disk and attach it to a new virtual machine than it is to make the VM and heal to it.  My question is, how does Gluster know what to replicate?  If a new file is written after I copied the disk, does it eventually get replicated?  Is there a journal?
16:38 diegol_ joined #gluster
16:39 JoeJulian spechal: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/ - about half way down the page wrt afr.
16:39 glusterbot <http://goo.gl/Bf9Er> (at hekafs.org)
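
A small sketch of what that page describes in practice: each replica keeps AFR changelog xattrs you can read straight off the bricks (paths are illustrative):

    # run on a brick, not through the client mount
    getfattr -d -m . -e hex /export/brick1/path/to/file
    # trusted.afr.<volname>-client-N counters record operations still pending for the other replica;
    # non-zero values tell the self-heal daemon (or the next lookup) which copy needs updating
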
16:40 diegol_ hi guys, does anybody know if I can recover a distributed volume? I mean somebody erased the config directory and I would like to recreate this volume without losing my data
16:41 JoeJulian yes, just recreate the volume the same as it previously was.
16:41 JoeJulian however....
16:43 diegol_ googling a little I found a thread regarding this issue on the gluster-users mailing list, and it says that data loss is possible due to gluster healing
16:43 JoeJulian Since your bricks exist and are mounted, you'll get an error: path.or.prefix is part of a blah blah blah (dots added to confound glusterbot). You'll want to unmount or mv the brick directories before recreating the volume. After it's created, mount or mv them back before "volume start"
16:45 JoeJulian If they claim that data loss is possible in a dht volume due to self-healing, then they don't understand how it all works. There is no self-healing in a dht volume.
16:45 diegol_ Ok JoeJulian I will try this thanks a lot :-)
16:46 diegol_ I was scared that all this data would become inaccessible
16:46 RameshN joined #gluster
16:47 uebera|| joined #gluster
16:48 hchiramm__ joined #gluster
16:52 JoeJulian diegol_: That's one of the things I love about GlusterFS. Worst case, all your files are intact and can easily be backed up before doing anything.
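
For completeness, a commonly used alternative to the unmount/mv dance Joe describes is to clear the leftover volume xattrs on each brick root so the create is accepted. This is not what Joe recommended above, just a sketch with illustrative names:

    # on every brick root directory:
    setfattr -x trusted.glusterfs.volume-id /export/brick1
    setfattr -x trusted.gfid /export/brick1
    # then recreate the volume with exactly the same name, brick order and layout, and start it:
    gluster volume create myvol server1:/export/brick1 server2:/export/brick1
    gluster volume start myvol
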
16:55 badone joined #gluster
17:04 vpshastry joined #gluster
17:04 kaptk2 joined #gluster
17:09 spechal Thank you for the doc JoeJulian ... it was an interesting read, along with the doc it links to about replication
17:09 sprachgenerator joined #gluster
17:09 tqrst JoeJulian: done updating; let's try this again
17:10 tqrst no segfaults so far, although all my peers are showing up as localhost in rebalance (deja vu)
17:11 vpshastry left #gluster
17:17 neofob diegol_: there is another way to recover data of a distributed volume: you can rsync -av each brick to final storage place
17:17 neofob if you have disk space for that, of course
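
A sketch of that per-brick copy, with placeholder paths; in a pure distribute volume each file lives on exactly one brick, so the union of the bricks (minus gluster's internal .glusterfs directory) is the full data set:

    rsync -av --exclude='.glusterfs' /export/brick1/ backuphost:/srv/recovered/
    rsync -av --exclude='.glusterfs' /export/brick2/ backuphost:/srv/recovered/
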
17:20 uebera|| joined #gluster
17:26 jmalm joined #gluster
17:26 jmalm left #gluster
17:28 dneary joined #gluster
17:35 Mo__ joined #gluster
17:39 shylesh joined #gluster
17:42 glusterbot New news from newglusterbugs: [Bug 1028672] BD xlator <http://goo.gl/y6DSIl>
17:52 ira joined #gluster
18:01 aliguori joined #gluster
18:05 recidive joined #gluster
18:11 andreask joined #gluster
18:11 andreask joined #gluster
18:17 RameshN joined #gluster
18:35 immaybegay joined #gluster
18:35 immaybegay I have a question about performance compared to gfs2
18:49 immaybegay Are there any problems with multipathing to a brick? Does it just add a shitload more lock handling for dlm?
19:12 andreask joined #gluster
19:14 neofob left #gluster
19:36 tyl0r joined #gluster
19:46 ndk joined #gluster
19:46 uebera|| joined #gluster
19:48 ngoswami joined #gluster
19:48 bugs_ joined #gluster
19:49 recidive joined #gluster
20:01 davidbierce We've been having issues with the gluster client filling up the servers' log files with entries like these, being written at a rate of 10s of lines a minute: https://gist.github.com/anonymous/e5f96d55cb87fb237636
20:01 glusterbot <http://goo.gl/7iejo9> (at gist.github.com)
20:03 davidbierce The issue happened after a new node went offline unexpectedly, but there doesn't seem to be a lock that we can find.  Is there a way to clear the FD, or at least see what it is trying to write to, so we can clear it and cancel the operation?  Or are we just resigned to restarting the glusterfs client?
20:03 uebera|| joined #gluster
20:03 uebera|| joined #gluster
20:19 recidive joined #gluster
20:25 P0w3r3d joined #gluster
20:29 semiosis :O
20:33 uebera|| joined #gluster
20:33 JoeJulian :O
20:39 lpabon joined #gluster
20:44 davidbierce :O
20:55 davidbierce I can't seem to get which FD is stuck on the client or on the servers
20:56 semiosis davidbierce: what glusterfs version?  what distro?
20:57 JoeJulian davidbierce: I've not yet done it, but there is a "gluster volume statedump $vol fd" that might tell you
20:57 semiosis have you checked all the brick logs for corresponding info?
20:57 JoeJulian It dumps it under /var/run/gluster
20:58 davidbierce CentOS release 6.2 (Final) glusterfs 3.4.0  Linux kh5.us2.local 2.6.32-358.14.1.el6.x86_64
20:59 JoeJulian @latest
20:59 glusterbot JoeJulian: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
20:59 davidbierce There is a lot of information and the system is quite busy with VM images, so I'm not sure what lock would be good
21:00 davidbierce Was hoping someone would know of the issue off the top of their head to avoid restarting it anytime soon :)
21:01 JoeJulian So the statedump is not an option?
21:03 davidbierce I did the dump, but I'm not quite sure what I'm looking for in the dump.  And unfortunately it is 2 Servers but 28 bricks so it is lots of files ;)
21:04 JoeJulian Does that no create a client dump?
21:04 JoeJulian s/no/not/
21:04 glusterbot What JoeJulian meant to say was: So the statedump is nott an option?
21:05 JoeJulian glusterbot: .!..
21:05 glusterbot JoeJulian: sorry... :(
21:07 davidbierce The only thing installed on the client is glusterfs, no cli.  Does the CLI statedump work on something that isn't a server?  The documentation seems to imply statedump is a server report, broken out by brick.
21:08 JoeJulian It may be. I haven't actually had a need to try using it since I upgraded to 3.4. The old-school kill -USR1 on the client should still dump state on the client then.
21:09 davidbierce Oh, I didn't think to go old school.  I have a non-production 3.4 install I can test it with, kind of scary to do that on busy production
21:09 JoeJulian good plan
21:10 davidbierce but a couple of gigs of log file writing and a bunch more CPU on that client is kind of annoying :)
21:11 JoeJulian I hear that.
21:11 * JoeJulian has voices in his head that read all the text aloud...
21:24 davidbierce Well, if you kill -USR1 in 3.4 it will dump the client output to /var/run/gluster/ (even if you have the state dump set in the volume options) and it will tell you what inodes it is having issues with :)
21:24 JoeJulian excellent
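
Summarizing the two dump paths mentioned above as a sketch ("myvol" and the PID are placeholders):

    # server side, from any node in the pool:
    gluster volume statedump myvol fd      # writes one dump per brick under /var/run/gluster/
    # client side, the old-school signal:
    pgrep -fl glusterfs                    # find the client process for the affected mount
    kill -USR1 <pid>                       # its dump also lands in /var/run/gluster/
    ls /var/run/gluster/*.dump.*
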
21:24 xavih joined #gluster
21:27 davidbierce Stated in public for all to see and learn from :)  Also for the public to know, if you are hosting VMs on gluster, you need to set network.ping-timeout on the volume to something very, very low, as 42 seconds will cause writes inside a VM to error.
21:27 davidbierce Which is what caused this issue in the first place
21:28 JoeJulian no
21:28 JoeJulian http://joejulian.name/blog/keeping-your-vms-from-going-read-only-when-encountering-a-ping-timeout-in-glusterfs/
21:28 glusterbot <http://goo.gl/N20EJC> (at joejulian.name)
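
Without restating the linked post, the two knobs involved look roughly like this; the sysfs path only exists for SCSI-emulated guest disks, and all values are illustrative:

    # gluster side: the timeout davidbierce is talking about (42s by default)
    gluster volume set myvol network.ping-timeout 42
    # guest side: give the virtual disk an I/O timeout longer than ping-timeout so the
    # filesystem does not go read-only while a brick is briefly unreachable
    cat /sys/block/sda/device/timeout      # typically 30
    echo 300 > /sys/block/sda/device/timeout
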
21:29 davidbierce Windows also gets grumpy when writes are suspended for a long time
21:29 badone joined #gluster
21:29 davidbierce Also JFS and XFS were not happy either
21:30 JoeJulian Windows... ,,(meh)
21:30 glusterbot I'm not happy about it either
21:31 pravka joined #gluster
21:31 JoeJulian I've had xfs withstand the ping-timeout... you didn't?
21:33 davidbierce The VMs that were busy did not recover, the mostly idle ones came back like a champ
21:35 davidbierce Is resetting FDs after a node goes offline that expensive when there are only 100s or 1000s of VM files in the volume instead of millions of files?
21:40 JoeJulian Honestly, I don't know. It was a really big deal for my installs back <= 3.0, but I don't know if I've just been avoiding it, or if it's better now.
21:45 davidbierce Hopefully both are true :)
21:46 recidive joined #gluster
22:27 eclectic joined #gluster
22:32 andreask joined #gluster
22:42 Technicool joined #gluster
22:42 davidbierce After I sent USR1 to the glusterfs daemon on the affected client... it stopped logging the error.
22:43 glusterbot New news from newglusterbugs: [Bug 1030098] Development files not packaged for debian/ubuntu <http://goo.gl/uCV2Eu>
22:44 davidbierce But it did show which inode it was having issues with.
22:44 davidbierce in the dump
22:50 jbrooks left #gluster
22:52 jbrooks joined #gluster
22:53 nueces joined #gluster
23:05 nage joined #gluster
23:05 nage joined #gluster
23:08 sprachgenerator hello all - I'm trying to recover a crashed node, and when peer probing one of the known good nodes a majority of them (96 out of 99) end up in "State: Peer Rejected (Connected)", whereas 3 are in "State: Peer in Cluster (Connected)"; following this document: http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected provides limited success - any ideas? 3.4.1 here on Ubuntu
23:08 glusterbot <http://goo.gl/g0b4Oi> (at www.gluster.org)
23:10 JoeJulian Weird. All of them should have been rejected. I wonder how any of them ended up connected.
23:11 JoeJulian You can't probe a trusted peer group from a non-trusted server.
23:11 JoeJulian You would have had to probe your new server from an existing peer to add it to the group.
23:12 sprachgenerator 3 of them are in state connected from this
23:17 bgpepi joined #gluster
23:19 rcoup joined #gluster
23:19 rcoup left #gluster
23:19 sprachgenerator stopping glusterd on the replacement node, deleting all in /var/lib/glusterd (except uuid info) and then restarting glusterd and probing from a node in the cluster results in peer rejected for only the new node
23:22 JoeJulian sprachgenerator: ,,(replace)
23:22 glusterbot sprachgenerator: Useful links for replacing a failed server... if replacement server has different hostname: http://goo.gl/nIS6z ... or if replacement
23:22 glusterbot server has same hostname: http://goo.gl/rem8L
23:22 JoeJulian Are you following one of those two documented processes?
23:24 sprachgenerator I am following this one: http://gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
23:24 glusterbot <http://goo.gl/hFwCcB> (at gluster.org)
23:25 semiosis tried this? http://gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
23:25 glusterbot <http://goo.gl/SW0uUJ> (at gluster.org)
23:25 sprachgenerator yes
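
The sequence sprachgenerator describes above (keep only the uuid info, then re-probe) looks roughly like this when run on the rejected node; the service names are the Debian/Ubuntu ones:

    service glusterfs-server stop
    cd /var/lib/glusterd && ls | grep -v glusterd.info | xargs rm -rf
    service glusterfs-server start
    gluster peer probe <good-node>         # one known-good peer is enough
    service glusterfs-server restart
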
23:27 JoeJulian It's been so long since I looked at that, I forgot that you do, indeed, probe the existing peer group from the new server since its uuid is know to the rest of the trusted pool.
23:27 JoeJulian *known
23:27 JoeJulian And you only should have had to probe one server, not 99 of them. :D
23:30 JoeJulian One trick we've found that often works, and I personally haven't taken the time to find out why, is to stop ALL glusterd then start them again.
23:30 JoeJulian This won't affect your clients.
23:31 sprachgenerator yes I have done this before
23:31 sprachgenerator and it has worked
23:31 sprachgenerator the goal was to try and keep the cluster up and running during the replacement of a failed brick
23:31 JoeJulian If any glusterd fails to remain started it's usually due to some mismatch in vols/. I just rsync from another good server when that happens.
23:32 JoeJulian Like I said. The glusterd management daemon will not impede your goal. glusterfsd is the brick ,,(process) and will remain running.
23:32 glusterbot The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/F6jqx for more information.
23:34 jbrooks joined #gluster
23:38 chirino joined #gluster
23:39 sprachgenerator so funny thing
23:39 sprachgenerator there is no glusterd init script under ubuntu for 3.4.1
23:39 semiosis it's an upstart job
23:39 semiosis /etc/init/glusterfs-server.conf
23:40 semiosis hilarious
23:41 semiosis and there should be a symlink from /etc/init.d/glusterfs-server -> /lib/init/upstart-job
23:41 chirino joined #gluster
23:43 sprachgenerator initctl list | grep gluster
23:43 sprachgenerator glusterfs-server start/running, process 27641
23:43 sprachgenerator mounting-glusterfs stop/waiting
23:45 semiosis what about it?
23:46 sprachgenerator well maybe I just don't have the experience you have with upstart, but I don't know of any way to discretely restart just the glusterd component, if you could enlighten me that would be great
23:46 semiosis s/glusterd/glusterfs-server
23:46 semiosis just service glusterfs-server restart
23:46 semiosis that will restart glusterd
23:46 sprachgenerator right - but wouldn't that kill the processes that also serve files?
23:47 semiosis no
23:47 bgpepi joined #gluster
23:49 sprachgenerator ok - fair enough, I see the PIDs stay the same then for those others - this was something I was not aware of, thx
23:49 semiosis yw
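
To see for yourself that only the management daemon bounces, a quick sketch (Ubuntu/upstart, as above):

    pgrep -l glusterfsd                    # note the brick PIDs
    service glusterfs-server restart       # restarts glusterd only
    pgrep -l glusterfsd                    # same PIDs: bricks and client mounts were untouched
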
23:50 sprachgenerator unfortunately this still didn't solve the peer rejected issue
23:53 sprachgenerator for the glusterfs processes - what is the best way to stop those outside of a kill - since the upstart job doesn't whack those
23:53 recidive joined #gluster
23:57 semiosis glusterfs -- client processes?  umount usually
23:57 semiosis other stuff like the glusterfs nfs process you'd need to kill afaik, then it should be respawned by restarting glusterd
23:58 semiosis be more specific please
