
IRC log for #gluster, 2013-08-19


All times shown according to UTC.

Time Nick Message
00:21 yinyin joined #gluster
00:27 DV joined #gluster
00:53 jmalm joined #gluster
01:17 harish joined #gluster
01:40 twx joined #gluster
01:45 sprachgenerator joined #gluster
01:45 matthewh joined #gluster
01:46 matthewh Hi I have some questions regarding the gluster client that I can't seem to find any info on.
01:48 matthewh 1) When you create a mount with something like "mount -t glusterfs server1:/gv0 /mnt", and since it uses TCP, does that mean the client will receive the file only from server1? i.e. does server1 handle all the I/O requests for this particular client?
01:57 Durzo the client asks server1 for a list of all servers/bricks
01:57 Durzo server1 responds with all the servers and bricks for your requested volume (gv0)
01:57 Durzo client then begins communication with them
01:58 Durzo (this is, at least, how i think it all works.. )
02:01 satheesh joined #gluster
02:12 asias joined #gluster
02:30 JoeJulian ~mount server | matthewh
02:30 glusterbot matthewh: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds
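A minimal sketch of the same idea in command form, assuming two servers (server1, server2) carrying a volume gv0; the server named on the mount line only supplies the volume definition, and a backup volfile server can be listed so the mount still works when that server is down (the exact option name varies slightly between releases):

    # fetch the volume definition from server1, falling back to server2 if it is unreachable
    mount -t glusterfs -o backupvolfile-server=server2 server1:/gv0 /mnt
    # once mounted, the client talks to every brick server in gv0 directly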
02:31 matthewh Great.  Second question: which is generally faster for Windows clients, a) gluster => samba => Windows or b) gluster (nfs) => Windows?
02:32 matthewh I may have to benchmark that myself
02:32 kevein joined #gluster
02:32 matthewh but maybe someone has already done so
02:32 matthewh using the windows nfs client
02:35 JoeJulian I've been hearing some pretty poor results from the windows nfs stack from one user. Not sure if that's just them, though.
02:36 matthewh But when using NFS (or samba for that matter), the client would only be talking to the one server. Correct in both cases?
02:36 JoeJulian correct
02:37 matthewh so it's not as scalable as using the native client.
02:37 JoeJulian Although....http://www.gluster.org/community/documentation/index.php/CTDB
02:37 glusterbot <http://goo.gl/Yt3pOb> (at www.gluster.org)
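For reference, the CTDB setup that page describes comes down to a few small config files plus a lock file kept on a shared GlusterFS mount; a hedged sketch, with every address, interface and path below assumed rather than taken from the page:

    # /etc/ctdb/nodes -- one private IP per Samba/NFS head node
    10.0.0.1
    10.0.0.2
    # /etc/ctdb/public_addresses -- floating IPs that Windows clients actually connect to
    10.0.1.10/24 eth0
    10.0.1.11/24 eth0
    # /etc/sysconfig/ctdb (or /etc/default/ctdb) -- recovery lock kept on a shared Gluster mount
    CTDB_RECOVERY_LOCK=/mnt/ctdb-lock/lockfile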
02:38 bala joined #gluster
02:38 matthewh thanks, that looks interesting
02:49 vshankar joined #gluster
02:58 awheeler joined #gluster
03:03 bharata-rao joined #gluster
03:05 shubhendu joined #gluster
03:09 lalatenduM joined #gluster
03:19 hagarth joined #gluster
03:19 xPsycho I am using Gluster v3.4.0 with 4 peers.  After moving each of them to new hardware, a few of them are showing "Sent and Received peer request" instead of "Peer in Cluster".  I updated /var/lib/glusterd/glusterd.info with the correct UUID on each of them before starting gluster services.  I don't see anything obvious in the logs.
03:36 bala joined #gluster
03:46 satheesh joined #gluster
03:48 ppai joined #gluster
03:53 itisravi joined #gluster
03:53 hagarth joined #gluster
03:59 robo joined #gluster
04:02 CheRi joined #gluster
04:04 RameshN joined #gluster
04:05 dusmant joined #gluster
04:09 sgowda joined #gluster
04:16 robo joined #gluster
04:28 karthik joined #gluster
04:34 mohankumar joined #gluster
04:42 ababu joined #gluster
04:46 rastar joined #gluster
05:02 rjoseph joined #gluster
05:03 jmalm joined #gluster
05:04 psharma joined #gluster
05:07 lalatenduM joined #gluster
05:12 lalatenduM joined #gluster
05:20 raghu joined #gluster
05:20 hagarth joined #gluster
05:30 kanagaraj joined #gluster
05:34 ababu joined #gluster
05:35 RameshN joined #gluster
05:45 shruti joined #gluster
05:57 36DABANPO joined #gluster
05:59 zwu joined #gluster
06:02 shylesh joined #gluster
06:07 aravindavk joined #gluster
06:13 nshaikh joined #gluster
06:18 jtux joined #gluster
06:21 spresser joined #gluster
06:21 ababu joined #gluster
06:22 jayunit100 joined #gluster
06:28 johnmwilliams joined #gluster
06:30 satheesh1 joined #gluster
06:33 itisravi_ joined #gluster
06:35 rjoseph joined #gluster
06:44 ngoswami joined #gluster
06:45 vimal joined #gluster
06:59 jayunit100 joined #gluster
07:04 hybrid512 joined #gluster
07:05 bulde joined #gluster
07:08 tjikkun_work joined #gluster
07:14 RameshN joined #gluster
07:19 shylesh joined #gluster
07:21 aravindavk joined #gluster
07:24 andreask joined #gluster
07:26 X3NQ joined #gluster
07:28 psharma joined #gluster
07:32 mmalesa joined #gluster
07:40 piotrektt joined #gluster
07:40 piotrektt joined #gluster
07:40 1JTAAPDMS joined #gluster
07:42 mdjunaid joined #gluster
07:49 mooperd joined #gluster
08:18 Norky joined #gluster
08:19 mmalesa joined #gluster
08:44 nshaikh joined #gluster
08:57 bharata-rao joined #gluster
09:07 RameshN joined #gluster
09:18 rastar joined #gluster
09:18 shylesh joined #gluster
09:22 6JTAAF831 joined #gluster
09:22 sgowda joined #gluster
09:26 64MAAGTM8 joined #gluster
09:30 ababu_ joined #gluster
09:32 mohankumar joined #gluster
09:37 mmalesa_ joined #gluster
09:37 ruhe_ joined #gluster
09:39 toad joined #gluster
09:40 ricky-ticky joined #gluster
09:40 Deeps joined #gluster
09:46 pedbor joined #gluster
09:47 pedbor ?
09:48 CheRi joined #gluster
09:48 sgowda joined #gluster
09:49 shruti joined #gluster
09:51 bharata-rao joined #gluster
09:59 duerF joined #gluster
10:02 pedbor quit
10:02 pedbor left #gluster
10:06 spider_fingers joined #gluster
10:10 shylesh joined #gluster
10:15 shubhendu joined #gluster
10:15 thommy_ka joined #gluster
10:16 thommy_ka joined #gluster
10:16 vijaykumar joined #gluster
10:18 TomKa joined #gluster
10:22 shylesh joined #gluster
10:29 shylesh joined #gluster
10:39 sgowda joined #gluster
10:43 ruhe_ joined #gluster
10:52 rwheeler joined #gluster
10:55 CheRi joined #gluster
11:14 ppai joined #gluster
11:19 CheRi joined #gluster
11:20 bala joined #gluster
11:20 harish joined #gluster
11:32 hagarth joined #gluster
11:41 ababu joined #gluster
11:42 rastar joined #gluster
11:49 shylesh joined #gluster
11:58 nexus joined #gluster
12:01 bulde joined #gluster
12:13 neofob joined #gluster
12:15 toad joined #gluster
12:17 toad joined #gluster
12:20 harish joined #gluster
12:21 dusmant joined #gluster
12:23 sprachgenerator joined #gluster
12:25 guigui1 joined #gluster
12:26 shylesh joined #gluster
12:33 mohankumar joined #gluster
12:46 ujjain joined #gluster
12:47 B21956 joined #gluster
12:52 RameshN yes
12:59 ppai joined #gluster
13:02 harish joined #gluster
13:16 samppah_ joined #gluster
13:17 hflai_ joined #gluster
13:17 bfoster_ joined #gluster
13:17 zoldar_ joined #gluster
13:18 NeatBasis_ joined #gluster
13:20 sprachgenerator joined #gluster
13:20 portante_ joined #gluster
13:25 rcheleguini joined #gluster
13:28 aliguori joined #gluster
13:28 kkeithley joined #gluster
13:29 bivak joined #gluster
13:35 gGer joined #gluster
13:36 gGer hey. why do I get this: peer probe: failed: Peer storage8 is already at a higher op-version
13:36 raghu joined #gluster
13:36 gGer i upgraded gluster cluster from 3.2.7 to 3.4, and storage8 is a fresh installation of 3.4
13:37 gGer how do I get the existing cluster to the same op-version as the storage8 node
13:37 gGer so that i can do gluster peer probe storage8 from existing nodes
13:39 bennyturns joined #gluster
13:40 gGer no help?
13:44 Norky you stopped gluster on the existing nodes, upgraded it then restarted gluster?
13:46 failshell joined #gluster
13:50 gGer norky, yes
13:50 gGer i followed this http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/
13:50 glusterbot <http://goo.gl/qOiO7> (at vbellur.wordpress.com)
13:50 ruhe joined #gluster
13:54 bugs_ joined #gluster
13:55 robo joined #gluster
13:56 jmalm joined #gluster
14:00 plarsen joined #gluster
14:01 Norky can you try it the other way around, i.e on storage8 run "gluster peer probe existingserver"
14:02 premera joined #gluster
14:02 gGer norky, I think that will work, but then the old cluster won't be using the max op-version, right?
14:03 plarsen joined #gluster
14:03 gGer /var/lib/glusterd/glusterd.info on old nodes:
14:03 gGer operating-version=1
14:07 chirino joined #gluster
14:11 ruhe_ joined #gluster
14:15 kaptk2 joined #gluster
14:17 sgowda joined #gluster
14:18 Norky does that match the version on storage8?
14:18 Norky I have operating-version=1 on all machines here
14:24 dusmant joined #gluster
14:26 rwheeler joined #gluster
14:26 spider_fingers left #gluster
14:28 jebba joined #gluster
14:32 gGer norky, there is no glusterd.info on storage8 as it's a fresh install/new node
14:33 gGer also in the log:
14:33 gGer [2013-08-19 13:54:24.560339] E [glusterd-handshake.c:900:__glusterd_mgmt_hndsk_version_cbk] 0-management: failed to validate the operating version of peer (storage8)
14:36 Norky ahh, I've just done a fresh install of glusterfs 3.4. The operating version ended up as 2
14:36 sprachgenerator joined #gluster
14:37 Norky I'm not sure what should be happening in your case
14:37 redragon_ joined #gluster
14:38 redragon_ okay I have a 4 replicate setup and we broke a brick on purpose (testing), removed the brick and created a new brick on that node; the rebuild/transfer seems really slow
14:38 redragon_ is there any way to check the replication status?
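For a replicated volume the usual place to look is the self-heal queue; a minimal sketch, assuming GlusterFS 3.3/3.4 and a volume name of gv0 (redragon_'s actual volume name isn't given):

    gluster volume heal gv0 info              # files still pending heal, per brick
    gluster volume heal gv0 info healed       # files healed recently
    gluster volume heal gv0 info heal-failed  # files the self-heal daemon could not heal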
14:41 gGer norky, did you upgrade the old cluster or did you just install a new node?
14:41 Norky on a cluster which I upgraded the version remains 1
14:41 Norky on a freshly-installed system, it is 2
14:42 aliguori joined #gluster
14:42 gGer norky, but how do you get the old cluster to 2?
14:42 gGer norky, manually editing the file?
14:42 Norky I can't test adding the fresh machine to the running system - it's on a separate network
14:44 Norky gGer, I'm not sure - this will probably have to wait for more knowledgeable people
14:47 jag3773 joined #gluster
14:48 redragon_ think i found the issues
14:51 Norky redragon_, jolly good, and they are?
14:51 redragon_ i think those machines have epel packages and not packages straight from gluster, checking that now
14:52 redragon_ @repo
14:52 glusterbot redragon_: I do not know about 'repo', but I do know about these similar topics: 'git repo', 'ppa repo', 'repos', 'repository', 'yum repo'
14:52 redragon_ @yum repo
14:53 glusterbot redragon_: The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
14:56 Norky gGer, I've actually just done an upgrade on my existing cluster, from glusterfs-3.4.0-3.el6.x86_64 to glusterfs-3.4.0-8.el6.x86_64
14:56 Norky and that caused the operating version to change
14:57 Norky make sure you're running exactly the same major AND minor version from the same repository on all machines
15:00 rwheeler joined #gluster
15:01 redragon_ Norky, so we stopped everything and replaced the package with packages from download.gluster.org instead of epel repo
15:01 redragon_ we'll see if the rebuild goes any faster
15:02 redragon_ all the machines are on gig network, same switches, and underlying drive is raid 0 for my glusterfs, I would think the sync would be fairly quick for 5T of data
15:04 gGer norky, yes, I'm running the same version
15:04 gGer norky, this is ubuntu, not RHEL
15:05 deepakcs joined #gluster
15:07 Norky in my case: sudo grep version /var/lib/glusterd/glusterd.info ; sudo yum -y update > /dev/null ; sudo grep version /var/lib/glusterd/glusterd.info    led to
15:07 Norky operating-version=1
15:07 Norky operating-version=2
15:07 gkleiman joined #gluster
15:08 Norky see if there are more recent versions from the Ubuntu ppa
15:08 gGer no there isnt
15:08 gGer i did apt-get update.. apt-get upgrade, nothing
15:08 Norky what exact version are you running?
15:09 daMaestro joined #gluster
15:10 kkeithley /var/lib/glusterd/glusterd.info is generated at run-time (by glusterd). FWIW, the _source_ did not change between 3.4.0-3 and 3.4.0-8, only the packaging. I could go look through the code to see what made the op-version increment like that, but maybe one of our other devs might know, and answer before I can figure it out.
15:11 gGer ii  glusterfs-server                   3.4.0final-ubuntu1~raring1       amd64        clustered file-system (server package)
15:12 gGer btw i see in logs
15:12 gGer [2013-08-19 10:46:00.136684] I [client-handshake.c:1658:select_server_supported_programs] 0-datavol-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
15:13 Norky ty kkeithley
15:13 Norky it surprised me a little too
15:15 gGer Started running /usr/sbin/glusterfs version 3.4.0
15:15 Norky the changelog suggests it's mostly just packaging changes between -3 and -8 (with your name on :)
15:15 kkeithley to be a bit clearer, /var/lib/glusterd/glusterd.info is generated at run-time the first time it runs, mainly to generate the node uuid.
15:15 kkeithley right
15:16 gGer kkeithley, so how do I get the new node with op-version=2 added to the old cluster?
15:16 gGer kkeithley, can I edit op-version=1 => 2 on the old nodes and restart the glusterds?
15:18 kkeithley I think that should be okay.
15:20 kkeithley But I don't do the Ubuntu packaging, and AFAIK that hasn't changed anyway, so why the op-version changed during an update is a puzzle.
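Spelled out, the manual bump gGer is proposing (and kkeithley thinks should be okay) would look roughly like this on each of the old nodes; back the file up first, and note that the service name is distro-dependent:

    cp /var/lib/glusterd/glusterd.info /var/lib/glusterd/glusterd.info.bak
    sed -i 's/^operating-version=1$/operating-version=2/' /var/lib/glusterd/glusterd.info
    service glusterd restart   # the Ubuntu/Debian packages call the service glusterfs-server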
15:20 Norky let me triple check what I'm seeing...
15:20 robo joined #gluster
15:21 Norky of course, that would require having a copy of the old packages...
15:22 kkeithley The old RPMs are on the download site in the "old" directory
15:23 kkeithley http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/old/
15:23 glusterbot <http://goo.gl/uHktbx> (at download.gluster.org)
15:23 Norky ahh, I was looking in the wrong "old" directory
15:32 LoudNoises joined #gluster
15:32 awheeler joined #gluster
15:32 awheeler joined #gluster
15:38 mmalesa joined #gluster
15:40 ryant I've got a volume that needs heavy self-healing and I worry that it's bogged down in locks.  A statedump on one of the servers shows that there's 50010 instances of xlator.features.locks.work-locks.inode.
15:40 mmalesa_ joined #gluster
15:41 ryant when I try to use the cli even to get volume status, it often fails silently
15:41 ryant and I haven't been able to run heal info for days
15:41 ryant is there any way to get visibility into what files are causing problems?  Perhaps manual intervention could help
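For context, the statedump ryant mentions is requested through the CLI and written out on the brick servers; a short sketch, with the volume name "work" inferred from the lock-table entries above:

    gluster volume statedump work all
    # dumps land on each server, typically under /tmp or /var/run/gluster depending on version;
    # the location can be changed with the server.statedump-path volume option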
15:43 kaptk2 joined #gluster
15:45 Norky hmm, right, installing 3.4.0-3 on a 'fresh' machine still gives me "operating-version=2"
15:46 Norky so it looks like "operating-version=1" is only on machines that *were* running 3.3
15:47 rwheeler joined #gluster
15:48 Norky presumably it should be updated when upgrading to 3.4. It looks like that doesn't happen for me going from 3.3.0 to 3.4.0-3, but it *does* when I upgrade to 3.4.0-8. gGer on the other hand with Ubuntu ppa packages still gets version 1
15:48 mmalesa joined #gluster
15:55 [o__o] joined #gluster
15:59 kkeithley that doesn't match the comments in the code either.
15:59 Technicool joined #gluster
16:03 ryant why does "gluster volume heal $VOLNAME info" time out?  When I run it, it thinks for a while and then just dies without producing any output.
16:04 ryant rerunning it seems to hit a CLI timeout because an immediate re-run fails
16:07 bulde joined #gluster
16:13 rwheeler joined #gluster
16:23 zerick joined #gluster
16:27 ryant "gluster volume heal $VOLNAME info" runs for exactly 2 minutes and dies with return value 110
16:27 ryant why is this?
16:28 ryant in the cli log I get
16:28 ryant [2013-08-19 16:03:04.017604] W [dict.c:2339:dict_unserialize] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0xa5) [0x7fb1c8c513c5] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5) [0x7fb1c8c50945] (-->gluster(gf_cli3_1_heal_volume_cbk+0x1e5) [0x4229c5]))) 0-dict: buf is null!
16:28 ryant [2013-08-19 16:03:04.017631] E [cli-rpc-ops.c:5956:gf_cli3_1_heal_volume_cbk] 0-: Unable to allocate memory
16:40 nshaikh left #gluster
16:45 ryant the unable to allocate memory error comes from the immediate re-run of the command
16:45 ryant [2013-08-19 16:44:36.114726] W [rpc-transport.c:174:rpc_transport_load] 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
16:45 ryant [2013-08-19 16:44:36.177715] I [cli-rpc-ops.c:5916:gf_cli3_1_heal_volume_cbk] 0-cli: Received resp to heal volume
16:45 ryant [2013-08-19 16:44:36.209104] W [dict.c:2339:dict_unserialize] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0xa5) [0x7faae30513c5] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5) [0x7faae3050945] (-->gluster(gf_cli3_1_heal_volume_cbk+0x1e5) [0x4229c5]))) 0-dict: buf is null!
16:45 ryant [2013-08-19 16:44:36.209195] E [cli-rpc-ops.c:5956:gf_cli3_1_heal_volume_cbk] 0-: Unable to allocate memory
16:45 ryant [2013-08-19 16:44:36.209275] I [input.c:46:cli_batch] 0-: Exiting with: -1
16:46 ryant first time through it exits with 110, second time the return value is 255 since it died with a memory signal
16:46 hagarth joined #gluster
16:46 ryant this is 3.3.2 BTW
16:50 duerF joined #gluster
17:04 gGer 1-datavol-dht: setattr of uid/gid on <gfid:7800cc88-1a5a-4cb6-b63d-e4b11ee9f5bb>filename.foo :<gfid:00000000-0000-0000-0000-000000000000> failed (Invalid argument)
17:04 gGer do I need to have the same uid/gid on each glusterfs server? It seems setting the uid/gid fails?
17:04 _pol joined #gluster
17:04 cfeller joined #gluster
17:07 lpabon joined #gluster
17:19 bulde joined #gluster
17:31 theron joined #gluster
17:37 cfeller New config question: I'm setting up a new storage config, and all of the machines that will comprise my bricks are currently configured RAID 10.  Would there be any advantage (beyond ridiculous redundancy) to also configuring Gluster (the initial setup will be four bricks) as replica 2? Or would it make more sense to go straight pure distributed since I have RAID 10 on the bricks?
17:37 cfeller (or would you suggest doing it the other way around: no RAID on the bricks, and only do redundancy in Gluster via replica 2?)
17:37 cfeller thoughts?
17:42 semiosis i like gluster replication because i can lose a whole DC and things keep running
17:43 * semiosis in EC2
17:43 semiosis s/DC/AZ/
17:43 glusterbot What semiosis meant to say was: i like gluster replication because i can lose a whole AZ and things keep running
17:44 semiosis also, i can reboot servers (for kernel upgrades, for ex) without any downtime to clients
17:44 semiosis generally speaking, if uptime is important to you, use glusterfs replication
17:46 lalatenduM joined #gluster
17:46 cfeller OK.
18:00 robos joined #gluster
18:15 cfeller semiosis: do you still see value in keeping RAID on the bricks, given a replica 2 configuration in gluster?
18:16 semiosis depends on what matters to you
18:17 semiosis and how likely are disk failures
18:18 JoeJulian I choose not to use raid on my disks. I find the failure rate without notice is low enough that gluster replication satisfies my current needs. 3.4 is /supposed/ to recognize disk failures (somehow) and kick the failed disk from the volume.
18:28 _pol joined #gluster
18:36 robo joined #gluster
19:00 cfeller JoeJulian: each server here has 12 disks for storage (they are Dell R515 servers).  If I nuked the RAID config, would you present each disk of the machine to Gluster as a brick (pairing it with its matching disk on another machine), or would you LVM all 12 disks of each machine as a single brick?
19:00 cfeller (I guess I'm asking this question, as I'm not sure how LVM would handle a failure if I went that route.)
19:00 cfeller (and the other route would just be a lot of bricks.)
19:05 JoeJulian cfeller: Actually, I do use lvm and I generate each lv tied to a specific disk, "lvcreate -n foo -l 500 clustervg /dev/sda1". This gives me the flexibility to migrate an lv to another disk temporarily if I want to.
19:05 JoeJulian Plus, of course, I can resize bricks easily.
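A hedged sketch of how one such LV typically becomes a brick; the filesystem choice, mount point, server names and volume layout below are all assumptions, not details from this conversation:

    mkfs.xfs -i size=512 /dev/clustervg/foo   # XFS with 512-byte inodes is the commonly recommended brick layout
    mkdir -p /export/foo
    mount /dev/clustervg/foo /export/foo
    # pair it with the matching brick on another server in a replica 2 volume
    gluster volume create datavol replica 2 server1:/export/foo server2:/export/foo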
19:16 nueces joined #gluster
19:23 tqrst joined #gluster
19:26 Recruiter joined #gluster
19:29 tqrst what's the latest on gluster native vs. nfs in terms of speed? I just came across a mailing list post by Jeff Darcy claiming that the nfs client does more caching, but couldn't find much else in terms of comparison except for slides about 3.1.
19:30 JoeJulian It's not just a claim. The kernel nfs stack uses the kernel fscache.
19:30 semiosis but you can disable that with the noac mount option
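The caching JoeJulian and semiosis are talking about lives on the NFS client side; a minimal sketch of mounting Gluster's built-in NFS server (which speaks NFSv3 over TCP) with attribute caching turned off, server and volume names assumed:

    mount -t nfs -o vers=3,tcp,noac server1:/gv0 /mnt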
19:34 bennyturns joined #gluster
19:41 zombiejebus joined #gluster
20:07 a2_ the bigger hit for the native client is the fuse context switches, more than anything else
20:07 a2_ new upstream fuse enhancements improve the situation pretty significantly - write caching and readdirplus.. with the two in place (hopefully available in a distro kernel in a few months?) the native client will pretty much be better than nfs in *any* workload/test
20:15 xPsycho I am using Gluster v3.4.0 with 4 peers.  After moving each of them to new hardware, a few of them are showing "Sent and Received peer request" instead of "Peer in Cluster".  I updated /var/lib/glusterd/glusterd.info with the correct UUID on each of them before starting gluster services.  I don't see anything obvious in the logs.
20:18 semiosis try restarting glusterd
20:19 semiosis one at a time, on each server
20:21 xPsycho did try that
20:21 xPsycho do you know what that status message mean?
20:21 semiosis try again
20:25 xPsycho done ... still showing same status
20:31 robos joined #gluster
20:39 glusterbot New news from resolvedglusterbugs: [Bug 862082] build cleanup <http://goo.gl/pzQv9M>
20:41 semiosis xPsycho: are the right ,,(ports) allowed by iptables?
20:41 glusterbot xPsycho: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
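If a firewall were in play, rules matching glusterbot's list would look roughly like this; the brick port range assumes 3.4, and the upper bound depends on how many bricks each server carries:

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (24008 only for rdma)
    iptables -A INPUT -p tcp --dport 49152:49200 -j ACCEPT   # bricks (glusterfsd), 3.4 and later
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT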
20:42 xPsycho iptables is disabled ... no internal firewall between these boxes
20:47 semiosis hmmm
20:47 xPsycho does it sound like a connectivity issue?
20:48 semiosis well you could rule out connectivity issues doing a telnet test to port 24007
20:49 xPsycho that's working fine
20:49 xPsycho of the 3 boxes, it's just one that seems to be suspect ... should I just erase gluster and have it rebuild the config?
20:49 xPsycho err, of the 4 boxes
20:50 semiosis on that server, stop glusterd, make a backup of /var/lib/glusterd, then delete everything in that folder EXCEPT for the glusterd.info file
20:50 semiosis start glusterd up again, then probe from that server to one of the others in the cluster
20:50 semiosis wait a minute, check gluster peer status, and if necessary restart glusterd on both that server & the one you sent the probe to
20:51 semiosis then again...
20:51 semiosis wait a minute, check gluster peer status, and if necessary restart glusterd on both that server & the one you sent the probe to
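Put together as commands, the procedure semiosis is describing looks roughly like this on the stuck server; the peer hostname and service name here are assumptions:

    service glusterd stop                        # glusterfs-server on Debian/Ubuntu packages
    cp -a /var/lib/glusterd /var/lib/glusterd.bak
    # keep glusterd.info (it holds this node's UUID), wipe the rest of glusterd's state
    find /var/lib/glusterd -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
    service glusterd start
    gluster peer probe server2                   # any healthy peer in the cluster
    sleep 60; gluster peer status
    # if peers still aren't "Peer in Cluster", restart glusterd on both ends and check again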
20:56 xPsycho okay, we're in a slightly better state now, but not 100%
20:56 xPsycho the server in question is now showing "Peer in cluster" with everyone else (which it was not before)
20:57 xPsycho but "peer status" on 2 of the servers still show the offending server as "Sent and Received peer request"
20:57 semiosis o/
20:57 semiosis restart glusterd on those
20:57 xPsycho did
20:57 semiosis probe
20:57 semiosis sound like i am just guessing?  i am.  but this strategy has helped me out of the same situation more than a few times :)
20:57 xPsycho "already in peer list"
20:59 badone joined #gluster
20:59 xPsycho should I try the same /var/lib/glusterd removal technique on these other two?
21:09 semiosis might as well
21:29 andreask joined #gluster
21:30 xPsycho VERY VERY close now LOL
21:30 xPsycho one damn connection showing "Accepted peer request"
21:30 xPsycho the rest are okay
21:31 xPsycho nevermind, a restart fixed that guy too
21:31 MugginsM joined #gluster
21:31 xPsycho okay ... that wasn't so bad to resolve ... thanks for making this software so damn resilient :)
21:31 xPsycho I appreciate the help
21:33 semiosis \o/
21:33 semiosis gluster achievement unlocked
21:33 xPsycho HAHAH
21:34 xPsycho nice
21:37 johnmark w00t ;)
21:51 _pol joined #gluster
21:58 duerF joined #gluster
21:58 fidevo joined #gluster
22:02 chirino joined #gluster
22:06 jebba joined #gluster
22:15 chirino joined #gluster
22:32 awheele__ joined #gluster
22:49 toad joined #gluster
23:24 Shdwdrgn joined #gluster
23:35 robo joined #gluster
23:40 Shdwdrgn joined #gluster
