
IRC log for #gluster, 2013-08-01


All times shown according to UTC.

Time Nick Message
00:11 asias joined #gluster
00:17 dhsmith joined #gluster
00:45 bala joined #gluster
00:48 vpshastry joined #gluster
00:56 MugginsM joined #gluster
00:57 spresser_ joined #gluster
00:57 spresser joined #gluster
01:00 recidive joined #gluster
01:02 asias joined #gluster
01:15 Peanut_ joined #gluster
01:15 yinyin joined #gluster
01:42 asias joined #gluster
01:46 bennyturns joined #gluster
01:56 ricky-ticky joined #gluster
01:56 raghug joined #gluster
02:02 harish_ joined #gluster
02:10 dhsmith joined #gluster
02:15 raghug joined #gluster
02:17 zombiejebus joined #gluster
02:18 dhsmith joined #gluster
02:24 dhsmith joined #gluster
02:27 dhsmith_ joined #gluster
02:27 asias joined #gluster
02:49 lalatenduM joined #gluster
02:53 saurabh joined #gluster
02:53 kshlm joined #gluster
03:06 vpshastry joined #gluster
03:19 Kins joined #gluster
03:19 shubhendu joined #gluster
03:23 _pol joined #gluster
03:33 crashmag joined #gluster
03:42 Paul-C joined #gluster
03:45 Paul-C joined #gluster
03:49 Paul-C left #gluster
03:51 Paul-C joined #gluster
03:57 itisravi joined #gluster
03:58 shylesh joined #gluster
03:59 sgowda joined #gluster
04:02 mohankumar joined #gluster
04:06 bharata joined #gluster
04:09 Paul-C joined #gluster
04:21 raghug joined #gluster
04:28 dusmant joined #gluster
04:28 CheRi joined #gluster
04:32 rjoseph joined #gluster
04:45 vpshastry joined #gluster
04:45 satheesh joined #gluster
04:52 bala joined #gluster
05:07 raghu joined #gluster
05:22 bala joined #gluster
05:31 vijaykumar joined #gluster
05:33 shireesh joined #gluster
05:37 kshlm joined #gluster
05:41 45PAA95XA joined #gluster
05:53 bulde joined #gluster
05:57 rastar joined #gluster
05:59 ricky-ticky joined #gluster
06:01 lalatenduM joined #gluster
06:02 lala_ joined #gluster
06:06 vimal joined #gluster
06:10 pea_brain joined #gluster
06:14 psharma joined #gluster
06:27 Recruiter joined #gluster
06:33 ngoswami joined #gluster
06:54 mooperd joined #gluster
06:55 ekuric joined #gluster
06:57 ipalaus joined #gluster
06:57 ipalaus joined #gluster
06:58 mooperd joined #gluster
06:58 ctria joined #gluster
07:05 saurabh joined #gluster
07:09 shireesh joined #gluster
07:09 raghu joined #gluster
07:33 vshankar joined #gluster
07:37 MACscr joined #gluster
07:39 manik joined #gluster
07:49 bala joined #gluster
07:53 zykure|uni joined #gluster
07:54 deepakcs joined #gluster
07:56 risibusy joined #gluster
07:58 kshlm joined #gluster
08:04 chirino joined #gluster
08:12 pea_brain joined #gluster
08:15 risibusy joined #gluster
08:15 ipalaus joined #gluster
08:15 ipalaus joined #gluster
08:16 Norky joined #gluster
08:18 puebele1 joined #gluster
08:19 ricky-ticky joined #gluster
08:25 kaushal_ joined #gluster
08:27 manik joined #gluster
08:34 NeatBasis_ joined #gluster
08:35 chirino joined #gluster
08:38 harish_ joined #gluster
08:39 aravindavk joined #gluster
08:46 atrius joined #gluster
08:47 edward1 joined #gluster
08:51 nagenrai joined #gluster
08:53 manik joined #gluster
08:57 manik joined #gluster
08:59 manik joined #gluster
09:03 Dga joined #gluster
09:05 Dga How can I get Gluster Storage Platform? Is it an open source project?
09:12 ndevos Dga: have you seen http://gluster.org ?
09:12 glusterbot Title: Frontpage | Gluster Community Website (at gluster.org)
09:14 satheesh1 joined #gluster
09:16 Dga ndevos: I found this on the website: http://gluster.org/community/documentation/index.php/Gluster_3.1:_Downloading_the_Gluster_Storage_Platform_Software
09:16 glusterbot <http://goo.gl/pYwhMP> (at gluster.org)
09:17 Dga but on the download page I don't see any link for Gluster Storage Platform
09:19 rjoseph joined #gluster
09:22 ujjain joined #gluster
09:23 ndevos Dga: version 3.1 is old, 3.4 is current, packages can be found here: http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/
09:23 glusterbot <http://goo.gl/Rz8gw> (at download.gluster.org)
09:33 Dga ok thanks for your answer, but it's not possible to get this version anymore then? :s
09:33 bulde joined #gluster
09:35 chirino joined #gluster
09:48 hagarth Dga: if you are looking for a GUI to manage gluster, you can use ovirt
09:49 hagarth Dga: Storage Platform is not developed or supported anymore.
09:53 ctria joined #gluster
09:56 mohankumar joined #gluster
09:57 skyw joined #gluster
09:58 glusterbot New news from resolvedglusterbugs: [Bug 975599] enabling cluster.nufa on the fly does not change client side graph <http://goo.gl/CTk2y>
09:59 Norky joined #gluster
10:00 aravindavk joined #gluster
10:01 Dga ok thanks for your answer hagarth
10:04 vpshastry joined #gluster
10:22 skyw joined #gluster
10:23 satheesh joined #gluster
10:46 raghug joined #gluster
10:50 ctria joined #gluster
10:51 ipalaus joined #gluster
10:51 ipalaus joined #gluster
10:51 bulde joined #gluster
10:52 nagenrai joined #gluster
10:52 nagenrai joined #gluster
10:57 duerF joined #gluster
11:08 bfoster joined #gluster
11:08 nagenrai joined #gluster
11:12 kkeithley joined #gluster
11:13 kwevers joined #gluster
11:17 Chombly joined #gluster
11:18 harish_ joined #gluster
11:18 CheRi joined #gluster
11:23 ctria joined #gluster
11:27 kwevers joined #gluster
11:29 nagenrai joined #gluster
11:31 vijaykumar joined #gluster
11:34 mohankumar joined #gluster
11:43 hagarth joined #gluster
11:45 raghug joined #gluster
11:47 bala joined #gluster
11:54 nagenrai joined #gluster
11:54 manik joined #gluster
11:56 nagenrai joined #gluster
12:02 pea_brain joined #gluster
12:11 ricky-ticky joined #gluster
12:12 kaushal_ joined #gluster
12:13 nagenrai joined #gluster
12:16 rcheleguini joined #gluster
12:19 mohankumar joined #gluster
12:22 pea_brain joined #gluster
12:24 lpabon joined #gluster
12:26 ninkotech joined #gluster
12:31 vpshastry left #gluster
12:32 plarsen joined #gluster
12:33 nagenrai joined #gluster
12:33 pea_brain joined #gluster
12:45 vpshastry joined #gluster
12:48 recidive joined #gluster
12:51 nagenrai joined #gluster
12:52 skyw joined #gluster
13:03 _ilbot joined #gluster
13:03 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
13:03 deepakcs joined #gluster
13:05 nagenrai joined #gluster
13:06 jdarcy joined #gluster
13:06 jdarcy Cloudy/rainy today, might as well work.
13:07 ndevos no rain or clouds here, so working from the balcony :)
13:11 longsleep joined #gluster
13:12 longsleep Hi guys, i am playing around with Gluster 3.4 and am having trouble recreating a failed brick (Failed to get extended attribute trusted.glusterfs.volume-id for brick dir /export/brick1) - can anyone help?
13:13 bennyturns joined #gluster
13:13 xymox joined #gluster
13:15 lpabon joined #gluster
13:15 manik joined #gluster
13:17 vpshastry joined #gluster
13:18 bugs_ joined #gluster
13:21 jebba joined #gluster
13:22 ricky-ticky joined #gluster
13:24 mbukatov joined #gluster
13:29 nagenrai joined #gluster
13:39 Mick27 joined #gluster
13:40 Mick27 hello
13:40 glusterbot Mick27: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:40 Mick27 lol
13:41 Mick27 last week I asked how to 'rebalance' a replicate that I just recreated after a fail. for 3.2 it was something like "find . | xargs stat"
13:41 Mick27 is there something better for 3.4 ?
13:43 kkeithley_ self heal!   3.4 will automatically self heal after you replace a failed volume. Or you can issue the cli to start it, or your find cmd will also start it IIRC
13:43 kkeithley_ rebalance occurs when you add a new volume.
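(A minimal sketch of the CLI commands kkeithley_ is referring to, assuming 3.3/3.4 syntax; "myvol" is a placeholder volume name, so verify against the volume help on your version before relying on it.)

    # trigger self-heal of entries the volume already knows need healing (3.3+)
    gluster volume heal myvol
    # or crawl everything and heal, e.g. after replacing a brick with a blank one
    gluster volume heal myvol full
    # rebalance is the separate operation used after adding bricks
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status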
13:43 Mick27 can you confirm it is new since 3.2 ? just to make sure I am explaining myself clearly
13:44 kkeithley_ I'm just one of the devs, don't take my word. ;-)
13:44 kkeithley_ auto self heal was new in 3.3
13:44 Mick27 ok
13:44 Mick27 should be this then
13:45 Mick27 I'll recreate the issue with 3.4 if I have some time
13:47 tziOm joined #gluster
13:47 bennyturns joined #gluster
13:48 nagenrai joined #gluster
13:52 kwevers joined #gluster
13:53 aliguori joined #gluster
14:00 ricky-ticky joined #gluster
14:01 longsleep So i have problems with auto heal in 3.4. I just recreated a brick (empty) and am trying to recover onto it - no luck, as it is never able to start up this new brick.
14:05 JoeJulian longsleep: Yeah, I just discovered that yesterday... Has to do with a missing volume id.
14:06 longsleep JoeJulian: yes
14:06 longsleep JoeJulian: i have tried everything i can think of to get it recreated, no luck so far.
14:07 JoeJulian The brute-force method is to create the extended attribute. I'm looking through the source to see if there's a more reasonable alternative.
14:07 longsleep JoeJulian: i tried to recreate the attribute, but the value is somehow computed. Did not look into the details yet. Its not that easy to just set the uuid there.
14:08 nagenrai joined #gluster
14:08 longsleep JoeJulian: the exact error i get is E [posix.c:4288:init] 0-machines-posix: Extended attribute trusted.glusterfs.volume-id is absent
14:09 JoeJulian yep
14:09 longsleep JoeJulian: good idea to look at the source - will do that too
14:14 nagenrai joined #gluster
14:19 nagenrai joined #gluster
14:21 kaptk2 joined #gluster
14:29 JoeJulian longsleep: brute force method: (vol=myvol; brick=/tmp/brick1; setfattr -n trusted.glusterfs.volume-id -v $(grep volume-id /var/lib/glusterd/vols/$vol/info | cut -d= -f2 | sed 's/-//g') $brick)
14:30 jdarcy joined #gluster
14:30 JoeJulian nope... 1 thing wrong with that...
14:30 JoeJulian longsleep: brute force method: (vol=myvol; brick=/tmp/brick1; setfattr -n trusted.glusterfs.volume-id -v 0x$(grep volume-id /var/lib/glusterd/vols/$vol/info | cut -d= -f2 | sed 's/-//g') $brick)
14:31 longsleep JoeJulian: Thanks - i will try that in a few moments. I noticed that the posix translator has changed on github quite a lot when compared to the 3.4 branch. So this problem is a bug?
14:32 JoeJulian Since there doesn't seem to be a cli method of replacing a brick with a blank one which is a fairly common repair method, I'd certainly call it a bug.
14:32 JoeJulian I'm sure the purpose was to keep self-heal from filling up your root if your brick failed to mount, but...
14:33 longsleep JoeJulian: Ok good - so i will report an issue. I thought this would be a valid test for a brick where the disk just died.
14:34 _pol joined #gluster
14:36 longsleep JoeJulian: Your solution worked and the empty brick is now online. Thank you!
14:37 JoeJulian You're welcome
14:37 zaitcev joined #gluster
14:41 longsleep JoeJulian: heal worked perfectly well too. So this really should be added to the documentation / wiki until a command line for this becomes available.
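(A hedged, consolidated sketch of the workaround JoeJulian gives at 14:30 for replacing a dead brick with an empty directory; the volume name and brick path are placeholders, so adapt them to your layout and test outside production first.)

    vol=myvol
    brick=/export/brick1

    # recreate the empty brick directory (e.g. a new disk mounted at the old path)
    mkdir -p "$brick"

    # restore the volume-id xattr the brick process refuses to start without,
    # taking the value from glusterd's own metadata for the volume
    setfattr -n trusted.glusterfs.volume-id \
        -v 0x$(grep volume-id /var/lib/glusterd/vols/$vol/info | cut -d= -f2 | sed 's/-//g') \
        "$brick"

    # restart the failed brick process and kick off a full self-heal
    gluster volume start "$vol" force
    gluster volume heal "$vol" full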
14:41 _pol joined #gluster
14:47 raghug joined #gluster
14:47 _pol joined #gluster
14:48 fleducquede joined #gluster
14:50 chirino joined #gluster
14:54 longsleep JoeJulian: I just added a bug report for this: https://bugzilla.redhat.com/show_bug.cgi?id=991084
14:54 glusterbot <http://goo.gl/uERz9z> (at bugzilla.redhat.com)
14:54 glusterbot Bug 991084: high, unspecified, ---, vbellur, NEW , No way to start a failed brick when replaced the location with empty folder
14:56 ekuric1 joined #gluster
14:57 ekuric joined #gluster
14:58 daMaestro joined #gluster
14:59 sprachgenerator joined #gluster
15:00 puebele joined #gluster
15:01 45PAA95XA left #gluster
15:07 cicero welp
15:07 cicero one of my bricks went into read-only mode
15:08 cicero in a distributed-replicated setup
15:08 cicero (two pairs of bricks)
15:08 cicero can i just nuke that one brick and restart glusterd?
15:09 JoeJulian what version of glusterfs?
15:10 cicero 3.3.1
15:10 JoeJulian @learn 3.4 as To replace a brick with a blank one, see http://joejulian.name/blog/replacing-a-brick-on-glusterfs-340
15:10 glusterbot JoeJulian: The operation succeeded.
15:10 JoeJulian cicero: yes.
15:10 vpshastry joined #gluster
15:11 cicero i suppose that blog post about 3.4 is just a coincidence
15:11 JoeJulian It is.
15:12 cicero :)
15:12 cicero so if i present a brand new fs at the path of the old brick, gluster will realize this is a fresh brick and *NOT* wipe the other bricks, right
15:12 cicero that's my biggest fear of all time
15:14 JoeJulian right
15:14 cicero ok
15:14 cicero what are the chances the entire volume will not accept more writes until the brick is in sync?
15:18 vpshastry joined #gluster
15:25 semiosis @3.4
15:25 glusterbot semiosis: (#1) 3.4 sources and packages are available at http://goo.gl/zO0Fa Also see @3.4 release notes and @3.4 upgrade notes, or (#2) To replace a brick with a blank one, see http://goo.gl/bhbwd2
15:25 bluefoxxx joined #gluster
15:26 bluefoxxx I'm showing no split-brain but showing heal-failed, which I don't understand.
15:27 JoeJulian check the glustershd.log(s) to find clues to why that's failing.
15:29 bluefoxxx https://bugzilla.redhat.com/show_bug.cgi?id=876214 it appears to be exactly this.
15:29 mrfsl joined #gluster
15:29 glusterbot <http://goo.gl/eFkPQ> (at bugzilla.redhat.com)
15:29 glusterbot Bug 876214: high, unspecified, ---, jdarcy, CLOSED WONTFIX, Gluster "healed" but client gets i/o error on file.
15:29 bluefoxxx JoeJulian, it's because I ran rsync at the same time on 2 nodes from a cronjob in a 2 node cluster
15:29 sjoeboo joined #gluster
15:29 JoeJulian Use --inplace when doing rsync to gluster volumes. You'll be much happier. :D
15:30 bluefoxxx what's that do specifically?
15:30 bluefoxxx also looking into how to break these gfids down and delete the files off the silo
15:30 bluefoxxx because I can't find a way to repair this
15:30 JoeJulian It doesn't create tempfiles then rename them.
15:30 bluefoxxx ah ok
15:31 bluefoxxx is there a way to tell gluster to destructively fix this?
15:31 JoeJulian @gfid
15:31 glusterbot JoeJulian: The gfid is a uuid that's assigned to represent a unique inode that can be identical across replicas. It's stored in extended attributes and used in the .glusterfs tree. See http://goo.gl/Bf9Er and http://goo.gl/j981n
15:31 bluefoxxx 2013-08-01 09:10:01 <gfid:b4db5b63-57b6-45f6-84d0-938fc41f9a3c>/wwmtpress.smil
15:31 solo18t joined #gluster
15:31 mrfsl Hello Again All. Can someone answer this question: Are there recommendations on brick size? Larger and fewer versus smaller and more, for performance?
15:32 JoeJulian mrfsl: My answer is always yes.
15:32 bluefoxxx JoeJulian, why isn't there an oplog?
15:33 JoeJulian I've always assumed because it would add an additional expense in complexity for no clear benefit.
15:33 misuzu joined #gluster
15:33 JoeJulian mrfsl: It all depends on the use case.
15:33 bluefoxxx Uh.  You could heal by coming up and replaying the oplog, optimizing it down (create->update->delete becomes 'ignore this change', create->update->update becomes "create with current content", etc)
15:34 JoeJulian Or you just find the end-state and ensure your replica matches it.
15:34 solo18t Hello All.  Our existing Appliance.repo file points to http://bits.gluster.com/pub/gluster/appliance/latest/.  I found out that the URL is no longer valid.  Is there an updated Appliance.repo file that I can download?
15:34 glusterbot <http://goo.gl/U1FWx6> (at bits.gluster.com)
15:34 manik joined #gluster
15:34 bluefoxxx JoeJulian, yes, which I do by doing an ls -lR against 3 million files, which takes about 45 minutes.
15:35 JoeJulian I'm not a dev, I just help people make it work. Feel free to file a bug report requesting that feature.
15:35 glusterbot http://goo.gl/UUuCq
15:36 JoeJulian solo18t: Wow! That's old. :D There is no appliance anymore.
15:36 JoeJulian @yum repo | solo18t
15:36 solo18t LOL… it was an old script that my ex-coworker wrote.
15:37 JoeJulian gah.. I botched my own syntax...
15:37 JoeJulian ~yum repo | solo18t
15:37 glusterbot solo18t: The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
15:37 cicero JoeJulian: thx for your help so far. re --inplace rsync tip, is that for all rsyncs *to* gluster and what's the rationale behind that? not creating new files?
15:37 bluefoxxx JoeJulian, nod.  My point is that GlusterFS is a huge database that stores large contiguous bytestream data, and replication is a well-known concept.
15:39 JoeJulian rsync creates a new tempfile, syncs it, then renames over the original file. The tempfile filename hashes out to dht subvolume X. The original filename hash calculates out to belong on Y. A lookup for the original filename will create a sticky-pointer on brick X pointing to Y. Any lookups are going to have that one extra dereference in order to get to your file.
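(A hedged example of the --inplace advice JoeJulian explains above: rsync updates the existing file instead of writing a tempfile and renaming it, so the filename whose DHT hash places the file never changes and no extra sticky-pointer lookup is left behind. The source and destination paths are placeholders.)

    # write directly into the destination files on the gluster client mount
    rsync -av --inplace /data/src/ /mnt/glustervol/dest/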
15:40 bluefoxxx essentially, it's the same thing as inode numbering
15:40 JoeJulian I think I reversed X and Y in that second to last sentence, but you get the idea.
15:40 bluefoxxx you create different files on gluster by making tempfiles, they get different inodes.  They get different guids too.
15:40 ipalaus joined #gluster
15:40 ipalaus joined #gluster
15:40 bluefoxxx when you mv them to the same location, you have two different guids in the same place
15:41 cicero JoeJulian: whoa, cool
15:41 bluefoxxx do this at the same time and gluster goes, "Wait, wtf?"
15:41 bluefoxxx do it in-place and you're accessing the same file, same guid
15:41 bluefoxxx so while you might make a mess... you won't break gluster
15:41 bluefoxxx it's a simple concept but you have to be a giant nerd for it to be obvious
15:41 bluefoxxx lots of stuff nobody should ever have to think about in there
15:41 JoeJulian Hehe, true that.
15:42 bluefoxxx JoeJulian, i'm not using rsync
15:42 JoeJulian Well, clustered filesystems do have their idiosyncrasies.
15:42 bluefoxxx I'm using 'rm' and 'install' to delete the files and put new ones in there
15:42 bluefoxxx same thing
15:44 JoeJulian So you've got a race condition rm'ing and install'ing on two clients simultaneously?
15:44 mrfsl @JoeJulian I have three gluster nodes, each with 1 DAS array of ~16TB RAID6. I have more than 85% writes and millions of files approximately 16KiB in size.
15:45 JoeJulian Is a node a server, a client, or both?
15:45 mrfsl server
15:45 JoeJulian How many clients?
15:45 mrfsl more logical volumes (bricks) or less
15:45 mrfsl a few hundred clients
15:46 theron joined #gluster
15:46 JoeJulian I presume they're writing unique files?
15:46 mrfsl yes
15:46 bluefoxxx yeah
15:46 bluefoxxx can't I prevent this with an arbiter :|
15:48 bluefoxxx ugh
15:48 JoeJulian mrfsl: Unless you have some logic reason to separate volumes, I can't see any advantage to doing so. Go with the large bricks.
15:48 JoeJulian bluefoxxx: What version is this?
15:48 bluefoxxx needs gluster volume heal wowza failed delete
15:48 bluefoxxx JoeJulian, 3.3 on CentOS 6
15:48 JoeJulian bluefoxxx: Can you try it with 3.4?
15:48 bulde joined #gluster
15:49 bluefoxxx JoeJulian, full risk analysis of upgrading in production.
15:49 JoeJulian Ah. You have the same resources I do, eh? :(
15:49 bluefoxxx we don't have a dev environment
15:50 bluefoxxx JoeJulian, does 3.4 have a command to delete heal-failed files?
15:50 JoeJulian I would file a bug report on the race. Deletion and creation of the same filename from different nodes should be idempotent.
15:50 glusterbot http://goo.gl/UUuCq
15:50 mrfsl I was thinking that with three nodes and a replicate once system only two nodes would ever be in use for the writing of a single file. Since I had many files I was wondering if more brick pairs would allow me to distribute the load more uniformly between the nodes. --- Thoughts?
15:51 JoeJulian bluefoxxx: And no. There's no cli fix for that. I'm just hoping (and with no valid reason) that 3.4 would prevent that from happening.
15:51 bluefoxxx ah
15:52 bluefoxxx JoeJulian, writing awk scripts >:|
15:53 JoeJulian bluefoxxx: If a gfid file in .glusterfs has a link count of 1, it should be able to be removed.
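(A hedged way to list candidate orphans per JoeJulian's note: regular gfid entries under .glusterfs are hardlinks to the named file on the brick, so a plain file with a link count of 1 usually means the named file is already gone. The brick path is a placeholder; review the output carefully before deleting anything.)

    # list gfid files on the brick that no longer have a named counterpart
    find /mnt/silo0/.glusterfs -type f -links 1 -print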
15:53 bluefoxxx what the fuck just happened
15:54 bluefoxxx I rm'd the silo/$file but not the gfid file
15:54 bluefoxxx and suddenly the file is accessible and has content
15:54 bluefoxxx I rm'd it on both nodes
15:54 JoeJulian Oh, I see...
15:54 bluefoxxx rm'ing it on one node works
15:54 bluefoxxx wtf?
15:55 bluefoxxx where did it even
15:55 bluefoxxx did it like self-heal because the file was missing on the local silo?
15:56 JoeJulian two files on two different dht subvolumes with different gfids? You removed one and the remaining then was able to succeed with the lookup()?
15:57 bluefoxxx say I have ext4 silo /mnt/silo0 mounted as glusterfs /silo0 on both servers, and /silo0/dir/ has a bunch of files that give ioerror on stat()
15:57 bluefoxxx If I rm /mnt/silo0/dir/* on one or both(!) servers, suddenly the files are fine and contain content (even if rm'd on BOTH)
15:58 bluefoxxx "Hey Jim, that was strange.  What the fuck just happened?"  "I dunno, Sam.  Crazy shit."
15:59 JoeJulian Oh, that's the gfid file that still has the contents. They're hardlinks.
15:59 JoeJulian @split-brain
15:59 glusterbot JoeJulian: To heal split-brain in 3.3+, see http://goo.gl/FPFUX .
15:59 JoeJulian That page might make it a little clearer.
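(A hedged sketch of the manual cleanup described on that page, for a replica 2 setup like bluefoxxx's: on the brick holding the bad copy, remove both the named file and its gfid hardlink, then look the file up through a client mount so self-heal copies it back from the good replica. The gfid and all paths are placeholders taken from this conversation.)

    GFID=b4db5b63-57b6-45f6-84d0-938fc41f9a3c      # from the heal info output
    BRICK=/mnt/silo0                               # the brick path, not the client mount
    rm -f "$BRICK/dir/wwmtpress.smil"                          # the named file
    rm -f "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"    # its gfid hardlink
    # then, on a client mount, trigger the heal from the good copy:
    stat /silo0/dir/wwmtpress.smil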
15:59 SynchroM joined #gluster
16:03 mooperd_ joined #gluster
16:05 toad hmm lil question: has anyone made a replication with gluster using like 2 bricks, one very remote on a slow filer for backup purposes ?
16:05 toad could be nice if production app never uses the remote brick
16:05 toad for reading / direct writing i mean
16:05 toad a bit like a mysql replication slave only used as a backup server
16:07 mrfsl left #gluster
16:13 Mo_ joined #gluster
16:22 JoeJulian @lucky glusterfs geo-replication
16:22 glusterbot JoeJulian: http://goo.gl/o9uFLz
16:22 JoeJulian toad: ^
16:23 toad oh.
16:23 toad OH :)
16:23 toad im gonna check it out thanks dude
16:23 JoeJulian You're welcome
16:23 toad i can finally start the migration and stop patching the software since the crazy gluster team will do it ! :))
16:24 guigui1 joined #gluster
16:25 vpshastry joined #gluster
16:32 Technicool joined #gluster
16:36 bala joined #gluster
16:39 zombiejebus joined #gluster
16:42 jdarcy joined #gluster
16:42 th0gz19 joined #gluster
16:43 delcast hello.. i'm having a problem with gluster peer probe <hostname | ip address>
16:44 delcast nothing happens when i do that... i've already allowed port 24007 from the client in /etc/sysconfig/iptables
16:44 delcast what might be the common problem with this?
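(A hedged checklist for a peer probe that silently fails, assuming a plain CentOS 6 setup like delcast's; hostnames are placeholders. glusterd must be running on both nodes and TCP 24007 must be reachable in both directions, since the probed node connects back.)

    service glusterd status            # on both nodes
    nc -zv other-node 24007            # from each node toward the other
    # example rule for /etc/sysconfig/iptables on both nodes:
    #   -A INPUT -p tcp --dport 24007 -j ACCEPT
    # brick ports (24009 and up on 3.2/3.3) also need to be reachable by clients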
16:47 vpshastry left #gluster
16:48 GLHMarmot joined #gluster
16:49 thomaslee joined #gluster
16:49 zykure joined #gluster
16:50 bulde joined #gluster
16:51 vpshastry joined #gluster
17:06 manik joined #gluster
17:07 JoeJulian delcast: in 3.3.1 there was a bug where glusterd would get into a state like that. Restarting glusterd usually solved it.
17:07 delcast oh thanks... but i'm currently using version 3.2.7
17:08 delcast but i will try to reset it again
17:08 JoeJulian Well that's so old that it most certainly had that bug.
17:08 JoeJulian Do you have to use that old thing?
17:09 delcast im in CentOS-6 and that's what the package manager gave me
17:09 JoeJulian @yum repo
17:09 glusterbot JoeJulian: The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
17:10 delcast http://www.howtoforge.com/high-availability-storage-with-glusterfs-3.2.x-on-centos-6.3-automatic-file-replication-mirror-across-two-storage-servers
17:10 glusterbot <http://goo.gl/O2G6IL> (at www.howtoforge.com)
17:10 delcast this is where i got the repo
17:10 * JoeJulian almost got snarky...
17:10 delcast sorry.. but im just starting to learn, generally using linux xD
17:11 ccbn joined #gluster
17:11 JoeJulian howtoforge is usually old and has no way to mark a howto as invalid. I would generally advise against using them as a point of expertise.
17:12 ccbn I followed the Getting Started guide to make a replicated volume, and it is started, but when I write a file to the brick, it doesn't get synced to the other node.
17:12 delcast sure.. my boss just provided me a link... i thought it was the official one.. i guess i'll have to change repos
17:12 JoeJulian ccbn: Then you missed the step where you mount the client and access your volume through that.
17:13 JoeJulian Gotta run... bbl.
17:16 raghug joined #gluster
17:23 delcast # gluster peer probe <hostname>
17:23 delcast peer probe: failed: Probe returned with unknown errno 107
17:24 bluefoxxx delcast, have fun with that.
17:24 bluefoxxx We have so many ridiculous restrictions.  Like 2 node GlusterFS in an external data center.
17:25 bluefoxxx No quorum, split brain issue that way.
17:25 bluefoxxx Internally we switched to gfs2 + SAN, but that's not an appropriate solution.  More interestingly, we don't have fencing, so the gfs2 will probably destroy itself with massive data corruption eventually unless I get it onto raw LUN and sanlock.
17:26 bluefoxxx systems engineering is hard.  Building HA clusters is really fucking hard.
17:27 bluefoxxx People ask me why I can't do certain things with databases, or why e.g. MongoDB has way better replication than MySQL/PostgreSQL (Postgre is good though, but not Mongo good) but weaker consistency guarantees
17:27 bluefoxxx and I'm like, "Databases are hard.  Do you understand the problem they are trying to solve?"
17:28 bluefoxxx :| I've reached a point where everything I'm trying to install has inadequate tools or documentation or clunky interfaces; but it's all trying to solve problems which are inherently difficult or impossible
17:28 bluefoxxx and then people are like "Why can't you move mountains?"
17:32 bennyturns joined #gluster
17:35 jclift joined #gluster
17:39 ccbn joined #gluster
17:39 ccbn I tried mounting the gluster volume on both hosts using 'mount -t gluster' but it still isn't replicating.
17:40 delcast how do i enable SSL.. or connect to non-SSL connection?
17:40 vpshastry left #gluster
17:43 kkeithley_ gluster volume set $vol server.ssl on; gluster volume set $vol client.ssl on
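(A hedged note on kkeithley_'s SSL options: in the 3.4-era SSL support each node is expected to have its certificate, key and CA bundle in place before these options do anything; the paths below are the defaults described in the docs, so verify them against your version.)

    #   /etc/ssl/glusterfs.pem   the node's certificate
    #   /etc/ssl/glusterfs.key   the node's private key
    #   /etc/ssl/glusterfs.ca    concatenated CA / trusted peer certificates
    gluster volume set myvol server.ssl on
    gluster volume set myvol client.ssl on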
17:46 delcast thanks
17:51 delcast glusterfs.org
17:52 kkeithley_ it's a trap
17:53 toad !
17:53 toad oh noes
17:54 skyw joined #gluster
17:58 bulde joined #gluster
18:00 vpshastry1 joined #gluster
18:02 toad Geo-replication over the Internet
18:02 toad over the ninternet ! ohlala
18:04 vpshastry1 left #gluster
18:06 _pol joined #gluster
18:11 awheeler_ joined #gluster
18:15 rcheleguini joined #gluster
18:17 duerF joined #gluster
18:22 Bullardo joined #gluster
18:27 raghug joined #gluster
18:28 Recruiter joined #gluster
18:35 SynchroM_ joined #gluster
18:45 _pol joined #gluster
18:49 _pol joined #gluster
18:49 manik joined #gluster
18:50 ke4qqq joined #gluster
18:53 purpleidea joined #gluster
18:53 purpleidea joined #gluster
18:55 Danishman joined #gluster
19:16 lpabon joined #gluster
19:40 aliguori joined #gluster
19:40 _pol joined #gluster
19:42 bennyturns joined #gluster
19:43 ricky-ticky joined #gluster
19:44 ricky-ticky joined #gluster
19:45 jskinner_ joined #gluster
19:47 chirino joined #gluster
19:50 zombiejebus joined #gluster
20:00 neofob https://gist.github.com/neofob/cafa7bd9ca735024dcf9
20:00 glusterbot <http://goo.gl/zQoyoG> (at gist.github.com)
20:01 neofob sorry guys
20:07 rcoup joined #gluster
20:07 _pol what would cause this to happen:
20:07 _pol /mnt/gluster/brick0 or a prefix of it is already part of a volume
20:07 _pol [pol@gstore2:~] sudo gluster volume list
20:07 _pol No volumes present in cluster
20:07 glusterbot _pol: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
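(A hedged sketch of the cleanup the bot's link describes for the "already part of a volume" error: the old brick directory still carries GlusterFS metadata, so strip it before reusing the path. The path shown is _pol's, used here only as an example; double-check you are pointing at the right directory.)

    setfattr -x trusted.glusterfs.volume-id /mnt/gluster/brick0
    setfattr -x trusted.gfid /mnt/gluster/brick0
    rm -rf /mnt/gluster/brick0/.glusterfs
    service glusterd restart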
20:30 bugs_ joined #gluster
20:51 jruggiero left #gluster
20:56 duerF joined #gluster
21:18 badone joined #gluster
21:48 jebba joined #gluster
21:53 ipalaus joined #gluster
21:53 ipalaus joined #gluster
22:05 delcast [2013-08-01 21:50:49.616674] I [cli-rpc-ops.c:146:gf_cli_probe_cbk] 0-cli: Received resp to probe
22:05 delcast [2013-08-01 21:50:49.616735] E [cli-rpc-ops.c:178:gf_cli_probe_cbk] 0-cli: Peer <hostname | IP address> is already at a higher op-version
22:05 delcast can anyone tell me what does this mean?
22:29 asias joined #gluster
23:12 Mick27 left #gluster
23:40 raghug joined #gluster
23:42 delcast joined #gluster
23:50 chirino joined #gluster
23:52 ultrabizweb joined #gluster
23:53 Mr_SH4RK joined #gluster
23:53 Mr_SH4RK hi
23:53 glusterbot Mr_SH4RK: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
23:53 Mr_SH4RK i have a question, in a gluster brick the real directory has a size (using df -h) but the .glusterfs directory is bigger than the real directory
23:56 Mr_SH4RK # du -csh ftps/ lost+found/  ->  507G total
23:56 Mr_SH4RK # du -sh .glusterfs/  ->  727G total
23:57 Mr_SH4RK is there any way to clean up the .glusterfs directory ?
