
IRC log for #gluster, 2013-01-24


All times shown according to UTC.

Time Nick Message
00:01 ShaunR In all these docs they keep showing these bricks being hostname:/data but i never see what /data is...
00:02 ShaunR what is /data, just a simple ext3, ext4, etc partition?
00:27 Technicool joined #gluster
00:33 hateya joined #gluster
00:37 neofob joined #gluster
01:42 kevein joined #gluster
01:43 kevein_ joined #gluster
02:19 theron joined #gluster
02:21 raven-np joined #gluster
02:31 designbybeck_ joined #gluster
02:58 bharata joined #gluster
03:08 lanning ShaunR: yes
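For anyone landing here with the same question: a brick really is just a directory on an ordinary local filesystem (xfs or ext4 are the usual choices). A minimal sketch, with hypothetical device, hostnames, and volume name:
    mkfs.xfs -i size=512 /dev/sdb1     # 512-byte inodes leave room for gluster's xattrs
    mkdir -p /data
    mount /dev/sdb1 /data
    # run on one server after peer-probing the other
    gluster volume create myvol replica 2 server1:/data server2:/data
    gluster volume start myvol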
03:38 harshpb joined #gluster
03:39 hagarth joined #gluster
03:48 shylesh joined #gluster
03:53 lala joined #gluster
03:55 mohankumar joined #gluster
04:19 sahina joined #gluster
04:44 pai joined #gluster
04:47 ramkrsna joined #gluster
04:53 sgowda joined #gluster
04:55 sripathi joined #gluster
05:02 sripathi1 joined #gluster
05:22 hagarth joined #gluster
05:23 bharata joined #gluster
05:32 vpshastry joined #gluster
05:34 lala joined #gluster
05:53 ngoswami joined #gluster
06:04 raghu joined #gluster
06:07 zwu joined #gluster
06:09 deepakcs joined #gluster
06:23 vpshastry joined #gluster
06:23 bharata joined #gluster
06:23 sripathi joined #gluster
06:23 Nevan joined #gluster
06:41 raven-np1 joined #gluster
06:47 theron joined #gluster
07:22 shireesh joined #gluster
07:25 jtux joined #gluster
07:27 rgustafs joined #gluster
07:31 rastar joined #gluster
07:46 deepakcs joined #gluster
07:48 ekuric joined #gluster
08:09 jtux joined #gluster
08:10 vpshastry joined #gluster
08:19 ctria joined #gluster
08:20 tjikkun_work joined #gluster
08:22 guigui3 joined #gluster
08:27 ramkrsna joined #gluster
08:27 dobber joined #gluster
08:29 inodb joined #gluster
08:31 Joda joined #gluster
08:31 vpshastry joined #gluster
08:38 gbrand_ joined #gluster
08:45 mohankumar joined #gluster
09:00 sripathi joined #gluster
09:03 DaveS_ joined #gluster
09:05 bauruine joined #gluster
09:08 Norky joined #gluster
09:15 mohankumar joined #gluster
09:20 sripathi joined #gluster
09:20 sgowda joined #gluster
09:20 clag_ joined #gluster
09:23 vpshastry joined #gluster
09:41 rastar joined #gluster
09:42 sashko joined #gluster
09:43 pai_ joined #gluster
09:44 pai_ left #gluster
09:47 ruissalo joined #gluster
09:50 ruissalo hi guys, i am having this issue http://community.gluster.org/q/cannot-mount-gluster-volume-at-boot/  i am also using ubuntu 12.04 and glusterfs 3.2.5 built on Jan 31 2012 07:39:58 . Note that my gluster server is not on the same machine...
09:50 glusterbot <http://goo.gl/I1BAE> (at community.gluster.org)
09:50 ruissalo as the client
09:55 partner JoeJulian: full heal does not work on that. and i know one must not touch the bricks directly, but this was exactly a test of "what if" and how to recover from it. after all it's just a mount (or even just a dir) on a server and we are only humans with root power..
09:58 shylesh joined #gluster
10:02 partner i was just puzzled partly because none of the loop-versioned "stat <file>" runs worked while doing it by hand did. and still am, not sure what gluster wants in order to initiate the repairs (find ... dd .. bs=1M does work, but that's an awful lot of data to read through on a huge volume)
10:04 rnts how destructive would it be to try to rsync data from one node's brick to another node's brick that is set to 'replication' - I have a scenario where some files are present on node1:/a but not fully present on node2:/a
10:05 rnts basically, one of my nodes has crashed while accepting writes and won't self-heal (I get tons of IOErrors on read from a client)
10:08 mohankumar joined #gluster
10:10 partner rnts: i'm kind of trying to figure out almost the same situation..
10:11 partner did you look into ,,(split-brain) ?
10:11 glusterbot (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://goo.gl/FPFUX .
10:12 rnts partner: yeah I've looked into it
10:12 rnts and I'm on 3.2.x here
10:14 bulde joined #gluster
10:14 partner my case is simpler, i just have files missing but am still not sure how to get them back.. actions over the mount only partially get files replicated back to the "broken" brick
10:14 rnts It doesn't look like a true split-brain though but it seems like node1 got data that it did not have time to fully sync to node2 before it died, then all clients proceeded to write to node2 and now when node1 is back up it has inconsistencies
10:14 sgowda joined #gluster
10:17 partner IMO your description sounds exactly as described on the bottom of (#1)
10:18 partner but i'm too much of a noob with this stuff anyway, trying to break things here and learn how to fix them before going into production
10:18 rnts yeah it does, but for some reason the 3.3 heal split-brain scenario doesn't apply to 3.2.1
10:20 partner so doing the stat on all the files does not initiate any repairs as gluster is supposed to check all replicas on access?
10:20 partner http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Triggering_Self-Heal_on_Replicate
10:20 glusterbot <http://goo.gl/ekuZp> (at www.gluster.org)
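For context, the 3.2 self-heal trigger described on that page is essentially a recursive stat over the client mount; a sketch, with the mount point being hypothetical:
    # walk the whole volume through the fuse mount and stat every entry,
    # which prompts the replicate translator to check and repair each file
    find /mnt/glustervol -noleaf -print0 | xargs --null stat >/dev/null 2>/dev/null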
10:20 rnts yeah I've tried that a couple of times
10:21 partner assumed so too :)
10:21 partner well, does not work for me either, i am still missing a few files out of 10 test files, i've been stat'ing, cat'ing, dd'ing, and nothing gets those few files back to node 1
10:21 rnts Though, it hasn't run to its end, we've got about 16TB of data on our gluster implementation
10:22 rnts never got this 'error' before
10:22 rnts I'm sorely tempted to nuke one of the nodes and initiate a new sync
10:22 partner or actually i can get them, but for some weird reason it only works if i access the file directly, not via any loop, which is *very* weird
10:23 partner i'll put the "crash scenario" next on my list to try out..
10:24 rnts Yeah do it, when we've had crashes before (or volumes/bricks going completely bad) we've just replaced the volumes and ran rebalance and the nodes were happy
10:24 rnts never had this specific scenario before
10:27 rastar joined #gluster
10:32 glusterbot New news from newglusterbugs: [Bug 903566] client process deadlock <http://goo.gl/8VRpL>
10:33 hagarth joined #gluster
10:58 shireesh joined #gluster
10:59 spn joined #gluster
11:01 rastar joined #gluster
11:02 manik joined #gluster
11:22 hagarth joined #gluster
11:23 duerF joined #gluster
11:23 luis_alen joined #gluster
11:24 errstr joined #gluster
11:28 vpshastry joined #gluster
11:44 hagarth joined #gluster
12:00 bulde1 joined #gluster
12:12 morse joined #gluster
12:14 polfilm_ joined #gluster
12:22 kkeithley1 joined #gluster
12:27 edward1 joined #gluster
12:49 pai left #gluster
12:54 aliguori joined #gluster
12:58 hateya joined #gluster
13:18 bulde joined #gluster
13:27 mohankumar joined #gluster
13:31 deepakcs joined #gluster
13:42 bauruine joined #gluster
13:47 dustint joined #gluster
13:53 abkenney joined #gluster
13:53 hateya_ joined #gluster
13:57 polfilm_ joined #gluster
13:59 x4rlos joined #gluster
14:03 theron joined #gluster
14:11 vpshastry joined #gluster
14:19 rwheeler joined #gluster
14:32 sjoeboo it's early, so odds are i'm talking to myself, but....
14:32 sjoeboo if i have a 5x2 replicated volume, w/ 28TB bricks (1 per node)
14:32 sjoeboo how.....inadvisable would it be to add some new nodes w/ let's say 40TB bricks
14:33 sjoeboo and rebalance
14:33 sjoeboo would it "just work" and store slightly more data on those bigger bricks? or would it get weird? (the bigger bricks would be paired with each other, of course)
14:34 sjoeboo and, second question: if i have this same replica = 2 volume, and want to turn it into a replica = 3 , is there a "Sane" path to do so?
14:35 sjoeboo currently my thoughts are taking the volume down/deleting it, and recreating w/ replica 3 and letting it heal to the empty set...
14:35 lala_ joined #gluster
14:48 shylesh joined #gluster
14:50 nueces joined #gluster
14:52 BumaTon joined #gluster
14:52 BumaTon Hi - I have run into a problem where my 87TB XFS volume (CentOS 5.x 64bit) panics when it tries to catch up with its peer volume because it was rebooted and kept down for a while (30 mins).  Its peer is constantly receiving files at a 100MB/s rate (that is megabytes not megabits).  I have noticed that if I stop the copying on the peer, then the node that was panicking will stop panicking, and I believe it starts syncing the missing
14:54 BumaTon I have also noticed that if I stop gluster on the panicking box, then no panics will take place.
14:55 BumaTon So my question is:  Is there a problem when too much activity takes place?  Sync catch-up and replication at the same time?  I know this is probably not a gluster problem, perhaps a FUSE problem - but my panic comes from XFS - I just wondered if anyone has run into the same thing.
14:56 BumaTon XFS panic with a "can not sync" failure
14:57 chouchins joined #gluster
14:59 theron joined #gluster
15:00 dustint joined #gluster
15:01 jbrooks joined #gluster
15:05 semiosis BumaTon: what do you mean panicking?  kernel panic?  or something else?
15:07 stopbit joined #gluster
15:09 hateya_ joined #gluster
15:09 m0zes it almost sounds like the underlying storage is getting overwhelmed by the constant sync.
15:15 hateya joined #gluster
15:22 inodb_ joined #gluster
15:22 BumaTon semiosis - kernel panic
15:24 bugs_ joined #gluster
15:26 theron joined #gluster
15:30 sripathi joined #gluster
15:32 hateya joined #gluster
15:34 aliguori joined #gluster
15:52 hagarth joined #gluster
15:55 manik joined #gluster
15:57 theron joined #gluster
15:58 balunasj joined #gluster
16:10 daMaestro joined #gluster
16:13 DrVonNostren can i expect much speed difference between a replicated - striped layout over a replicated distributed layout?
16:13 jgillmanjr DrVonNostren: My understanding is that striping benefits you when you're dealing with highly concurrent access to large files
16:13 bitsweat joined #gluster
16:14 DrVonNostren so only reading from the cluster jgillmanjr ?
16:15 jgillmanjr Reads, and I would imagine writes would benefit as well
16:16 jgillmanjr But I haven't actually done any testing with striped or replicated striped volumes
16:31 kkeithley1 @stripe
16:31 glusterbot kkeithley1: Please see http://goo.gl/5ohqd about stripe volumes.
16:32 kkeithley1 DrVonNostren, jgillmanjr: ^^^
16:33 DrVonNostren thanks, reading
16:40 shylesh joined #gluster
16:40 aliguori joined #gluster
16:42 jgillmanjr Good read
16:46 tc00per joined #gluster
16:50 tc00per left #gluster
16:57 sashko joined #gluster
16:59 chouchins joined #gluster
17:04 glusterbot New news from newglusterbugs: [Bug 903723] [RFE] Make quick-read cache the file contents in the open fop instead of lookup <http://goo.gl/10NDG>
17:15 vpshastry joined #gluster
17:30 zaitcev joined #gluster
17:36 Mo__ joined #gluster
17:39 andreask joined #gluster
17:54 bauruine joined #gluster
18:01 bitsweat left #gluster
18:15 RicardoSSP joined #gluster
18:15 RicardoSSP joined #gluster
18:41 partner so umm, back to the "i removed files from the brick" case - i cannot seem to find a "clean" way of replicating the files back, so any next-step suggestions?
18:42 LoadE_ joined #gluster
18:42 Technicool joined #gluster
18:44 sachin_ joined #gluster
18:46 sachin_ left #gluster
18:47 m0zes partner: does stat'ing the file from a client mount not work?
18:47 storearchie joined #gluster
18:49 storearchie Hi there, does anyone here have pointers on architecting 50TB+ Gluster installations - stuff like hardware, todos, best practices, monitoring, etc...
18:54 xinkeT joined #gluster
19:02 sjoeboo storearchie: we have a number of 100TB+ gluster installations, and for us the key thing is to stick w/ hardware/tech under gluster that we are familiar with, have automation already built around, and are already very happy with
19:02 sjoeboo i personally like dell R515's w/ 10GB nics for a storage node/building block.
19:04 sjoeboo and as for monitoring, we nagios the hell out of our systems, especially dell hardware checks, so we always have a good window into the state of things and even get pre-fail alerts. for monitoring the gluster volume itself, we do nagios checks on brick count, status, mount availability, etc
19:04 sjoeboo and of course ship logs off to splunk
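The checks sjoeboo describes map onto ordinary gluster CLI output; a rough sketch of the kind of commands such nagios checks can wrap (volume name and mount point here are hypothetical, not his actual setup):
    gluster volume status myvol detail                 # per-brick online state, disk and inode usage
    gluster volume info myvol | grep -c '^Brick[0-9]'  # brick count to compare against the expected number
    mountpoint -q /mnt/myvol || echo "CRITICAL: gluster mount missing"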
19:08 sashko joined #gluster
19:14 partner m0zes: well, that's the part that is acting weirdly.. i might need to do stat <file> several times, but when doing it in a loop for several files i'm yet to see success..
19:17 partner not sure if you saw my previous messages, but for testing purposes i set up a small replica, went and deleted some files from one brick, and am now trying to figure out how to resolve the situation
19:19 isomorphic joined #gluster
19:19 DaveS joined #gluster
19:23 DaveS___ joined #gluster
19:53 y4m4 joined #gluster
20:07 storearchie Thanks for the pointer @sjoeboo...I had stepped out for a bit
20:16 transitdk joined #gluster
20:17 sashko joined #gluster
20:17 transitdk Would anyone be able to point me in the right direction on how to do a "root squash" kind of configuration while using the fuse module?
20:17 transitdk basically, just disable remote root from being root
20:25 transitdk For anyone else out there looking for this, apparently you can't yet - https://bugzilla.redhat.com/show_bug.cgi?id=883590
20:25 glusterbot <http://goo.gl/G9RkD> (at bugzilla.redhat.com)
20:25 glusterbot Bug 883590: high, high, ---, vshastry, ON_QA , Gluster CLI does not allow setting root squashing
20:25 transitdk patch has been submitted as of today
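Once that patch is released, root squashing is exposed as a volume option rather than a mount option; a sketch of what it should look like (assumption: the option name comes from that patch and is not available in the 3.3 releases discussed here; volume name is hypothetical):
    # map remote root to an anonymous uid/gid on the bricks of this volume
    gluster volume set myvol server.root-squash on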
20:29 squizzi joined #gluster
20:40 sjoeboo so, I asked this morning about mixing brick sizes, i think it was a bit early for the channel....
20:40 sjoeboo anyone have any insight into mixing brick sizes?
20:40 transitdk isn't an issue AFAIK
20:41 transitdk are you asking specifically about usable size overall?
20:44 sjoeboo transitdk: so, i've got a 5x2 dist-rep volume, 28TB bricks
20:45 deckid joined #gluster
20:45 sjoeboo if i were to add say a pair of 40TB bricks, and rebalance...would things get weird, or would things just balance back out (obviously with more data going to those bricks to fill them at the same rate as the smaller ones....)
20:51 sjoeboo i had a second question as well...if i have a replica 2 volume, and i want to turn it into a replica 3 volume...is there any recommended way to get there?
20:51 sjoeboo just thinking about IF i were to do it, how.....delete the volume, re-create w/ replica 3 and let it heal to the "empty" replica set?
20:53 semiosis sjoeboo: since glusterfs 3.3.0 you can do it with add-brick, though i'm a bit fuzzy on the syntax
20:54 partner http://community.gluster.org/q/expand-a-replica-volume/
20:54 glusterbot <http://goo.gl/z26UO> (at community.gluster.org)
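The 3.3 add-brick route semiosis mentions looks roughly like this for sjoeboo's 5x2 volume (hostnames and volume name are placeholders, one new brick is needed per existing replica set, and the syntax should be verified against the linked answer):
    gluster volume add-brick myvol replica 3 \
        new1:/brick new2:/brick new3:/brick new4:/brick new5:/brick
    # then let self-heal populate the new bricks
    gluster volume heal myvol full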
20:54 semiosis howdy partner
20:54 eightyeight so, playing in a sandbox cluster, learning 'replace-brick', and it appears i set up a replication that I don't want. i've already committed
20:54 partner sjoeboo: to my understanding gluster does nothing to check your brick sizes, so if you have say 1 TB and 10 TB bricks you will start to have issues after 1 TB of data..
20:55 eightyeight i wish to undo. but, i'm being told that the brick or a prefix of it is already part of a volume. yet listing out the bricks shows otherwise. what to do?
20:55 glusterbot eightyeight: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
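For the record, clearing that error generally means removing the gluster xattrs and metadata left on the old brick directory; a sketch (the brick path is a placeholder, and the linked instructions are authoritative):
    setfattr -x trusted.glusterfs.volume-id /path/to/brick
    setfattr -x trusted.gfid /path/to/brick
    rm -rf /path/to/brick/.glusterfs
    service glusterd restart    # or: /etc/init.d/glusterd restart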
20:55 semiosis glusterbot: awesome
20:55 glusterbot semiosis: ohhh yeeaah
21:01 eightyeight glusterbot: thx
21:01 glusterbot eightyeight: I do not know about 'thx', but I do know about these similar topics: 'THP', 'time'
21:01 eightyeight :)
21:03 semiosis `learn thx as you're welcome
21:03 semiosis @learn thx as you're welcome
21:03 glusterbot semiosis: The operation succeeded.
21:03 semiosis glusterbot: thx
21:03 glusterbot semiosis: you're welcome
21:03 semiosis @alias thx as thanks
21:03 glusterbot semiosis: (alias [<channel>] <oldkey> <newkey> [<number>]) -- Adds a new key <newkey> for factoid associated with <oldkey>. <number> is only necessary if there's more than one factoid associated with <oldkey>. The same action can be accomplished by using the 'learn' function with a new key but an existing (verbatim) factoid content.
21:03 semiosis @alias thx thanks
21:03 glusterbot semiosis: The operation succeeded.
21:03 semiosis glusterbot: thanks
21:04 glusterbot semiosis: you're welcome
21:04 semiosis yay
21:08 gbrand_ joined #gluster
21:21 sjoeboo partner: so, re: mixing brick sizes, you're saying, if i have a bunch of 28TB bricks, then add a 40TB brick (or in my case, a pair for the replica 2), things should be fine....but once 28TB is hit on all bricks, the volume is full, and it won't be smart and put a higher % of things on the bigger bricks?
21:35 sashko joined #gluster
21:49 eightyeight is it possible to 'move' data off of a peer, so i can remove its bricks without data loss?
21:49 eightyeight similar in function to pvmove(8) in lvm2?
21:54 partner sjoeboo: well ask around for more details, Something(tm) happens when the smaller bricks get full..
21:55 sjoeboo partner: cool, thanks! just trying to do some growth planning....
21:55 partner i just tested it out, it seems to be somewhat aware of the situation and keeps writing to the larger brick
21:55 partner -rw-r--r-- 2 root root 104857600 Jan 24 23:52 brick2/test.22
21:55 partner ---------T 2 root root 0 Jan 24 23:52 brick3/test.22
21:57 partner being: Number of Bricks: 2 x 2 = 4
21:59 partner not sure how recommended such an approach is..?
22:00 the-me joined #gluster
22:19 partner i wonder what i should do now with those 0-sized files, rebalance complains about 3 failures (there are 3 with the sticky bit) left over from the brick filling up
22:21 tryggvil joined #gluster
22:21 tryggvil_ joined #gluster
22:29 hattenator joined #gluster
22:34 sashko joined #gluster
22:36 raven-np joined #gluster
23:02 eightyeight i just ran: 'gluster volume geo-replication sandbox clusterfsck:/pool/sandbox start', and the status says 'faulty'. what did i do wrong?
23:03 eightyeight ah
23:04 eightyeight nope. still faulty.
23:04 eightyeight what do i do with the data from 'config'?
23:04 JoeJulian eightyeight: "is it possible to 'move' data off of a peer"? Yes, the remove-brick process does that in 3.3.
23:05 eightyeight JoeJulian: cool. so, with two nodes having two nodes each, if we remove one brick from each node, all the data will remain consistent
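A sketch of that 3.3 remove-brick flow (volume and brick names are hypothetical; on a replica-2 volume the bricks come out in replica pairs):
    gluster volume remove-brick myvol server3:/brick server4:/brick start
    gluster volume remove-brick myvol server3:/brick server4:/brick status   # wait for the migration to complete
    gluster volume remove-brick myvol server3:/brick server4:/brick commit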
23:06 JoeJulian partner: Those 3 files are distribute link pointers. It's safe to remove those and their .glusterfs counterpart. See ,,(split brain) for details on what I mean by that.
23:06 glusterbot partner: I do not know about 'split brain', but I do know about these similar topics: 'split-brain'
23:06 lorderr joined #gluster
23:06 JoeJulian @split-brain
23:06 glusterbot JoeJulian: (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://goo.gl/FPFUX .
23:06 JoeJulian @alias "split-brain" "split brain"
23:06 glusterbot JoeJulian: Error: This key has more than one factoid associated with it, but you have not provided a number.
23:07 partner JoeJulian: roger that, thanks
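A sketch of how to spot those distribute link pointers and their .glusterfs counterparts on a brick (brick path is a placeholder):
    # link files are zero-length with only the sticky bit set (---------T)
    find /path/to/brick -type f -perm -1000 -size 0 ! -path '*/.glusterfs/*'
    # each one's .glusterfs counterpart is a hardlink at .glusterfs/aa/bb/<gfid>,
    # where aa/bb are the first two bytes of the file's gfid xattr
    getfattr -n trusted.gfid -e hex /path/to/brick/some/linkfile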
23:08 * JoeJulian hates the word "node".
23:08 eightyeight s/two nodes each/two bricks each/
23:08 glusterbot What eightyeight meant to say was: JoeJulian: cool. so, with two nodes having two bricks each, if we remove one brick from each node, all the data will remain consistent
23:09 JoeJulian With two smurfs that each have two smurfs, you can smurf one smurf to another smurf just smurfily.
23:09 * eightyeight smurfs out
23:09 JoeJulian hehe
23:10 eightyeight right now, i can't seem to get past a faulty replication status
23:10 eightyeight geo-replication, that is
23:11 jbrooks joined #gluster
23:14 tc00per joined #gluster
23:14 tc00per left #gluster
23:15 tc00per joined #gluster
23:15 tc00per left #gluster
23:17 eightyeight http://ae7.st/p/51i are the errors
23:17 glusterbot Title: Pastebin on ae7.st » 51i (at ae7.st)
23:18 tc00per joined #gluster
23:18 tc00per left #gluster
23:18 JoeJulian And I have a windows machine with an effing humongous roaming profile from XP that has taken over 2 hours to load in Win7 and everybody thinks I should magically be able to do something about it.
23:19 partner heh
23:19 JoeJulian It's WINDOWS people. Ask Microsoft if they're willing to fix their piece of crap.
23:21 partner so umm, is it safe to use different sized bricks? i tested and it seems to work fine, and i will just collect these extra 0 byte files / dist. link pointers that can be cleaned? like fill one, add more, fill it up, add more..?
23:21 partner (though i don't think filling any fs is a wise thing to do)
23:21 JoeJulian Link pointers are created when the file doesn't exist on the brick that it's supposed to (based on the hash map).
23:22 JoeJulian It's inefficient and growing a file that does exist on the smaller brick will not succeed.
23:22 JoeJulian ... if it's full.
23:22 partner ah yeah, read about that somewhere
23:22 partner yeah, makes perfectly sense
23:23 JoeJulian One suggestion that seems to be readily accepted is to use lvm to break the larger drive into bricks that match in size.
23:23 JoeJulian "read about that somewhere" - probably my blog.
23:23 partner possibly, i read it all through in my hunger for docs
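JoeJulian's lvm suggestion amounts to carving the bigger disk into bricks sized like the existing ones; a minimal sketch with a hypothetical device and sizes:
    pvcreate /dev/sdb
    vgcreate gluster_vg /dev/sdb
    lvcreate -L 28T -n brick1 gluster_vg     # sized to match the existing 28TB bricks
    mkfs.xfs -i size=512 /dev/gluster_vg/brick1
    # leftover space stays in the VG until it adds up to another matching brick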
23:24 * eightyeight is confused
23:27 xymox joined #gluster
23:29 partner alright, thanks dudes, past 1AM already so continuing tomorrow ->
23:36 theron joined #gluster
23:43 * JoeJulian grumbles... glusterbot, I need to file a bug
23:43 glusterbot http://goo.gl/UUuCq
23:43 JoeJulian glusterbot: thanks
23:43 glusterbot JoeJulian: you're welcome
