
IRC log for #gluster, 2012-11-16


All times shown according to UTC.

Time Nick Message
00:03 vjarjadian joined #gluster
00:04 jbrooks joined #gluster
00:10 nightwalk joined #gluster
00:30 nightwalk joined #gluster
00:39 balunasj joined #gluster
00:46 inodb_ joined #gluster
00:51 eightyeight joined #gluster
01:03 dalekurt joined #gluster
02:01 layer3 joined #gluster
02:37 duffrecords joined #gluster
02:42 dalekurt joined #gluster
03:00 TSM joined #gluster
03:13 lng joined #gluster
03:17 eightyeight joined #gluster
03:18 lng Hi! I have created an EBS snapshot, made a volume from it, then attached and mounted it on an EC2 instance, but I can't list files: listing takes a very long time. The device is listed: '/dev/xvdf      104806400 27248196  77558204  26% /storage/1a'
03:18 lng this is XFS
03:19 lng what can I do about that, and what might be the reason I can't list files?
03:20 lng I can create file
03:21 bulde joined #gluster
03:21 lng now I can see them
03:22 lng interesting
03:22 lng resolved
03:23 dalekurt joined #gluster
03:25 robo joined #gluster
03:25 lng Now I want to create a new volume using this disk, but there's a .glusterfs/ dir present. Should I delete it?
03:27 bharata joined #gluster
03:30 lanning if you are sure that it is not currently part of a gluster volume, there are steps to make it usable by a new gluster volume
03:32 lanning http://community.gluster.org/q/how-do-i-reuse-a-brick-after-deleting-the-volume-it-was-formerly-part-of/
03:32 glusterbot <http://goo.gl/HTfdm> (at community.gluster.org)
03:33 lng lanning: thanks!
03:33 lng no, it's not part
03:33 lng it's byte copy
03:38 duffrecords left #gluster
03:40 lng lanning: is it literally 'trusted.glusterfs.volume-id'?
03:41 lng or do I need to substitute volume-id with the real id value?
03:41 sunus joined #gluster
03:42 shylesh joined #gluster
03:43 lng shouldn't I remove them all? http://pastie.org/private/awanr1idsj2nnszk2s5wng
03:43 glusterbot <http://goo.gl/ZeYRH> (at pastie.org)
04:02 shireesh joined #gluster
04:06 lanning you can, but you only really need to remove the "trusted" ones
04:07 lanning and it is literally "volume-id" (it contains the volume-id)
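For reference, the brick-reuse steps lanning links to come down to clearing the gluster extended attributes and the .glusterfs directory on the old brick. A rough sketch, using the /storage/1a path mentioned above:

    setfattr -x trusted.glusterfs.volume-id /storage/1a   # drop the volume marker
    setfattr -x trusted.gfid /storage/1a                  # drop the root gfid
    rm -rf /storage/1a/.glusterfs                         # remove the leftover metadata directory

After that the directory can be handed to "gluster volume create" again.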
04:44 bulde joined #gluster
04:45 deepakcs joined #gluster
04:51 vpshastry joined #gluster
04:51 lng lanning: I removed all
04:54 bala1 joined #gluster
04:55 layer3switch joined #gluster
05:01 layer3switch left #gluster
05:02 layer3switch joined #gluster
05:04 ramkrsna joined #gluster
05:04 ramkrsna joined #gluster
05:12 lng I have created a replica 2 volume of two bricks. One of the bricks already has some files. I thought the files would be synchronized automatically after the volume is started, but that doesn't happen. Why?
05:14 lng And how is that different from the situation where one of the nodes was down for some time and, when it's up again, files are synced?
05:18 sripathi joined #gluster
05:19 ika2810 joined #gluster
05:27 lng wow! they synced after some time
05:29 lng but not completely
05:34 sripathi joined #gluster
05:43 ika2810 left #gluster
05:43 pithagorians joined #gluster
05:46 shireesh joined #gluster
05:52 raghu joined #gluster
05:54 rudimeyer joined #gluster
05:58 rudimeyer joined #gluster
06:08 mohankumar joined #gluster
06:11 sunus joined #gluster
06:11 RobertLaptop joined #gluster
06:14 sunus joined #gluster
06:14 bulde joined #gluster
06:23 sunus joined #gluster
06:27 sunus joined #gluster
06:31 glusterbot New news from resolvedglusterbugs: [Bug 824533] Crash in inode_path <http://goo.gl/mmfCE>
06:33 lng New news new bug
06:34 sunus joined #gluster
06:42 atrius joined #gluster
06:45 sripathi joined #gluster
07:01 glusterbot New news from resolvedglusterbugs: [Bug 797163] ls on the nfs mount shows double entries of files after a replace-brick and add-brick operation of the same brick <http://goo.gl/ovkYc>
07:04 hchiramm_ joined #gluster
07:08 puebele joined #gluster
07:13 pkoro joined #gluster
07:20 lng I can see the load drop after upgrading from 3.3.0 to 3.3.1
07:20 overclk joined #gluster
07:20 lng before, during rebalancing, I got load average ~7
07:21 lng now, it's ~2.5
07:21 lng on 2 cores
07:21 lng so it's much better
07:23 hchiramm_ joined #gluster
07:27 puebele joined #gluster
07:29 ngoswami joined #gluster
07:30 lkoranda joined #gluster
07:36 vpshastry joined #gluster
07:38 webwurst joined #gluster
07:43 guigui3 joined #gluster
07:49 nightwalk joined #gluster
07:54 statix_ joined #gluster
07:56 Humble joined #gluster
07:57 Azrael808 joined #gluster
08:01 quillo joined #gluster
08:01 sripathi1 joined #gluster
08:07 quillo joined #gluster
08:09 rudimeyer joined #gluster
08:10 Nr18 joined #gluster
08:14 ctria joined #gluster
08:19 quillo joined #gluster
08:20 ika2810 joined #gluster
08:21 ika2810 joined #gluster
08:26 quillo joined #gluster
08:31 JoeJulian lng: Yay!
08:32 JoeJulian everybody (even you lurkers): Please look this over and give any feedback on the Discussion page: http://www.gluster.org/community/documentation/index.php/Life_Cycle
08:32 glusterbot <http://goo.gl/zkCmY> (at www.gluster.org)
08:35 tjikkun_work joined #gluster
08:35 ndevos JoeJulian: 3.4 is still in the devel phase, but closed for features, I think?
08:36 JoeJulian Oh, I thought I had that in there...
08:36 JoeJulian I messed around for HOURS on that damned table.
08:37 ndevos I suspected as much, or was thinking you had a lot of experience with mediawiki tables
08:38 andreask joined #gluster
08:40 ndevos JoeJulian: and maybe rephrase the "Release candidates should start being released ... 6 months before the development cycle ends ..." or something?
08:40 JoeJulian I'm betting that Avati will want a 3.5 in there. My feeling is that the ability to upgrade with no downtime is a significant enough change to warrant a major version.
08:41 ndevos maybe add an empty row where the version is ...?
08:41 JoeJulian You don't like mensiversary?
08:41 ndevos no :)
08:42 JoeJulian I love that word. :D
08:42 ndevos nobody ever uses that!
08:42 JoeJulian I always correct people when they talk about their 6 month anniversary.
08:42 JoeJulian ... ok, yes... I'm THAT guy...
08:42 ndevos yeah, but "6 month anniversary" also sounds weird to me
08:43 JoeJulian anni- = year
08:43 andreask ;-) ... so a half-anniversary?
08:43 JoeJulian hehe
08:46 JoeJulian ... and there's no way in hell I'm changing the column count on that damned table. ;)
08:47 duerF joined #gluster
08:51 vpshastry joined #gluster
08:51 JoeJulian I went to a puppet meetup today. Garrett Honeycutt and I were the only ones there that used puppet in production. Ebay will be rolling out a 2000+ node configuration in a little over a week though.
08:52 JoeJulian So if you can't buy your ebay stuff in time for Christmas, I know who to blame.
08:55 barbe joined #gluster
08:57 kspr joined #gluster
09:02 JoeJulian @qa releases
09:02 glusterbot JoeJulian: The QA releases are available at http://bits.gluster.com/pub/gluster/glusterfs/ -- RPMs in the version folders and source archives for all versions under src/
09:05 ndevos puppet seems to be used quite a lot, but I don't have any experience with it, I'm doing one-off-and-throwaway installations only
09:06 lng JoeJulian: I can see Gluster needs more RAM than CPU
09:07 JoeJulian I run 16Gig on my servers to run 60 bricks.
09:08 gbrand_ joined #gluster
09:08 lng JoeJulian: in my case, on 4 nodes and 8 bricks (replica 2), 1.7G per node is not enough - swap was used.
09:08 davdunc joined #gluster
09:08 davdunc joined #gluster
09:08 lng concurrency is ~300
09:09 lng 2 cores
09:09 JoeJulian Oh, right... I also set performance.cache-size: 8MB on my volumes.
09:09 lng so I'm switching to 1 core, but 3.75G RAM
09:10 lng performance.cache-size?
09:10 lng I have not touched that
09:10 lng does it improve a lot?
09:10 JoeJulian For my use case, having more doesn't really do me any good.
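For reference, the cache size JoeJulian mentions is an ordinary volume option; a sketch of setting and checking it, with a placeholder volume name:

    gluster volume set myvol performance.cache-size 8MB   # read cache size for the volume
    gluster volume info myvol                             # shows it under "Options Reconfigured"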
09:10 lng I have more reads than writes
09:10 lng 1/3
09:11 JoeJulian As usual, I recommend you test your use case and see if it works for you, then report it publicly somewhere. :D
09:11 lng JoeJulian: hard to test
09:11 lng JoeJulian: as we didn't know the flow
09:11 lng and also lack of time
09:12 lng but it's okay
09:12 lng we are on EC2 - can change
09:14 lng and thanks Gluster!
09:14 lng :-)
09:15 JoeJulian :)
09:18 lng JoeJulian: do I need to run any procedures on Gluster 3.3.1 from time to time?
09:18 lng like self-heal
09:18 lng or in this version, this is done automatically?
09:19 JoeJulian It's automatic.
09:19 lng nice
09:20 JoeJulian I have nagios check "gluster volume heal $vol info split-brain" and ..."heal-failed" but so far so good.
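Those checks map onto the 3.3 heal-info subcommands; a monitoring script might poll them roughly like this, with a placeholder volume name:

    gluster volume heal myvol info split-brain   # files currently in split-brain
    gluster volume heal myvol info heal-failed   # entries the self-heal daemon failed to heal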
09:21 lng JoeJulian: when I turn nodes off and on temporarily, I can see a CPU spike which lasts for a few minutes - does it mean auto rebalancing is working?
09:21 lng we use Zabbix
09:21 JoeJulian That means that the fd's and locks are getting re-established.
09:22 lng fd - file descriptor?
09:22 JoeJulian yes
09:22 JoeJulian It's known to be an expensive operation which is why ping-timeout is so long.
09:22 lng how about the files which were written to only 1 node?
09:22 lng will they come to replica?
09:22 JoeJulian The self-heal daemon works on a 10 minute cycle.
09:23 JoeJulian So within 10 minutes, they'll be healed.
09:23 lng I see, I see
09:23 lng great
09:23 JoeJulian You /can/ force it with "gluster volume heal $vol"
09:23 lng ah, yes
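A minimal sketch of that manual trigger, again with a placeholder volume name:

    gluster volume heal myvol        # heal the files already flagged as needing healing
    gluster volume heal myvol full   # walk the whole volume and heal anything that differs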
09:26 tryggvil joined #gluster
09:27 mgebbe_ joined #gluster
09:28 clag_ joined #gluster
09:28 tryggvil joined #gluster
09:29 tryggvil_ joined #gluster
09:35 shireesh joined #gluster
09:40 inodb_ joined #gluster
09:40 m0zes joined #gluster
09:40 stigchristian joined #gluster
09:41 tryggvil joined #gluster
09:47 Triade joined #gluster
10:00 manik joined #gluster
10:01 DaveS_ joined #gluster
10:10 inodb_ joined #gluster
10:10 m0zes joined #gluster
10:10 stigchristian joined #gluster
10:17 ramkrsna joined #gluster
10:18 inodb_ joined #gluster
10:23 m0zes joined #gluster
10:26 stigchri1tian joined #gluster
10:33 saz joined #gluster
10:40 puebele1 joined #gluster
10:44 vpshastry joined #gluster
10:59 puebele joined #gluster
11:01 sripathi joined #gluster
11:01 saz joined #gluster
11:05 nightwalk joined #gluster
11:13 purpleidea joined #gluster
11:14 tryggvil joined #gluster
11:33 ika2810 left #gluster
11:45 H__ joined #gluster
11:53 rudimeyer_ joined #gluster
12:05 tjikkun_work joined #gluster
12:07 nightwalk joined #gluster
12:20 tryggvil joined #gluster
12:26 nueces joined #gluster
12:30 luis_alen joined #gluster
12:34 Nr18_ joined #gluster
12:40 plarsen joined #gluster
12:42 duerF joined #gluster
12:48 purpleidea joined #gluster
12:49 balunasj joined #gluster
12:52 dalekurt joined #gluster
12:58 purpleidea joined #gluster
12:58 purpleidea joined #gluster
13:03 nightwalk joined #gluster
13:09 tjikkun_work joined #gluster
13:10 edward1 joined #gluster
13:14 luis_alen Hello. What's the best place to find gluster docs? I've checked http://community.gluster.org/ and saw lots of Q&As and how-tos, but no docs on how it works internally.
13:14 glusterbot Title: Gluster Community (at community.gluster.org)
13:16 ndevos luis_alen: jdarcy wrote some nice blog posts, like http://hekafs.org/index.php/2011/11/translator-101-class-1-setting-the-stage/
13:16 glusterbot <http://goo.gl/OXo7o> (at hekafs.org)
13:17 ndevos luis_alen: but I guess it also depends a little on what you are looking for though...
13:20 luis_alen ndevos: Yes, sure. I'm actually looking for "newbie" docs, such as a user guide and an "introduction to gluster". Docs on the translators would be good too.
13:23 nightwalk joined #gluster
13:32 joeto joined #gluster
13:32 Nr18 joined #gluster
13:33 ndevos luis_alen: hmm, I'm not sure if there are docs about that, you may find useful details on http://www.gluster.org/community/documentation/index.php/Developers
13:33 glusterbot <http://goo.gl/qhNCo> (at www.gluster.org)
13:35 aliguori joined #gluster
13:37 ndevos luis_alen: I think the admin guide is pretty good too: http://www.gluster.org/community/documentation/index.php/Main_Page#GlusterFS_3.3
13:37 glusterbot <http://goo.gl/wuhOc> (at www.gluster.org)
13:37 luis_alen ndevos: Thank you. This will point me in the right direction.
13:37 ndevos good luck!
13:37 luis_alen ndevos: This pdf looks like what I've been seeking: http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
13:37 glusterbot <http://goo.gl/bzF5B> (at www.gluster.org)
13:38 luis_alen Found it on the links you sent
13:38 ndevos luis_alen: yeah, that's the one I meant
13:42 esm_ joined #gluster
13:44 haakond left #gluster
13:58 sshaaf joined #gluster
14:00 robo joined #gluster
14:01 ramkrsna joined #gluster
14:06 ekuric joined #gluster
14:15 luis_alen Guys, why does the minimum setup require at least two nodes? I know we're talking about a clustered and distributed fs and this must be a stupid question, but after having read some docs, I couldn't find a convincing answer. I'd like to start running my tests with a standalone node… Actually we need a NAS but we can't afford a two-server setup now. Doesn't matter if it's striped or replicated…
14:17 johnmark luis_alen: you can do that. I do that all the time for demos and such
14:17 johnmark you can even create two bricks on the same server
14:19 luis_alen johnmark: Have you ever seen a setup like this running in production?
14:19 johnmark oh god no :)
14:19 luis_alen lol
14:19 johnmark for just trying stuff to see if it works, sure
14:19 johnmark demos, poc's, etc.
14:20 johnmark luis_alen: although, it should be said that there's no reason why it wouldn't work in production
14:20 johnmark but at that point, the question becomes "why?"
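A single-server test setup of the kind johnmark describes might look roughly like this; hostname, paths, and volume name are placeholders, and putting both bricks on one box is only sensible for experimenting:

    gluster volume create testvol server1:/export/brick1 server1:/export/brick2
    gluster volume start testvol
    mount -t glusterfs server1:/testvol /mnt/testvol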
14:24 luis_alen hmmm… Looks like the distribute translator might make things cost effective for us
14:24 luis_alen we can not afford replication, definitely :(
14:35 neofob joined #gluster
14:35 luis_alen In case we decide to go with distributed volumes only, is it possible to add replication later?
14:40 johnmark luis_alen: not sure, as I haven't done that.
14:53 Azrael808 joined #gluster
14:54 nightwalk joined #gluster
14:58 m0zes in 3.3 you can
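What m0zes is referring to is that 3.3 lets you raise the replica count when adding bricks; roughly, and with placeholder names (each existing brick needs a new partner brick):

    # turn a 2-brick distribute volume into a 2x2 distributed-replicate volume
    gluster volume add-brick myvol replica 2 server3:/export/brick1 server4:/export/brick1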
15:02 aricg_ I'm confused about this example from the quick start "  gluster volume create gv0 replica 2 node01.mydomain.net:/export/brick1 node02.mydomain.net:/export/brick1"
15:03 aricg_ all of the examples in the Gluster Admin pdf give the bricks different names,
15:03 aricg_ IE: gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
15:05 aricg_ should the brick names be the same or does it not matter to gluster what you name them?
15:05 m0zes doesn't matter.
15:05 aricg_ thanks,
15:06 stopbit joined #gluster
15:06 jdarcy Oh, for crying out loud.  The bogus "already part of a volume" bug is because realpath() fails on remote volumes and we're using the EXPLICITLY undefined result anyway.
15:07 m0zes :/ whoops
15:11 jdarcy I think I'll wear my "your distributed filesystem sucks" shirt today, with an arrow pointing at me.
15:18 johnmark lulz
15:18 johnmark jdarcy: easy, tiger ;)
15:25 Triade joined #gluster
15:26 jdarcy Nope.  First "cat" succeeds when it shouldn't.  Second "cat" fails  like it should.
15:27 jdarcy That's definitely current master plus your two patches.  Just built it myself.
15:27 jdarcy Sorry, wrong channel.
15:27 [{L0rDS}] joined #gluster
15:32 [{L0rDS}] Hi guys, i'm getting high memory usage using gluster native client. The memory usage only grows never shrinks. I'm using version 3.3.1. Is this still a bug?
15:34 tryggvil joined #gluster
15:41 nightwalk joined #gluster
15:44 puebele3 joined #gluster
16:03 nightwalk joined #gluster
16:05 ctria joined #gluster
16:05 daMaestro joined #gluster
16:08 lh joined #gluster
16:08 lh joined #gluster
16:13 nightwalk joined #gluster
16:14 chandank joined #gluster
16:14 puebele joined #gluster
16:21 lkoranda joined #gluster
16:23 ctrianta joined #gluster
16:29 Bullardo joined #gluster
16:32 puebele1 joined #gluster
16:32 sjoeboo question: i've got a 12x2=24 distributed replicated volume, currently empty...
16:32 sjoeboo normally for its specific purpose, we would only have 2 or 3 folders in the root of the volume
16:33 sjoeboo with one holding most of the data
16:33 sjoeboo any layout caveats with that? the (big) files inside that one dir won't all end up on the same bricks or anything, right?
16:47 lh joined #gluster
16:47 Mo_ joined #gluster
16:49 asaldhan joined #gluster
16:56 asaldhan left #gluster
17:03 andreask joined #gluster
17:35 asaldhan joined #gluster
17:36 asaldhan left #gluster
17:38 esm_ joined #gluster
17:43 JoeJulian sjoeboo: no problem. The layouts are per-directory anyway so it should work out just fine. Take a look at my most recent blog article if you want to find out how that works. http://joejulian.name
17:43 glusterbot Title: JoeJulian.name (at joejulian.name)
17:51 inodb joined #gluster
18:09 [{L0rDS}] Hi guys, i'm getting high memory usage using gluster native client. The memory usage only grows never shrinks. I'm using version 3.3.1. Is this still a bug?
18:10 JoeJulian Depends on how high it grows.
18:10 JoeJulian As it fills caches it grows until it hits the cache limits.
18:11 nightwalk joined #gluster
18:15 jason joined #gluster
18:16 l3iggs joined #gluster
18:24 jbrooks joined #gluster
18:29 l3iggs hi all
18:30 l3iggs can anyone help me out with this question?:
18:30 l3iggs http://community.gluster.org/q/roving-client/
18:30 glusterbot Title: Question: roving client (at community.gluster.org)
18:30 spn joined #gluster
18:30 l3iggs i'll copy it here
18:31 l3iggs I have a "replica 2" volume with two bricks. One brick is on a machine on my home LAN and the other brick is on a machine on my work LAN. I've bridged the server at work to my home LAN with a VPN.
18:31 l3iggs I would like to connect to this volume with a roving client that might either be on my work LAN or my home LAN.
18:31 l3iggs I find my setup works great when I am reading files from the volume. I achieve near LAN read speeds from the volume whether my client is on my home LAN or my work LAN.
18:31 l3iggs But my writes to the volume always take place at WAN speeds, no matter if the client is on the work LAN or my home LAN.
18:31 l3iggs How can I speed up my writes to the volume?
18:34 nightwalk joined #gluster
18:42 nightwalk joined #gluster
18:42 Bullardo_ joined #gluster
18:58 rabbit7 left #gluster
19:11 ceocoder joined #gluster
19:12 nightwalk joined #gluster
19:12 ceocoder hi, is it safe to run self heal while fix-layout/data-migrate is in progress?
19:16 Daxxial_ joined #gluster
19:17 glusterbot New news from newglusterbugs: [Bug 877522] Bogus "X is already part of a volume" errors <http://goo.gl/YZi8Y>
19:18 wN joined #gluster
19:19 manik joined #gluster
19:23 JoeJulian l3iggs: The answer is: don't replicate over a wan. Replication writes are synchronous from the client to both servers. Maybe take a look at georeplication and see if that will satisfy your requirements.
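For reference, 3.3 geo-replication is configured per volume against a slave location; a hedged sketch with placeholder volume name, host, and path:

    gluster volume geo-replication homevol workserver:/data/geo-backup start
    gluster volume geo-replication homevol workserver:/data/geo-backup status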
19:24 JoeJulian ceocoder: Should be both safe and unnecessary (I think). The tree walk for rebalance should trigger self-heal.
19:25 H__ hope to get it to work when we upgrade from 3.2.5 ;-)
19:25 JoeJulian +1
19:25 JoeJulian Alrighty... I'm heading down to Portland. Have a good weekend everybody.
19:26 genewitc1 joined #gluster
19:26 genewitc1 Is there a good guide somewhere?
19:27 genewitc1 I have two servers, 1 volume 1 brick, i want to add the second server's storage to the gluster system
19:27 genewitc1 do i do the command on the first server or the second?
19:27 genewitc1 the second server can see the volume with gluster volume info
19:27 JoeJulian You'll have to peer probe from the first, but after that it doesn't matter.
19:28 JoeJulian You /should/ also peer probe the first from the second if you're using hostnames.
19:28 genewitc1 i am
19:28 H__ genewitc1: peer probe from both sides to get the naming right (helped me)
19:28 genewitc1 that worked
19:28 JoeJulian genewitc1 makes me happy. :D
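Put together, the two-way probe H__ and JoeJulian describe looks roughly like this, using the hostnames from this conversation:

    # on glu1
    gluster peer probe glu2.domain.com
    # on glu2, so that glu1 is known by hostname rather than by IP
    gluster peer probe glu1.domain.com
    gluster peer status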
19:29 genewitch joined #gluster
19:29 l3iggs JoeJulian: is geo-replication a two way sync?
19:29 genewitch i am working on making this automated
19:30 genewitch for cloud uses, so far i have the server init and storage config automated, and the peering is going to be done with a CGI
19:31 ceocoder JoeJulian:  thank you
19:34 nueces joined #gluster
19:34 genewitch when i try to add-brick glu2.domain.com:/gluster it says the brick is already part of the volume
19:34 genewitch but volume info shows only 1 brick
19:34 genewitch and that brick is on glu1.domain.com:/gluster
19:40 genewitch I don't get it >.<
19:43 H__ l3iggs: nope, it is 1-way :-/
19:44 H__ genewitch: time to check the uuid keys
19:46 l3iggs oh, this is strange, maybe i'm missing the point of this project then, is there any way to do a geographically distributed volume?
19:47 l3iggs seems that geo-replication is just for backup purposes
19:47 l3iggs correct?
19:49 eightyeight joined #gluster
19:49 genewitch l3iggs: you want two gluster clusters in remote locations to both be able to be written to and read from?
19:50 genewitch is there a latency issue with gluster if the servers are geographically distant?
19:52 l3iggs genewitch: i want one gluster cluster that is geographically distributed
19:52 genewitch l3iggs: i'm new at this, but unless there's some latency issue, why can't you just add a brick from a server regardless where it is?
19:53 genewitch not geo-replication, mind.
19:53 l3iggs i can do that and it works
19:53 genewitch so what the problem?
19:53 genewitch :-D
19:53 kkeithley It works, but there is latency. If you can live with the latency——
19:54 l3iggs the problem is that writes are only as fast as the connection to the slowest server
19:54 genewitch i've discovered that "eventually consistent" is more than adequate
19:54 dalekurt joined #gluster
19:54 genewitch l3iggs: well, obviously
19:54 l3iggs nope
19:55 genewitch if you have a local raid writes are only as fast as the slowest drive
19:55 l3iggs i expect writes to be cached by the fast server and transmitted to the slow server in the background
19:55 genewitch otherwise it's not redundant
19:56 genewitch why not just make the remote, slow server a client
19:56 genewitch would that work?
19:56 l3iggs the client roves
19:56 l3iggs one server is on mu work lan
19:56 l3iggs one server is on my home lan
19:57 genewitch are you doing this for having a shared filesystem between the two locations?
19:57 l3iggs the volume should operate at lan speeds when the client is on the home lan or the work lan
19:57 l3iggs yes
19:58 johnmark l3iggs: our synchronous replication waits for all writes to finish, which means it's not going to perform up to what you need. this time next year we will hopefully have a solution for your use case
19:58 genewitch l3iggs: i don't think gluster does that, it's supposed to be reliable and redundant, not fast. there are FUSE file systems that will slowly move data from local cache, but it sounds like you want something like lsyncd
19:58 genewitch AKA dropbox
19:59 johnmark genewitch: speaking of... you can, of course, create a dropbox client with GlusterFS
19:59 genewitch johnmark: where the gluster stuff is the central location for the files?
19:59 genewitch or with the gluster clients?
20:00 l3iggs johnmark: thanks
20:00 genewitch lsyncd is pretty reliable over slow connections, and it runs at "local speeds" otherwise, since it just notices that something has changed on your filesystem and replicates that with rsync
20:01 genewitch anyhow i have to fix a PCI issue, i'll pick up where i left off with gluster in a bit, thanks for your help!
20:02 l3iggs genewitch: thanks for the lsyncd tip
20:03 duffrecords joined #gluster
20:03 l3iggs maybe i'll look into using that until gluster can do a geographically distributed volume
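A hedged sketch of the lsyncd approach genewitch describes, with placeholder paths and host; note that it is one-way, source to target:

    # mirror a local directory to a remote host over rsync, reacting to filesystem events
    lsyncd -rsync /home/me/shared work-server:/home/me/shared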
20:08 m0zes joined #gluster
20:14 duffrecords I'm getting the "[directory] or a prefix of it is already part of a volume" error when I try to create a new volume.  I understand that's something that happens when you remove a brick and add it back, and that you can fix it by removing extended attributes.  however, I haven't removed any bricks.  what else might be causing this?
20:14 glusterbot duffrecords: To clear that error, follow the instructions at http://goo.gl/YUzrh
20:14 duffrecords like I said, I tried that
20:14 duffrecords on the parent directory, that is
20:15 y4m4 joined #gluster
20:15 dalekurt joined #gluster
20:18 berend joined #gluster
20:28 JoeJulian "[12:15] <l3iggs> [19:49:59] oh, this is strange, maybe i'm missing the point of this project then" this clustered filesystem is primarily geared toward providing redundancy and scale. It concentrates more on consistency, availabilty, and partition-tolerance (all three of CAP) with the caveat that your infrastructure is responsible for performance..
20:33 wN joined #gluster
20:36 l3iggs JoeJulian: thanks!
20:40 JoeJulian Damn, I am not getting a decent network connection from Sprint today. I'm in the car with my son driving. :D
20:42 chandank Sprint network is due to upgrade in the next 2 years.
20:43 dbruhn joined #gluster
20:43 chandank Ericsson will do.
20:43 chandank till then use something else :-)
20:44 dbruhn for 3.3.1 what are all of the different epel-# versions for
20:44 dbruhn I am assuming it's to line up with the major release of the OS, but wanted to confirm
20:45 dbruhn and sorry, that's for RHEL
20:46 JoeJulian dbruhn: yep
20:46 jdarcy joined #gluster
20:46 dbruhn I am upgrading from 3.3.0 do I just grab the package and run rpm -i with the new package on each of the nodes?
20:47 JoeJulian el5 is for CentOS/RHEL/Scientific Linux 5 and el6 is, of course then, CentOS/RHEL/SL 6
20:47 dbruhn Awesome, thanks for confirming
20:48 JoeJulian dbruhn: I would just install the .repo file in /etc/yum.repos.d and "yum upgrade gluster\*"
20:48 dbruhn will it upgrade them even if that's not how I installed them in the first place?
20:50 JoeJulian yes
20:51 dbruhn awesome, thanks for the info
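A rough version of that upgrade path; the .repo file name is a placeholder and should be the glusterfs-epel repo file from download.gluster.org that matches your EL release:

    cp glusterfs-epel.repo /etc/yum.repos.d/   # repo definition for the 3.3.1 packages
    yum upgrade gluster\*                      # upgrades the already-installed packages in place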
20:51 JoeJulian I wonder if Sprint is just messing with me because I'm trying to download 400Mb before I get to Portland.
20:52 nightwalk joined #gluster
21:31 semiosis http://community.gluster.org/q/roving-client/ <-- zomg
21:31 glusterbot Title: Question: roving client (at community.gluster.org)
21:32 JoeJulian Yeah, that was l3iggs.
21:32 semiosis :(
21:33 * semiosis reads scrollback
21:34 JoeJulian I think we've gotten him straightened out though. He's going to look at something rsync based.
21:34 semiosis unison
21:34 semiosis also, git-annex
21:34 semiosis @lucky git-annex
21:34 glusterbot semiosis: http://git-annex.branchable.com/
21:35 JoeJulian lsyncd
21:35 JoeJulian ... is what he was going to try.
21:35 semiosis reading
21:36 semiosis looks like a great way to set up an infinite loop
21:36 semiosis two lsyncd's pointed at each other
21:36 semiosis but maybe if rsync detects no op is needed that will prevent the loop
21:37 semiosis what should be done about the question on CGO though?
21:41 duffrecords if I removed trusted.glusterfs.volume-id and trusted.gfid from the brick path and its parent directory and restarted glusterd and I'm still seeing the "[path] or a prefix of it is already part of a volume" error, what else can I do to troubleshoot it?
21:41 glusterbot duffrecords: To clear that error, follow the instructions at http://goo.gl/YUzrh
21:41 duffrecords I tried creating a volume inside /tmp, which has never been used as a brick path before, and saw the same error
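One way to double-check what is actually set is to dump the trusted.* extended attributes on the path and each of its parents, which is essentially what the linked instructions walk through; /tmp/brick here is just an example path:

    getfattr -d -m . -e hex /tmp/brick
    getfattr -d -m . -e hex /tmp
    getfattr -d -m . -e hex /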
21:48 jdarcy See also https://bugzilla.redhat.com/show_bug.cgi?id=877522
21:48 glusterbot <http://goo.gl/YZi8Y> (at bugzilla.redhat.com)
21:48 glusterbot Bug 877522: medium, unspecified, ---, kparthas, NEW , Bogus "X is already part of a volume" errors
21:48 glusterbot New news from newglusterbugs: [Bug 877563] Metadata timestamps ignored potentially causing loss of new metadata changes <http://goo.gl/UH1ZB>
22:01 nightwalk joined #gluster
22:04 tc00per Has anybody compared GlusterFS to XtreemFS and written a blog post recently about the pros/cons of each?
22:08 chandank Yes.
22:09 chandank There is a gluster developer, I dont remember his name, he has written a well informed blog regarding this.
22:10 twx_ not impressed with what i've seen of xtreemfs
22:10 tryggvil joined #gluster
22:10 chandank I read it couple of months back when I was evaluating which one to use. Performance and functionality wise both are same. Advantage of gluster is very smooth volume management and strong community support.
22:11 chandank Well about "Performance and functionality wise both are same." this is my perception only.
22:11 lh joined #gluster
22:17 semiosis chandank: http://hekafs.org/index.php/2011/08/quick-look-at-xtreemfs/ ?
22:17 glusterbot <http://goo.gl/aWGYc> (at hekafs.org)
22:17 tc00per Thanks... read that one already... looking for something more recent.
22:19 pithagorians joined #gluster
22:24 JoeJulian or a prefix of it is already part of a volume
22:24 glusterbot JoeJulian: To clear that error, follow the instructions at http://goo.gl/YUzrh or see http://goo.gl/YZi8Y
22:24 JoeJulian or a prefix of it is already part of a volume
22:24 glusterbot JoeJulian: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
22:28 JoeJulian tc00per: http://pl.atyp.us/wordpress/index.php/2012/01/scaling-filesystems-vs-other-things/#comment-267214
22:28 glusterbot <http://goo.gl/ly6Yo> (at pl.atyp.us)
22:28 nightwalk joined #gluster
22:33 hattenator joined #gluster
22:44 tc00per Thanks JoeJulian... read that one too... :)
22:46 JoeJulian I think that was the one that chandank was referring to. The only "gluster developer" that I know that occasionally runs the other systems to see how we're doing is jdarcy.
22:52 tc00per I did find this one resource... http://bytepawn.com/readings-in-distributed-systems/ ...that seems like a good collection of links to papers. Not one linked yet for GlusterFS though and author doesn't seem too impressed.
22:52 glusterbot <http://goo.gl/uLHJE> (at bytepawn.com)
22:53 semiosis tc00per: seems like it would take an academic publication to impress that author
22:55 tc00per True... and there are plenty of those... http://scholar.google.com/scholar?q=glusterfs&btnG=&hl=en&as_sdt=0%2C5 ... I'm looking for a more practical comparison though.
22:55 glusterbot <http://goo.gl/guIAK> (at scholar.google.com)
22:57 JoeJulian I disagree with practical comparisons for the most part. Comparisons are only valid if you have the same workload as the person that did the comparison. Even further, there are so many dynamics in play that what you really should do, imho, is define your spec and see which system suits it the best. Then test, test, test.
22:58 JoeJulian Or you can do what ebay is about to do and deploy your new backend during the busiest shopping week of the year and just expect it to go off without a hitch.
22:58 tc00per :)
23:03 tc00per Just trying to do as little as possible... :) And it's tough building parallel test environments with zero time and less money.
23:04 JoeJulian I hear that.
23:09 robo joined #gluster
23:09 tc00per left #gluster
23:17 duffrecords jdarcy: the bug at http://goo.gl/YZi8Y looks like it might be what I'm experiencing so I'll try to apply the patch.  however, I just remembered that the Gluster version we're using was already patched on June 27 to fix some NFS issue.  how can I tell whether that patch has made its way into the codebase yet?  this was several months ago, and all I have are the source files that I used to patch it
23:17 glusterbot Title: Bug 877522 Bogus "X is already part of a volume" errors (at goo.gl)
23:23 JoeJulian duffrecords: He just found the bug this morning.
23:24 semiosis well.... is it patched yet?
23:24 semiosis ;)
23:25 duffrecords JoeJulian: I was referring to the patch I applied back in June. I can't find info about it
23:27 duffrecords I thought I had the change number but it doesn't show up when I search for it on review.gluster.org
23:27 JoeJulian Ah
23:28 duffrecords basically, I want to apply Jeff's patch from today but I don't want to interfere with an old patch I applied in the summer
23:28 JoeJulian Well, I don't see anything with your name in the git log for release-3.3.
23:29 duffrecords no, the patch wasn't submitted by me.  I don't remember who it was
23:29 JoeJulian Sorry, I misread "applied" as "submitted". :/
23:30 duffrecords but I can see in my bash history I copied a couple of source files with what look like change IDs before compiling gluster
23:30 duffrecords so whatever it was I did back then (my memory is fuzzy) I want to preserve when I use Jeff's patch from today
23:51 dalekurt joined #gluster
