
IRC log for #gluster, 2012-12-07


All times shown according to UTC.

Time Nick Message
00:00 mjrosenb file * | grep empty | sed -e s/:.*// | while read f; do getfattr -m . -d "$f"; done
00:00 mjrosenb also prints nothing.
00:09 mjrosenb JoeJulian: welp, I'm going to remove them, and see if they come back!
00:09 mjrosenb JoeJulian: things won't get horribly confused if I just nuke those while the server is running, right?
00:14 JoeJulian mjrosenb: Should be fine.
00:16 mjrosenb JoeJulian: ok, looks like that fixed it (for now)
00:17 mjrosenb listing that directory from a client did not force the empty files to be regenerated
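For reference, a minimal sketch of inspecting GlusterFS extended attributes directly on a brick (the brick path is hypothetical; run this on the server, not through the client mount):

    # dump all trusted.* attributes of one file on the brick, hex-encoded
    getfattr -m . -d -e hex /export/brick1/path/to/file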
00:28 vincent_vdk joined #gluster
00:30 elyograg got my production hardware up.  this may be the final layout: http://www.fpaste.org/TETu/
00:30 glusterbot Title: Viewing bricks & volumes by elyograg (at www.fpaste.org)
00:39 yinyin joined #gluster
00:55 khushildep joined #gluster
01:23 yinyin joined #gluster
01:33 kevein joined #gluster
01:59 arusso joined #gluster
01:59 sjoeboo joined #gluster
02:09 sunus joined #gluster
02:14 jrossi joined #gluster
02:15 jrossi Has anyone else run into geo replication killing the destination volume and the daemons? I have reproduced this over and over again, where I have to restart the daemons on the destination side, but they go down again within 3 hours
02:16 mjrosenb what exactly is geo replication?
02:16 JoeJulian Any useful log information?
02:17 jrossi JoeJulian: not really - can not even find where it's dying
02:18 JoeJulian mjrosenb: gluster runs a "marker" daemon that flags and queues files when they change to be rsynced to the remote site (in batches I believe).
02:18 mjrosenb JoeJulian: ahh, so offsite backups run by the bricks?
02:19 JoeJulian Sounds like as good of a description as any I've heard yet.
02:19 JoeJulian I plan on using it for exactly that after tax season (and my quiet period) is over.
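As a rough sketch of what starting geo-replication looked like in the 3.3 era (the volume name and slave URL below are hypothetical; check the exact syntax against the docs for your release):

    # replicate the volume myvol to a remote directory over ssh
    gluster volume geo-replication myvol ssh://root@backup.example.com:/data/backup start
    # check the session
    gluster volume geo-replication myvol ssh://root@backup.example.com:/data/backup status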
02:23 mjrosenb so, I remember that there used to be a way of specifying the amount of data that it should cache
02:23 mjrosenb is there a way to do that still?
02:30 JoeJulian @options
02:30 glusterbot JoeJulian: http://goo.gl/dPFAf
02:46 mjrosenb oh cool, so the volume tells each of the clients how much it should cache, etc?
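The cache sizing discussed above is now a regular volume option; a minimal sketch, assuming a volume named myvol (the 256MB figure is just an example):

    # raise the client-side io-cache for this volume
    gluster volume set myvol performance.cache-size 256MB
    # reconfigured options show up under "Options Reconfigured" in:
    gluster volume info myvol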
02:53 bharata joined #gluster
03:11 sripathi joined #gluster
03:24 __Bryan__ joined #gluster
03:43 sgowda joined #gluster
03:50 stuarta_ I have two fresh installs of gluster on centos. I can't seem to probe either of the machines.
03:50 stuarta_ I have the service running, I can telnet to port 24007 from each machine to the other. SELinux is off
03:50 stuarta_ iptables is off
03:51 stuarta_ hostnames resolve, but in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log it says  Unable to find hostname
03:51 stuarta_ I asked about this earlier, but ran out of time
03:51 avati joined #gluster
03:53 mjrosenb stuarta_: that sounds like a dns issue.
03:53 m0zes sounds like dns, but if it resolves?
03:53 mjrosenb wait, is that all of the message "Unable to find hostname"
03:53 stuarta_ Yeah. No, it lists the hostname after that
03:54 stuarta_ also, if i try the IP straight, same result
03:54 mjrosenb stuarta_: ok, I was going to say, does centos not have /bin/hostname :-p
03:54 stuarta_ I can post the logs if you want to look?
03:54 stuarta_ I was asking about 5:00 or so
03:54 mjrosenb stuarta_: i'm game, but I don't have much experience with this...
03:54 stuarta_ if you want to see what they asked and what I answered, I assume this has a history
03:54 stuarta_ ill post what i got
03:55 m0zes ideally, glusterd wouldn't be calling /bin/hostname (that should be a glibc call)
03:55 JoeJulian stuarta_: is ipv6 involved?
03:56 stuarta_ I think not. I didn't set anything up, but I'm not familiar with centos, maybe it has something weird going on?
03:56 mjrosenb ==joejulian... I've had lots of problems with ipv6 in the past.
03:56 JoeJulian It's an emerging issue, that's for sure.
03:56 m0zes my university campus *banned* ipv6 because it is too hard.
03:56 JoeJulian I've been trying to get all our stuff to be dual stack.
03:56 JoeJulian *head* *desk*
03:57 stuarta_ Here is the last 100 lines of each node: https://web.cs.sunyit.edu/~stuarta/ubu and https://web.cs.sunyit.edu/~stuarta/blackforest
03:57 JoeJulian banned? That's just dumb.
03:57 stuarta_ O that is /var/log/glusterfs/etc-glusterfs-glusterd.vol.log by the way
03:58 jiffe1 joined #gluster
04:00 mjrosenb Unable to find hostname: 150.... you really shouldn't be doing a dns lookup for an ip address...
04:00 stuarta_ I have tried both the hostname and ip address and neither works
04:00 JoeJulian They're both 3.3.1, right?
04:01 stuarta_ Should be. I installed them at the same time and followed the same instructions.
04:01 JoeJulian What's the deal with "asd" and "asdagasf"?
04:01 stuarta_ no they arent!
04:01 stuarta_ How could that happen?
04:01 stuarta_ One is 3.2.7
04:01 JoeJulian Didn't install the .repo
04:01 stuarta_ I just installed both of them today following the same instructions
04:01 stuarta_ So I missed a step?
04:01 JoeJulian yep
04:02 stuarta_ Damn. Ok, Ill give it another shot. Thanks so much!
04:02 JoeJulian Happy to help.
04:02 JoeJulian Not very helpful log files...
04:02 m0zes hooray, it is something relatively simple to fix. if only there were something in the log ;)
04:03 JoeJulian The clue, for anyone else who cares, was the "[rpcsvc-auth.c:324:rpcsvc_authenticate] 0-rpc-service: Auth too weak"
04:03 stuarta_ Yeah that would be nice.
04:03 * m0zes wants to upgrade from 3.2.7 to 3.3.1. probably over christmas break.
04:04 m0zes if only i had test hardware to play with ;)
04:04 JoeJulian Me too...
04:05 JoeJulian I'm still at the office at 8:05 because I can't do this on a test machine, so it HAS to work tonight.
04:05 stuarta_ It is working now. Thanks again.
04:05 JoeJulian yay
04:06 JoeJulian @learn Auth too weak as You have an incompatible version in the mix. Check all your gluster --version
04:06 glusterbot JoeJulian: The operation succeeded.
04:07 m0zes Auth too weak
04:07 JoeJulian @
04:07 JoeJulian @auth too weak
04:07 glusterbot JoeJulian: You have an incompatible version in the mix. Check all your gluster --version
04:07 m0zes just checking ;)
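A quick way to confirm the mismatch stuarta_ hit is to compare versions on every node before probing (the rpm query assumes RPM-based installs such as CentOS):

    # run on each node; the output should match everywhere
    gluster --version | head -1
    rpm -q glusterfs glusterfs-server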
04:08 JoeJulian I thought about making it a messageparser, but I don't think most people would think to paste that warning line.
04:08 JoeJulian What I need to do is write a plugin that scans the pastebin link that's posted for known issues.
04:08 stuarta_ So gluster will do NFS, how about iscsi?
04:09 JoeJulian No. (or at least not yet).
04:09 stuarta_ Is that planned for the future?
04:11 m0zes not in the short term, afaict
04:11 JoeJulian There's some sort of block device translator. I'm not really sure what that's going to become.
04:11 JoeJulian It's probably just block devices for qemu-kvm.
04:13 mjrosenb oh right.
04:13 mjrosenb i need to go and submit my freebsd patches
04:13 JoeJulian +1
04:14 mjrosenb i got distracted and totally forgot about it
04:14 mjrosenb like 4 months ago
04:14 JoeJulian d'oh!
04:16 * m0zes is trying to use bacula to back up his glusterfs homedirs. 57TB takes a long time to back up, especially with 1 user having >48,000,000 ~4K files.
04:18 JoeJulian yuck
04:18 m0zes oh the wonderful ideas scientists have.
04:20 m0zes in doing this, I found out bacula has a *hardcoded* watchdog timer. after 6 days it will kill the socket and fail the job. I was estimating 10 days to finish a full backup of just that one user.
04:20 mjrosenb m0zes: ow.
04:21 mjrosenb so, having large directories and perf issues is a historical problem. Is the cause known?
04:21 m0zes he doesn't add or delete that many files on a per day basis, so my incremental backups will be fairly quick, just a bit of a pain to get the first full.
04:23 m0zes mjrosenb: from what I understand, it is both a network latency and a stat performance issue. this may be helped a little with the patch for in-kernel fuse http://git.kernel.org/?p=linux/kernel/git/mszeredi/fuse.git;a=commit;h=81aecda8796572e490336680530d53a10771c2a9
04:23 glusterbot <http://goo.gl/T7EUC> (at git.kernel.org)
04:23 mjrosenb so he's just been adding 1,000 files a day for the last 60 years?
04:24 mjrosenb iirc, JoeJulian had a blog post about this issue
04:24 mjrosenb i don't remember if putting a giant cache on the client machines fixed the issues to any extent
04:25 m0zes no, he generated all those in about 2-3 days, but is processing them now. my backups had stopped for about 2 weeks, and then I spent 2 more weeks trying to work around the socket issue until I found it was hardcoded.
04:26 m0zes today I decided to do a full backup for everything to get it all back in sync.
04:27 vpshastry joined #gluster
04:31 manik1 joined #gluster
04:32 yinyin joined #gluster
04:37 crashmag joined #gluster
04:39 __Bryan__ left #gluster
04:48 Guest80322 joined #gluster
04:48 Guest80322 Hello.
04:48 glusterbot Guest80322: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
04:49 Guest80322 anybody there who has expience in Gluster over aws?
04:50 Guest80322 i have performance issues in Gluster over EC2.
04:50 mjrosenb Guest80322: that sounds like it is unnecessary?
04:50 Guest80322 i ran GFS with m1.xlarge * 2 (EBS Optimized) + 4 EBS with 400 PIOPS.
04:50 Guest80322 i got result in 100~150 IOPS with 3.0~3.5Mbps throughput.
04:50 Guest80322 It's too low.
04:51 Guest80322 I want to have some mentoring about glusterfs over ec2.
04:51 Guest80322 deployed with distributed + replicated.
04:52 kevein joined #gluster
04:52 Guest80322 is there anybody experience in gluster over aws?
04:54 GLHMarmot joined #gluster
04:55 Guest80322 do u know FINODELK?
04:59 ubuntu___ joined #gluster
05:01 yinyin joined #gluster
05:02 raghu joined #gluster
05:07 bulde joined #gluster
05:15 kevein joined #gluster
05:17 manik joined #gluster
05:17 GLHMarmot joined #gluster
05:21 hagarth joined #gluster
05:28 ngoswami joined #gluster
05:29 shylesh joined #gluster
05:29 shylesh_ joined #gluster
05:32 yinyin joined #gluster
05:36 berend Guest80322: everyone who tries gluster seems to get very low throughput.
05:36 berend Guest80322: not sure if it's AWS, or maybe the setup is too simple.
05:36 berend Gluster doesn't like many small files either it seems.
05:47 shireesh joined #gluster
05:51 bala joined #gluster
06:13 bala joined #gluster
06:19 rastar joined #gluster
06:23 kevein joined #gluster
06:29 puebele joined #gluster
06:34 JoeJulian ?
06:34 JoeJulian everyone? low throughput?
06:35 vikumar joined #gluster
06:38 ramkrsna joined #gluster
06:38 ramkrsna joined #gluster
06:54 sripathi joined #gluster
07:14 manik joined #gluster
07:14 rgustafs joined #gluster
07:16 guigui1 joined #gluster
07:19 bulde joined #gluster
07:36 dobber joined #gluster
07:36 lkoranda joined #gluster
07:57 Nr18 joined #gluster
07:58 ctria joined #gluster
08:00 ekuric joined #gluster
08:00 glusterbot New news from newglusterbugs: [Bug 883785] RFE: Make glusterfs work with FSCache tools <http://goo.gl/FLkUA>
08:09 duffrecords anybody still awake in here?
08:13 vikumar joined #gluster
08:15 * ndevos just wakes up
08:15 sripathi joined #gluster
08:18 duffrecords my Gluster installation got seriously wrecked.  if all the data I need to recover is on its own partition and I reinstall the OS and Gluster, then use that partition as the Gluster volume, do you think it will pick up where it left off?
08:21 ndevos duffrecords: you probably can do that, but when creating the volume you will need to remove one xattr from the brick directory first
08:22 duffrecords just from the brick directory itself, or all subdirectories?
08:29 JoeJulian or a prefix of it
08:29 glusterbot JoeJulian: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
08:29 JoeJulian duffrecords: ^^
08:32 duffrecords oh yes, I remember that one
08:32 JoeJulian That's what you'll have to do to the bricks to recreate your volume.
08:33 JoeJulian Other than that, everything should be good.
08:36 mjrosenb JoeJulian: what triggers glusterbot to say that? "prefix"?
08:36 JoeJulian That exact phrase
08:36 JoeJulian or a prefix of it
08:36 glusterbot JoeJulian: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
08:36 mjrosenb nice.
08:37 JoeJulian That's less likely to be accidentally typed and anyone that pastes the error message will trigger it.
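The linked instructions boil down to clearing the old volume markers from each brick root before re-running volume create; a hedged sketch, assuming the brick lives at /data/brick1:

    # remove the xattrs left behind by the previous volume
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    # drop the internal gfid link tree as well
    rm -rf /data/brick1/.glusterfs
    # then restart glusterd and create the volume again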
08:37 duffrecords I'm backing up all the important data on these servers before attempting to reinstall everything.  these are VM disk images and if they are lost, there will be a lot of people looking for my head on a stick in the morning
08:39 andreask joined #gluster
08:40 JoeJulian If your bricks are their own disks, pull the disks out of the servers before reinstalling.
08:40 * JoeJulian does that.
08:40 JoeJulian It's hard to accidentally format the wrong disk if it's not in the machine.
08:41 ndevos +1
08:41 sripathi joined #gluster
08:42 duffrecords nope, the bricks are partitions on a RAID 10 array
08:44 andreask joined #gluster
08:51 bitsweat joined #gluster
08:51 tryggvil joined #gluster
09:06 Nevan joined #gluster
09:07 saz joined #gluster
09:09 gbrand_ joined #gluster
09:10 Nevan is there a guide for upgrading from 3.0.5 to 3.3.1 or 3.2.9 ? (which is more stable) ?
09:10 Oneiroi joined #gluster
09:12 bulde 3.0.5?
09:12 bulde that was surely before we had any docs
09:12 Nevan yup
09:12 cyberbootje joined #gluster
09:12 Nevan i cant find anything for that
09:13 Nevan its only a distributed system
09:13 bulde Nevan: http://www.gluster.org/community/documentation/index.php/Gluster_3.0_to_3.2_Upgrade_Guide should help you
09:13 glusterbot <http://goo.gl/dil3D> (at www.gluster.org)
09:13 Nevan oh thx
09:13 bulde should be almost same even for 3.3.1
09:13 Nevan i would stay on that version, but we get more and more errors
09:14 Nevan with fast write and read opereations on the same file
09:14 Nevan the pipelines are getting too fast nowadays...
09:15 Nevan "Gluster recommends that you back up your data" thats funny...
09:15 Nevan i cant backup 300TB
09:16 * bulde understands
09:17 Nevan oh, that step 5 will take a while...
09:17 bulde yes, but it is *very important* because we want to make it available with new xattrs
09:18 bulde how large is your data set?
09:18 bulde (ie, num files, depth, size)?
09:19 Nevan the biggest glfs server farm is about 101TB on 5 servers
09:19 Nevan files...millions..
09:20 bulde hmm
09:20 manik joined #gluster
09:21 Nevan depth, up to 10 levels
09:21 bulde backend fs? and server OS? because if it's ext3/4 on recent OSs, then you may hit bug 838784
09:21 Nevan bricks about 4x 18tb and 1x 30tb
09:21 glusterbot Bug http://goo.gl/CO1VZ high, high, ---, sgowda, ASSIGNED , DHT: readdirp goes into a infinite loop with ext4
09:22 Nevan centos 5.7 xfs filessystem
09:22 bulde thats fine then
09:23 Nevan hmm another one has ext3, but thats a smaller one
09:23 Nevan we have 7 glusterfs systems running in total
09:23 Nevan 5 of them with xfs
09:25 Nevan ah that bug, yes, but it's still not fixed ?
09:30 glusterbot New news from newglusterbugs: [Bug 885008] extra unref of the fd might lead to segfault <http://goo.gl/rLh76>
09:34 DaveS_ joined #gluster
09:35 JoeJulian bulde: Yeah, what's the status on that? That's been months with no updates in gerrit or bugzilla.
09:46 duerF joined #gluster
09:48 sgowda joined #gluster
09:48 cyberbootje joined #gluster
09:49 cyberbootje1 joined #gluster
09:56 mooperd joined #gluster
09:58 mooperd joined #gluster
09:59 puebele1 joined #gluster
10:27 cyberbootje joined #gluster
10:40 yinyin joined #gluster
10:40 shireesh joined #gluster
10:40 bauruine joined #gluster
10:41 shireesh joined #gluster
10:52 sgowda joined #gluster
11:00 puebele1 joined #gluster
11:01 cyberbootje joined #gluster
11:20 rgustafs joined #gluster
11:21 Nr18 joined #gluster
11:40 yinyin joined #gluster
11:46 shireesh joined #gluster
11:54 vpshastry joined #gluster
11:57 andreask joined #gluster
11:58 yinyin joined #gluster
12:00 yinyin joined #gluster
12:05 nueces joined #gluster
12:11 khushildep joined #gluster
12:20 puebele joined #gluster
12:49 gbrand__ joined #gluster
12:49 crashmag joined #gluster
12:51 hagarth joined #gluster
12:56 cyberbootje joined #gluster
12:58 ctrianta joined #gluster
13:03 cyberbootje joined #gluster
13:09 Alpinist joined #gluster
13:23 Jippi joined #gluster
13:41 yinyin_ joined #gluster
13:46 saz joined #gluster
13:56 hagarth joined #gluster
14:04 * m0zes is mildly annoyed this bug has been open for 8 months. https://bugs.gentoo.org/show_bug.cgi?id=413417
14:04 glusterbot Bug 413417: was not found.
14:06 duerF joined #gluster
14:11 tryggvil joined #gluster
14:12 lkoranda joined #gluster
14:14 lkoranda joined #gluster
14:15 lkoranda joined #gluster
14:16 robo joined #gluster
14:22 tryggvil_ joined #gluster
14:25 nightwalk joined #gluster
14:25 lkoranda joined #gluster
14:29 ctrianta joined #gluster
14:34 bitsweat left #gluster
14:35 robo joined #gluster
14:36 rastar left #gluster
14:40 dustint joined #gluster
14:44 noob2 joined #gluster
14:58 Nevan joined #gluster
15:02 duffrecords I just reinstalled my OS, retaining the data on the brick, and installed gluster-server from semiosis's PPA.  however, when I run it it says "job failed to start" and there's nothing in /var/log/glusterfs
15:06 nightwalk joined #gluster
15:08 wushudoin joined #gluster
15:16 stopbit joined #gluster
15:17 hagarth joined #gluster
15:18 duffrecords ok, looks like libssl0.9.8 was a missing dependency
15:18 duffrecords but it still won't start
15:21 ctrianta joined #gluster
15:30 maxiepax joined #gluster
15:52 lkoranda joined #gluster
15:54 stuarta_ When I try to use the client to access my gluster, it doesn't mount and I get this error: W [rpcsvc.c:179:rpcsvc_program_actor] 0-rpc-service: RPC program version not available
15:54 stuarta_ However, if I NFS mount it, it works.
15:54 Oneiroi do you have glusterfs and fuse installed ?
15:54 stuarta_ Yeah
15:55 ndevos stuarta_: what version is the client, and what version is the server?
15:55 robo joined #gluster
15:55 stuarta_ Do they need to be the same?
15:55 Oneiroi yeh
15:55 stuarta_ Ok, that is why then. Thanks.
15:56 Oneiroi ok my issue … I have 3 openstack nova nodes, I want to setup gluster as a common file system, replica 3 is an obvious but least performing type; can anyone suggest a better config ?
15:57 Oneiroi version 3.3
15:57 m0zes Oneiroi: if you need replication, create 6 bricks, 2 per server, and use replica 2.
15:58 m0zes host1:/brick1 host2:/brick1, host3:/brick1 host1:/brick2, host3:/brick2 host2:/brick2
15:58 Oneiroi m0zes: indeed replication is needed, will give that a go see how it performs :)
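A sketch of the create command for the layout m0zes describes (volume name and brick paths are hypothetical); adjacent bricks form the replica pairs, so each pair spans two different servers:

    gluster volume create novavol replica 2 \
        host1:/export/brick1 host2:/export/brick1 \
        host3:/export/brick1 host1:/export/brick2 \
        host3:/export/brick2 host2:/export/brick2
    gluster volume start novavol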
15:58 daMaestro joined #gluster
15:59 Rammses joined #gluster
16:01 bennyturns joined #gluster
16:04 rammses_theold joined #gluster
16:07 guigui1 left #gluster
16:07 H__ Question: A rebalance action after adding server B did not spread data evenly over servers A+B, what can I do to make the setup HA now ?
16:08 Oneiroi H__:  sis you try forcing a self heal ?
16:08 Oneiroi s/sis/did/
16:08 glusterbot What Oneiroi meant to say was: H__:  did you try forcing a self heal ?
16:09 robo joined #gluster
16:10 aliguori joined #gluster
16:10 H__ Oneiroi: no, I did a rebalance (which took 1.5 months). Both servers have the same #TiB, but I saw that server A can hold both replica copies for a file.
16:10 mooperd_ joined #gluster
16:11 Oneiroi ouch must have a lot of objects :S
16:12 H__ There's continuous rsyncs scraping the entire tree, I think they trigger self-heal (this is 3.2.5). There's 1.5M directories and 17M files on it.
16:13 Oneiroi ah so more a case of blocking i/o perhaps
16:13 Oneiroi anyway getting off topic
16:13 lh joined #gluster
16:14 Oneiroi m0zes: hmmm following H__ 's comments I too wonder about a single host containing both replicas
16:14 m0zes H__: want to pastie your volume info?
16:16 m0zes Oneiroi: shouldn't happen. the order in which you add host/brick pairs affects the replica pairing. with replica 2, the first 2 are a replica pair, the second 2 are a replica pair, and the third 2 are a replica pair. it will then distribute files to each of the 3 replica pairs.
16:16 Oneiroi m0zes: thanks for the clarification
16:17 H__ m0zes: http://dpaste.org/XXA4Z/
16:17 glusterbot Title: dpaste.de: Snippet #214633 (at dpaste.org)
16:19 Oneiroi m0zes: hmmm "Failed to perform brick order check. Do you want to continue creating the volume" going to continue, anything to worry about ?
16:19 Oneiroi ah nvm
16:19 Oneiroi typo in brick
16:22 m0zes H__: in your case, stor1:/gluster/b and stor1:/gluster/c are part of the replica pair. how did you add the second host?
16:22 tryggvil joined #gluster
16:25 tqrst joined #gluster
16:25 tqrst joined #gluster
16:27 dustint joined #gluster
16:28 H__ m0zes: gluster volume add-brick vol01 stor2:/gluster/b stor2:/gluster/c stor2:/gluster/d stor2:/gluster/e stor2:/gluster/f stor2:/gluster/g stor2:/gluster/h stor2:/gluster/i
16:29 m0zes H__: was it already a replica volume?
16:29 H__ I did that when stor1 space ran out, we added a second stor server and will add another one in 3 months or so
16:30 xavih joined #gluster
16:30 H__ m0zes: yes, it was running on just stor1 with this -> http://dpaste.org/OG3JQ/
16:30 glusterbot Title: dpaste.de: Snippet #214635 (at dpaste.org)
16:32 Oneiroi ok this is b0rked … "mkdir: cannot create directory `images': Invalid argument"
16:33 copec Are there any guides for compiling gluster 3.3 on solaris 11.1
16:33 copec ?
16:34 m0zes H__: ahh everything makes sense now. your replica pairs are stor1: b/c d/e f/g h/i. gluster volume replace-brick vol01 stor1:/gluster/c stor2:/gluster/b commit... et al. then you would want to re-add the 'replaced' bricks with the correct pairs next to each other.
16:35 dialt0ne joined #gluster
16:35 m0zes I have no idea how to fix it in its current state.
16:35 H__ ehh ... can you rephrase that a bit ? First the replace-bricks, 1c<->2b and so on, and then ?
16:36 H__ do I need a third disk ? like 1c->3b, 2b->1c, 3b->2b ?
16:37 H__ and can all this be done online, keeping replica count intact ?
16:37 dialt0ne if i have a replicated volume on two peers with the nfs server on both nodes, can i have half the nfs clients connect to one peer and half to the other?
16:37 H__ dialt0ne: yes
16:38 dialt0ne excellent
16:38 dialt0ne that's good for aws availability zones - connect to the peer in the same AZ
16:38 H__ serving NFS grew out of bounds on my setup though, we had to move to gluster mounts, and even these die from time to time
16:39 m0zes h__: before you ran the add-brick for all bricks on stor2, you should have run a replace-brick to move one brick of each replica pair to stor2. if I understood your structure correctly you would have wanted stor1:/gluster/b and stor2:/gluster/b to have been a pair.
16:39 dialt0ne well, i have it setup with fuse mounts right now, but i'm looking to try other things
16:39 H__ yes
16:40 H__ m0zes: I just followed the gluster documentation, did the rebalance in 2 steps. Pity that these documents did not mention the replica consequences
16:41 ctrianta joined #gluster
16:42 m0zes in this manner you would have wanted to replace brick stor1:/gluster/c with stor2:/gluster/b and stor1:/gluster/e with stor2:/gluster/d. as you progressed it would have pulled half the bricks from stor1 and replaced them with half the bricks from stor2.
16:42 mooperd joined #gluster
16:42 H__ m0zes: so, to fix now I indeed have to have a third brick, to do the 1c->3b, 2b->1c, 3b->2b
16:44 ndevos data juggling!
16:44 m0zes H__: I believe that would work.
16:44 m0zes unfortunately.
16:47 H__ unfortunately what ?
16:48 m0zes H__: I was referring to ndevos's comment about data juggling.
16:49 H__ ah yes. This will take another 2 months or so :-P
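For reference, each leg of that shuffle is one replace-brick cycle; a sketch of the first leg, with stor3:/gluster/b standing in as the hypothetical spare brick (test on a scratch volume first):

    # start migrating data off the misplaced brick
    gluster volume replace-brick vol01 stor1:/gluster/c stor3:/gluster/b start
    # poll until the migration reports complete
    gluster volume replace-brick vol01 stor1:/gluster/c stor3:/gluster/b status
    # then make the swap permanent
    gluster volume replace-brick vol01 stor1:/gluster/c stor3:/gluster/b commit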
17:00 Humble joined #gluster
17:05 nueces joined #gluster
17:10 dbruhn With the replication does it only slow the writes or the reads too?
17:16 m0zes it slows down stat operations, actual reads not really.
17:19 Bullardo joined #gluster
17:22 dbruhn has anyone applied that readdirplus patch and seen improvements in performance?
17:25 m0zes dbruhn: I've tried. I am not going to run a 3.7-rc kernel, though.
17:29 hagarth joined #gluster
17:30 dbruhn m0zes, how big is your system?
17:31 m0zes 2 servers, 4 volumes, 8 bricks 600TB raw
17:34 dbruhn are you in an I/O intense environment?
17:41 ramkrsna joined #gluster
17:42 y4m4 joined #gluster
17:42 m0zes dbruhn: I run an hpc cluster, it acts as my homedirs and scratch volumes for 1300 cores of compute.
17:43 m0zes fairly i/o intensive.
17:43 Mo__ joined #gluster
17:43 Mo__ joined #gluster
17:45 dbruhn I am looking at building a system that needs to be able to produce about 600-700 IOPS for every 1TB of data stored on it. Looking at starting with 96 x 600GB 15K SAS drives and QDR on the backend. Running a distributed mirror and ultimately ending up with 40TB of usable space, using 8 servers. Does this seem unreasonable?
17:50 m0zes I don't think that is too unreasonable. these are my glusterfs servers http://ganglia.beocat.cis.ksu.edu/?c=Beocat&h=echo http://ganglia.beocat.cis.ksu.edu/?c=Beocat&h=electra if you want graphed stats.
17:50 glusterbot <http://goo.gl/CIQwB> (at ganglia.beocat.cis.ksu.edu)
17:50 xavih joined #gluster
17:50 dbruhn oh awesome!
17:51 dbruhn Does my math seem off on the disk counts, etc.?
17:51 dialt0ne left #gluster
17:56 m0zes 96 disks, 12 per server? dist + replica 2 would give 28TB, right?
17:57 flakrat joined #gluster
17:57 flakrat joined #gluster
18:00 dbruhn sorry you're right, this system would be 24TB
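The arithmetic behind those figures, as a quick sanity check (raw numbers only; formatted capacity and any per-server RAID overhead will shave some off):

    # 96 drives x 0.6 TB, halved by replica 2
    echo '96 * 0.6 / 2' | bc -l    # => 28.8 TB usable before overhead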
18:01 dbruhn I have two system builds going right now, the other one will be a slower SATA system
18:06 flakrat JoeJulian, thanks for writing this up: http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ very informative and answered what I was about to type in here :-)
18:06 glusterbot <http://goo.gl/5ohqd> (at joejulian.name)
18:27 zaitcev joined #gluster
18:39 xavih joined #gluster
18:48 JoeJulian flakrat: Awesome. Happy to help.
18:59 cyberbootje1 joined #gluster
19:01 neofob is it recommended to have many bricks served by one server in a volume?
19:01 neofob or it is better with lvm'ed them into one giant brick?
19:01 eurower joined #gluster
19:01 eurower joined #gluster
19:02 cyberbootje1 joined #gluster
19:02 eurower left #gluster
19:02 JoeJulian I prefer the many brick approach, others like to build raid10 stacks. I wouldn't want to combine them with lvm or anything that increases risk of failure by combining the fault probabilities of multiple drives.
19:03 neofob then the client will open two ports for each brick, right?
19:04 JoeJulian No, it's still one port per brick, just multiple bricks per server.
19:05 neofob oh, i remembered that wrong; thanks
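In 3.3 the per-brick ports are easy to confirm (volume name hypothetical): glusterd itself listens on 24007 and each brick process gets its own port, roughly 24009 and up in this era, so firewalls need all of them open toward clients.

    # show the TCP port every brick process is listening on
    gluster volume status myvol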
19:21 puebele1 joined #gluster
19:28 cyberbootje joined #gluster
19:29 cyberbootje1 joined #gluster
19:40 Teknix joined #gluster
19:41 gbrand_ joined #gluster
20:04 premera joined #gluster
20:07 premera Hello, is gluster 3.3 supported on Ubuntu 11.04 (Natty) ? I am getting #apt-get update "W: Failed to fetch http://ppa.launchpad.net/semiosis/ubuntu-glusterfs-3.3/ubuntu/dists/natty/main/source/Sources  404  Not Found"
20:07 glusterbot <http://goo.gl/3ktv9> (at ppa.launchpad.net)
20:08 nueces joined #gluster
20:10 bfoster joined #gluster
20:10 nhm joined #gluster
20:10 redsolar joined #gluster
20:10 clutchk joined #gluster
20:10 dec joined #gluster
20:10 jiffe98 joined #gluster
20:13 elyograg premera: paring that URL back to "dists" I only see precise and quantal.  I'm not involved in the project, so I couldn't say whether it might be added.  I don't think version 11 is one of the LTS versions, that may be the only ones they are planning to support.
20:14 premera thanks
20:18 arusso left #gluster
20:20 arusso joined #gluster
20:26 JoeJulian premera: IIrc, semiosis said that there are prerequisites missing from natty.
20:28 premera JoeJulian: thanks, looks like I need to bite the bullet and upgrade to 12.04
20:28 JoeJulian I believe several shots of fine whiskey go along with bullet biting, so there's your reward.
20:28 premera :-)
20:37 robo joined #gluster
20:38 xavih joined #gluster
20:39 nightwalk joined #gluster
20:58 nightwalk joined #gluster
21:13 dbruhn joined #gluster
21:21 nueces joined #gluster
21:22 dan_a joined #gluster
21:40 dbruhn joined #gluster
21:44 andreask joined #gluster
21:44 madphoenix joined #gluster
21:45 madphoenix Quick question: I have all of the bricks in my volume mounted with the user_xattr option to support metadata.  When I mount the gluster volume from a client, do I also need to mount with the user_xattr option?  I ask because I'm testing some restores of gluster data through the client, and our backup system is not able to write the trusted.gfid and posix_acl_access properties
21:55 JoeJulian You won't be able to (and wouldn't want to) restore the trusted.gfid, trusted.afr.*, or trusted.dht.* through the client. Be careful restoring brick backups through the client. You won't want to restore the .glusterfs tree, nor any mode 1000, 0 size files.
21:55 JoeJulian To restore the ACLs you have to mount the client with -o acl
21:56 madphoenix Right, ok that's pretty much what I figured.  I was not restoring the .glusterfs folder
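A minimal sketch of the ACL-enabled client mount JoeJulian mentions (server and volume names are hypothetical):

    # fuse mount with POSIX ACL support
    mount -t glusterfs -o acl server1:/myvol /mnt/myvol
    # or via /etc/fstab:
    # server1:/myvol  /mnt/myvol  glusterfs  defaults,acl,_netdev  0 0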
22:19 itamar_ joined #gluster
22:53 nightwalk joined #gluster
23:06 Rammses joined #gluster
23:11 itamar__ joined #gluster
23:31 hattenator joined #gluster
23:37 stuarta_ My gluster implementation is very slow. It is fairly fast when I mount it locally, but when I mount it a single hop away, speeds go down to about 5 MB/s. I think it is my networking. Out of curiosity, what networking technologies are you all using?
23:53 andreask left #gluster
