
IRC log for #gluster, 2013-02-11


All times shown according to UTC.

Time Nick Message
00:09 sjoeboo joined #gluster
00:13 tryggvil joined #gluster
00:43 sjoeboo joined #gluster
01:10 hagarth joined #gluster
01:18 sjoeboo joined #gluster
01:36 mkultras joined #gluster
01:52 sjoeboo joined #gluster
02:27 sjoeboo joined #gluster
02:33 gm_ joined #gluster
02:39 hagarth joined #gluster
02:56 sjoeboo joined #gluster
03:00 overclk joined #gluster
03:02 overclk joined #gluster
03:42 sripathi joined #gluster
03:51 bharata joined #gluster
04:03 bulde joined #gluster
04:05 hagarth joined #gluster
04:12 shylesh joined #gluster
04:21 daMaestro joined #gluster
04:56 vpshastry joined #gluster
05:07 sgowda joined #gluster
05:08 lala joined #gluster
05:11 sripathi joined #gluster
05:23 bharata joined #gluster
05:31 lala_ joined #gluster
05:51 rastar joined #gluster
05:53 johnmark joined #gluster
05:54 raghu joined #gluster
06:03 bulde1 joined #gluster
06:10 Shireesh joined #gluster
06:20 glusterbot New news from newglusterbugs: [Bug 909798] Quota doesn't handle directory names with ','. <http://goo.gl/ZzHRS>
06:21 shireesh joined #gluster
06:23 satheesh joined #gluster
06:24 hagarth joined #gluster
06:24 shireesh joined #gluster
06:24 nightwalk joined #gluster
06:26 Ryan_Lane joined #gluster
06:28 jjnash joined #gluster
06:28 sgowda joined #gluster
06:32 sahina joined #gluster
06:34 aravindavk joined #gluster
06:37 shireesh2 joined #gluster
06:39 kanagaraj joined #gluster
06:41 shireesh joined #gluster
06:42 bharata joined #gluster
06:42 bala1 joined #gluster
06:50 rgustafs joined #gluster
06:52 Nevan joined #gluster
06:57 vimal joined #gluster
07:14 guigui1 joined #gluster
07:17 jtux joined #gluster
07:19 mohankumar joined #gluster
07:21 sgowda joined #gluster
07:23 sashko joined #gluster
07:32 bulde joined #gluster
07:45 ekuric joined #gluster
07:53 guigui1 left #gluster
07:56 shireesh joined #gluster
07:59 ctria joined #gluster
08:02 jtux joined #gluster
08:03 77CAANH0R joined #gluster
08:05 andreask joined #gluster
08:13 hybrid5121 joined #gluster
08:16 thtanner joined #gluster
08:21 tjikkun_work joined #gluster
08:23 hagarth joined #gluster
08:30 thtanner joined #gluster
08:33 Humble joined #gluster
08:46 anx joined #gluster
08:50 duerF joined #gluster
08:57 w3lly joined #gluster
08:59 bulde1 joined #gluster
09:20 PsychoMIME joined #gluster
09:24 sripathi joined #gluster
09:24 PsychoMIME Hi guys
09:25 PsychoMIME I have question
09:28 Staples84 joined #gluster
09:30 vpshastry joined #gluster
09:35 satheesh joined #gluster
09:39 ainur_russia Good day, glusterfs community! I have a problem. I run virtual machines under qemu-kvm, with the qcow2 VM images stored on glusterfs 3.2.7. The situation: the network went down and the VMs/glusterfs clients lost their connection to the storage. After the network came back up I rebooted the VMs, but for some VMs this took far too long. In the end the VMs started, but after a while they stopped working. lsof shows the image files held open by many glusterfs threads, and
09:39 ainur_russia sometimes I cannot read the image files through the glusterfs mount point. The question: how can I kill these glusterfs threads? Only by restarting the glusterd daemon? And what would happen to the other, normally working VMs? Thank you, and sorry for my English.
09:47 bharata joined #gluster
09:49 mooperd joined #gluster
09:51 Norky joined #gluster
09:52 hybrid5121 joined #gluster
09:54 lh joined #gluster
09:54 lh joined #gluster
10:03 jtux joined #gluster
10:09 rastar joined #gluster
10:13 dobber joined #gluster
10:18 bulde joined #gluster
10:19 nightwalk joined #gluster
10:19 jjnash joined #gluster
10:25 bharata joined #gluster
10:27 rcheleguini joined #gluster
10:38 shireesh joined #gluster
10:44 satheesh joined #gluster
10:47 inodb joined #gluster
10:48 rastar joined #gluster
10:49 rastar joined #gluster
11:11 H__ my glusterfsd died on a replace-brick command. After that, aborting the replace-brick failed. I stopped glusterd, started it again, and then the abort worked. A new start as well; it's running now ...
11:12 H__ and it died again :( (glusterfsd serving the read-from brick)
11:13 H__ its logfile says : signal received: 11
11:14 H__ what can the cause be ?
11:16 H__ Now replace-brick status says "Number of files migrated = 0       Current file=" and system is idle. How do I continue the replace-brick activity ?
11:39 sjoeboo joined #gluster
11:40 ctria joined #gluster
11:44 ramkrsna joined #gluster
11:44 ramkrsna joined #gluster
11:44 ramkrsna joined #gluster
11:44 ramkrsna joined #gluster
11:49 H__ a replace-brick start is refused, but I found that replace-brick pause 'works', after which a start is accepted.
11:49 rastar joined #gluster
11:49 andreask joined #gluster
11:51 tryggvil joined #gluster
11:52 H__ and another sig11, manually start another glusterfsd, replace-brick pause and start
11:55 H__ the brick logs this on sig11 -> http://dpaste.org/hDSoF/
11:55 glusterbot Title: dpaste.de: Snippet #218994 (at dpaste.org)
11:56 H__ and another sig11
12:02 H__ the sig11 reproduces within about 90 seconds
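H__'s workaround, pieced together from his messages above, amounts to roughly the following; this is a rough sketch only, with VOLNAME and the brick paths as placeholders (the real names never appear in the log):
    # restart the management daemon after the brick process (glusterfsd) has crashed
    /etc/init.d/glusterd restart        # init script name may vary by distro
    # abort the wedged migration, then pause and re-start it
    gluster volume replace-brick VOLNAME oldsrv:/export/brick newsrv:/export/brick abort
    gluster volume replace-brick VOLNAME oldsrv:/export/brick newsrv:/export/brick pause
    gluster volume replace-brick VOLNAME oldsrv:/export/brick newsrv:/export/brick start
    gluster volume replace-brick VOLNAME oldsrv:/export/brick newsrv:/export/brick status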
12:09 edward1 joined #gluster
12:11 shylesh joined #gluster
12:11 vpshastry joined #gluster
12:13 shylesh joined #gluster
12:31 manik joined #gluster
12:48 joeto joined #gluster
12:57 joeto joined #gluster
13:07 rgustafs joined #gluster
13:13 grzany_ joined #gluster
13:16 grzany__ joined #gluster
13:28 tryggvil_ joined #gluster
13:30 balunasj joined #gluster
13:32 tryggvil joined #gluster
13:46 aliguori joined #gluster
13:50 mooperd joined #gluster
13:51 mooperd_ joined #gluster
13:53 disarone joined #gluster
13:57 hagarth joined #gluster
14:01 tjikkun_work joined #gluster
14:01 plarsen joined #gluster
14:02 jskinner_ joined #gluster
14:08 dustint joined #gluster
14:13 sjoeboo joined #gluster
14:24 guigui joined #gluster
14:27 hagarth joined #gluster
14:30 satheesh joined #gluster
14:44 jack_ joined #gluster
14:45 manik joined #gluster
14:54 rwheeler joined #gluster
15:00 stopbit joined #gluster
15:01 sjoeboo joined #gluster
15:02 overclk joined #gluster
15:03 hybrid5122 joined #gluster
15:04 w3lly1 joined #gluster
15:06 zoldar_ joined #gluster
15:06 LoadE joined #gluster
15:07 tjikkun joined #gluster
15:07 jgillmanjr joined #gluster
15:07 tjikkun joined #gluster
15:07 x4rlos joined #gluster
15:07 mynameisbruce_ joined #gluster
15:11 joeto joined #gluster
15:17 bennyturns joined #gluster
15:19 Humble joined #gluster
15:20 wushudoin joined #gluster
16:03 guigui joined #gluster
16:10 bennyturns joined #gluster
16:26 bugs_ joined #gluster
16:34 cw joined #gluster
16:34 cw Hi; I got two servers running one brick each, and I want to move their data to a new set of servers. My plan is to add the 2 new servers to the peers; set replica to 4 (so all 4 have all data) and then gluster volume remove-brick the two old servers
16:35 cw is that the recommended way of doing it?
16:35 cw or should I just peer detach them?
16:45 hagarth joined #gluster
16:48 Norky cw: I think replace-brick will do what you want
16:49 Norky see section 7.4. Migrating Volumes in http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
16:49 glusterbot <http://goo.gl/bzF5B> (at www.gluster.org)
16:50 cw Norky: oh, thanks
16:50 cw the old servers is 3.2 though
16:51 Norky are you also moving from 3.2 to 3.3?
16:51 cw yes sir :)
16:51 daMaestro joined #gluster
16:52 Norky I am not sure, but I don't think you can have different versions in the same cluster
16:52 cw I can't :)
16:52 cw I'm copying all data to a new tmp cluster, then I will wipe & reinstall the old, let it join the "tmp" cluster, and then move the bricks away from the "tmp" cluster
16:53 kkeithley I'm sure. 3.2.x and 3.3.x are not compatible
16:54 Norky okay, create a new 3.3 "tmp" cluster, have a client mount both the 3.2 and 3.3 volumes, then copy from one to the other
16:54 sashko joined #gluster
16:55 Norky hmm, even that might be difficult
16:56 Norky I'm not sure a 3.3 client can mount a 3.2 volume, and vice versa
16:56 m0zes Norky: I used geo-replication to migrate servers. :/
16:57 Norky I believe geo-replication is rsync of the bricks, so that might work
16:57 Norky it would probably be less work to upgrade from 3.2 to 3.3 on the old machines, is that not an option for you?
17:00 cw Norky: 3.3 and 3.2 can't talk, but why would it be a problem; I migrate data using rsync :)
17:00 m0zes I needed to do my migration of servers "live" until the *final* 2-3 hour switch over. I used geo-replication to stage the data, then when everything finished syncing, killed it and finished the migration
17:00 cw Norky: this seem like a less-downtime thing :) and it's only 500GB
17:01 Norky cw, what exactly would you be rsyncing from and to?
17:03 manik joined #gluster
17:04 cw Norky: the current running setup on 3.2 to a tmp cluster on separate servers running 3.3 :)
17:04 Norky I mean would you be rsyncing a FUSE-gluster mount to a FUSE-gluster mount, or are you thinking of rsyncing the bricks?
17:04 cw I'm syncing from the raw bricks to a fuse
17:05 cw because my 3.2 index is broken, most of the files are not accessible / split brained using fuse :)
17:05 cw due to an old bug in gluster
17:05 Norky oh.
17:05 Norky so you're trying to fix a third problem :)
17:05 cw so I don't want to carry and state over from the old :)
17:05 cw yes
17:06 cw it's a many faceted operation haha
17:06 Norky does 3.2 have the .glusterfs directory in the root of each brick? If so you'll want to avoid copying that...
17:06 cw I'm not copying that :)
17:06 cw I skip all files in root folder :)
17:06 cw find /data/www/ -maxdepth 1  -type d | parallel -v rsync -rv --exclude "images" {} web14:/mnt/www   :)
17:07 cw thanks for the tip though
17:07 Norky parallel is a home-grown parallel shell tool?
17:07 neofob joined #gluster
17:08 erik49_ joined #gluster
17:08 cw Norky: no, it's a unix tool :)
17:08 cw http://en.wikipedia.org/wiki/GNU_parallel
17:08 glusterbot Title: GNU parallel - Wikipedia, the free encyclopedia (at en.wikipedia.org)
17:09 Norky hmm, nifty, that's a new one on me
17:10 Norky normally I use tools that do similar things on remote systems
17:11 Norky I *think* what you propose will work
17:11 Norky to get you to a new tmp 3.3. cluster
17:12 Norky then as I said, replace-brick one server/brick at a time
17:12 cw I'm not sure why it wouldn't work - it's fresh, all files are copied from raw => fuse; the old servers will be reinstalled, join the tmp cluster, and then replace-brick from the tmp to the real servers one at a time :)
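cw's copy step, as described above, boils down to rsyncing from the raw 3.2 bricks into a FUSE mount of the new 3.3 volume, never touching .glusterfs. A hedged sketch based on the command pasted earlier (/data/www is the old brick, web14:/mnt/www is assumed to be the FUSE mount of the new volume on web14; the -mindepth 1 and .glusterfs exclusion are defensive additions):
    find /data/www/ -maxdepth 1 -mindepth 1 -type d ! -name .glusterfs \
        | parallel -v rsync -rv --exclude "images" {} web14:/mnt/www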
17:14 johnmark Write up on the 3.4 alpha release: http://www.gluster.org/2013/02/new-release-glusterfs-3-4alpha/
17:14 glusterbot <http://goo.gl/GuQav> (at www.gluster.org)
17:17 raghu joined #gluster
17:25 nightwalk joined #gluster
17:25 jjnash joined #gluster
17:31 elyograg is the quota feature in gluster scalable to a few thousand directories?  I haven't toyed with it yet, but I'm going to look at it as a way to avoid doing frequent 'du' calculations on those directories, which would run very slowly and quite possibly affect general performance.
17:32 elyograg and can I put a quota on a parent directory when I've got quotas on its children?
17:33 gbrand_ joined #gluster
17:38 Mo___ joined #gluster
17:45 bulde joined #gluster
17:48 Ryan_Lane joined #gluster
17:50 plarsen joined #gluster
17:51 nueces joined #gluster
18:01 zaitcev joined #gluster
18:06 VSpike This is probably a really dumb question, but I'm creating a new cluster and I peer probe'd host1 from host2 and vice versa. Now trying to create volume from host1 and gluster volume create gv0 replica2 host1:/foo host2:/foo says "Host host1 not a friend"
18:06 VSpike gluster peer status from each shows the other, with a name
18:07 VSpike host, why doth thou not know thyself?
18:13 cicero do you mean replica 2 ?
18:13 cicero instead of replica2
18:14 cicero and gluster peer status will never show itself
18:14 cicero at least that's what i've found
18:15 VSpike cicero: ah yeah, sorry.. typo.. but was correct on the CLI
18:15 VSpike cicero: agreed, I'd expect it not to show itself. But how should you specify the local brick?
18:17 cicero so for me
18:17 cicero i have some local /etc/hosts override
18:17 cicero but essentially it should be listed in
18:17 cicero grep `hostname` /etc/hosts
18:18 VSpike yes, it is
18:18 cicero yeah i'm not terribly sure
18:19 VSpike Thanks anyway :) Somehow I made this work last time I did it, and I'm not quite sure how.
18:19 cicero perhaps something in the glusterd logs
18:19 cicero will illuminate the issue
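The checks cicero suggests reduce to roughly the following; host1, host2 and /foo are from VSpike's example, and note that "replica 2" must be two words:
    # on each server: does the peer show up, and does the local hostname resolve?
    gluster peer status
    grep "$(hostname)" /etc/hosts
    # then create the volume
    gluster volume create gv0 replica 2 host1:/foo host2:/foo
    # if it still fails, look at the glusterd log (path may vary by distro/version)
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log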
18:21 bulde joined #gluster
18:28 plarsen joined #gluster
18:35 Staples84 joined #gluster
18:36 gbrand_ joined #gluster
18:40 mattf joined #gluster
18:50 w3lly joined #gluster
19:05 disarone joined #gluster
19:17 andreask joined #gluster
19:20 Alpinist joined #gluster
19:30 cw if I create a replica volume with 2 servers, with replica of 2 - and I add say a 3rd server, is it possible to change the volume replica setting to 3 ?
19:30 stat1x joined #gluster
19:33 xian1 joined #gluster
19:33 JoeJulian cw: yes, though what you'll be gaining from that is increased fault tolerance at the expense of decreased write performance (and lookup performance to a small extent).
19:33 cw JoeJulian: what's the command for that?  :)
19:34 JoeJulian add-brick
19:34 cw doesn't that add a new brick? does it automatically add it as a replica?
19:34 JoeJulian gluster volume add-brick replica 3 new:/brick
19:34 cw ah; so it's an optional parameter to it
19:35 cw not sure why I can't find it in http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
19:35 glusterbot <http://goo.gl/bzF5B> (at www.gluster.org)
19:35 JoeJulian oops, forgot the volume name in that.
19:35 cw but it's in the gluster help :)
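With JoeJulian's correction folded in, the full command would presumably look like this (VOLNAME and new:/brick are placeholders):
    # add a third brick while raising the replica count from 2 to 3
    gluster volume add-brick VOLNAME replica 3 new:/brick
    # confirm the new 1 x 3 layout
    gluster volume info VOLNAME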
19:35 wushudoin left #gluster
19:35 cw another thing; can gluster 3.2 clients connect to 3.3 servers?
19:35 JoeJulian no
19:35 cw dammit :P
19:36 JoeJulian Yeah, I agree.
19:43 xian1 Howdy folks.  I've got a 3.3.1 distributed replicated setup w/ six hosts running on Debian Squeeze kernel 3.2, and it's kicking my butt.  I've been googling and reading docs, but the long and short of it is loads of gfid mismatches.  rsyncs to node0 from a master repository tend to lock up/hang, the cluster is hella slow to return directory listings and I get a lot of "unable to lock on even one child" and "Blocking inodelks failed".  Eventually cluster nod
19:44 xian1 Even a recommended reading list that might address any of the above would be appreciated.
19:46 JoeJulian xian1: You're not writing directly to the bricks, are you?
19:47 xian1 nope
19:48 JoeJulian So from the perspective of rsync writing to a client mount, the issue you're seeing is lockups?
19:48 xian1 bricks are mounted at /gluster/tier3_block; reads/writes go to fuse mount at /archive
19:49 xian1 yeah, the rsyncs lockup, unkillable until unmounting file system
19:49 JoeJulian Is this a locally mounted filesystem?
19:49 JoeJulian er, locally mounted volume?
19:49 xian1 yes.  the six storage nodes also are servers, the gluster file system is locally mounted at /archive
19:50 xian1 other systems also mount the volume, not as "read only", but they don't write data to the volume
19:50 JoeJulian Hmm, sounds like the ,,(thp) issue I found on the red hat kernels. That wasn't supposed to be the same in debian though.
19:50 glusterbot There's an issue with khugepaged and its interaction with userspace filesystems. Try echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled . See http://goo.gl/WUNSx for more information.
19:50 JoeJulian Oh!
19:50 JoeJulian @ext4
19:50 glusterbot JoeJulian: Read about the ext4 problem at http://goo.gl/PEBQU
19:51 xian1 xfs
19:51 JoeJulian hmm
19:52 xian1 this is a large (30TB+) volume w/ millions of files.  I did not increase the inode size at fs creation time (acted first, then read docs).
19:52 rwheeler joined #gluster
19:53 xian1 I could wipe and restart from scratch, it's not a huge deal, but wondering if I should use the gfid tools to build a monster list and compare gfid mismatches.
19:53 JoeJulian I won't break anything, just slightly less efficient.
19:53 JoeJulian It
19:53 * JoeJulian can't type today.
19:53 xian1 that's ok.
19:53 xian1 so which is less efficient? :)
19:54 JoeJulian Debian has a different setting for THP, /sys/kernel/mm/transparent_hugepage/enabled
19:54 JoeJulian The smaller inodes are less efficient.
19:54 JoeJulian They require to disk seeks to get the additional metadata.
19:54 JoeJulian two
19:54 xian1 right, I gathered that would be a performance hit, won't do that again.
19:55 xian1 after a week of syncing data, I did the palm-to-forehead.
19:55 JoeJulian hehe
19:56 JoeJulian So check that thp setting. Default is "madvise". Try "never" and see if that makes any difference.
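Checking and changing that setting on Debian would look roughly like this (the sysfs path is the one JoeJulian gives above; the change does not persist across reboots):
    # the active policy is shown in brackets, e.g. [madvise]
    cat /sys/kernel/mm/transparent_hugepage/enabled
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
    # on Red Hat kernels the equivalent path is /sys/kernel/mm/redhat_transparent_hugepage/enabled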
19:56 xian1 will that help it recover from its current wackiness during self heals?
19:57 JoeJulian Also, for rsync, you really want to use --inplace, otherwise rsync will create a tempfile which won't has to the same brick as the target. When it renames that tempfile, the hash will look at the wrong brick first which will also be a little less efficient.
19:57 JoeJulian s/has to/hash to/
19:57 xian1 hmm, ok, good to know.
19:58 glusterbot JoeJulian: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
19:58 * JoeJulian pummels glusterbot.
19:58 xian1 :)
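The --inplace advice as a concrete, hypothetical example, writing into xian1's FUSE mount at /archive; without --inplace, rsync writes a temp file whose name won't hash to the same brick as the target:
    rsync -av --inplace /master/repo/some/dir/ /archive/some/dir/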
19:59 JoeJulian I think that if you're experiencing that THP deadlock that I found, it could solve your self-heal issues as well.
19:59 xian1 I will try it out right away and report back
20:08 cw when I've done a "add-brick" on a replica volume, how do I get gluster to actually copy the data?
20:08 cw should it do it automatically if there is 3 servers and replica of 3 in the volume?
20:08 mattf left #gluster
20:09 JoeJulian I would expect it to start healing automatically if the replica count is increased, yes. If it's not, do "gluster volume heal $vol full"
20:12 cw oki :)
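A minimal sketch of triggering and then monitoring that heal (VOLNAME is a placeholder; the info subcommand exists in 3.3):
    gluster volume heal VOLNAME full
    gluster volume heal VOLNAME info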
20:21 ctria joined #gluster
20:42 gbrand_ joined #gluster
20:56 duerF joined #gluster
21:07 bcipriano joined #gluster
21:08 bcipriano hello, anyone out there? i'm having some issues doing a rebalance and i'd hope to get some assistance here
21:37 JoeJulian hello
21:37 glusterbot JoeJulian: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
21:37 JoeJulian bcipriano: ^^^
21:37 JoeJulian That statement, btw, is true for almost all of IRC.
21:40 polenta joined #gluster
21:44 xian1 JoeJulian: Hey, I've tested the THP setting, and I've still got issues.  I've even rebooted the cluster to make sure we're starting with a nice, clean slate.  I've got a list of files and paths I'm testing against, I can see I have gfid mismatches in directory names—if I want to remove these dirs, do I need to go into the volume/.glusterfs directory and remove their counterparts?  I am still able to pound the cluster into a wedged state...
21:47 JoeJulian Not sure how it handles gfid mismatches on directories. Basically, though, it sounds like you're split-brained. The .glusterfs counterpart to a directory is a symlink to its parent. Yes, that symlink does have to be removed if the directory is removed.
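Inspecting a mismatched directory on a brick would look something like this; a hedged sketch using xian1's brick path /gluster/tier3_block, with DIR standing in for one of the directories on his list:
    # read the gfid xattr on each replica and compare
    getfattr -n trusted.gfid -e hex /gluster/tier3_block/path/to/DIR
    # the .glusterfs entry for a directory is a symlink named after that gfid,
    # stored under its first two byte pairs:
    ls -l /gluster/tier3_block/.glusterfs/<first 2 hex chars>/<next 2 hex chars>/<full gfid>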
21:47 cw whats the difference between "Number of Bricks: 1 x 4 = 4" and "Number of Bricks: 2 x 2 = 4" in replica?
21:48 cw oh, hm, one is Distributed-Replicate
21:48 cw how did that happen :O  can I convert it to just replicate?
21:48 JoeJulian 1x4 is replica 4
21:48 JoeJulian 2x2 is, of course, replica 2
21:49 cw yeah, I just realized the difference; not sure how I ended up with a distributed replicate
21:49 cw can I conver it to 1 x 4 ? :)
21:49 JoeJulian There's always a way. Have you read http://joejulian.name/blog/glusterfs-replication-dos-and-donts/ though?
21:49 glusterbot <http://goo.gl/B8xEB> (at joejulian.name)
21:50 cw I have; it's only for a transition phase. I'm upgrading from 3.2 to 3.3 - so right now my temp and prod clusters are merging data
21:50 cw once it's done, I will remove the two tmp bricks
21:50 cw so it's just 1 x 2
21:51 cw the way I migrated was to configure two new servers with 3.3; copy all data to that cluster; reset the production machines, and let them join the tmp cluster
21:51 cw and then let them selfheal back to 100% sync
21:51 tryggvil joined #gluster
21:51 cw I managed to upgrade with only 2min downtime across 60 machines using them :)
21:52 JoeJulian cool...
21:52 cw so I  know it's not good to have high replica, but I need it for this case :)
21:52 JoeJulian So the answer is, of course, that you now have distributed files.
21:52 JoeJulian brb..
21:53 cw and the answer to converting it to a 1 x 4 ? :)
21:53 dopry joined #gluster
21:53 dopry hey.. for some reason I cannot unsubscribe from emails from gerritt...
21:53 xian1 JoeJulian: ok thanks, I will now look into rectifying any gfid mismatches (I have a list of 47, so it shouldn't be too hard).
21:53 dopry I'm not watching any projects...
21:54 dopry who's in charge of gerritt?
21:55 badone joined #gluster
21:55 mooperd joined #gluster
21:55 rwheeler joined #gluster
21:58 inodb joined #gluster
22:01 y4m4 joined #gluster
22:03 JoeJulian hagarth, for the most part.
22:03 JoeJulian ~pasteinfo | cw
22:03 glusterbot cw: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
22:04 cw JoeJulian: https://gist.github.com/Jippi/954e1206f343e8113084
22:04 glusterbot <http://goo.gl/08ZG3> (at gist.github.com)
22:04 mkultras i upgraded to gluster 3.3 from 3.0.2 and now my mount needs to be remounted every time i put any load on the server; i just see transport endpoint not connected in the logs. my old vol file i used to edit manually; this time i created the volume using commands. should I try using my old .vol file, or are they not compatible anymore?
22:05 JoeJulian mkultras: Does the volume work?
22:06 mkultras yeah it works for awhile, if i do a find /volume and touch all the files sometimes that makes it hang, every so often it hangs
22:06 JoeJulian Probably performing a self-heal.
22:06 mkultras yeah its doing self heals
22:07 JoeJulian If you don't let it finish healing, it's going to hang when it gets to a file that needs healing during that find command, once the background queue gets filled.
22:07 mkultras oh, i get it
22:07 mkultras good to know
22:08 JoeJulian cw: You're going to have to do a remove-brick to migrate the data onto the first two, then you can change the replica count to 4 by doing an add-brick again...
22:08 cw JoeJulian: alright :)
22:09 JoeJulian gluster volume remove-brick elastic web15:/export/elastic gluster02:/export/elastic
22:09 cw that was my guess too; hoped there was something smarter though :)
22:09 JoeJulian gluster volume remove-brick elastic web15:/export/elastic gluster02:/export/elastic start
22:09 JoeJulian that one...
22:09 JoeJulian Then monitor "gluster volume remove-brick elastic web15:/export/elastic gluster02:/export/elastic status" until it's finished.
22:10 JoeJulian Then "gluster volume remove-brick elastic web15:/export/elastic gluster02:/export/elastic commit".
22:10 JoeJulian Then you can wipe web15:/export/elastic and gluster02:/export/elastic to add them while changing the replica count.
22:10 JoeJulian Assuming you never did a rebalance, it'll probably go very quickly.
22:11 cw haven't done any rebalance; they are still healing :)
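The whole sequence JoeJulian lays out, condensed; the remove-brick lines are his, and the final add-brick line is a hedged reconstruction of the "add them while changing the replica count" step:
    gluster volume remove-brick elastic web15:/export/elastic gluster02:/export/elastic start
    gluster volume remove-brick elastic web15:/export/elastic gluster02:/export/elastic status   # repeat until finished
    gluster volume remove-brick elastic web15:/export/elastic gluster02:/export/elastic commit
    # wipe both brick directories, then re-add them as replicas to go from 1 x 2 to 1 x 4
    gluster volume add-brick elastic replica 4 web15:/export/elastic gluster02:/export/elastic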
22:12 JoeJulian Do you blog? I would be interested in reading an article detailing your upgrade practice.
22:12 cw not actively no; it was just a hunch that it would work - and it did :)
22:13 cw don't mind explaining it to you one-on-one on a day that's not today, if you are interested :)
22:13 JoeJulian Sure... not today for me either. I'm about to leave for a Dr. Appt.
22:14 cw I want to finish up my checks and estimate a self-heal time for everything to be back so I can get home and in bed :)
22:14 cw I hope 3.3.1 is more stable and less whiny than 3.2.x
22:15 cw had to upgrade because it went all split brain and messy on me
22:15 cw some dirs just showed up as '????' in a ls :)
22:18 gbrand__ joined #gluster
22:24 y4m4 joined #gluster
22:26 cw can gluster provide an ETA of when it's done with selfhealing?
22:26 cw it's a 500GB brick it's healing, over gbit
22:26 cw 32GB ram server with raptor disks
22:27 cw but gluster doesn't seem to pull more than ~20-40 Mbit
22:27 H__ JoeJulian: My 3.2.5 glusterfsd always dies with a sig11 (within 90 seconds) on a replace-brick command. The brick log: http://dpaste.org/hDSoF/ I can restart the process by manually starting a new glusterfsd and doing a replace-brick pause and start but then it dies again within 90 seconds.
22:27 glusterbot Title: dpaste.de: Snippet #218994 (at dpaste.org)
22:29 semiosis cw: you may get better performance switching the self heal algorithm to full
22:29 H__ I've tried to find whether 3.2.6 and 3.2.7 have any relevant fixes but could not find the release notes anymore on gluster.org.
22:32 cw semiosis: can I do that on the fly?
22:32 semiosis cw: yep
22:33 cw without messing things up? :D
22:33 cw my experience from 3.2 was - touch something, and you will regret it
22:34 cw is it sufficient to just run gluster volume heal www full ?
22:35 cw I guess it was; it spiked to ~300-500 Mbit :)
22:36 semiosis well those are two very different commands
22:36 H__ semiosis: can you recommend a minimum but > 3.2.5 version for replace-brick capable of dealing with lots of directories ?
22:36 cw semiosis: oh?
22:37 semiosis cw: see 'gluster volume set help' for list of options, the heal algorithm is used to heal a file, only patching parts that differ vs. replacing the whole file
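Assuming the tunable semiosis means is the data self-heal algorithm (usually named cluster.data-self-heal-algorithm in 3.3), switching it on cw's 'www' volume would look something like:
    gluster volume set www cluster.data-self-heal-algorithm full
    # 'gluster volume set help' lists all tunables with their defaults and descriptions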
22:38 cw gluster volume set help returns nothing
22:38 semiosis cw: gluster volume heal full probably heals all files in the volume, i'm not sure
22:38 semiosis cw: what version are you using?
22:38 cw 3.3.1
22:38 semiosis odd
22:38 ctria joined #gluster
22:38 cw from the deb repos too
22:39 semiosis H__: no i can't... we usually just recommend using the ,,(latest) version
22:39 glusterbot H__: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
22:40 semiosis cw: well that doesn't make any sense, 'gluster volume set help' gives a list of volume options that can be tuned
22:41 cw http://note.io/157bMeq apparently not in the deb package?
22:41 glusterbot Title: Screenshot 2/11/13 11:40 PM (at note.io)
22:41 semiosis cw: could you pastie 'dpkg -l | grep gluster' please
22:42 semiosis cw: and 'gluster --version' too
22:42 cw https://gist.github.com/Jippi/b435a22dcf2478207189
22:42 glusterbot <http://goo.gl/IecvC> (at gist.github.com)
22:42 cw updated gist with --version
22:43 semiosis hrm, never seen that before
22:43 cw -> gluster volume status www   ( operation failed )
22:43 cw what the ..
22:44 semiosis yeah thats messed up
22:44 cw it says it's connected to all 3 expected nodes
22:45 cw gluster volume info works fine too
22:46 cw hmm, odd, on another of the servers it works as expected with help
22:46 semiosis well thats good
22:47 cw and now it works on the other server again
22:47 cw wtfs
22:47 semiosis volume info & peer status were in the original gluster cli in 3.1, volume status, heal, etc were added more recently, 3.3 maybe
22:47 semiosis set was in 3.1 but set help was added later iirc
22:47 cw all 4 servers are puppetized, they are identical - so it's weird
22:48 semiosis sounds like you may be running an older version of glusterd (which is the backend for gluster commands)
22:48 cw then why would it not work, and then suddently work?
22:48 semiosis idk
22:49 cw when I upgraded them I wiped all old configuration and paths, applied puppet config for 3.3.1, verified install, rebooted and peer probed them
22:49 semiosis that should be good
22:51 cw it's funky indeed :)
23:00 _br_ joined #gluster
23:02 _br_ joined #gluster
23:02 andreask joined #gluster
23:02 _br_ joined #gluster
23:06 mkultras i need to add a brick to a gluster 3.0.2 server, there is no gluster command on it that i could see, does someone have a link to the docs for 3.0.2 ?
23:07 cw I'm getting spammed in my brick log with the following: https://gist.github.com/Jippi/1a00af289d26767ab425  problem or to be expected?
23:07 glusterbot <http://goo.gl/jcMzk> (at gist.github.com)
23:09 cw semiosis; do you know? ^
23:09 semiosis idk
23:10 cw somehow [server3_1-fops.c:1747:server_setattr_cbk] 0-www-server: 32357: SETATTR <gfid:02b51d3a-71e8-4dec-b5c8-5daef3763a9d> (02b51d3a-71e8-4dec-b5c8-5daef3763a9d) ==> -1 (No such file or directory)  doesn't sound like a good thing :o
23:26 bfoster joined #gluster
23:33 mkultras im thinking for gluster 3.0.2 i just edit the .vol file to add another brick and reload the daemon, possibly remount afterwards
23:35 mkultras hey do the bricks have to sit on XFS ?
23:35 mkultras well i mean i know they dont *have* to as i have been doing ext3 but are they supposed to
23:38 semiosis mkultras: xfs is recommended, though not required.  there's a bug with glusterfs over ext3/4 on the newer linux kernels, see ,,(ext4)
23:38 glusterbot mkultras: Read about the ext4 problem at http://goo.gl/PEBQU
23:38 semiosis people have been using glusterfs over zfs more and more
23:38 semiosis s/been/also been/
23:38 glusterbot What semiosis meant to say was: people have also been using glusterfs over zfs more and more
23:40 kkeithley joined #gluster
23:50 mkultras semiosis: thanks , thats a good enough reason to go with xfs for me
23:50 semiosis yw, also recommended is inode size 512
23:50 semiosis when making the xfs filesystem
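A hedged sketch of creating a brick filesystem with those recommendations (device and mount point are placeholders):
    # 512-byte inodes leave room for gluster's extended attributes
    mkfs.xfs -i size=512 /dev/sdb1
    mount -t xfs /dev/sdb1 /export/brick1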
23:50 mkultras i'm holding video recordings too, so i guess if i want to one day have larger files thats good too
23:56 WildPikachu joined #gluster
