IRC log for #gluster, 2013-01-14


All times shown according to UTC.

Time Nick Message
00:18 jbrooks joined #gluster
01:11 kevein joined #gluster
01:14 yinyin joined #gluster
01:29 yinyin joined #gluster
01:38 greylurk joined #gluster
01:47 raven-np joined #gluster
02:17 raven-np joined #gluster
02:21 bharata joined #gluster
02:53 bharata joined #gluster
03:01 berend joined #gluster
03:02 berend hi guys, just did a stat call on a file, I get a really huge inode:
03:02 berend Inode: 10203965951217470782
03:02 berend this causes some random problems on 32-bit systems
03:02 berend because on some the inode is much smaller:
03:02 berend Device: 14h/20d  Inode: 1048546622  Links: 1
03:03 berend that's a different 32-bit client, mounting the same volume, stat on the same file.
03:04 berend On another client, I get exactly the same inode:
03:04 berend Device: 14h/20d  Inode: 1048546622  Links: 1
03:04 berend but not on this particular client.
03:04 berend So question 1: should the inode be the same for the same file?
03:05 berend 3rd client also gives the same huge inode
03:06 berend the 2 clients with huge inodes are actually a clone from the client where things are fine.
03:06 berend (the 2nd client with small inode is a completely different system)
03:10 yinyin joined #gluster
03:11 berend Hmm, see a post about this. Apparently I have to set nfs.enable-ino32.
03:12 berend But only works if you nfs mount, which I can't as gluster server is running on top of a true nfs server.
03:18 berend Hmm, what a pity, there is no client side -o enable-ino32 option
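For reference, the nfs.enable-ino32 setting berend found is applied as a server-side volume option and only affects Gluster's built-in NFS server, not FUSE clients, which is why it does not help in this case. A minimal sketch (volume name is illustrative):

    gluster volume set myvol nfs.enable-ino32 on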
03:23 yinyin joined #gluster
04:13 eightyeight joined #gluster
04:21 NashTrash joined #gluster
04:24 NashTrash left #gluster
04:28 sripathi joined #gluster
04:59 deepakcs joined #gluster
05:01 ngoswami joined #gluster
05:13 chirino_m joined #gluster
05:18 yinyin joined #gluster
05:22 Ryan_Lane joined #gluster
05:28 rgustafs joined #gluster
05:35 sunus joined #gluster
06:17 bharata I am facing the same build errors as this pastebin - http://pastebin.com/036ZY364, Was there a solution to this ?
06:17 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
06:34 vimal joined #gluster
06:38 yinyin joined #gluster
06:45 emrah_ joined #gluster
06:45 emrahnzm joined #gluster
06:59 Nevan joined #gluster
07:00 cyr_ joined #gluster
07:22 yinyin joined #gluster
07:25 ramkrsna joined #gluster
07:25 ramkrsna joined #gluster
07:25 jtux joined #gluster
07:47 puebele1 joined #gluster
07:50 andreask joined #gluster
07:52 badone_ joined #gluster
07:58 guigui3 joined #gluster
08:00 Azrael808 joined #gluster
08:05 puebele joined #gluster
08:07 badone_ joined #gluster
08:10 dobber joined #gluster
08:29 tjikkun_work joined #gluster
08:34 badone joined #gluster
09:05 ndevos bharata: do you have libxml2-devel installed?
09:08 bharata ndevos, Yes I realized that thanks, (needed make distclean after libxml2-devel installation)
09:08 bharata ndevos, wonder why configure doesn't catch that
09:09 ndevos bharata: I've just opened configure.ac, and there is a check for libxml-2.0, but it seems non-fatal
09:10 ndevos I think the idea is to build the cli without support for xml-output, but that seems to be failing
09:12 bharata ndevos, ok
09:12 bharata Failing to mount a gluster volume (which is residing in host) from guest (http://www.fpaste.org/qUFa/) - Any hints ?
09:12 glusterbot Title: Viewing Paste #266797 (at www.fpaste.org)
09:17 ndevos hmm, no idea really, looks like the connection gets closed before any data was sent/received... re-check firewall etc?
09:18 raven-np joined #gluster
09:19 ndevos bharata: maybe mounting with --log-level=TRACE gives a hint?
09:21 bharata ndevos, disabling iptables on guest (where the mount is being tried) doesn't change a thing
09:23 bharata ndevos, no additional o/p with TRACE
09:24 ndevos bharata: my guess would be that the server closed the connection, not the client, have you checked the glusterd logs?
09:25 Norky joined #gluster
09:26 gbrand_ joined #gluster
09:31 bharata ndevos, This is some latest git version and surprisingly no log files at all in /var/log/glusterd/ anywhere and no /var/log/glusterfs directory at all :(
09:31 ndevos wow, thats new to me too
09:32 bharata I mean /var/lib/glusterd and not /var/log/glusterd
09:33 ndevos right, but logs are still missing...
09:33 bharata ndevos, right, may be I will start glusterfsd manually with --debug
09:35 ndevos bharata: and glusterd too? I think the client did not receive the .vol file yet, so it would not contact any bricks either
09:35 bharata ndevos, yes right
09:36 ctria joined #gluster
09:42 bharata ndevos, ok, I see the issue from glusterd log on the server side...
09:42 bharata ndevos, [2013-01-14 09:42:35.268778] E [rpcsvc.c:519:rpcsvc_handle_rpc_call] 0-glusterd: Request received from non-privileged port. Failing request
09:42 bharata deepakcs, You have some sort of setting turned on to bypass the above kind of error - do you remember that ?
09:43 greylurk joined #gluster
09:43 ndevos bharata: okay, that means you need to set "option rpc-allow-insecure yes" (or something like that) in the glusterd.vol
09:43 duerF joined #gluster
09:43 bharata ndevos, yes I guess so
09:43 ninkotech joined #gluster
09:44 ndevos bharata: http://lists.nongnu.org/archive/html/gluster-devel/2012-12/msg00031.html for an email about that (but you should have that somewhere)
09:45 glusterbot <http://goo.gl/59SD1> (at lists.nongnu.org)
09:45 deepakcs bharata, one sec, i think i sent some options i had tried to users/devel list.. letme dig up
09:45 bharata deepakcs, ndevos points to your mail
09:46 deepakcs bharata, yes, but there i was seeing 'unable to fetch volfile' error
09:46 deepakcs intermittently i was seeing that non-priv port issue as well
09:47 deepakcs rpc-auth-allow-insecure shud bypass that error for u
09:47 deepakcs bharata, does setting of those 2 options work for u or not ?
09:48 bharata deepakcs, trying with option rpc-auth-allow-insecure on in glusterd.vol now
09:54 bharata deepakcs, ndevos Required both the options ON to get the mount working successfully
09:54 deepakcs bharata, both meaning the ones i mentioned in the mail or different ?
09:55 ndevos bharata: ok, good to know!
09:55 bharata deepakcs, the ones you mentioned in your mail
09:55 bharata ndevos, thanks
09:55 deepakcs bharata, cool, good 2 know :)
09:56 ndevos bharata: no problem, glad to hear I could help a little
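For reference, the two settings discussed above (per the gluster-devel mail ndevos linked) are, roughly, one option in glusterd's own volfile plus one per-volume option; treat this as a sketch, since exact spellings and behaviour vary by release:

    # /etc/glusterfs/glusterd.vol
    volume management
        ...
        option rpc-auth-allow-insecure on
    end-volume

    # per volume (then restart glusterd / remount)
    gluster volume set myvol server.allow-insecure on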
10:00 guigui3 joined #gluster
10:01 DaveS_ joined #gluster
10:06 tryggvil joined #gluster
10:11 eightyeight joined #gluster
10:13 QuentinF Hi,
10:16 QuentinF I have a glusterfs on my servers and i've many data (sub directory etc ...) and with PHP, response time to access to data is so long
10:17 QuentinF any ideas for shorten response time ?
10:18 QuentinF Is it possible to index glusterfs' files ?
10:26 tjikkun_work QuentinF, maybe you should read this: http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/
10:26 glusterbot <http://goo.gl/uDFgg> (at joejulian.name)
10:26 QuentinF thx
10:34 kevein joined #gluster
10:41 longsleep Hi guys, i am trying to get glusterfs 3.3 client running on Ubuntu 8.04. The client is always crashing in the memory pool (see http://pastie.org/5682190#13,19 for GDB trace). Anyone got any hints?
10:41 glusterbot Title: #5682190 - Pastie (at pastie.org)
10:55 ctria joined #gluster
11:12 andreask1 joined #gluster
11:12 andreask joined #gluster
11:17 polfilm joined #gluster
11:20 red_solar joined #gluster
11:22 nullsign joined #gluster
11:24 eightyeight joined #gluster
11:28 18WACQ64W joined #gluster
11:29 guigui3 joined #gluster
11:30 joeto joined #gluster
11:32 guigui4 joined #gluster
11:36 andreask joined #gluster
11:41 raven-np joined #gluster
11:44 sripathi joined #gluster
11:47 tryggvil_ joined #gluster
11:53 x4rlos Anyone got advice for upgrading gluster 3.2 -> 3.3? It's not currently in use - just has a brick associated between it and another machine (that is turned off).
11:53 x4rlos I have just removed the package in the past, and then when i re-added it, it had remnants of old bricks left over.
11:54 x4rlos Should i remove the bricks and peers first, and then remove the package and then re-install - or will i encounter these problems either way?
12:07 gbrand__ joined #gluster
12:09 gbrand__ joined #gluster
12:15 cyr_ joined #gluster
12:25 franky joined #gluster
12:26 rwheeler joined #gluster
12:25 franky hi, is there a way to install glusterfs 3.4 from git (git://github.com/gluster/glusterfs.git)? i am not able to install it like normal software
12:50 kkeithley x4rlos: debian/ubuntu or fedora/rhel/centos? I'm fairly certain if you install the RPMs from my fedorapeople.org repo your volume files will be moved to the proper location (/var/lib/glusterd) and when you restart gluster your volumes will be there. Try it and see.
12:50 jim` joined #gluster
12:50 jim` lo, anyone know what the state of NFS ACL support is in gluster - found this: http://www.gluster.org/community/documentation/index.php/Features/NFSACL
12:50 glusterbot <http://goo.gl/y7Ztd> (at www.gluster.org)
12:50 jim` but can't find any newer information
12:51 kkeithley franky: There are 3.4.0qa6 rpms at http://bits.gluster.org/pub/gluster/glusterfs/3.4.0qa6/x86_64/. To install from the github source repo you have to build first, then `make install`
12:51 glusterbot <http://goo.gl/htLwc> (at bits.gluster.org)
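A minimal build from the git repo franky asked about would look roughly like this (build dependencies such as libxml2-devel, discussed earlier in the log, still need to be installed first):

    git clone git://github.com/gluster/glusterfs.git
    cd glusterfs
    ./autogen.sh
    ./configure
    make
    make install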
12:52 kkeithley jim: NFS ACL should be in 3.4. If you want to try it you can give the 3.4qa6 bits a spin around the block.
12:52 Norky is it a sensible idea to back up the (XFS) filesystems which constitute my glusterfs?
12:52 Norky four XFS bricks in a distributed (x2), replicated set up
12:53 jim` kkeithley : just what I needed, thanks
12:53 Norky presumably I only need backup half of the bricks
12:55 Norky I'm assuming that everything on A is replicated to B, and everything on C is replicated to D... or is it more complex than that?
12:56 twx_ take backup of entire volume would avoid your considerations with which bricks to back up, amrite?
12:56 kkeithley Norky: You already have two copies. If you don't trust XFS then I suppose you should back up the bricks, or you could use geo-rep to replicate to another cluster that doesn't use XFS for the bricks.
12:56 ndevos Norky: the replicate pairs mostly look like A -> C and B -> D
12:56 Norky it is not that I mistrust GFS
12:58 Norky the backup system involves two (or more in some cases) tape drives. The backup software cannot back up a single FS to both drives at the same time, so if I back up the GlusterFS, it uses the drives sequentially, and takes too long.
12:59 Norky if I back up one brick to one drive, and the other to another drive, that can run concurrently, which is about twice as fast (the bottleneck being the drive write speed)
12:59 puebele2 joined #gluster
12:59 Norky sorry, I meant "it is not that I mistrust XFS"
13:05 ndevos Norky: you can use https://raw.github.com/nixpanic/lsgvt/master/lsgvt to 'graphically' see how your volume is put together, it helps to identify which bricks you want to backup
13:05 glusterbot <http://goo.gl/7QGX8> (at raw.github.com)
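As a rough rule of thumb for working out the pairs without lsgvt: for a replica-2 volume, the bricks are grouped into replica sets in the order they were listed at creation, and that order is what you get from:

    gluster volume info myvol

so consecutive bricks (1+2, 3+4, ...) form the pairs, and backing up one brick from each pair covers the whole volume, assuming the replicas are in sync.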
13:10 manik joined #gluster
13:29 puebele joined #gluster
13:30 Norky thank you, ndevos
13:30 Norky I'm still not sure that doing it this way is sensible :)
13:32 nueces joined #gluster
13:32 dustint joined #gluster
13:36 hagarth joined #gluster
13:47 Oneiroi joined #gluster
13:54 raven-np joined #gluster
13:56 jtux joined #gluster
13:57 hagarth left #gluster
13:57 wN joined #gluster
13:59 manik joined #gluster
14:06 dblack joined #gluster
14:08 balunasj joined #gluster
14:11 alphacc joined #gluster
14:21 lh joined #gluster
14:21 lh joined #gluster
14:27 rwheeler joined #gluster
14:33 smellis anyone tried 3.4 yet?  I am having trouble finding docs or fumbling through it, (no /etc/init.d/glusterd)
14:34 theron joined #gluster
14:38 kkeithley smellis: /etc/init.d/glusterd is in the glusterfs-server rpm. I just checked the glusterfs-server-3.4.0qa6-1.el6.x86_64.rpm from bits.gluster.org and it's in there.
14:41 franky debian support?
14:43 jdarcy It's not even 10am on Monday, and I've already caused enough trouble for one week.
14:43 H__ jdarcy: nice mail though :)
14:43 smellis yeah, sorry I missed the signing error, needed nogpgcheck
14:44 kkeithley therapy. zorch.
14:45 kkeithley debian support? There are 3.3.1 debs for squeeze and wheezy at http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/Debian/
14:45 glusterbot <http://goo.gl/AwJsw> (at download.gluster.org)
14:46 jdarcy H__: Thanks.
14:51 plarsen joined #gluster
14:51 hagarth joined #gluster
14:52 stopbit joined #gluster
14:57 smellis hmm
14:57 smellis ls
14:57 smellis cd
14:57 smellis ls
14:57 smellis oops
15:06 ultrabizweb joined #gluster
15:09 raven-np joined #gluster
15:09 erik49 joined #gluster
15:27 wushudoin joined #gluster
15:29 aliguori joined #gluster
15:40 bugs_ joined #gluster
15:41 dbruhn joined #gluster
15:41 Azrael808 joined #gluster
15:42 jbrooks joined #gluster
15:42 puebele1 joined #gluster
15:49 obryan joined #gluster
15:50 JoeJulian berend: There was a mount option, "enable-ino32" in 3.3.0 submitted by ndevos... apparently it has since been removed though.
15:51 JoeJulian jdarcy: Yeah, I was about to add his address to sieve and just ignore him, but I figured my last post was probably the more correct approach.
15:52 haidz sillyness on the mailing list
15:52 haidz pssh
15:54 JoeJulian There's always got to be one....
15:54 jdarcy JoeJulian: Just saw that.  You're a better man than I am.
15:55 ndevos JoeJulian: that "enable-ino32" should be in 3.4.0, and maybe in some upcoming 3.3.x
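Where the client build does support it, the mount would be used along these lines (a sketch; as noted above, the option's availability varies between 3.3.x and 3.4 releases):

    mount -t glusterfs -o enable-ino32 server1:/myvol /mnt/myvol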
15:55 JoeJulian I can afford to be. I'm not the one actually writing the code.
15:55 glusterbot New news from resolvedglusterbugs: [Bug 895093] UFO (swift) GET stalls on large files (>65535) <http://goo.gl/qMY3u>
15:55 johnmark JoeJulian: you beat me to it :)
15:56 JoeJulian :D
15:57 johnmark but I'm glad I'm not the only one issuing a beatdown this time
15:57 rwheeler joined #gluster
15:58 * semiosis tries to catch up on the ml drama
15:59 jdarcy semiosis: Just Stephan again.  Nothing new really.
16:00 jtux joined #gluster
16:00 JoeJulian It's the support hijacking that pisses me off more than anything. If someone wants help...
16:00 * JoeJulian breathes slowly....
16:02 puebele1 joined #gluster
16:02 jdarcy JoeJulian: Yeah, that's something I think everyone can get behind.  Being wrong, being repetitive, being a jerk - those don't justify banning.  But thread-jacking and making the list less useful for others is a different matter.
16:04 x4rlos kkeithley: Sorry for the late reply. Its on debian.
16:04 x4rlos (if your still here :-))
16:05 hagarth I have stopped responding to Stephan for the last few years, is there any value in attempting again?
16:06 JoeJulian Not unless he actually posts something constructive that advances this project.
16:07 hagarth agree with that
16:08 kkeithley x4rlos: I don't know what the debs. do in this situation. sorry.
16:08 wdilly joined #gluster
16:09 x4rlos kkeithley: no worries. Will try and remove everything before i uninstall, and then hope it doesnt run into the old attr (thanks JoeJulian: for resolution) issue.
16:11 wdilly hi folks, i am brand new to gluster as of today. I have set up a simple replication in a VM testbed of two bricks, gluster1:/export/brick1 & gluster2:/export/brick1, both under the volume repvol0. I have a working client as well (glusterclient). Everything was working until I decided to try and reboot gluster2 mid file operation, to test the self healing. The whole file shows up on gluster1, but only a partial file is showing on both the client and gluster2.
16:12 wdilly I waited 10 minutes to see if self heal would kick in, but no such luck. i also attempted to force the self-heal, and it says it completed, but the bricks are still in disparity. what am i missing?
16:12 wdilly thanks.
16:12 kkeithley wdilly: which version of gluster?
16:12 semiosis wdilly: what version of glusterfs?
16:13 semiosis :D
16:13 semiosis bbl, afk
16:14 wdilly kkeithley, i believe 3. something, how do i check that exactly?
16:17 wdilly gluster 3.3.2
16:17 wdilly gluster 3.3.1
16:18 daMaestro joined #gluster
16:18 JoeJulian You were writing to a client mountpoint when you tested the reboot?
16:20 wdilly yes, i was wgetting to the directory on glusterclient which is mounting gluster1:/repvol0
16:20 wdilly the replication was working very well, until i decided to reboot gluster2 mid wget
16:21 jvyas woohoo bigdata Hartford gluster RHS meetup this jan/feb :) hope to see some of you guys there.
16:22 wdilly i was wgetting a 100MB file, and the file successfully downloaded despite the reboot on gluster2. if i `ls -lh` in the various directories, the file shows up as 100mb on gluster1, but only 7.5 mb on glusterclient, and on gluster2
16:24 bennyturns joined #gluster
16:24 JoeJulian wdilly: Which distro?
16:24 emrah_ joined #gluster
16:24 wdilly centos 6.3 64 on all VMs
16:25 _NiC If I start off with three bricks on the same server set up with replication, is it easy to move two bricks to separate machines later?
16:26 elyograg hagarth: thanks for your help on bonded interfaces in F18.  disabling NetworkManager, enabling network, and changing things like PREFIX0 back to PREFIX was what finally fixed it.
16:26 elyograg hagarth: oops.  intended for haidz.
16:26 elyograg haidz: ^^
16:27 JoeJulian wdilly: Check "gluster volume status" and make sure it's good (paste to fpaste.org if you want me to double check)
16:27 JoeJulian wdilly: My guess is iptables.
16:28 johnmark JoeJulian: +1 re: Stephan
16:28 johnmark jvyas: sweet
16:28 JoeJulian _NiC: Yep, especially for replicated since you /can/ just replace-brick...commit and it'll self-heal everything over.
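A sketch of the replace-brick flow JoeJulian refers to, for moving one replica of a brick to a new server and letting self-heal repopulate it (hostnames and paths are illustrative, 3.3-era syntax assumed):

    gluster volume replace-brick myvol oldhost:/export/brick1 newhost:/export/brick1 commit force
    gluster volume heal myvol full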
16:29 wdilly JoeJulian, i have disabled iptables entirely on all VMs
16:29 * johnmark will be back
16:29 wdilly JoeJulian: I will paste volume info, hold tight
16:29 JoeJulian volume status, not info
16:29 _NiC JoeJulian, great! :)
16:29 wdilly JoeJulian: got it
16:30 jbrooks joined #gluster
16:31 JoeJulian Hmm.... bharata's going to make me re-evaluate my systems... like I wasn't too far behind already.
16:31 wdilly JoeJulian: http://fpaste.org/cjuS/
16:31 glusterbot Title: Viewing Paste #266923 by wdilly (at fpaste.org)
16:32 JoeJulian ... ok, now I'm interested...
16:32 johnmark heh
16:32 * johnmark is salivating
16:32 johnmark *drool*
16:33 JoeJulian Not this early in the morning... please... ;)
16:33 * JoeJulian does not need to picture johnmark drooling lustily.
16:34 JoeJulian wdilly: gluster volume heal info
16:35 wdilly JoeJulian: "Volume info does not exist"
16:35 johnmark JoeJulian: sorry for that. don't mind me
16:35 * JoeJulian needs coffee...
16:35 JoeJulian wdilly: gluster volume heal repvol0 info
16:35 wdilly oops :)
16:37 wdilly JoeJulian: http://fpaste.org/wJ61/
16:37 glusterbot Title: Viewing Paste #266925 (at fpaste.org)
16:37 Azrael808 joined #gluster
16:38 wdilly JoeJulian: same thing when issued on gluster2
16:38 JoeJulian wdilly: gluster volume heal repvol0 full
16:39 JoeJulian That's odd. It thinks it's healed.
16:39 JoeJulian Are you sure it's not?
16:40 wdilly well, when I ls -lah on /mnt/gluster_repvol0, the 100mb file that wgetted successfully only shows up as 7.5 MB
16:41 wdilly 7.5mb, on both glusterclient and in gluster2 (/export/brick1), however on gluster1, in /export/brick1 it shows up as the full 100MB
16:44 wdilly JoeJulian: check it out http://fpaste.org/6mOO/
16:44 glusterbot Title: Viewing Paste #266936 (at fpaste.org)
16:45 JoeJulian Which one was writing during the reboot?
16:45 schmidmt1 left #gluster
16:46 JoeJulian I assume gluster1 was, but just to confirm.
16:46 wdilly so i started the wget on glusterclient to /mnt/gluster_repvol0. midway through i rebooted gluster2, and i see the network light continue to blink in vmware on gluster1
16:47 wdilly and of course, the wget finishes without hiccup
16:47 JoeJulian Oh, nm... I didn't notice the 3rd machine name.
16:47 JoeJulian I really should just go back to sleep... ;)
16:47 wdilly so wget was the one that remained on the whole time, and was getting written to by glusterclient, it just didnt heal once gluster2 came back online.
16:48 wdilly im sorry gluster1 was the one that remained on the whole time
16:48 cicero i've a question about gluster (2.x if matters); if i have glusterfsd running as root with bricks on ext3; if said bricks run out of non-reserved blocks, do writes over gluster still continue?
16:48 JoeJulian Can you paste /var/log/glusterfs/mnt-gluster_repvol0.log from glusterclient?
16:49 cicero my gut says yes, because glusterfsd is being run as root
16:49 wdilly JoeJulian: sure, one sec
16:49 JoeJulian cicero: It matters. It matters a LOT. 2.x is long dead. Please use the ,,(latest) version.
16:49 glusterbot cicero: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
16:49 cicero JoeJulian: i am using 3.3 in new deployments
16:50 cicero JoeJulian: unfortunately the servers running 2.x are very much legacy and being phased out
16:50 JoeJulian cicero: But... iirc writes will continue on NEW files.
16:50 cicero ah ok
16:50 cicero much appreciated
16:50 semiosis Ryan_Lane: ping... now seems like a good time to ask about your self-heal daemon crashes
16:51 wdilly JoeJulian: this looks like it is going to explain a lot: http://fpaste.org/fzUy/
16:51 glusterbot Title: Viewing Paste #266938 (at fpaste.org)
16:52 wdilly JoeJulian: so it does see that there is a disparity at least.
16:53 cicero JoeJulian: though now that i've brought it up -- where are the best docs for a 2.x->3.x upgrade path?
16:54 cicero i've searched..
16:54 cicero also in a perfect world where both FUSE clients could exist i would just transition that way... but no dice.
16:55 JoeJulian cicero: Good question. Since you actually had to know what you were doing to implement 2.x, I think the expectation is that you'll know what's meant when I say to just create a new volume in 3.3.x with the same replication and your bricks in the same order.
16:55 cicero ah
16:55 H__ joined #gluster
16:55 cicero yes
16:57 JoeJulian cicero: Just to make sure it's clear how it works before you do it, look at /var/lib/glusterd/vols/$somevol/$somevol-fuse.vol and see how the cli builds the vol file.
16:57 cicero got it
16:57 cicero if i copied the bricks wholesale to brand new servers running 3.3.x, would i a) be able to connect with 2.x clients and b) reuse the same gluster metadata?
16:58 JoeJulian a) no, b) sort-of, essentially yes.
16:58 cicero ok
17:00 JoeJulian So, wdilly, your file ended up split-brained. This shouldn't have been able to happen in your described scenario. Were there other reboot tests earlier in your testing?
17:01 wdilly yes, i attempted the same thing earlier, wgetting on glusterclient, and powering down gluster1
17:02 wdilly the wget failed, and i removed the partial file from glusterclient. using rm
17:02 wdilly it propagated to both gluster1 and 2, so i tried seeing what would happen if i powered down gluster2 mid transfer.
17:02 JoeJulian Ah, so that's the answer then. The self-heal wasn't completed before rebooting gluster2... <- what I typed before you threw that last wrench into what I was going to say...
17:03 cicero JoeJulian: thx for the info as always
17:03 JoeJulian cicero: You're welcome. Any time.
17:05 wdilly JoeJulian: Okay, what is the best method of determining if there is a heal in process / whether or not a volume is in a stable condition? i suppose it's on to learning how to fix a split-brain for me, good practice
17:05 JoeJulian Aack! I just saw that I misspelled Kernel on the mailing list. I really should avoid email before coffee!
17:06 JoeJulian "gluster volume heal $volume info" will tell if there are heals pending.
17:07 * JoeJulian needs to add a search to his blog.
17:07 JoeJulian http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
17:07 glusterbot <http://goo.gl/FPFUX> (at joejulian.name)
17:08 manik joined #gluster
17:09 wdilly JoeJulian: amazing, i was already on that blog and didnt even notice it was yours.
17:09 JoeJulian :)
17:09 wdilly JoeJulian: thanks for the help, will come back when i get stuck on something else, thx
17:09 JoeJulian I see my bitcoin ad provider has apparently gone out of business...
17:11 JoeJulian I had made almost the equivalent of $0.02 with that ad block. :)
17:17 Norky joined #gluster
17:19 xmltok joined #gluster
17:20 wdilly Hey JoeJulian: is it strange that on gluster1 when issuing "gluster volume heal repvol0 info split-brain" it lists 0 for number of entries?
17:20 wdilly http://fpaste.org/GgKI/
17:20 glusterbot Title: Viewing Paste #266945 (at fpaste.org)
17:22 JoeJulian probably... check "heal-failed" as well as "split-brain"
17:23 wdilly gluster volume heal repvol0 info heal-failed ? this also shows 0 for number of entries
17:25 nueces joined #gluster
17:25 JoeJulian did you ever do that heal...full ?
17:31 wdilly i did attempt to do a heal, but i think it didnt find anything that needed healing..
17:31 wdilly this was before i came to the chat room
17:32 Ryan_Lane semiosis: thanks
17:32 Ryan_Lane none of my self-heal daemons are running
17:33 Ryan_Lane all of them seem to seg fault when they start
17:33 Ryan_Lane it leaves a backtrace in the log file
17:36 JoeJulian wdilly: Hmm, I think I see. If so, I'll see if I can repro and file a bug report as that should be reported in the split-brain output.
17:36 glusterbot http://goo.gl/UUuCq
17:37 Ryan_Lane here's my backtrace: http://fpaste.org/ak8r/
17:37 glusterbot Title: Viewing Paste #266950 (at fpaste.org)
17:40 wdilly JoeJulian: Okay, sounds good
17:41 wdilly JoeJulian: is there a way to just say hey listen brick2, brick1 is what you need to become, so do it now.
17:41 JoeJulian Not yet, no.
17:41 Mo__ joined #gluster
17:41 elyograg that would be an awesome cli command to have.
17:41 JoeJulian A client-side way of healing split-brains is something I've been pushing for for a while now.
17:42 schmidmt1 joined #gluster
17:42 m0zes joined #gluster
17:43 * jdarcy looks wistfully at http://review.gluster.org/#change,4132
17:43 glusterbot Title: Gerrit Code Review (at review.gluster.org)
17:43 elyograg oracle release java 7u11.  we can probably get back to having java in browsers again.
17:45 JoeJulian wdilly: Could you please paste getfattr -m . -d -e hex /export/brick1/100Mio.dat
17:45 wdilly JoeJulian: you should know i have since deleted the 100mio.dat file
17:45 wdilly JoeJulian: but yes i will, which machine should i issue it from?
17:45 JoeJulian Ryan_Lane: I thought you were going with 3.3.1?
17:46 Ryan_Lane I will be as soon as I get a chance
17:46 JoeJulian wdilly: nm... wanted it for analyzing that split-brain. It would have been needed from both.
17:46 wdilly i guess, i cant, now that i have deleted that file. oops.
17:47 JoeJulian Ryan_Lane: I /think/ this fits the issues I was seeing that seemed to be rpc related and are fixed in 3.3.1.
17:47 Ryan_Lane ok. I'll upgrade and see if they stop dying
17:47 JoeJulian I never got around to filing bugs on them and since they went away....
17:51 JoeJulian jdarcy: "a script in .../extras/sb-mount to mount the volume without
17:51 JoeJulian AFR, using only the set of N'th replicas" - Interesting.... Could that be used to just rm the broken copy from such a split-mount?
17:53 jdarcy JoeJulian: The "slice" mounts are read-only right now, but the patch also includes a way to say which copy you prefer and have the servers DTRT.
17:53 jdarcy JoeJulian: I could not condone using a hacked version of the script by itself to resolve split brain.  ;)
17:54 JoeJulian hehe
17:55 JoeJulian Funny, before I clicked on the link I was thinking about how to show split afr in different directories for doing this. I hadn't thought about making them just two separate mounts.
17:56 JoeJulian (or three in my case)
17:59 rwheeler joined #gluster
18:01 kkeithley joined #gluster
18:16 emrahnzm left #gluster
18:23 Ryan_Lane hm. seems the packages from the glusterfs ppa don't specify they replace the glusterfs package
18:23 Ryan_Lane semiosis: when I remove the old package and add this package is all of my configuration going to be missing?
18:23 Ryan_Lane on the server, that is
18:24 Ryan_Lane or does it use the same configuration location?
18:25 JoeJulian 3.3.0+ uses the same location.
18:25 Ryan_Lane great. thanks.
18:25 kkeithley joined #gluster
18:27 xmltok joined #gluster
18:30 wdilly So JoeJulian: I tried my test again, but this time shutting down gluster2. The file transfer halts on glusterclient, even though gluster1 is still online. When I bring gluster2 back online the transfer continues. is there a way to set the replication up so that the transfer continues without gluster2 needing to be online, and simply replicates to it once it comes back online?
18:30 andreask joined #gluster
18:31 wdilly or is this what geo-replication is?
18:33 JoeJulian wdilly: it should (and does for me) continue. How did you shut down gluster2? I suspect you didn't do a shutdown but rather you did the equivalent of a hard power-off in vmware.
18:34 wdilly JoeJulian: i used init 0
18:35 JoeJulian That should have worked... Did you install from the ,,(yum repo) or are using Fedora 18?
18:35 glusterbot kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
18:36 JoeJulian @42
18:36 JoeJulian @whatis 42 seconds
18:36 glusterbot JoeJulian: Error: No factoid matches that key.
18:38 wdilly i am using centos 6.3 across everything, and i downloaded gluster using this guide http://www.gluster.org/community/documentation/index.php/QuickStart
18:38 glusterbot <http://goo.gl/OEzZn> (at www.gluster.org)
18:38 JoeJulian @ping-timeout
18:38 glusterbot JoeJulian: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
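The 42 seconds glusterbot mentions is the default of the network.ping-timeout volume option; it can be tuned per volume, though lowering it is generally discouraged for the reason given above:

    gluster volume set myvol network.ping-timeout 42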
18:41 JoeJulian Do you have /etc/rc.d/rc0.d/K80glusterfsd ?
18:41 wdilly JoeJulian: on gluster1 and 2: glusterfs-server-3.3.1-1.el6.x86_64 glusterfs-3.3.1-1.el6.x86_64 glusterfs-fuse-3.3.1-1.el6.x86_64 glusterfs-geo-replication-3.3.1-1.el6.x86_64 are installed
18:42 wdilly JoeJulian: Yes i do
18:43 Kins joined #gluster
18:43 JoeJulian hrm... then it should have stopped before /etc/rc.d/rc0.d/K90network which means the client should have had the TCP connection closed properly...
18:43 semiosis Ryan_Lane: you can probably just upgrade from 3.3.0 to 3.3.1
18:44 semiosis though tbh i've not tried :/
18:44 Ryan_Lane semiosis: tried. it has problems
18:44 Ryan_Lane since the old package isn't listed as something the new one replace
18:44 Ryan_Lane *replaces
18:44 Ryan_Lane it's just another package, so there's file conflict errors
18:45 Ryan_Lane it's necessary to remove the old, then add the new
18:45 wdilly JoeJulian: Okay, i found something out
18:46 wdilly JoeJulian: When I do the wget from gluster1, it will continue to download despite g2 going down; however, this is not the case when downloading via glusterclient.
18:46 semiosis Ryan_Lane: thx for the feedback i'll fix that
18:46 Ryan_Lane cool
18:46 Ryan_Lane thanks
18:47 wdilly going to bring back up g2 and see what happens,
18:47 Ryan_Lane may be hard to figure out where to stick it. though i'd imagine that both the client and server should be listed as replacing the old one
18:47 semiosis actually checked the control file and it already has what i thought would be the fix
18:47 semiosis packages of the same name should automatically replace older versions
18:47 semiosis and the name change, libglusterfs0 -> glusterfs-common is already marked as replaces
18:47 Ryan_Lane right, but this is the old glusterfs official package
18:48 Ryan_Lane that had server, client and common combined into a single package name
18:48 JoeJulian wdilly: You're using the native mount?
18:48 semiosis Ryan_Lane: oohhhh
18:48 semiosis hmm ok
18:49 wdilly JoeJulian: can you elaborate?
18:49 JoeJulian mount -t glusterfs ...
18:50 wdilly i have this entry in my /etc/fstab of glusterclient: gluster1:/repvol0 /mnt/gluster_repvol0   glusterfs      defaults,_netdev 0 0
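One common addition to an fstab entry like that, so mounting does not depend on gluster1 alone being reachable for the volfile fetch, is the mount helper's backup server option (a sketch; the exact option name has varied between releases):

    gluster1:/repvol0  /mnt/gluster_repvol0  glusterfs  defaults,_netdev,backupvolfile-server=gluster2  0 0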
18:51 JoeJulian mmkay... netstat -t shows connections to both servers?
18:52 wdilly yes
18:53 wdilly netstat from gclient: http://fpaste.org/mpMs/
18:53 glusterbot Title: Viewing Paste #266981 (at fpaste.org)
18:54 aliguori joined #gluster
18:54 JoeJulian I'm going to go get an espresso... bbiab.
18:55 wdilly ok :)
18:57 semiosis Ryan_Lane: what was the exact package name you upgraded from?
18:58 Ryan_Lane glusterfs
18:58 semiosis version?
18:58 Ryan_Lane ii  glusterfs                         3.3.0-1                           clustered file-system
18:59 semiosis ok thats what i thought, but i dont see it on download.gluster.org :(
18:59 Ryan_Lane it's been removed
18:59 Ryan_Lane now it just has a file that says to use the ppa
18:59 semiosis johnmark: can you help get me a copy of that?
18:59 Ryan_Lane do you just need the .deb?
18:59 JoeJulian That happened when download.gluster.org crashed.
19:00 semiosis Ryan_Lane: do you happen to still have the .deb?
19:00 Ryan_Lane yep
19:00 semiosis great
19:00 semiosis how can i get a copy?
19:00 Ryan_Lane one sec
19:01 Ryan_Lane http://apt.wikimedia.org/wikimedia/pool/main/g/glusterfs/glusterfs_3.3.0-1_amd64.deb
19:01 glusterbot <http://goo.gl/THAaM> (at apt.wikimedia.org)
19:01 semiosis thanks :)
19:01 Ryan_Lane yw
19:01 semiosis going to install that on my test vm & make sure the ppa package replaces it
19:02 Ryan_Lane sweet. thanks
19:02 JoeJulian Ok, yes it's my own juvenile brain doing this, but Dan Allen, whom I know and respect, posted this on his blog about his book. http://www.mojavelinux.com/blog/archives/2012/01/seam_in_action_translations/
19:02 glusterbot <http://goo.gl/3pLxY> (at www.mojavelinux.com)
19:02 JoeJulian Did he NOT read the title aloud quickly?
19:03 semiosis ahahaha
19:03 hurdman left #gluster
19:03 wdilly JoeJulian: further testing reveals that if the wget is initiated on either g1 or the client when g2 is already off, it works fine, and when g2 is brought back online it is brought up to speed; but on the client, the transfer is halted if g2 is brought down mid transfer
19:11 eurower joined #gluster
19:11 nueces joined #gluster
19:11 kkeithley upgraded my desktop from f16 to f18 — didn't hurt too much
19:12 kkeithley but what a pain in the neck
19:13 eurower left #gluster
19:14 * semiosis never upgrades
19:14 semiosis i have several partitions i rotate through with each new distro release
19:15 elyograg fedora can't seem to make up its mind which kernel they want for 18. Installed, got 3.7.1.  later did distro-sync, that was reduced to 3.6.11.  just did another distro-sync now, it's installing 3.7.2.
19:17 haidz elyograg, yum clean all ?
19:18 elyograg I don't think I did that, though it is something I do from time to time.
19:18 gbrand_ joined #gluster
19:21 kkeithley semi-clean install — I have two lvs in an lvgroup that I use for the root fs that I alternate between. The base install was 3.6.something, then yum updated it to 3.7.2. The painful part was convincing anaconda to use the other root lv, and in the midst of all that to not touch my home lv and my virtual machine pool lv, and then get them all back afterwards.
19:22 kkeithley harder than it ought to be
19:36 semiosis Ryan_Lane: new package with 'replaces' uploaded to ppas, but one caveat... you'll need to run 'service glusterd stop && killall glusterd glusterfs glusterfsd' before doing the upgrade
19:36 tryggvil joined #gluster
19:36 Ryan_Lane no problem. I generally do that anyway
19:37 Ryan_Lane semiosis: thanks!
19:37 semiosis it will probably be several hours before the source is built & binary packages are available on launchpad
19:37 semiosis yw
19:37 Ryan_Lane that's cool. this saves me quite a bit of work, so I can wait for it :)
19:37 semiosis great
19:38 wdilly if i want to add a brick to my volume, is it generally okay to clone the virtual machine of another brick and then remove the brick, create a new one, and add it into the volume then?
19:39 semiosis heh, even when i do apt-get purge the /var/lib/glusterd files remain in place
19:39 semiosis must be because they're made by glusterd & not noted in the package manifest
19:40 semiosis idk if thats a good thing or a bad thing
19:40 semiosis but it is a thing
19:40 semiosis wdilly: probably not
19:41 semiosis cloning a glusterfs server results in a duplicate uuid (/var/lib/glusterd/glusterd.info) which is going to be a problem
19:41 semiosis see the ,,(rtfm) section about expanding glusterfs volumes
19:41 glusterbot Read the fairly-adequate manual at http://goo.gl/E3Jis
19:42 semiosis um ok, i mean admin guide pdf, iirc
19:47 wdilly okay thanks semiosis
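To spot the duplicate-uuid problem semiosis describes after cloning a VM, compare this file across servers; if two servers report the same UUID, the clone typically needs /var/lib/glusterd cleared before it ever joins the pool so glusterd generates a fresh identity:

    cat /var/lib/glusterd/glusterd.info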
19:49 semiosis Ryan_Lane: good news, launchpad started building the packages already, they should be live in the repos in just a few minutes (assuming the builds go as well as they did on my machine that is)
19:49 Ryan_Lane \o/
19:50 semiosis precise & quantal only by the way... you didnt need lucid, did you?
19:50 * semiosis cringes
19:54 JoeJulian elyograg: I checked at the time when you mentioned that downgrade and it didn't happen to me. <shrug>
20:00 johnmark Ryan_Lane: semiosis takes his pay in cases of beer ;)
20:00 Ryan_Lane :)
20:00 Ryan_Lane semiosis: I do need lucid, yes
20:00 Ryan_Lane lucid is LTS
20:00 Ryan_Lane precise is LTS
20:01 Ryan_Lane for server those are likely always the most important :)
20:01 Ryan_Lane semiosis: are you going to be at FOSDEM? then I could provide payments in beer. heh
20:02 semiosis no didnt plan ahead well enough for that one
20:03 semiosis uploading lucid to launchpad now
20:08 Gugge are there any known problems running gluster on btrfs?
20:13 JoeJulian Gugge: None that I've heard of. I've been told that btrfs isn't production stable, so that should be the only known problem.
20:22 kkeithley Didn't SLES make btrfs the default fs? So it must be good. ;-)
20:22 JoeJulian hehe
20:29 semiosis Ryan_Lane: lucid builds successful, pending publishing momentarily
20:30 Ryan_Lane cool
20:31 m0zes joined #gluster
20:55 johnmark semiosis: wow. you're a prince among men
20:55 semiosis johnmark: heh thx
20:56 semiosis that was relatively easy... add one line to three places in a file & upload
21:05 gprs1234 joined #gluster
21:33 raven-np joined #gluster
21:44 wdilly Hi everybody, back for more questions. i have a test bed replication volume set up, and i added two new bricks to the existing two. All the files propagated to the new disks, but when i do an ls -l, the filesize on the new bricks is 0. what might be causing this issue?
21:47 elyograg wdilly: did you rebalance the volume after adding new bricks?
21:47 semiosis maybe heal is pending for those files but not yet complete
21:48 wdilly elyograg: no! i will do this.
21:48 semiosis when you added two new bricks, did you change it to 4-way replication or did you keep it at 2 way and make it into a distributed-replicated volume?
21:50 elyograg that's an important question. i didn't think of it.
21:50 wdilly semiosis: it became a distributed-replicated volume, however that wasnt my intention, but i am basically just poking around at the moment. if i wanted to make it a four-way replication what would i do?
21:50 semiosis iirc you need to specify 'replica 4' in your add brick command but i'm not sure about that
21:51 wdilly semiosis: i simply did gluster volume add-brick svr1:/path/brick2 svr2:/path/brick2
21:51 semiosis it was a much desired feature but since it's become available i haven't really heard many people using it :/
21:51 semiosis i mean the ability to change replica count on a live volume
21:52 wdilly i was expecting it to turn into a 4 way replication, but, it became distributed-replicated, which is fine, i will do the 4 way replication later.
21:52 wdilly i suppose, i should have known when it rejected just adding 1 brick, no reason you should be able to have 3 way replication
21:52 duffrecords joined #gluster
21:53 semiosis if you'd said replica 3 it probably would have let you add a single brick
21:53 semiosis but if you dont say replica x it assumes you want to add bricks for distribution, not replication, and those need to be added in multiples of replica count
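As a sketch of the difference (paths taken from wdilly's setup, 3.3-era syntax assumed):

    # raise the replica count from 2 to 4 over the same data
    gluster volume add-brick repvol0 replica 4 svr1:/path/brick2 svr2:/path/brick2

    # without "replica N", the same bricks are added as a new distribute subvolume instead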
21:55 duffrecords one of our four Gluster nodes went offline this morning.  fortunately, UCARP switched the VIP over to another Gluster box seamlessly and nobody outside of the systems team noticed.  now that I've got the faulty box back online, if I run "service glusterd start" should I expect it to start self-healing?
21:58 semiosis yes pretty much
21:58 semiosis duffrecords: what version of glusterfs?
21:58 msgq joined #gluster
21:59 schmidmt1 We're spec'ing out a set of gluster machines. We'd have 3 with 2 bricks each set up as a distributed replicated volume. Each machine will have a quad core xeon 3.2 GHz and somewhere around 10GB of memory. Is this sufficient for around 100 clients each reading and writing to one file at a time?
22:00 wdilly elyograg: and all, i did the rebalance ("gluster volume rebalance repvol0 start") and got a confirmation in the affirmative. i checked the status of the rebalance on both nodes (each has 2 bricks) and it says completed, however the filesize on the newly created bricks is still 0. is something up here?
22:00 duffrecords semiosis: 3.3.1
22:01 msgq hey all, I have been attempting to get gluster working today on AWS EC2 -- ubuntu 12.04 precise with 3.3.1 Gluster. I have been looking for some documentation to see if there are caveats on this. I am receiving a couple errors when trying to create the volume: "Failed to perform brick order check. Do you want to continue creating the volume?  (y/n) y" and "Host xxx not a friend"
22:01 msgq any help would be much appreciated... both servers show each other as peers
22:02 semiosis msgq: sorry to hear you're having trouble, glusterfs on ubuntu/ec2 works fine so probably just something wrong with your config... should be easy to fix
22:02 msgq semiosis, good to know :-) ... actually i think you just updated the repository a few minutes before I downloaded it i think.
22:03 semiosis msgq: try mapping the hostname to 127.0.0.1 on each of your servers
22:03 semiosis i recommend using dedicated FQDNs mapped with CNAMEs to your EC2 public-hostnames
22:03 msgq ah i see, i was wondering about that if the IPs changed
22:03 semiosis by dedicated fqdns i mean like 'gluster1.my.domain.net' CNAME ip-12-34-56.compute-1.amazonaws.com or whatever
22:04 msgq and to the public IP's vs the private?
22:04 semiosis and then on the machine to be gluster1, add gluster1.my.domain.net as an alias of 127.0.0.1 in its /etc/hosts file
22:04 semiosis msgq: i always map things by cname to the public-hostname so i can take adv. of ec2's split-horizon dns
22:05 semiosis inside ec2 it resolves to the local-ipv4 but outside from the public net it resolves to public-ipv4
22:05 semiosis which is convenient
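A sketch of that naming scheme (domain and hostnames are made up): in DNS, gluster1.my.domain.net is a CNAME pointing at the instance's EC2 public hostname, and on the gluster1 machine itself /etc/hosts pins the same name to loopback:

    # /etc/hosts on the gluster1 machine
    127.0.0.1   localhost gluster1.my.domain.net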
22:05 msgq got it...
22:05 msgq ok Ill reconfigure and give that a shot
22:05 msgq thank you for the pointers!
22:05 semiosis yw
22:05 semiosis also ,,(canned ebs rant)
22:05 glusterbot http://goo.gl/GJzYu
22:06 semiosis if you're wondering to ebs or not to ebs
22:06 msgq k, we run everything EBS, but I will definitely read the docs
22:06 semiosis (imho, EBS yes yes!)
22:06 msgq awesome then I think we are set on that front
22:06 semiosis though i am slowly transitioning away from ebs for everything but my gluster bricks
22:08 msgq well our major issue is having multiple webservers and syncing files across... we wrote custom logic to deal with it but its not performing well enough now
22:08 msgq so Im hoping gluster will help with this problem
22:09 semiosis php?
22:09 semiosis ,,(php)
22:09 glusterbot php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
22:09 msgq yea php
22:09 tryggvil joined #gluster
22:09 msgq shit
22:09 msgq hehe
22:09 semiosis so that's one caveat, just to be aware of... there's performance optimizations you should do to make php run well on glusterfs
22:10 semiosis but it can run well on glusterfs if you do them
22:10 msgq ok, thank you so much for the info!
22:10 semiosis yw
22:12 andreask joined #gluster
22:18 rwheeler joined #gluster
22:20 duerF joined #gluster
22:25 haidz msgq, you can use something like apc
22:25 haidz this will reduce the hits to gluster significantly, thereby making it useful for a webserver
22:26 semiosis +1
22:26 haidz if you decide to write logs to it.. make sure you write them to different filenames so they dont "corrupt" each other
22:26 semiosis also autoloading
22:27 haidz semiosis, autoloading?
22:27 semiosis all the modern frameworks use (or can use) autoloading instead of require/include calls
22:27 haidz ah yes
22:27 semiosis so if a page only needs to include 3 files thats all it includes, not every file in the framework
22:27 haidz right
22:28 semiosis that with apc is awesome
22:28 haidz yep
22:28 haidz im pretty sure kohana is doing this for us
22:28 semiosis i think so
22:28 * haidz dislikes kohana but it is what it is
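A rough example of the APC settings usually behind the advice above (values are illustrative; apc.stat=0 stops APC from stat()ing cached files on every request, at the cost of needing a cache clear after deploys):

    ; php.ini / apc.ini
    apc.enabled=1
    apc.stat=0
    apc.shm_size=128M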
22:33 semiosis haidz: is there a php framework you prefer instead?
22:38 tc00per joined #gluster
22:42 schmidmt1 Any ideas on required specs for 3 machines being written to by about 100 nodes?
23:00 andreask joined #gluster
23:03 gprs1234 Hi All, I am in the process of reading up on gluster and planning my first deployment. Wondering about the current state of ext4 as the underlying filesystem for gluster. I haven't seen much movement on https://bugzilla.redhat.com/show_bug.cgi?id=838784 since October, but would like to use ext4 to keep the option of converting to btrfs down the line. I will be on 2.6.32-279, which I see is affected, but have there been any recent workarounds?
23:03 glusterbot <http://goo.gl/CO1VZ> (at bugzilla.redhat.com)
23:03 glusterbot Bug 838784: high, high, ---, sgowda, ASSIGNED , DHT: readdirp goes into a infinite loop with ext4
23:06 haidz semiosis, i don't care for frameworks. They tend to pop up on CVE reports. I'm a systems guy now so im kinda out of the game.
23:06 semiosis gprs1234: xfs is recommended for glusterfs, use inode size 512 (or 1024 if using UFO)
23:07 semiosis ext4 wont work today with the latest kernels, so that's a nonstarter unless you want to stick with an old kernel
23:07 semiosis when btrfs is ready for prime time, you can replace your xfs bricks with btrfs bricks one at a time
23:07 semiosis i recently did that going from ext4 to xfs and kept my volume & clients online through the migration
23:08 semiosis and it was possible to roll back because i wasnt modifying data in place
23:08 semiosis (very important imho)
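The usual way to apply that inode-size recommendation when formatting a brick (device name is illustrative):

    mkfs.xfs -i size=512 /dev/sdb1      # or -i size=1024 if the volume will back UFO/swift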
23:08 haidz semiosis, nice.. im going to have to go to xfs as well.. Redhat doesnt distribute xfs though
23:08 haidz (with the regular RHEL image.. its part of redhat storage)
23:08 semiosis wow amazing
23:09 semiosis you can't just yum install xfsprogs
23:09 semiosis or something like that?
23:09 semiosis epel?
23:09 haidz it was removed from the image
23:09 haidz im sure epel has it
23:09 semiosis you mean xfs was removed from the rh kernel image?  that's nuts
23:10 haidz semiosis, i dont think it was removed from the kernel image.. i think the userland packages were moved out of the standard repo to a separate redhat storage repo
23:10 semiosis oh
23:10 semiosis well then just install the epel xfsprogs & be happy
23:11 haidz yeah
23:12 haidz semiosis, xfs isnt in epel
23:13 semiosis bummer
23:14 gprs1234 is xfs available in centos6.3?
23:15 polfilm joined #gluster
23:15 semiosis why isnt there an obviously official & friendly web site that lets you find packages in centos/epel?  or am i just missing it?
23:15 haidz gprs1234, i think centos is good to go.. they repackage everything
23:16 haidz semiosis, http://dl.fedoraproject.org/​pub/epel/6/x86_64/repoview/
23:16 glusterbot <http://goo.gl/mqm8O> (at dl.fedoraproject.org)
23:16 elyograg gprs1234: I can say unequivocally that it is.  if you do not create any xfs filesystems when you install, then you need to manually 'yum install xfsprogs' later.
23:16 semiosis haidz: ah ok
23:17 semiosis haidz:  i'm just so used to packages.ubuntu.com & packages.debian.org that i was surprised to not find similar for fedora/centos in a minute of googling
23:17 gprs1234 excellent .. will go with xfs and -i 512
23:18 gprs1234 another question, and this was partially answered on the mailing list: Even with upcoming support for variable sized bricks, are there merits to carving my current 3TB (best bang for buck) disks into 1TB partitions or PVs (to be concatenated as LV) and use the partitions/LVs as my bricks so I am covered when larger TB disks are available?
23:20 semiosis gprs1234: depends what you're storing on the bricks
23:20 semiosis avg vs. max file size, type of workload, etc should influence volume architecture imho
23:23 haidz gprs1234, 3TB disks increase seek time, so if latency isnt an issue, by all means go with them
23:24 haidz gprs1234, i do LVM on my bricks.. i do it to separate different products into their own volumes such that they dont share one big volume.. then I extend them individually as needed
23:24 haidz gprs1234, logical volumes then become the bricks
23:26 haidz gprs1234, i wouldn't do lvm if i didnt need to.. it will degrade performance. The more spindles you have the more IO you'll get out of it. If you have a high IO workload, you'll want more spindles.. also if you have a high throughput you'll want more spindles
23:26 haidz gprs1234, you'll top out at about 30MB/s on 1 3TB disk (for a single stream.. additional streams degrade performance)
23:28 gprs1234 great points with regards to performance, however my first implementation is just for a home nas and to get my hands dirty with gluster. subsequent deployments will definitely have latency considerations
23:28 haidz gprs1234, then yeah.. 3TB disks no lvm is my recommendation. Lots of space to store porn
23:29 haidz err. i mean video
23:30 gprs1234 for the present, the majority of the files will be digital photos ~5MB and video (heh, no porn, there's enough of that on the interwebs) so ~4GB on the upper end of filesize
23:30 haidz sounds good.. should work well
23:30 gprs1234 plus docs, etc and home dirs
23:31 haidz home dirs might be an issue with performance... if you're working in there you'll have tmp files being generated every time you edit something
23:31 haidz you'll definitely notice it
23:32 gprs1234 hmm ... good to know
23:32 gprs1234 now lvm would still be useful so i don't have one huge volume (starting with two gluster servers each with one 3TB drive)
23:33 gprs1234 so any lvm tips - eg. do you do a whole disk as a PV
23:33 haidz a whole disk MUST be a pv
23:33 haidz err
23:33 gprs1234 or carve up unit sized partitions as PVs and then make an LV
23:33 haidz take that back
23:33 gprs1234 the latter, only for the ease of mixing disk sizes
23:33 haidz you can partition smaller than a whole disk.. but if its all for gluster it doesnt much matter
23:34 haidz you can mix disk sizes.. in different volumes
23:34 semiosis gprs1234: do you really need redundant servers?
23:35 haidz if you think you'll have mixed disks, then probably go with LVM so you can limit their size
23:35 haidz and easily expand existing volumes
23:35 haidz semiosis, no home is complete without redundant servers :)
23:35 haidz i personally use a synology NAS.. raid 1
23:36 semiosis haidz: redundant disks sure, but redundant servers?
23:36 wdilly so, i cloned one of my gluster vm's, removed the virtual disks, recreated them, and built new filesystems upon them, yet when i try to add the bricks into my volume i get the error that the brick is already part of the volume... what gives?
23:36 gprs1234 ok, so say 1TB partitions as PVs, then 1PV->1LV->1brick or nPV->1LV->1brick
23:36 semiosis unless you're going to be fixing the down server while someone else needs to keep using the surviving one i'd say its overkill
23:36 haidz semiosis, sure why not
23:37 gprs1234 basically, should i let gluster distribute, or give gluster one big brick and use lvextend,lvresize,resize2fs when i grow?
23:38 gprs1234 semiosis: i'll come back to my reasoning for redundant servers in a sec
23:38 wdilly the uuid of all the filesystems are unique...
23:38 haidz gprs1234, you really dont need to partition on top of doing LVM.. just set your LVM size to the size you want your brick to be
23:38 gprs1234 true, makes sense
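A sketch of the LVM-backed brick layout haidz describes, including the later grow step (names and sizes are illustrative; xfs_growfs rather than resize2fs, since the bricks discussed here are XFS):

    pvcreate /dev/sdb
    vgcreate vg_bricks /dev/sdb
    lvcreate -L 1T -n brick1 vg_bricks
    mkfs.xfs -i size=512 /dev/vg_bricks/brick1

    # later, to grow the brick:
    lvextend -L +500G /dev/vg_bricks/brick1
    xfs_growfs /path/to/brick1/mount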
23:38 semiosis wdilly: what's that error?  could you paste it?
23:40 gprs1234 ok, and then i carve up LVs and pass them as bricks to a gluster volume letting gluster distribute; OR make a single LV as a single brick gluster volume and use lvextend,lvresize,resize2fs to grow?
23:40 semiosis or a prefix of it is already part of a volume
23:40 glusterbot semiosis: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
23:40 * semiosis got impatient.
23:40 wdilly semiosis: sure: http://fpaste.org/9CO6/
23:40 glusterbot Title: Viewing Paste #267051 (at fpaste.org)
23:40 semiosis wdilly: see glusterbot's message
23:40 haidz gprs1234, http://pastebin.com/c7J1tmN4
23:40 semiosis oh sorry thats different :/
23:40 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
23:41 semiosis wdilly: if you cloned a server, and now have two servers at different hostnames with same uuid, that's going to be a problem
23:42 wdilly im sorry for my basic misunderstand, but arent uuid's associated with filesystems?
23:42 semiosis each server in the pool has a uuid, stored in /var/lib/glusterd/glusterd.info
23:42 semiosis must be unique
23:43 wdilly semiosis: and if i blew away the original filesystem off of the cloned vm and totally rebuilt it, they would have new uuids. when i do `blkid /dev/sdx1` on all filesystems associated with bricks, they are all unique
23:43 semiosis servers can change hostnames/ips but they should always have the same uuid
23:43 haidz sorry glusterbot
23:43 haidz gprs1234, http://pastebin.com/DuXX4RZJ
23:43 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
23:46 JoeJulian A uuid is simply a unique identifier. They can be associated with anything that needs to be uniquely identified.
23:46 wdilly well sure enough semiosis they are in fact the same (to be expected) can i just change the uuid in that file and be okay?
23:47 semiosis hmm, idk
23:47 wdilly im gonna try it!
23:47 gprs1234 haidz: so are those volumes 6 way mirrors? each brick seems to be from a diff host?
23:48 gprs1234 but volume type is distribute-replicate 2x3 ..
23:48 haidz its 6 hosts with replica 3
23:48 haidz its an amazon... so i have it distributed across 3 availability zones
23:48 haidz s/an/at/
23:48 glusterbot What haidz meant to say was: gprs1234, you really dont need to partition on top of doing LVM.. just set your LVM size to the size you watt your brick to be
23:48 gprs1234 gotcha .. thx! that clears that up
23:50 gprs1234 now as to my thought process for redundant servers, and maybe i'm way off base here: i plan to use two VMs (kvm) as the gluster servers. initially both will be on a single physical server and i'll present a 3TB disk to each. plan to setup a replicated volume and then later add capacity as required to scale up, not out
23:51 gprs1234 later, i might move that VM to another physical host on the home lan, but would also like to be able to ship that second host to an alternate location and switch from replicate to geo-replicate
23:52 haidz ah.. so basically just a sandbox to play with it
23:52 haidz sounds good
23:52 semiosis have fun!
23:52 gprs1234 so i'd have a geo-replicated copy of my data as one gluster volume, and then might add a second disk to each gluster server for my friend to geo-replicate his data back to my home
23:53 gprs1234 well, maybe not just sandbox .. i would like to think of this as my 'production' nas replicated offsite
23:54 gprs1234 (but yeah, at the same time getting to know the product for use at work)
23:54 gprs1234 now using the VMs as gluster servers shouldn't be too ambitious (especially with limited clients) since folks are already doing this on EC2?
23:55 gprs1234 load-wise
