IRC log for #gluster, 2013-02-13


All times shown according to UTC.

Time Nick Message
00:00 raven-np joined #gluster
00:03 stopbit joined #gluster
00:22 VSpike joined #gluster
00:30 jag3773 joined #gluster
00:52 raven-np joined #gluster
01:14 meshugga joined #gluster
02:03 bharata joined #gluster
02:26 hagarth joined #gluster
02:28 bharata_ joined #gluster
02:51 shylesh joined #gluster
02:55 syoyo_ joined #gluster
03:02 pipopopo_ joined #gluster
03:10 overclk joined #gluster
03:15 m0zes_ joined #gluster
03:24 cauyrt joined #gluster
03:27 dustint joined #gluster
03:30 andrei__ joined #gluster
03:40 lala joined #gluster
03:46 abyss^ joined #gluster
03:50 bulde joined #gluster
04:00 raven-np1 joined #gluster
04:02 lala_ joined #gluster
04:08 Ryan_Lane joined #gluster
04:08 sahina joined #gluster
04:12 abyss^_ joined #gluster
04:14 raven-np joined #gluster
04:18 hagarth joined #gluster
04:20 vpshastry joined #gluster
04:58 sripathi joined #gluster
05:22 y4m4 joined #gluster
05:29 sahina joined #gluster
05:31 sgowda joined #gluster
05:33 deepakcs joined #gluster
05:34 satheesh joined #gluster
05:35 raghu joined #gluster
05:35 ngoswami joined #gluster
05:36 venkat joined #gluster
05:45 bala1 joined #gluster
05:51 mohankumar joined #gluster
06:14 mohankumar joined #gluster
06:18 hagarth joined #gluster
06:41 cauyrt_ joined #gluster
06:54 rgustafs joined #gluster
06:55 mohankumar joined #gluster
07:02 vimal joined #gluster
07:05 Nevan joined #gluster
07:12 guigui3 joined #gluster
07:24 vikumar joined #gluster
07:33 shireesh joined #gluster
07:38 hagarth joined #gluster
07:38 sripathi joined #gluster
07:47 ekuric joined #gluster
07:50 raven-np joined #gluster
08:00 dopry joined #gluster
08:03 mohankumar joined #gluster
08:05 amccloud_ joined #gluster
08:06 amccloud_ How do I get rid of a rogue volume that won't delete?
08:06 amccloud_ Can't stop it or delete it.
08:16 amccloud_ basically shows up in volume list but anything I try to do on it says it doesn't exist.
08:20 Staples84 joined #gluster
08:24 mohankumar joined #gluster
08:25 JoeJulian amccloud_: Look in /var/lib/glusterd/vols
08:25 JoeJulian You'll want to stop glusterd before trying anything in there though.
08:30 ctria joined #gluster
08:31 amccloud_ JoeJulian: Would anything in there hang around even if I completely purge glusterfs-server|client|common from the machine? (because I did that and reinstalled gluster-server)
08:34 JoeJulian Sure. The state files in /var/lib/glusterd are not part of the package.
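(A minimal sketch of the clean-up JoeJulian describes; the volume name "rogue-vol" is a placeholder, and the same stale directory may need removing on every server in the pool.)

    # stop the management daemon before touching its state files
    service glusterd stop
    # each volume's on-disk definition lives under /var/lib/glusterd/vols/<volname>;
    # removing the stale directory makes the phantom volume disappear from 'gluster volume info'
    rm -rf /var/lib/glusterd/vols/rogue-vol
    service glusterd start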
08:40 dobber joined #gluster
08:40 bulde joined #gluster
08:40 ramkrsna joined #gluster
08:40 ramkrsna joined #gluster
08:46 duerF joined #gluster
08:47 andreask joined #gluster
08:57 gbrand_ joined #gluster
09:00 sgowda joined #gluster
09:06 vpshastry1 joined #gluster
09:13 mohankumar joined #gluster
09:27 hagarth joined #gluster
09:33 bauruine joined #gluster
09:37 cauyrt joined #gluster
10:01 bulde joined #gluster
10:02 manik joined #gluster
10:05 mooperd joined #gluster
10:13 sripathi joined #gluster
10:17 bulde1 joined #gluster
10:23 inodb_ joined #gluster
10:29 raghu joined #gluster
10:30 w3lly joined #gluster
10:34 H__ joined #gluster
10:39 ricky-ticky joined #gluster
10:41 ricky-ticky Hi, people. Can someone help me with gluster volume remove-brick problem?
10:41 lh joined #gluster
10:50 sahina joined #gluster
10:52 raven-np joined #gluster
11:00 andrei__ joined #gluster
11:06 rastar joined #gluster
11:06 rastar1 joined #gluster
11:12 hagarth joined #gluster
11:21 bulde joined #gluster
11:30 RicardoSSP joined #gluster
11:30 RicardoSSP joined #gluster
11:36 vpshastry joined #gluster
11:40 raven-np joined #gluster
11:41 dustint joined #gluster
11:55 vpshastry joined #gluster
12:15 Alpinist joined #gluster
12:18 andreask joined #gluster
12:20 cw joined #gluster
12:23 plarsen joined #gluster
12:25 bauruine joined #gluster
12:38 redsolar joined #gluster
12:45 inodb_ joined #gluster
12:49 gbrand__ joined #gluster
12:50 ngoswami joined #gluster
13:05 andrei__ hi guys. does anyone know when 3.4 glusterfs is out?
13:24 ctrianta joined #gluster
13:29 raven-np joined #gluster
13:32 edward1 joined #gluster
13:36 aliguori joined #gluster
13:41 shylesh joined #gluster
13:48 plarsen joined #gluster
13:50 lala joined #gluster
13:59 rcheleguini joined #gluster
14:05 hagarth1 joined #gluster
14:12 johnmark andrei__: er, when it's ready? heh
14:13 johnmark the alpha just came out, I'm betting it will be at least a month
14:16 mooperd joined #gluster
14:17 kkeithley johnmark: fyi, the new fedora and el6 rpms for 3.4.0alpha — with lvm2-devel turned on — are on d.g.o
14:18 johnmark kkeithley: w00t - you're a peach
14:18 luckybambu joined #gluster
14:26 m0zes_ joined #gluster
14:30 sripathi joined #gluster
14:30 _NiC joined #gluster
14:34 ctrianta joined #gluster
14:35 w3lly joined #gluster
14:40 manik joined #gluster
14:43 morse joined #gluster
14:44 aliguori joined #gluster
14:46 luis_alen joined #gluster
14:48 inodb_ joined #gluster
14:55 mkultras joined #gluster
14:56 mkultras good mornin
14:58 w3lly joined #gluster
14:58 manik joined #gluster
15:01 sripathi1 joined #gluster
15:02 stopbit joined #gluster
15:04 luis_alen left #gluster
15:09 aliguori joined #gluster
15:11 bugs_ joined #gluster
15:16 bashtoni joined #gluster
15:16 bashtoni Is it possible to set up a replicated gluster volume when only one of the two servers is available?
15:17 bashtoni (Obviously I'm not expecting the replication to work until the other server is there...)
15:21 meshugga joined #gluster
15:22 w3lly joined #gluster
15:23 kkeithley you set it up with the server/brick you have, add the second server/brick when it becomes available
15:26 overclk joined #gluster
15:28 andrei__ thanks guys
15:28 jbrooks joined #gluster
15:29 semiosis :O
15:31 bashtoni kkeithley: But I can't set a replica level of less than 2
15:31 kkeithley right, you don't set replica 2 until you add the server/brick
15:32 bashtoni Well that was easy then ;)
15:32 bashtoni Thought I had to specify replica level
15:32 bashtoni Thanks
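(A sketch of the sequence kkeithley describes, with hypothetical host and brick paths; the volume starts life as a plain single-brick volume and only becomes replicated when the second brick is added.)

    # today, with only one server available
    gluster volume create myvol server1:/export/brick1
    gluster volume start myvol
    # later, when the second server comes online, raise the replica count while adding its brick
    gluster volume add-brick myvol replica 2 server2:/export/brick1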
15:33 aliguori joined #gluster
15:33 aliguori joined #gluster
15:41 clag_ joined #gluster
15:45 daMaestro joined #gluster
15:46 abyss^_ I have a strange question: gluster supports nfs, and when I use the nfs (gluster) connection the r/w operations are faster than with the native client... So why should I use the native client if nfs (gluster) gives me the same stuff (lb, failover etc) but works faster? ;)
15:53 sjoeboo joined #gluster
15:55 ekuric joined #gluster
15:55 mynameisbruce_ anybody know why i can't use sanlock with glusterfs shared storage?
15:55 plarsen joined #gluster
15:56 mynameisbruce_ i want to implement libvirt sanlock setup to ensure that a vm is only started on one machine
15:56 mynameisbruce_ but every time i start libvirt with a lockfile on glusterfs shared storage i get an error
15:58 mynameisbruce_ 2013-02-13 15:57:46.342+0000: 2811: error : virLockManagerSanlockSetupLockspace:277 : Unable to query sector size /mnt/distreplvol/sanlock/__LIBVIRT__DISKS__: No such device
15:59 mynameisbruce_ it works if the directory for sanlock file is not a glusterfs volume
15:59 jclift joined #gluster
16:03 neofob joined #gluster
16:08 zetheroo joined #gluster
16:08 mynameisbruce_ glusterfs git branch 3.4
16:08 mynameisbruce_ qemu 1.3
16:08 mynameisbruce_ libvirt 1.0.2
16:08 bdperkin_gone joined #gluster
16:08 mynameisbruce_ sanlock 2.6
16:09 ndevos doesnt sanlock require a block-device?
16:11 bdperkin joined #gluster
16:11 mynameisbruce_ hmm what can i do?
16:11 mynameisbruce_ don't want to set up a drbd with gfs2 w/ a lockmanager just to run a lockmanager :D
16:12 mynameisbruce_ can i use other lockmanager?
16:15 JoeJulian abyss^_: nfs doesn't give you the "same stuff". It does not have the same fault tolerance and it has more overhead thus less throughput. It also uses the kernel FSCache so lookup results may be stale.
16:17 plarsen joined #gluster
16:22 mynameisbruce_ are there any other lock mechanism to run vms on glusterfs?
16:22 mynameisbruce_ i think sanlock is the only lockmanager supported by libvirt
16:23 mynameisbruce_ if sanlock needs a blockdevice ... i could use the bd xlator... but bd in gluster does not provide redundancy ... right?
16:25 zetheroo left #gluster
16:26 andreask sanlock only needs a shared filesystem, not a blockdevice
16:28 lala joined #gluster
16:29 mynameisbruce_ so glusterfs should work... right?
16:30 mynameisbruce_ let me try ext3/4 instead of xfs
16:31 mynameisbruce_ maybe libvirt cannot stat sector size for xfs....whatever
16:31 VSpike Can anyone help with this question? http://community.gluster.org/q/geo-replication-behaviour/ I have asked here a few times and I don't have an answer yet.
16:31 glusterbot <http://goo.gl/68R56> (at community.gluster.org)
16:34 sripathi joined #gluster
16:36 johnmark I don't think you can write to a slave
16:36 johnmark VSpike: ^^AAAAAAAABBBB^
16:36 johnmark whoa, didn't intend to write that
16:36 VSpike :)
16:38 VSpike When you say you can't... you mean bad things will happen if you do, or you actually won't be able to?
16:38 johnmark I don't think you'll be able to
16:39 VSpike Guess I could try... but I didn't want to in case I broke it hard
16:39 VSpike johnmark: if you were writing to a regular directory you clearly could write to it - but you're saying if it's a gluster volume that's a slave, glusterd will prevent you?
16:40 johnmark if you're writing to a gluster volume, I don't see any way that it will let you write to the slave
16:41 johnmark because GlusterFS directs writes to the master node
16:41 johnmark but you should try it and see what happens
16:41 johnmark in a test environment :)
16:41 VSpike I was going to say ... will you help me fix it if it goes horribly wrong? :)
16:42 ladd joined #gluster
16:42 gbrand__ joined #gluster
16:44 balunasj joined #gluster
16:55 luckybambu joined #gluster
16:55 luis_alen joined #gluster
16:56 satheesh joined #gluster
16:59 luis_alen Hello, guys. I've a replicated volume set on amazon ec2. The data in there is only static content (jpg, gifs, htmls, and so on) of a web application cluster. I see a high cpu load on the gluster client but not on the server. Gluster mount process takes up 40% of cpu time and a lot of memory. Is this normal? When I designed this architecture I expected the high cpu load to hit the server and not the client…
17:00 luis_alen *I have a replicated volume...
17:01 VSpike johnmark: it certainly lets you write to it. I went to one of the hosts on the slave cluster and did mount -t glusterfs localhost:/gv0 tmp
17:01 zaitcev joined #gluster
17:01 VSpike johnmark: then i just did cd temp; touch foo
17:01 VSpike s/temp/tmp
17:02 VSpike luis_alen: I see the same. http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/
17:02 glusterbot <http://goo.gl/uDFgg> (at joejulian.name)
17:03 VSpike luis_alen: might be useful. It seems to be stat-ing files that creates the load on the client.
17:03 VSpike AFAICT
17:04 VSpike johnmark: master geo-replication status still shows OK, and the file is still there
17:06 satheesh1 joined #gluster
17:06 luis_alen VSpike: hmmmm. The php app itself is not running on top of gluster. Also, all session data is stored on a database. The only content that relies on gluster this web server serves is static content… jpg, gif, css, and so on...
17:06 VSpike johnmark: it looks to me like changes on the master are pushed to the slave asap. changes on the slave are not detected or overwritten, but any change to the master will overwrite a change on the slave
17:07 VSpike luis_alen: does the webserver maybe stat that when the client asks for cache info?
17:07 VSpike If you can set longer expiry times it might help
17:08 andrei__ joined #gluster
17:09 VSpike johnmark: removing a file on master will remove it on slave, for example. but removing a file on slave will not cause it to be replaced until master is modified
17:09 luis_alen VSpike: yeah, the webserver probably does that. Expiry times are long enough, I think. More than 2 weeks...
17:10 glusterbot New news from newglusterbugs: [Bug 910836] peer probe over ipv6 on ipv4/ipv6 network fails. <http://goo.gl/loI05>
17:12 VSpike In my scenario, the gluster also holds the files for a site, and I want to use geo-replication to update the backup location. I will make the webservers at the backup site mount the slave volume RO normally...
17:12 VSpike if the primary site goes offline, I'll remount that RW and then the backup site holds the current state. Question is, how do I stop the primary site overwriting it when it comes back online?
17:13 VSpike IOW, can I suspend geo-replication from the slave side somehow?
17:13 VSpike Disable the ssh login for the user perhaps
17:14 johnmark VSpike: great question. Hopefully someone here can answer
17:14 VSpike :)
17:14 VSpike geo-replication seems like a not very well known area
17:15 luckybambu Georep makes me want to cry sometime
17:15 luckybambu I don't know about suspending georep
17:15 luckybambu I have a hard enough time getting it to run all the time, lol
17:15 luckybambu Functionally if you disable SSH for the user, it will break georep.
17:16 VSpike I'd think mv /home/geouser/.ssh/authorized_keys /home/geouser/.ssh/keys_saved will do it
17:16 luckybambu VSpike: You're serving the static files out of gluster though?
17:16 VSpike Yeah. Then when I want to move back, I'd have to rsync the data back again, then start georep
17:17 VSpike luckybambu: I have the whole of WP in there at the moment, although I plan to move most of it out again soon
17:17 VSpike I think only the uploads directory actually needs to be there
17:17 luckybambu Georep should move them both directions, also.
17:17 VSpike Georep is one-way at the moment, afaict
17:18 VSpike luis_alen: a CDN might be the easiest solution to your problem :)
17:19 luckybambu Ah, mistaken, it is still one way
17:19 luckybambu Checked with a coworker
17:19 luckybambu Serving sites out of Gluster you may be sad though...
17:19 VSpike I have a *lot* of caching :)
17:20 luckybambu TBH I would use rdist unless you have a giant site
17:21 luckybambu We use git to keep our codebases and client configs synced everywhere, and do basically file archival on Gluster
17:21 VSpike It's not ideal I agree
17:21 VSpike Unfortunately, Wordpress is stupid
17:21 luckybambu What do you mean?
17:21 luis_alen VSpike: yeah…I think I will need to put gluster aside on that
17:22 VSpike It wants to treat its whole installation as writeable, and the filesystem contains part of the state - the DB the other part
17:22 luckybambu Ah
17:23 VSpike But now I've studied it a bit more, I realise if I lock down a lot of things anyway, then the only part that really, really needs to be writeable is wp-content/uploads
17:23 luckybambu I would mount wp-content/uploads on Gluster, but not the rest
17:23 VSpike And I get the benefit of restricting what site admins can do
17:23 inodb_ joined #gluster
17:24 VSpike That's my plan - the only joker is that you never really know where plugins might want to write. But I've observed the current set and none of them write anywhere on the FS AFAICT
17:24 luckybambu Likely due to my usage (hundreds of millions of files and a rack full of Gluster), but I would never trust it to be fast enough to serve a site ;)
17:25 VSpike And by making everything RO, I effectively prevent site admins from adding new plugins, meaning I get to test stuff on beta first and they have to come through me for a new plugin.. so that's win/win
17:25 luckybambu VSpike: why aren't you just setting the permissions on their accounts to not add plugins?
17:26 VSpike luckybambu: if they are a Wordpress "Administrator", they can do anything, like edit the theme, add or remove plugins, etc.
17:26 luckybambu So tell them not to?
17:26 VSpike because the permissions are not granular enough, a couple of people need to be admins to do what they need to do
17:26 VSpike luckybambu: haha... these are sales and marketing people. They can't help themselves :)
17:27 luckybambu ah
17:27 VSpike Seriously, I'm sure they won't if I tell them not to. But if they also *can't* ... well, that has to be good, right?
17:27 VSpike Plus, better performance.. so yay
17:27 VSpike What's the benefit of rdist over rsync?
17:28 amccloud_ joined #gluster
17:28 VSpike I was going to use rsync to keep all the static files in step once I move them off gluster ... never come across rdist before
17:28 luckybambu Rdist is a program to maintain identical copies of files over multiple hosts.
17:29 luckybambu I'm assuming if you have Glutser you have more than 2 web servers also?
17:30 VSpike Yes.
17:31 VSpike 1 more
17:32 hagarth joined #gluster
17:32 johnmark luckybambu: considering that PHP sites running on GlusterFS have had performance issues, I would follow the advice of "buyer beware"
17:33 steinex joined #gluster
17:33 luckybambu johnmark: Yep, that's what I was saying a little earlier to him
17:33 johnmark luckybambu: that's sound advice
17:33 johnmark however, mounting over NFS can give better performance due to more aggressive caching
17:34 VSpike I made it work acceptably well, but it needed a ton of cache
17:34 luckybambu Hell, our code all runs on 15k sas drives locally and we have io issues sometimes
17:34 johnmark there's also the concept of "negative lookup caching" which I'm trying to remember if that went into the recent alpha
17:34 VSpike I want to look at using NFS, but in order to get the same failover I'd need to use ucarp too, so I'd want to prove it in a testbed and make sure I know how to set it up right
17:34 johnmark but it makes PHP perf much much better
17:34 johnmark VSpike: I hear you
17:34 luckybambu Your OS should also allocate memory for file caching anyway
17:35 steinex how do I get it so you can actually see something in here
17:35 luckybambu TBH if you're running code on Gluster you should really reconsider your architecture
17:35 VSpike luckybambu: That's not going to help with stat operations though, surely?
17:36 * johnmark wishes he could remember how to apply negative lookup caching... looking for jdarcy's blog
17:36 luckybambu Not too sure exactly how it works, all I know is it happens
17:37 johnmark ah - https://github.com/jdarcy/glupy
17:37 glusterbot Title: jdarcy/glupy · GitHub (at github.com)
17:37 johnmark and there's a negative lookup cache example in python in that directory
17:37 luckybambu Basing your codebase on caching though is like building a foundation out of dirt… works great until it gets wet
17:38 andrei__ joined #gluster
17:38 luckybambu You will have a bad time if your cache stops working
17:38 johnmark haha - sounds like you're speaking from experience
17:38 luckybambu Our code used to be served over NFS
17:38 johnmark aha... until it wasn't? heh
17:38 luckybambu My cohort and I are currently in the process of fixing
17:39 luckybambu Brought down ~30 machines once when we had an NFS issue and all clients complaining
17:39 johnmark OUCH!
17:39 luckybambu Caching issues, all sorts of things
17:39 johnmark that sounds... painful
17:40 VSpike luckybambu: that's what made me nervous - people said NFS on Gluster was more problematic than the native client. Add in the complexity of ucarp etc and it seems to be asking for problems
17:40 VSpike I'd really want to be sure I'd proven it before deploying it
17:40 luckybambu Yes… TBH Gluster is awesome for some things, but serving code not really.
17:41 VSpike I had to deploy this system without really having a chance to prototype or prove it, so it was just a case of "make it work, somehow". But I learned a lot in the process ... and I think I got away with it
17:41 luckybambu Same with any network based fs
17:41 VSpike But generally, thank god for memcache. And APC. And Cloudfront.
17:43 JoeJulian Sorry, VSpike, I gave up on responding to questions on that helpshift site. I'm a bit OCD about language so when a question is actually a statement, it just gets under my skin. Since you're not (by a long shot) the only person who puts 2 words in as a "question" I just gave up on checking in with that tool.
17:43 jiffe98 anyone have trouble automounting a local gluster export on boot?  I have '127.0.0.1:VMS   /shared         glusterfs       defaults,_netdev,backupvolfile-server=10.0.189.229    0       0' setup in fstab, it boots fine, it just doesn't mount this entry
17:43 jiffe98 this is on debian squeeze
17:43 luckybambu jiffe98: does your system recognize _netdev?
17:43 JoeJulian VSpike: The answer, afaict, is that the file will remain changed until it's changed on the master, at which time the slave will be overwritten.
17:44 luckybambu _netdev doesn't seem to be working with the latest version of centos 6 and gluster 3.3 now
17:44 jiffe98 luckybambu: it appears in the mount manual
17:44 x4rlos jiffe98: Known issue.
17:44 sjoeboo_ joined #gluster
17:44 jiffe98 hmm, I see
17:44 x4rlos jiffe98: you can 'fix' it.
17:45 VSpike JoeJulian: hah yes. I see what you mean. Sorry. Writing a good title/summary for a question is always the hardest part
17:45 VSpike JoeJulian: that confirms what I found by experimentation, which is good :)
17:45 x4rlos Does anyone know how well gluster plays with rsync snapshots?
17:46 x4rlos in terms of the sym links and such.
17:46 luckybambu x4rlos: expand?
17:46 x4rlos If we had two fileservers at site A and site B - and they used snapshots for daily weekly and monthly backups - how does it handle the sym links?
17:47 VSpike JoeJulian: What is the best way to suspend geo-replication from the slave side? I was thinking i could rename authorized_keys for the user account used for geo-rep.
17:47 luckybambu x4rlos: could have potential issues depending on your backing filesystem but i haven't run into symlink problems
17:47 VSpike x4rlos: do you mean rsnapshot?
17:48 x4rlos luckybambu: you use it in this way?
17:48 x4rlos VSpike: yes :-
17:48 x4rlos )
17:48 luckybambu i don't use it in this way, but i have symlinks on my gluster.
17:48 luckybambu and symlinks to symlinks
17:48 Ryan_Lane joined #gluster
17:48 VSpike Those are hardlinks, I think
17:48 JoeJulian VSpike, johnmark: The negative lookup caching is handled with the fuse patches. I'm not sure what kernel has those.
17:48 x4rlos VSpike: hmm. Not good.
17:49 johnmark JoeJulian: ah, ok
17:49 johnmark didn't realize that required a patched FUSE
17:50 JoeJulian jiffe98: Are you running the most recent build from the ,,(ppa)? semiosis found a bug with that in the last week.
17:50 glusterbot jiffe98: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
17:50 semiosis JoeJulian: ?!?!
17:50 x4rlos haha.
17:50 johnmark doh
17:50 w3lly1 joined #gluster
17:50 VSpike x4rlos: yes, definitely hardlinks. Not sure how well gluster or fuse would cope with those
17:51 x4rlos VSpike: Badly i think :-)
17:51 semiosis oh wait that was to jiffe98, yeah there was an issue with mounting from localhost at boot in fstab, some of the upstart stuff was not being installed by the package
17:51 luckybambu works fine with hardlinks
17:51 luckybambu i think in 3.3 they even self heal
17:51 jbrooks joined #gluster
17:51 VSpike There ya go
17:51 hagarth joined #gluster
17:51 x4rlos luckybambu: Testing to be done me thinks.
17:51 JoeJulian VSpike: renaming authorized_keys seems like a reasonable solution. Or iptables, or moving gsyncd...
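(A sketch of the slave-side options JoeJulian lists; the user, paths and master address are placeholders carried over from VSpike's earlier example.)

    # option 1: take away the key the master's gsyncd uses to log in
    mv /home/geouser/.ssh/authorized_keys /home/geouser/.ssh/keys_saved
    # option 2: block the master at the firewall (192.0.2.10 is a hypothetical master IP)
    iptables -A INPUT -s 192.0.2.10 -p tcp --dport 22 -j REJECT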
17:52 jiffe98 debian use upstart?
17:52 awickham joined #gluster
17:52 JoeJulian x4rlos: When using rsync /to/ a gluster volume, always use --inplace. Backing up /from/ the volume, there's no such need.
17:53 JoeJulian semiosis: The not-mounting-at-boot thing.
17:53 JoeJulian x4rlos: rsync handles hardlinks just fine.
17:54 JoeJulian And now I'm finally caught up... ;)
17:54 semiosis jiffe98: debian uses upstart but in a different way than ubuntu, not all upstarts are equal, ubuntu debian & redhat all use upstart but they're difference
17:54 semiosis different*
17:54 x4rlos JoeJulian: Cheers.
17:55 jiffe98 semiosis: gotcha
17:56 x4rlos jiffe98: fyi (and someone may shoot me for suggesting) I fixed this on wheezy. Think i removed the start mountnfs option.
17:56 x4rlos Cant quite remember what i did now though.
17:56 JoeJulian fedora, I think 14 or 15, used upstart, but only for one release. EL < 7 uses sysvinit
17:58 JoeJulian RHEL 7 is, to the best of my knowledge, going to use systemd
17:58 x4rlos rc.local mount -a prob a workaround?
17:58 manik joined #gluster
17:59 Mo____ joined #gluster
17:59 luckybambu HIGH
17:59 luckybambu SIGH*
17:59 luckybambu RHEL 7 already. Didn't even realize.
18:00 JoeJulian x4rlos: http://irclog.perlgeek.de/gluster/2013-01-07#i_6304711
18:00 glusterbot <http://goo.gl/fzxn5> (at irclog.perlgeek.de)
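(For anyone hitting the same boot-time mount problem, the workaround x4rlos floats is a late re-mount from rc.local; a sketch only, not the packaging fix discussed in the linked log.)

    # /etc/rc.local: retry any glusterfs fstab entries that were skipped because
    # the network or glusterd was not yet up when the boot-time mounts ran
    mount -a -t glusterfs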
18:01 gbrand__ joined #gluster
18:01 x4rlos JoeJulian: Thanks! :-) Off home ready for the footy, will have a proper gander tomorrow, and set up a testbed :-)
18:01 x4rlos gn all!
18:06 flrichar joined #gluster
18:09 ctria joined #gluster
18:11 bauruine joined #gluster
18:21 wushudoin joined #gluster
18:22 WildPikachu joined #gluster
18:24 abyss^_ JoeJulian: ok. Thank you for the explanation :)
18:25 JoeJulian You're welcome.
18:31 maek joined #gluster
18:32 maek If I have 2 boxes hosting the actual glusterfs and I want to have 10 or so clients connect is the best practice to load balance the connection to those 2 gluster servers and make my mount points for the client point to a vip?
18:33 elyograg maek: if you're using the FUSE client, you don't have to do that.  If you are using gluster's NFS integration, then you would have to provide a vip.
18:35 maek elyograg: i think Im using fuse. how does that prevent me from having to do that if I point to 10.10.10.10:/gluster-export in my fstab?
18:35 maek elyograg: i could be doing it wrong also
18:35 maek the 2 "servers" will also mount the gluster export as clients; I have them mounting from localhost
18:35 maek but the 10 clients is where im stuck
18:36 JoeJulian @mount
18:36 glusterbot JoeJulian: I do not know about 'mount', but I do know about these similar topics: 'If the mount server goes down will the cluster still be accessible?', 'mount server'
18:36 JoeJulian @mount server
18:36 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds, or (#2) Learn more about the role played by the server specified on the mount command here: http://goo.gl/0EB1u
18:36 elyograg maek: the 10.10.10.10 is used *only* at mount time.  if it happened to be down when you rebooted or mounted the volume, then it would fail, but once it's mounted, it talks to all servers in that volume.  if you're worried about mount time, then you can use round-robin DNS.
18:37 maek when using fuse?
18:37 elyograg yes.
18:37 maek great thank you!
18:37 JoeJulian ~mount server | maek
18:37 maek 1 more question
18:37 glusterbot maek: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds, or (#2) Learn more about the role played by the server specified on the mount command here: http://goo.gl/0EB1u
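(A sketch of what that looks like in an fstab, assuming a hypothetical round-robin DNS name gluster.example.com that resolves to both servers; the named host is only contacted to fetch the volume definition, after which the client talks to every brick directly.)

    # /etc/fstab on each client
    gluster.example.com:/gluster-export  /mnt/gluster  glusterfs  defaults,_netdev  0  0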
18:38 maek if all 12 of thsee boxes NEED the gluster data accessible to work should I make them all servers?
18:38 wushudoin left #gluster
18:38 maek so in the worst case they can look to themselves for the data?
18:38 maek or am I missing the point if I go that route?
18:38 maek JoeJulian: thanks also :)
18:39 JoeJulian maek: http://joejulian.name/blog/glusterfs-replication-dos-and-donts/
18:39 glusterbot <http://goo.gl/B8xEB> (at joejulian.name)
18:39 disarone joined #gluster
18:40 elyograg maek: that would be one way to skin the cat.  to provide nfs access, i've got two hosts in addition to my brick servers that are gluster peers.  those servers have a VIP between them.
18:40 elyograg none of it is production yet, but it seems to work in tests.
18:41 JoeJulian Some people have reported lockup issues with mounting nfs from a local server, though, so be aware of that possibility and test, test, test.
18:41 maek so the gluster peers are clients of the bricks and then export out the gluster fs via nfs and are viped?
18:41 maek localhost:/chef-checksum        /mnt/chef-checksum      glusterfs       defaults 0 0
18:41 maek I assume that means im using fuse?
18:45 JoeJulian yes
18:46 H__ JoeJulian:  I've been trying to contact you about replace-brick. Sorry to spam about it, but did you receive my message ?
18:46 maek here is what I don't understand. Is a replica the data that lives on a brick, or is it an N-way copy of that data?
18:50 JoeJulian H__: I never had the need to try replace-brick until 3.3.0 (which didn't work right). Wasn't successful until 3.3.1.
18:51 H__ ok, that's all I need, thanks :)
18:51 andreask joined #gluster
18:51 JoeJulian maek: A replica is a whole file that's stored on the number of bricks specified when you create your volume using the "replica N" keyword.
18:51 JoeJulian @replica
18:51 glusterbot JoeJulian: I do not know about 'replica', but I do know about these similar topics: 'check replication', 'geo-replication'
18:51 JoeJulian @brick order
18:51 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
18:52 H__ is there a website that lists all glusterbot's knowledge ?
18:52 luckybambu the gluster admin manual is pretty good
18:52 JoeJulian No, wouldn't be a bad idea though.
18:53 H__ does it then perhaps offer its list in a private chat ?
18:54 maek ah so it's not 2 copies on 1 host
18:55 maek it's 2 copies spread around, in this case across groups of 2 hosts/bricks?
18:55 bennyturns joined #gluster
18:59 andrei__ joined #gluster
19:03 xavih_ joined #gluster
19:03 xavih_ left #gluster
19:09 disarone joined #gluster
19:17 JoeJulian H__: You can query glusterbot directly. "factoids search *" should list all of them.
19:18 JoeJulian maek: Assuming you don't list two bricks on the same host as a replica when you define your volume.
19:18 gbrand_ joined #gluster
19:25 H__ JoeJulian: what's the reason for the --inplace advice when using rsync to a glusterfs ? (I see it causes errors in the log, but why do these exist in the first place ?)
19:25 jclift_ joined #gluster
19:25 amccloud__ joined #gluster
19:27 manik joined #gluster
19:28 JoeJulian Efficiency. rsync creates a temp file, .fubar.A4af, and updates that file. Once complete, it moves it to fubar. Using the hash algorithm, the tempfile hashes out to one DHT subvolume, while the final file hashes to another. gluster doesn't move the data if a file is renamed, but since the hash now points to the wrong brick, it creates a link pointer to the brick that actually has the data.
19:28 JoeJulian H__: See also http://joejulian.name/blog​/dht-misses-are-expensive/
19:28 glusterbot <http://goo.gl/A3mCk> (at joejulian.name)
19:29 semiosis @learn rsync as normally rsync creates a temp file, .fubar.A4af, and updates that file. Once complete, it moves it to fubar. Using the hash algorithm, the tempfile hashes out to one DHT subvolume, while the final file hashes to another. gluster doesn't move the data if a file is renamed, but since the hash now points to the wrong brick, it creates a link pointer to the brick that actually has the data.  to avoid this use the rsync --inplace option
19:29 glusterbot semiosis: The operation succeeded.
19:29 semiosis very well put JoeJulian
19:29 JoeJulian :)
19:29 semiosis @rsync
19:29 glusterbot semiosis: normally rsync creates a temp file, .fubar.A4af, and updates that file. Once complete, it moves it to fubar. Using the hash algorithm, the tempfile hashes out to one DHT subvolume, while the final file hashes to another. gluster doesn't move the data if a file is renamed, but since the hash now points to the wrong brick, it creates a link pointer to the brick that actually has the
19:29 glusterbot semiosis: data. to avoid this use the rsync --inplace option
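(In rsync terms the fix is just the flag; a sketch with placeholder paths.)

    # --inplace updates the destination file directly instead of writing a dot-prefixed
    # temp file and renaming it, so the name, and therefore its DHT hash, never changes
    rsync -av --inplace /data/source/ /mnt/glustervol/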
19:30 H__ "gluster doesn't move the data if a file is renamed", ok, got that
19:30 semiosis renames aren't even atomic iirc
19:30 JoeJulian Yeah, that would be a horribly inefficient use of bandwidth.
19:30 H__ "but since the hash now points to the wrong brick" ? eh ? Doesn't that mean that all renames go wrong ?
19:31 JoeJulian Not necessarily, but generally yes.
19:31 JoeJulian If you rename a lot, you can optimize by doing a rebalance.
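(The rebalance JoeJulian mentions is run once, from any server in the pool; "myvol" is a placeholder.)

    gluster volume rebalance myvol start
    # check progress until it reports completed
    gluster volume rebalance myvol status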
19:31 H__ soo, we shouldn't be doing any mv's on a glusterfs ?!
19:31 nueces joined #gluster
19:31 JoeJulian If you can avoid it, sure.
19:31 semiosis everything still works, just not at optimal efficiency
19:32 JoeJulian It works, it just adds an extra network round trip.
19:32 amccloud_ joined #gluster
19:32 H__ yet I see errors in the brick logs on this
19:33 JoeJulian Do they have an "E"?
19:33 semiosis are they really *errors* ?
19:33 H__ yes
19:33 JoeJulian fpaste
19:33 H__ i assume " E " means real error
19:33 semiosis usually
19:33 semiosis but not always
19:33 H__ let me find some, i'm at home now, 1 moment
19:33 semiosis are you able to access the files despite those messages?  or does access really fail?
19:35 H__ here's one : http://fpaste.org/O54h/
19:35 glusterbot Title: Viewing Paste #277212 (at fpaste.org)
19:36 H__ semiosis: well, accessing it won't go as it's one of these rsync temp files while copying
19:36 JoeJulian Yeah, that's definitely an rsync rename.
19:37 amccloud__ joined #gluster
19:38 JoeJulian It's probably a bug. It would be nice if you would file a bug report on that. It also seems to just be spurious noise.
19:38 glusterbot http://goo.gl/UUuCq
19:38 JoeJulian As in: Of course you can't setattr on that, it's not there anymore. Duh!
19:39 H__ so gluster needs an extra check on all bricks to figure out where the renamed file is, after that all is peachy. do i get that right ?
19:39 JoeJulian yes
19:39 H__ and the setattr is from gluster itself, right ?
19:39 JoeJulian probably
19:39 JoeJulian Can't really tell without a trace log.
19:40 H__ well, i did not ask rsync to sync xattr's, so it must be
19:40 H__ that's a fine race indeed. ok, now i understand what's going on
19:43 H__ when upgrading from 3.2.5 to 3.3.1, all clients and servers need to be upgraded at the same time, right ?
19:45 H__ I'll check if it still happens at 3.3.1 and then file a bug. I assume you don't want 3.2.5 bugs :-P
19:45 glusterbot http://goo.gl/UUuCq
19:46 semiosis ,,(3.3 upgrade notes)
19:46 glusterbot http://goo.gl/qOiO7
19:47 H__ thanks. point 2 answers that. all must go together.
19:48 H__ any time estimates on "glusterd --xlator-option *.upgrade=on -N" ? That might be a loooong running command ?
19:51 an joined #gluster
19:51 JoeJulian The upgrade happens within a second. glusterd remains running as the management service.
19:52 H__ as in : 1M directories, 15M files . ok so no delays there. that's good news !
19:56 H__ can one downgrade from 3.3.1 to 3.2.5 in case of unwanted side effects ?
19:56 JoeJulian yes
19:57 erik49_ joined #gluster
19:59 H__ cool. I'm going to try to automate the upgrade at at least 2 test setups before I touch production
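(Roughly the per-server shape of the upgrade being discussed; a sketch only, the linked 3.3 upgrade notes are authoritative, and remember that clients and servers move together. Service and package commands are whatever the distro uses.)

    service glusterd stop
    # install the 3.3.1 packages (apt/yum as appropriate)
    # one-shot run that regenerates the volume files for the new version
    glusterd --xlator-option *.upgrade=on -N
    service glusterd start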
20:05 sjoeboo_ joined #gluster
20:15 xavih_ joined #gluster
20:15 xavih_ left #gluster
20:25 amccloud_ joined #gluster
20:28 amccloud_ joined #gluster
20:33 sjoeboo_ joined #gluster
20:37 Staples84 joined #gluster
20:45 gbrand_ joined #gluster
20:56 thtanner joined #gluster
21:06 andreask joined #gluster
21:06 badone joined #gluster
21:07 balunasj joined #gluster
21:21 dustint_ joined #gluster
21:30 fidevo joined #gluster
21:31 ode2k joined #gluster
21:32 ode2k Does anyone know if there has been any progress on - DHT: readdirp goes into a infinite loop with ext4  ?
21:32 ode2k Redhat bug - https://bugzilla.redhat.com/show_bug.cgi?id=838784
21:32 glusterbot <http://goo.gl/CO1VZ> (at bugzilla.redhat.com)
21:32 glusterbot Bug 838784: high, high, ---, sgowda, ASSIGNED , DHT: readdirp goes into a infinite loop with ext4
21:33 ode2k I'm using CentOS 6.3 64-bit with kernel 2.6.32-279.el6.x86_64 & Gluster 3.3.1-1 ...
22:00 ode2k Is there a way to install the proposed 'patch' without it being in a final release? I'd prefer to not have to downgrade my kernel or gluster versions
22:12 _br_ joined #gluster
22:16 _br_ joined #gluster
22:20 luis_alen joined #gluster
22:25 cauyrt joined #gluster
22:27 mooperd joined #gluster
22:31 JoeJulian Yes, but from what I've been told it's not complete and doesn't actually fix the problem.
22:31 JoeJulian ode2k: The bug report is current, so that's where that is (much to my own personal frustration).
22:38 ode2k JoeJulian: Would it be better to downgrade my kernel, or downgrade gluster?
22:39 elyograg i'd think most people would say kernel there... because hopefully you're not sharing machines with another service that wants the newer kernel.
22:39 JoeJulian There's no version of gluster that would ever have worked with the changes that were done in ext4.
22:40 ode2k OK Thanks.. next question, where can I find the kernel-2.6.32-267.el6 for CentOS 6.3 or do I need to compile it from src?
22:41 ode2k It's not available in any of the repos that I've searched through
22:41 jag3773 joined #gluster
22:45 elyograg I don't see that specific version anywhere, but there's http://vault.centos.org/6.2/updates/x86_64/Packages/kernel-2.6.32-220.23.1.el6.x86_64.rpm
22:45 glusterbot <http://goo.gl/jsCsj> (at vault.centos.org)
22:46 JoeJulian Doesn't look like 267 was ever actually pushed to a repo.
22:47 balunasj joined #gluster
22:48 ode2k Hmm.. ok. Do you think going all the way back to the 220 for 6.2 would cause any major issues? I'm only running glusterfs and Zoneminder (video surveillance capture system) on it, besides the necessary linux services.
22:49 elyograg that's impossible for anyone but you to say for sure, but I would not expect any problems, especially if you were running an earlier kernel version with no problems.
22:50 JoeJulian I'd be very surprised if there was a problem.
22:50 ode2k OK, maybe I'll try it out tomorrow... Thanks for your help and have a great night!
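(A sketch of dropping back to the vault kernel elyograg points to; rpm can install straight from the URL, and the older kernel sits alongside the current one until it is chosen at boot.)

    rpm -ivh http://vault.centos.org/6.2/updates/x86_64/Packages/kernel-2.6.32-220.23.1.el6.x86_64.rpm
    # then pick 2.6.32-220.23.1.el6 in the grub menu (or set it as default) and reboot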
22:52 elyograg does anyone know when a keystone-based gluster-swift might emerge?  management is keen to see us able to access storage via an S3 interface.
22:55 JoeJulian http://www.mail-archive.com/gluster-devel@nongnu.org/msg08925.html
22:55 glusterbot <http://goo.gl/u39LK> (at www.mail-archive.com)
22:59 luis_alen left #gluster
23:00 cauyrt joined #gluster
23:03 inodb_ joined #gluster
23:06 NeonLicht joined #gluster
23:06 NeonLicht Hello.
23:06 glusterbot NeonLicht: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
23:25 mkultras joined #gluster
23:25 maek left #gluster
23:33 cw joined #gluster
23:36 ransoho joined #gluster
23:38 inodb_ left #gluster
23:47 cauyrt joined #gluster
23:53 cauyrt left #gluster
23:56 _br_ joined #gluster
23:57 _br_ joined #gluster
23:58 _br_ joined #gluster
