
IRC log for #gluster, 2013-09-27


All times shown according to UTC.

Time Nick Message
00:16 purpleidea ~puppet | quique
00:16 glusterbot quique: (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
00:16 purpleidea quique: all the states are in the gluster source and i documented them in my puppet module too
00:17 purpleidea https://github.com/purpleidea/puppet-gluster/blob/master/manifests/init.pp#L26
00:17 glusterbot <http://goo.gl/xxkBBD> (at github.com)
00:19 chirino joined #gluster
00:28 * semiosis gets busy
00:28 dtyarnell joined #gluster
00:32 purpleidea semiosis: where i'm from that means you're having sex
00:33 semiosis do you announce that on IRC where you're from?
00:33 semiosis :)
00:33 purpleidea and we generally don't announce that on irc. usually you just go afk.
00:33 mibby Hey semiosis can I pick your brain quickly for a sec about gluster in EC2?
00:33 purpleidea hehe
00:33 semiosis then i'm either... not where you're from, or not having sex (or both)
00:33 purpleidea mibby: he's getting busy
00:33 semiosis logically speaking
00:33 mibby ;)
00:33 purpleidea mibby: generally you just ask your question...
00:34 mibby haha yeah I know just feeling a bit formal today....
00:35 semiosis mibby: sure, any time
00:36 purpleidea semiosis: btw: when you're not busy it would be really cool if you posted performance and latency data about your specific setup somewhere.
00:36 mibby gluster in EC2 I'm generally cool with (thanks for your help semiosis several weeks back). I'm in AU and will be building a gluster server in each of the 2 availability zones. I would like to have (at least) a 3rd server in another region. Do I just set that up as another replica? I know the link to, let's say, US-West will be slower than to another AZ but will it work fine?
00:37 semiosis purpleidea: i use ,,(joe's performance metric)
00:37 glusterbot purpleidea: nobody complains.
00:37 purpleidea semiosis: haha okay, but did you ever collect data?
00:38 semiosis purpleidea: not that data.  i ran tests of our application and measured how fast we could process and serve media
00:38 purpleidea semiosis: i gotta learn glusterbot better
00:38 semiosis mibby: the latency will be too much for AFR (gluster replication). maybe you should consider geo-replication
00:39 purpleidea @teach glusterbot how-do-i look in the puppet-gluster module https://github.com/purpleidea/puppet-gluster
00:39 mibby I heard that geo-replication is one way? Can you mix afr WITH GEO-REPLICATION?
00:39 mibby oops mixed my caps...
00:39 purpleidea lol
00:40 semiosis yes it is one way.  i dont think you can mix the way you want to.  what you can do is have a volume with replication (normal, AFR replication) and geo-rep that (one way) to another site
00:40 cjh left #gluster
00:41 mibby will the geo-replicated server help with quorum? I'm suspecting not
00:43 semiosis no
00:43 semiosis you can use quorum with replica 2, but it will go read-only if you lose just one server
00:48 khushildep_ joined #gluster
00:48 mibby hmmm ok... I'm trying to keep the environment RW in the event of either a single AZ failure (easy) or a single Region failure (current challenge)
00:50 purpleidea mibby: more regions, and more hosts!
00:51 semiosis you could geo-rep to another afr replicated volume, but you'll have to manually divert traffic to the other region
00:51 semiosis a hot-spare, if you will
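A rough sketch of the hot-spare arrangement semiosis describes, assuming a master volume named myvol and a placeholder slave location at remote.example.com (the exact slave URL syntax depends on the gluster release, so check the geo-replication docs for your version):

    # one-way geo-replication from the local replicated volume to the remote site
    gluster volume geo-replication myvol remote.example.com:/data/remote_dir start
    gluster volume geo-replication myvol remote.example.com:/data/remote_dir status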
00:52 mibby the routing wont be too much of a problem, I can use Route53 to handle that....
00:52 mibby purpleidea: wont the high latency break AFR across multiple regions?
00:53 semiosis problem i ran into with that was sql... using RDS and there's no way to replicate & fail over to another region
00:53 semiosis so i would have to restore a backup, which takes long
00:54 semiosis mibby: to be clear, afr will work, you just probably wont be satisfied with the performance.  but give it a try!
00:54 Technicool joined #gluster
00:55 mibby I think RDS recently released some additional replication functionality to/from external mysql instances
00:56 mibby I can sort of live with the performance, just need to be sure it will be stable and that the additional latency wont break things (corrupt files, etc, etc)
00:56 semiosis woo! i only asked them for that 18 months ago
00:56 purpleidea haha
00:56 mibby http://aws.typepad.com/aws/2013/09/migrate-mysql-data-to-amazon-rds-and-back.html
00:56 glusterbot <http://goo.gl/rfcz1o> (at aws.typepad.com)
00:57 semiosis thanks for the link i wasnt finding it
00:57 mibby np's
00:58 semiosis mibby: ah, in true AWS fashion, looks like they make it easy to get your data in... no mention of getting it out (even to another AWS region)
00:59 semiosis oh wait, here it is, at the end: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Exporting.NonRDSRepl.html
00:59 glusterbot <http://goo.gl/a1DMbg> (at docs.aws.amazon.com)
00:59 semiosis this might work
01:00 mibby so if I had 2 hosts in region1-az1, 2 hosts in region1-az2, and 2 hosts in region2-az1... if region1 went offline would I still have quorum with the 2 hosts left in region2
01:00 semiosis no
01:00 mibby :(
01:01 semiosis you could do two independent, identical clusters, in two regions, and geo-rep from one to the other
01:01 purpleidea mibby: gluster works better with *more* nodes too... semiosis you have like 30+ right?
01:03 mibby quorum needs 50% or more bricks to be available?
01:03 semiosis purpleidea: not quite
01:03 semiosis mibby: yes, though it's configurable
01:04 duerF joined #gluster
01:07 mibby but 50% is the default?
01:07 semiosis right.  see 'gluster volume set help' and look at the quorum options
01:09 mibby ok so in AU where we just have 2 AZ's I can have 2 hosts in each AZ and still safely achieve quorum and RW access?... might have to shelve the multi-region thoughts for the time being
01:09 mibby ^^ if a single AZ goes down
01:10 semiosis quorum with replica 2 (for 2 AZs) would mean you would lose write access (read-only) if one AZ went down
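For reference, the quorum settings discussed above are plain volume options ('gluster volume set help' lists them); a minimal sketch, assuming a volume named myvol:

    # quorum based on a majority of each replica set
    gluster volume set myvol cluster.quorum-type auto
    # or require a fixed number of bricks per replica set to be up
    gluster volume set myvol cluster.quorum-type fixed
    gluster volume set myvol cluster.quorum-count 2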
01:10 purpleidea mibby: https://github.com/purpleidea/puppet-gluster/blob/master/manifests/volume/property/data.pp
01:10 glusterbot <http://goo.gl/UKQeYP> (at github.com)
01:13 mibby puppet/chef is a medium term goal for our environment, not enough time to get it up in the short term unfortunately.. interesting thought though
01:13 semiosis @puppet
01:13 glusterbot semiosis: (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
01:14 semiosis fwiw, i would strongly encourage you to puppetize before going to prod in ec2
01:14 mibby whose module is better? ;)
01:15 semiosis my mttr is minutes, thanks to puppet.  most of my servers get replaced automatically by autoscaling, then puppet does all the setup
01:15 sprachgenerator joined #gluster
01:16 semiosis you could achieve that with AMIs too though, but imo thats too messy
01:16 semiosis purpleidea: does anyone use your module in prod?
01:17 semiosis i'm not really maintaining my module.  it does exactly what i need(ed in 2010) it to do, nothing else.  purpleidea's has more features.  what will work best for you only you can figure out
01:18 semiosis my module was designed to aid manual work.  it sets up the servers and software, but stops short of actually configuring gluster, that part requires a human
01:19 semiosis mostly it models all the resources, sets up monitoring & mounts for bricks, log collection (using an unpublished logstash module i wrote), and the same for clients
01:19 khushildep joined #gluster
01:20 mibby cool
01:20 purpleidea semiosis: afaict yes. and you can use my module to do the manual human stuff :P
01:21 purpleidea semiosis: btw what happened to those videos?
01:21 semiosis mibby: so, try purpleidea's module first, it will probably be better for you than mine :)
01:22 mibby is my understanding messed up here... will 'replica 4' work across 4 hosts - 2 hosts in each of 2 AZ's?
01:22 semiosis purpleidea: videos?
01:22 jag3773 joined #gluster
01:22 purpleidea semiosis: conference videos
01:22 semiosis we need to bug johnmark.  i dont have contact info for the guy who ran the camera.  i think his name is jeff...
01:23 semiosis mibby: you could do that, but probably dont want to.
01:23 semiosis mibby: you probably want a distributed-replicated volume, with distribution between the servers in the same AZ, and repl between the AZs
01:24 semiosis call the AZs left & right... left-a replicates with right-a, left-b with right-b... files distributed evenly between a & b, and a whole copy in each AZ
01:25 semiosis can do this with 4 bricks and just two servers... you can have many bricks on a server, and i recommend this
01:26 semiosis for example, if you need 5 TB, and files are only a few MB each, i would do 10 bricks per server of 500GB each, or something like that
01:26 semiosis you'll need to load test to see how many bricks can saturate a server, and then how many servers (with that number of bricks) to get the aggregate perf you need
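A sketch of the layout semiosis describes, assuming hypothetical hosts named left and right (one per AZ) with two bricks each; bricks are listed so that each consecutive pair forms a replica set spanning the AZs:

    gluster volume create myvol replica 2 \
        left:/bricks/a right:/bricks/a \
        left:/bricks/b right:/bricks/b
    gluster volume start myvol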
01:27 mibby assuming i can't use the gluster client, what happens if left-a goes down?
01:27 semiosis fwiw, in 2010 i found no better performance from instances bigger than large
01:27 mibby i was thinking of using m1.mediums and seeing how i go
01:27 semiosis yeah, that size didnt exist in 2010 :)
01:27 semiosis try it
01:28 semiosis using nfs client if the server goes down your client is dead, need to remount with another server
01:31 mibby my use case is Windows IIS servers connecting to gluster (maybe using Route53 for DNS round robin and dead host detection) to serve a few TB's of property images (we're a real estate company) to end users. The IIS servers will use ELB.
01:32 purpleidea mibby: yikes
01:32 mibby yeah
01:32 purpleidea semiosis: mibby: how about a few HA vm's, running nfs server for the IIS clients, and mounting using gluster fuse? not sure how much overhead that adds. try joe's metric out for size. i'd at least test it.
01:32 mibby need IIS. There's custom code that does image resizing, redirection to secondary/tertiary sites in case the image isn't there
01:33 purpleidea semiosis: also, whenever someone says "a few TB's", i tend to say, too small for gluster...
01:34 mibby well we currently have ~2TB of images (millions and millions and millions of small files) that is just ever increasing and we can't really delete them
01:36 semiosis bbiab
01:37 mibby well looks like my current design thoughts have just gone out the window.
01:38 purpleidea mibby: millions and millions of small files sounds like a bad idea. i think you'll need some sort of caching in between if you want this to work with gluster. maybe need caching even without gluster
01:39 roo9 imo, reevaluate the assumption that you need a traditional filesystem for that application
01:39 roo9 gridfs ?
01:49 itisravi_ joined #gluster
01:52 mibby im trying to forklift as much as possible due to my timeframe and dont really have much developer access. Eventually we want to get all these files into S3, but that can't happen for several months, and I've only got 2-3 weeks to get 'something' up and running....
01:53 mibby hmmpf.... not even lunchtime.... time for the pub methinks...
01:55 roo9 eh if its only temporary, throw up a single server or three?
01:58 roo9 I may have missed why you think you need gluster
02:02 Shyam joined #gluster
02:04 Shyam left #gluster
02:08 glusterbot New news from newglusterbugs: [Bug 1009134] sequential read performance not optimized for libgfapi <http://goo.gl/K8j2w2>
02:08 semiosis finally making progress with debhelper :O
02:08 semiosis they changed it on me!
02:11 semiosis \o/
02:11 roo9 even though its my 2nd favorite package distribution, apt/dpkg is crazy complex.
02:25 semiosis wow i think i really nailed it this time
02:26 semiosis purpleidea: ^^
02:29 kshlm joined #gluster
02:32 purpleidea semiosis: i am apparently really dense on the internet. what did i miss?
02:32 semiosis fixed the mount at boot time problem in my packages
02:32 semiosis uploading to ppa now :D
02:33 purpleidea semiosis: oh, cool what was the fix?
02:33 semiosis new debian rules format, so my dh_installinit wasn't running
02:33 semiosis this defeated me twice before, and i just gave up
02:33 purpleidea semiosis: congratulations on still being a packager
02:33 semiosis paintainer :)
02:33 purpleidea yeah! i learned that term at the con :P
02:34 purpleidea semiosis: all of us, you, me, joe, etc just had some itch to scratch (you packaging, me config management, joe, selling hair products) and somehow we get stuck doing gluster :P wtf right?
02:35 semiosis yep
02:35 purpleidea semiosis: can we start a rumor and teach glusterbot that joe is a shampoo expert or something?
02:35 semiosis he's not?
02:35 purpleidea semiosis: no he is ;)
02:36 semiosis thought so
02:40 semiosis he figured out you can stop repeating *before* the bottle is empty!
02:41 harish_ joined #gluster
02:47 saurabh joined #gluster
02:59 rjoseph joined #gluster
03:11 lanning joined #gluster
03:19 shubhendu joined #gluster
03:20 mibby roo9: i need a HA shared filesystem. Gluster seemed a good choice.
03:26 samppah purpleidea: good morning (re: your puppet-module).. have you done any performance testing with and without iptables?
03:29 purpleidea samppah: um, are you serious?
03:30 purpleidea hehe samppah i use ,,(joe's performance metric)
03:30 glusterbot nobody complains.
03:31 purpleidea samppah: but to answer your question, i haven't. i consider using an appropriate firewall to be vital. if you don't want to use the firewall aspects of my puppet module you just have shorewall => false in the main class (which is actually the default)
03:35 samppah purpleidea: heh, yes i am serious :)
03:36 purpleidea samppah: i doubt you will see statistically different performance with or without iptables, but i could be wrong i guess
03:37 samppah purpleidea: i'm just thinking about allowing different ports on different vlan/ip range so it's kind of necessary to use firewall
03:37 purpleidea samppah: oh that was you asking about that the other day, right?
03:37 samppah yeah :)
03:37 purpleidea samppah: do you remember my answer?
03:38 samppah purpleidea: umm at least some of it.. i hope i still have it in my backlog
03:38 purpleidea samppah: i think that's the only way to do what you want, but i suspect you have a bit of an x y problem. i don't remember your full question sorry :P
03:40 samppah heh, okay :)
03:41 purpleidea samppah: use shorewall...
03:42 purpleidea it will make solving your problem the way i mentioned (the other day) way easier...
03:43 CheRi joined #gluster
03:45 samppah purpleidea: i'll definitely look into it.. i'm just worried about possible performance issues
03:45 itisravi joined #gluster
03:46 samppah and the problem i have is that i want to isolate traffic between volumes.. ie. hypervisors can access only VM storage and web servers can only access web storage etc
03:46 samppah so i'm thinking about using different vlans for VM and web storage
03:47 kanagaraj joined #gluster
03:48 davinder joined #gluster
03:49 samppah purpleidea: oh well, it's time to go again.. thank you for help :)
03:51 Shyam joined #gluster
03:52 Shyam left #gluster
03:53 shyam joined #gluster
03:55 sgowda joined #gluster
04:02 mohankumar joined #gluster
04:05 nasso joined #gluster
04:05 emil_ joined #gluster
04:13 ppai joined #gluster
04:17 shireesh joined #gluster
04:21 CheRi joined #gluster
04:22 lalatenduM joined #gluster
04:25 purpleidea samppah: np.
04:36 asias joined #gluster
04:41 ababu joined #gluster
04:46 davinder joined #gluster
04:53 dusmant joined #gluster
04:59 raghu joined #gluster
04:59 CheRi joined #gluster
05:13 hagarth joined #gluster
05:13 Cenbe joined #gluster
05:15 sgowda joined #gluster
05:18 satheesh1 joined #gluster
05:19 rjoseph joined #gluster
05:23 toad joined #gluster
05:26 bala joined #gluster
05:28 dtyarnell joined #gluster
05:41 ngoswami joined #gluster
05:44 bulde joined #gluster
05:45 bulde1 joined #gluster
05:53 davinder joined #gluster
05:55 vpshastry joined #gluster
06:04 rgustafs joined #gluster
06:04 nshaikh joined #gluster
06:07 psharma joined #gluster
06:09 saurabh joined #gluster
06:12 vimal joined #gluster
06:14 vpshastry joined #gluster
06:17 ababu joined #gluster
06:21 vpshastry joined #gluster
06:21 jtux joined #gluster
06:21 toad- joined #gluster
06:24 mohankumar joined #gluster
06:25 hagarth joined #gluster
06:31 VerboEse Hi all. I would like to speed up my newly created gluster volume. I want to use it as the underlying mirror for virtual machines (proxmox), but I'm not even able to create a VM, as it always runs into a timeout. IO usage goes up for some time while creating the disk image. Is it possible to switch to async? How does that work with VMs and live migration to the other machine? This could be more of a question for the mailing list, but maybe someone has a short answer ...
06:31 ndarshan joined #gluster
06:37 kPb_in joined #gluster
06:38 anands joined #gluster
06:39 saurabh joined #gluster
06:39 samppah VerboEse: sorry, i don't know about proxmox.. what glusterfs version and what distro are you using?
06:39 shruti joined #gluster
06:39 samppah there are some limitations about direct io and it may cause problems
06:40 samppah you need recent fuse if using fuse to mount it
06:40 mooperd joined #gluster
06:43 ababu joined #gluster
06:49 VerboEse I have 3.4.0 on debian 7.1
06:50 vshankar joined #gluster
06:55 vpshastry joined #gluster
06:58 ctria joined #gluster
06:58 ekuric joined #gluster
07:01 aravindavk joined #gluster
07:03 mooperd__ joined #gluster
07:04 ricky-ticky joined #gluster
07:06 eseyman joined #gluster
07:17 VerboEse Hm. Per default performance.flush-behind should be ON anyway. As far as I understand http://goo.gl/BkkX06, this setting implies an asynchronous write between the nodes?!
07:17 glusterbot Title: Gluster 3.2: Setting Volume Options - GlusterDocumentation (at goo.gl)
07:18 saurabh joined #gluster
07:19 andreask joined #gluster
07:21 VerboEse How could I find and plot the data of the diagnostics settings? Will there be a logfile where I could find these?
07:23 bulde joined #gluster
07:25 ndarshan joined #gluster
07:29 VerboEse Maybe I should try this: http://permalink.gmane.org/gmane.comp.file-systems.gluster.devel/4257 - I'm running a 2.6 kernel, too. Seems to be a good method to analyze at least.
07:29 glusterbot <http://goo.gl/zLtGDd> (at permalink.gmane.org)
07:33 rjoseph joined #gluster
07:39 hagarth joined #gluster
07:44 ndarshan joined #gluster
07:49 VerboEse Ah, well. there _is_ no matching perf binary, as I'm using the proxmox "pve" kernel, which has some patches for OpenVZ. That's the reason I only have a 2.6 kernel, while debian 7.1 only ships a perf binary for 3.2.
07:51 vpshastry1 joined #gluster
08:02 chirino joined #gluster
08:26 chirino joined #gluster
08:32 nshaikh joined #gluster
08:32 sgowda joined #gluster
08:32 shubhendu joined #gluster
08:46 chirino joined #gluster
08:49 mtanner_ joined #gluster
08:49 bulde joined #gluster
08:49 ekuric joined #gluster
08:50 vpshastry joined #gluster
08:52 MinhP joined #gluster
08:52 rjoseph joined #gluster
08:53 basic` joined #gluster
08:54 X3NQ joined #gluster
08:54 basic` joined #gluster
08:57 manik joined #gluster
08:58 dusmant joined #gluster
08:59 ababu joined #gluster
09:12 kanagaraj joined #gluster
09:13 purpleidea did i miss something in 3.3 about changing the replica count on an existing volume? there is a replica option for the add brick command... how does this all work?
09:21 d-fence joined #gluster
09:27 tryggvil joined #gluster
09:29 Alpinist joined #gluster
09:35 khushildep_ joined #gluster
09:42 glusterbot New news from newglusterbugs: [Bug 1012863] Gluster fuse client checks old firewall ports <http://goo.gl/3UsZxe>
09:50 ndevos purpleidea: you can use that option to move your distribute-only volume to a distribute-replicate volume, for example
09:55 nshaikh joined #gluster
09:57 tryggvil joined #gluster
09:59 tziOm joined #gluster
10:01 ndarshan joined #gluster
10:02 purpleidea ndevos: any more info? am i right in assuming that, to do so, if you have 12 bricks initially and you want N=3, you need to add 24 bricks?
10:09 nexus joined #gluster
10:10 dseira any of you have tested glusterFS as a shared storage for ESXi?
10:11 dseira I'm having several performance issues and I need some help to try to configure correctly the volumes
10:12 shubhendu joined #gluster
10:19 ndevos purpleidea: yes, that is correct, if you change the replica count from 1x to 3x and have 12 bricks, you need to add 2x 12 = 24 bricks
10:19 ndarshan joined #gluster
10:30 CheRi joined #gluster
10:39 purpleidea ndevos: so this is mostly a corner case feature for now... thanks for the info
10:42 vpshastry joined #gluster
10:42 vpshastry left #gluster
10:43 ndevos purpleidea: I think the feature is used regularly by people starting small with a test-environment, and then needing to extend it and increase redundancy
10:43 ndevos if you have 12 bricks already, you probably have taken care of redundancy too
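As a small illustration of the add-brick form ndevos and purpleidea are discussing, assuming a hypothetical two-brick distribute-only volume named myvol being raised to replica 2 (one new brick is paired with each existing brick):

    gluster volume add-brick myvol replica 2 server3:/bricks/b1 server4:/bricks/b2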
10:45 yinyin joined #gluster
10:50 shyam joined #gluster
10:57 jtux joined #gluster
11:08 dusmant joined #gluster
11:08 purpleidea ndevos: fair enough :)
11:11 B21956 joined #gluster
11:12 mooperd joined #gluster
11:29 raz joined #gluster
11:29 raz quick question
11:30 raz does gluster store the files as normal files in the host filesystem?
11:30 raz like, could i just go in on any cluster brick and copy a file out manually?
11:34 davinder joined #gluster
11:36 dseira raz: yes, you can see all the files looking directly into the brick
11:36 raz dseira: with their normal filenames or hashed?
11:37 dseira raz: normal filenames
11:38 raz nice :)
11:38 raz <- torn between gluster and ceph here for a 1PB deployment
11:38 raz between trying which one first, that is
11:39 dseira raz: I haven't tried ceph, I've only tested glusterFS
11:40 l0uis raz: if you're interested in a posix FS then i wouldn't use ceph. cephfs doesn't seem to be getting any real TLC
11:41 mooperd__ joined #gluster
11:41 duerF joined #gluster
11:41 raz l0uis: that's actually not a big factor for us. we just need blob storage for a media library (lots and lots of videos).
11:42 l0uis ah k. yeah the ceph object store is a diff story
11:42 raz just the common requirements matter, above all reliability, should be painless to add storage bricks..  all that jazz
11:42 l0uis nod
11:43 kkeithley If you want an object store then I recommend looking at Gluster+Swift.
11:43 dseira yes, this is really good
11:43 raz well we don't *need* the object store either
11:43 raz we're weird like that, we just don't care ;)
11:44 raz we already have httpd's reading and writing the goods from the local fs
11:44 raz so we can just make that a gluster mount :)
11:44 kkeithley I can't tell you the names of the animation studios, but you can guess who they are; they all use Gluster.
11:45 raz hehe
11:45 raz that's a nice testimonial :)
11:45 dseira which gluster mode is better for write performance of big files? (replication or replication/striping)
11:47 l0uis dseira: i suspect they are equivalent, but i'm no expert
11:48 l0uis dseira: ultimately write performance will be governed by the client in a replicated setup since it has to write to multiple bricks
11:50 dseira l0uis: yes, but maybe using the striped/replicated mode can be faster than replicated only, because the writes are distributed across different bricks
11:51 dseira l0uis: I'm an amateur in gluster
11:51 dseira l0uis: XD
11:51 l0uis dseira: If your client has a 1 gig ethernet connection, you're looking at 500mbps max to each brick in a 2x replica
11:51 kkeithley dseira: using stripe generally doesn't help performance. stripe != raid0
11:52 kkeithley even if it kinda looks like it.
11:52 l0uis if your bricks are on nodes with 1g connections, you can see how your client writing at 500 mbps will be the cap
11:52 l0uis now, if your nodes are heavily loaded, perhaps there is some benefit in spreading the write load, but i highly doubt it
11:53 dseira l0uis: the network is not the problem, I'm testing in a 10G network
11:53 l0uis dseira: same math, add a 0
11:53 l0uis you're bounded by client write since the client is writing to N replicas
11:53 dseira yes, but I only want a 2 replicas
11:54 l0uis so your clients will be writing at 5Gbps to each of the two replicas. Your replicas have 10G connections.
11:55 dseira kkeithley: and for big files which is better?
11:55 l0uis So you, again, can see, that regardless of striping, if you are replicating you will see at most client_bandwidth / replica_count in write performance
11:55 l0uis i believe striping is recommended for very very large files
11:56 kkeithley Use stripe when you have files that are larger than a brick is the most obvious use case
11:56 kkeithley @stripe
11:56 glusterbot kkeithley: Please see http://goo.gl/5ohqd about stripe volumes.
11:56 kkeithley dseira: ^^^
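For completeness, striped and striped-replicated volumes are created like this (hypothetical hosts h1..h4; as kkeithley notes, stripe rarely helps and is mainly for files bigger than a brick):

    # plain stripe across two bricks
    gluster volume create stripevol stripe 2 h1:/bricks/s h2:/bricks/s
    # striped-replicated: 2-way stripe, each stripe member replicated twice
    gluster volume create strrepvol stripe 2 replica 2 h1:/bricks/s h2:/bricks/s h3:/bricks/s h4:/bricks/s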
11:56 dseira the files are virtual machines files
11:57 l0uis replicated stripes would likely increase read performance in certain cases
11:57 l0uis anyhow, gotta get some coffee...
11:57 dseira i will take a look at the link
11:58 dseira me too, XD
12:07 itisravi_ joined #gluster
12:11 edward2 joined #gluster
12:26 harish_ joined #gluster
12:32 glusterbot New news from resolvedglusterbugs: [Bug 811311] Documentation cleanup <http://goo.gl/4xmWEF>
12:46 ekuric joined #gluster
12:46 mbukatov joined #gluster
12:47 ekuric joined #gluster
12:57 andreask1 joined #gluster
13:00 jclift_ joined #gluster
13:03 glusterbot New news from resolvedglusterbugs: [Bug 1012945] nfs+afr+logging: "unlock failed on 1 unlock" seems meaningless <http://goo.gl/clJ3vL>
13:05 mooperd joined #gluster
13:06 rwheeler joined #gluster
13:11 chirino joined #gluster
13:14 rcheleguini joined #gluster
13:19 ctria joined #gluster
13:25 mooperd joined #gluster
13:32 social Hi, thank you for 3.4.1! I'd love to ask, do you have any idea about notifying the gluster mount when the package changes?
13:32 social it would be great if it was possible to freeze open fds, reload gluster mount and continue but I don't think that is atm possible
13:34 failshell joined #gluster
13:36 mooperd joined #gluster
13:42 jclift_ semiosis: Did you get a chance to write up the peer rejection wiki page thing?
13:58 johnmark semiosis: howdy. let me know when you have a spare moment
13:58 semiosis johnmark: sup?
14:00 jdarcy joined #gluster
14:00 semiosis Peanut: i think i fixed the mount on boot issue.  try the updated packages in the ppa I uploaded last night
14:00 lpabon joined #gluster
14:01 kaptk2 joined #gluster
14:02 semiosis johnmark: i'll be around for the next 15 min then heading to the office
14:02 failshel_ joined #gluster
14:03 shyam left #gluster
14:07 johnmorr left #gluster
14:09 bugs_ joined #gluster
14:11 mooperd joined #gluster
14:13 neofob what is the gluster workshop like this year in edinburgh?
14:13 tryggvil joined #gluster
14:15 jclift_ johnmark: ^^^
14:17 anands1 joined #gluster
14:18 ctria joined #gluster
14:19 l0uis semiosis: new updates seem to be working nicely
14:19 manik joined #gluster
14:20 semiosis \o/
14:20 l0uis semiosis: the mount is slightly delayed, but i suppose you can't win em all :)
14:20 XpineX_ joined #gluster
14:20 semiosis yes i had to add a sleep before the mount to give glusterd time to spawn the brick daemon
14:20 semiosis if you're mounting from localhost that's necessary
14:20 l0uis k, no big deal, i can live with it
14:21 Peanut semiosis: I had to give up on getting /gluster mounted on Ubuntu, so now I have a rather fun construct in /etc/rc.local that first pings the other node, and only attempts to mount if that node is reachable.
14:21 semiosis 5 sec was borderline in my tests on an ideal VM so i made it 10 sec to be sure it would work for everyone
14:22 semiosis Peanut: whats the harm in trying the mount & letting it fail (if the server is unreachable)?
14:22 l0uis semiosis: doh, looks like you forgot to make the log directory :)
14:23 Peanut semiosis: well, the point is more to delay the mount attempt until at least I'm sure I have network and connectivity.
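A rough sketch of the rc.local workaround Peanut describes, assuming the peer is called node2 and the /gluster fstab entry is set to noauto (purely illustrative):

    # /etc/rc.local fragment: only attempt the mount once the peer answers a ping
    if ping -c 1 -W 2 node2 > /dev/null 2>&1; then
        mount /gluster
    fi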
14:23 semiosis l0uis: no way!  log dir creation used to be in the -server package, but i moved it to -common
14:23 semiosis pretty sure i tested that
14:23 l0uis semiosis: oh oh i didn't check common :)
14:23 l0uis semiosis: sorry :)
14:23 * l0uis looks
14:23 semiosis :)
14:24 johnmark neofob: what is it like? have you been to other gluster community days?
14:24 Peanut I'll be in the UK, tempted to go to the London one...
14:24 l0uis semiosis: yes its there :). sorry! last time I looked it was in server :D
14:24 johnmark semiosis: yeah, was just going to ping you re: your case study and jzb's email
14:25 johnmark Peanut: excellent. Am updating the event pages today
14:25 johnmark Oct. 24 in Edinburgh, Oct. 29 in London, Oct. 31 in Frankfurt
14:25 semiosis johnmark: sorry for the delay getting to that, is it particularly urgent?  been super busy since i got back & still catching up on things (and it's already friday!)
14:26 semiosis l0uis: ok good, i was worried for a sec.
14:26 * ndevos wonders if he should go to the Frankfurt one...
14:26 chirino joined #gluster
14:27 semiosis afk 30
14:30 kaptk2 joined #gluster
14:33 hagarth ndevos: are you coming over to Edinburgh?
14:33 ndevos hagarth: no, I don't think so
14:33 hagarth ndevos: ok
14:33 ndevos hagarth: I'd love to, you're there right?
14:35 hagarth ndevos: yeah, I will be there.
14:35 johnmark hagarth: awesome!
14:35 johnmark hagarth: didn't know you would be in Edinburgh. Can you stay the following week and be in London?
14:36 johnmark ndevos: ^^^^ you too
14:36 ndevos hagarth, johnmark: I would need to find an excellent excuse for that...
14:36 johnmark semiosis: no worries. I figured that was the case
14:36 johnmark just don't want to let it drop :)
14:36 johnmark ndevos: :) let's work on that
14:36 johnmark ndevos: I can also use you in Frankfurt on 10/31
14:37 satheesh joined #gluster
14:37 neofob johnmark: no, i haven't been to the other community days
14:37 manik joined #gluster
14:37 ndevos johnmark: I think we should be able to work something out, send me some details by mail and I'll go through it early next week
14:37 hagarth johnmark: I would have liked to but given the travel in November I'd have to be back here.
14:37 quique anyone have experience with gluster nodes in aws?  i'm trying to figure out how to replace nodes in a pool when they go down.
14:39 johnmark hagarth: ok, but you'll be in Edinburgh for sure
14:39 * johnmark makes notes
14:39 johnmark ndevos: cool. Will send to you later today
14:39 johnmark neofob: where are you located?
14:39 hagarth johnmark: yes
14:39 ndevos johnmark: thanks!
14:39 johnmark hagarth: excellent :) I will put you to work
14:40 johnmark hagarth: do you have a linuxcon talk, too?
14:40 neofob virginia, i'm going to linuxcon in edinburgh this oct
14:40 hagarth johnmark: yes, on tuesday
14:40 johnmark neofob: that's awesome. Also trying to put something together for LISA in Nov.
14:40 Jodin joined #gluster
14:40 johnmark hagarth: ok. and our community day is on Thursday. Can you give a talk on either QEMU or OpenStack integration?
14:40 johnmark I think jclift is doing the general dev talk
14:41 hagarth johnmark: whatever you need me to ;)
14:41 johnmark hagarth: that's what I like to hear :D
14:41 johnmark heh
14:41 Jodin Hi, I have a problem with dangling symlinks on a Gluster node of mine, can someone help me?
14:42 klaxa|web joined #gluster
14:42 sprachgenerator joined #gluster
14:43 quique right now in aws if i take down a node and bring one back up with the same hostname, the other nodes show it as connected, but don't write to it
14:54 Jodin What should I do if my .gluster folder is full of dangling symlinks on my brick?
14:55 FilipeCifali joined #gluster
14:56 klaxa|web does glusterfs keep information about volumes in some databases that are not purged upon removal? i'm running into problems with volumes that don't exist anymore
14:57 yinyin joined #gluster
14:57 klaxa|web https://gist.github.com/klaxa/6729917
14:57 glusterbot Title: gist:6729917 (at gist.github.com)
14:57 FilipeCifali Hello, any1 experienced a volume fail w/ no logs in add-brick replica 2 over Gluster 3.4?
15:04 FilipeCifali I'm thinking about downgrading to 3.3 to try :(
15:06 ndk joined #gluster
15:07 failshell joined #gluster
15:08 failshell joined #gluster
15:19 klaxa|web nvm, this blog-post helped: http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ should have googled more intensively beforehand
15:19 glusterbot <http://goo.gl/YUzrh> (at joejulian.name)
15:20 ndevos klaxa|web: ah, if you would have pasted that one-line-error-message in the channel, glusterbot would have explained it all to you
15:21 klaxa|web well, i don't believe pasting error messages into irc channels contributes to productivity at all
15:21 klaxa|web but thanks for the hint
15:22 ndevos if it is a one liner it is quicker than pointing to a pastebin/gist, opening that is more work ;)
15:22 klaxa|web true, but it clutters up the log and makes things hard to read
15:22 klaxa|web ah well, doesn't matter anymore~
15:22 FilipeCifali Can I try one time? Haha
15:22 FilipeCifali volume add-brick: failed:
15:24 FilipeCifali I'm driving in circles...
15:25 johnmark doh
15:25 johnmark klaxa|web: well, I'm glad you found something helpful
15:26 johnmark klaxa|web: the author of that blog post is JoeJulian, who shows up here pretty often
15:30 FilipeCifali Is gluster volume add-brick replica 4 (from replica 2) supposed to work?
15:30 FilipeCifali I'm failing to add 2 bricks to an already working replica 2 volume
15:31 FilipeCifali Adding <new bricks> in pairs after the replica 4
15:31 ndevos FilipeCifali: you really want 4 copies of your data?
15:32 FilipeCifali no, I want 2 copies, replica 2, in 4 bricks, it makes more sense
15:33 ndevos then you just do a add-bricks without 'replica 4'
15:33 FilipeCifali but I seem to fail to make that
15:33 FilipeCifali gluster> volume add-brick gv0 brick3:/export/sdc1/brick brick4:/export/sdc1/brick volume add-brick: failed:
15:34 FilipeCifali and I get [2013-09-27 15:32:53.225218] I [cli-rpc-ops.c:1687:gf_cli_add_brick_cbk] 0-cli: Received resp to add brick [2013-09-27 15:32:53.225395] W [cli-rl.c:106:cli_rl_process_line] 0-glusterfs: failed to process line
15:34 FilipeCifali peer status show everyone as State: Peer in Cluster (Connected)
15:35 Peanut semiosis: I now have my '/gluster' entry in /etc/fstab as 'noauto' so that plymouth/mountall won't even try, and mount it from rc.local
15:35 ndevos hmm, that command looks good to me...
15:35 FilipeCifali yeah, maybe it's my version? I got the glusterfs 3.4 from epel repo
15:36 ndevos maybe? you could try the ,,(yum repo)
15:36 glusterbot The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
15:36 FilipeCifali gonna install / config 3.3.2 then
15:36 ndevos 3.4.1 has been released yesterday I think, you should rather use that
15:38 phox Peanut/semiosis: I abuse _netdev for that, FWIW.
15:39 phox otherwise you can't use mount -a -t glusterfs because noauto will kaibosh that
15:39 Peanut phox: true, but I just mount my one gluster thingy by name, so I don't need -a
15:39 phox yeah, just messy to have to hardcode it
15:40 phox :)
15:40 Peanut Sure, but less messy than having your machine come up without it every now and then.
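For context, the two fstab styles being compared here look roughly like this (server, volume and mountpoint are placeholders):

    # Peanut's approach: skipped at boot, mounted later from rc.local
    server1:/myvol  /gluster  glusterfs  defaults,noauto   0 0
    # phox's approach: flagged as a network filesystem so mount -a -t glusterfs still picks it up
    server1:/myvol  /gluster  glusterfs  defaults,_netdev  0 0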
15:40 kkeithley 3.4.1 was released a couple hours ago, which might be yesterday in some locales. ;-)
15:41 * phox makes a note
15:41 phox semiosis: any idea when 3.4.1 debs might fit into your schedule? =)
15:41 phox I'm deploying soon so I might as well deploy newer in this case.
15:41 FilipeCifali only source, gonna download and install
15:41 johnmark FilipeCifali: if 3.3.2 works better for you, then that's a bug in our 3.4 release and we should fix that
15:42 phox kkeithley: at the earliest that's like 01:00 at UTC-12:00
15:42 phox :P
15:42 johnmark FilipeCifali: I highly recommend you use 3.4.1
15:42 FilipeCifali I'm downloading 3.4.1 atm to try before downgrading
15:42 johnmark FilipeCifali: excellent, ok
15:42 phox semiosis: FYI I've scheduled deployment for Monday if that happens to be do-able ;) ;)
15:42 phox semiosis: if not oh well
15:42 FilipeCifali not a cutting edge fan but I believe it's better to have the newer release
15:43 johnmark phox: we're looking for people to help with our deb packaging :)
15:43 kkeithley I thought we had someone doing .debs for Debian?
15:43 saurabh joined #gluster
15:43 kkeithley Did we scare him away?
15:43 johnmark kkeithley: he still does, but I don't like SPOFs
15:43 mooperd joined #gluster
15:44 johnmark kkeithley: plus I assume at some point that people get tired of doing the same thing
15:44 phox johnmark: I am not good with Debian's annoying packaging tools :l
15:44 johnmark heh ok
15:44 phox they never manage to not annoy the hell out of me and be totally counterintuitive
15:44 kkeithley speaking of people getting tired of doing the same thing.....
15:44 Peanut I kind of like Debian's packaging tools, but they're not nearly as nice as building packages under Solaris, admittedly.
15:44 phox I mean, there's that.... whatever make package thing that will just let you install to a sandbox with a make install prefix and then slurp the package, but everything else... gah.
15:44 phox Peanut: I'm from Gentooland.
15:45 phox if only Gentoo had binary packages _too_ and had an actual stable branch
15:45 FilipeCifali well, you can emerge precompiled stuff, but meh
15:45 Peanut Hmm.. gentooland.. you're not selling it really well, phox :-)
15:45 phox you can but there's still no stable branch
15:45 phox Peanut: the tools are awesome
15:46 phox as opposed to Debian which has really broken crap like Aptitude
15:47 jag3773 joined #gluster
15:49 mooperd joined #gluster
15:49 FilipeCifali this is the server tarball right? http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.1/glusterfs-3.4.1.tar.gz
15:49 glusterbot <http://goo.gl/Ey10na> (at download.gluster.org)
15:50 ndevos the tarball contains server+client (sources only)
15:50 FilipeCifali ok, compiling and hopes up :)
15:50 FilipeCifali ./configure --help | grep server says nothing, so better ask than compile only client
15:51 kkeithley FilipeCifali: yes, that's the one
15:51 ndevos kkeithley: no 3.4.1 rpms yet?
15:51 kkeithley They are built on Koji. I'm building the yum repos right now
15:52 ndevos FilipeCifali: if kkeithley is finished with that ^, you can use the repo to install 3.4.1
15:52 kkeithley I think FilipeCifali is on Debian, no?
15:52 FilipeCifali CentOS, rpm friendly
15:52 * ndevos thought FilipeCifali installed the epel rpms
15:53 kkeithley oh, okay. give me 10 minutes and epel RPMs will be up
15:53 kkeithley All Fedora RPMs will be in Fedora updates-testing repos soon
15:54 ndevos kkeithley: and that includes the gfapi python bits too? I think I've seen that change to the fedora spec
15:54 FilipeCifali oh sure, I'll wait then
15:54 FilipeCifali I had via rpm, just erased them all and was compiling from source
15:54 kkeithley yes, theres a .../site-packages/gluster/__init__.py in 3.4.1
15:54 kkeithley as a place holder
15:55 ndevos kkeithley: oh, so the gfapi.py isnt there?
15:55 kkeithley should I have used the one from .../api/src/examples?
15:56 semiosis phox: _netdev doesnt do anything on ubuntu
15:56 ndevos kkeithley: thats what http://review.gluster.org/5835 does, but it is not merged yet
15:56 glusterbot Title: Gerrit Code Review (at review.gluster.org)
15:56 kkeithley right
15:57 semiosis phox: re: 3.4.1 debs... well 3.4.1 is not GA yet, rc1 is the latest.  are you interested in testing that?  i will build debs tonight if you are, otherwise I'll probably wait until 3.4.1 is GA to do it
15:57 zaitcev joined #gluster
15:57 semiosis phox: talking about wheezy debs ^^
15:57 kkeithley semiosis: ??? 3.4.1 GA was released earlier today
15:57 semiosis kkeithley: well how about that
15:57 semiosis then i will make debs tonight
15:58 johnmark lol :)
15:58 johnmark semiosis: you seem to be working too hard
15:58 johnmark let me have a word with your manager ;)
15:58 semiosis johnmark: is it that obvious?
15:58 johnmark heh
15:58 FilipeCifali can you post here (if you're not going to already) when you update the RPMs?
15:59 * semiosis manages himself
15:59 phox semiosis: I'll wait
15:59 semiosis @latest
15:59 glusterbot semiosis: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
15:59 johnmark semiosis: ah, then I'm going to tell you to take a vacation
15:59 phox semiosis: although per what kkeithley said I presume it's actually GA now
15:59 johnmark phox: yeah
15:59 johnmark semiosis: oh wait, that was last week ;)
16:00 semiosis hahaha
16:00 phox in which case I will bump my testing box up to 3.4.1 as soon as I can grab wheezy debs for it, and then deploy on Monday
16:01 semiosis ok so there's a source tarball for 3.4.1 GA but looks like no packages whatsoever yet
16:01 kkeithley I think you're all imagining things
16:01 semiosis http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.1/
16:01 glusterbot <http://goo.gl/kLUbFI> (at download.gluster.org)
16:02 * phox nods
16:03 johnmark yes... takes a bit of time for source to be packaged and made available for downloads
16:03 phox ideally it would be cool to be able to test it on testing before deployment but at the same time what's it going to do... the data is owned by an underlying FS so it's kind of a "whatever"
16:03 johnmark which is why I usually hold off from announcing releases until I know that packages exist
16:05 semiosis Package Maintainer should be shortened to "paintainer" - https://twitter.com/agentdero/status/359772570532331520
16:05 glusterbot <http://goo.gl/u65tja> (at twitter.com)
16:05 phox heh
16:05 FilipeCifali hahahaha
16:05 phox semiosis: and this is why Gentoo wins :P
16:05 phox assuming nobody messed with the format of the source tarball you just bump the ebuild version and you're done, period
16:06 phox if they did, well, you have to deal with that, but then if it's stable after that it's again a huge timesaver :)
16:07 FilipeCifali I wish I could run Gentoo for everything, but I'm not my own boss :(
16:07 phox I could run Gentoo here but we have too many packages installed
16:07 phox could probably do it on webserver + database server
16:08 phox but not the research servers
16:08 phox that would just go completely sideways
16:08 phox particularly because a LOT of those packages are maintained by "scientists" upstream :l
16:08 phox which tends to result in TU
16:09 zerick joined #gluster
16:09 FilipeCifali I don't really understand my boss, since sometimes he likes Gentoo, sometimes he likes CentOS, it's like his mood keeps changing
16:10 saurabh joined #gluster
16:10 FilipeCifali RPM up :)
16:11 phox I am so not into RH derivatives
16:11 phox although Debian is staggeringly stupid some days too
16:11 phox Wheezy shipped with broken GRUB because of dogmatic idiots
16:11 kkeithley 3.4.1 RPMs are there for EPEL and Pidora.  RPMs for Fedora will arrive in the Fedora updates-testing repo as soon as Fedora/Bodhi does its thing; they will migrate to updates repo after they pass the testing stage
16:11 phox *raise*
16:11 kkeithley lunch time
16:11 semiosis kkeithley: nice!
16:11 samu60 joined #gluster
16:11 samu60 hi all
16:12 semiosis hello
16:12 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:12 semiosis samu60: ^^
16:12 phox and let's not even get into the fact that Wheezy is deliberately still including insecure Tor because of the same dogmatic idiocy
16:12 samu60 we have 8 nodes in a replicated distributed environment
16:12 samu60 version 3.3.0
16:12 samu60 striped volumes
16:12 samu60 stripped volumes
16:12 semiosis one of the best things about debian is that its a project driven by ideals.  that's also one of the worst things about it :)
16:13 samu60 1 server crashed and we had to restore it
16:13 daMaestro joined #gluster
16:13 samu60 we followed the instructions in http://gluster.org/community/documentation//index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server
16:13 glusterbot <http://goo.gl/IqaGN> (at gluster.org)
16:13 samu60 the volume is up and the peers are connected
16:13 phox semiosis: you mean "blind adherence to not-well-thought-through ideals"
16:13 samu60 the problem is that the data is not selfhealed
16:14 samu60 when launching the crawl process with find, the following message appears in the "companion" node:
16:14 samu60 [2013-09-27 18:08:02.139967] E [afr-self-heald.c:418:_crawl_proceed] 0-storage-replicate-2: Stopping crawl as < 2 children are up
16:14 Peanut Debian is a political organisation who occasionally bring out an operating system - OpenBSD is an international hiking club who occasionally release an operating system.
16:14 samu60 and there's no data on the just-added peer
16:14 semiosis samu60: please ,,(pasteinfo) and also include the output of 'gluster volume status'
16:14 glusterbot samu60: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
16:15 phox s/(here)/t$1/
16:15 glusterbot What phox meant to say was: semiosis: you mean "blind adt$1nce to not-well-thought-through ideals"
16:15 phox srsly
16:15 samu60 http://fpaste.org/42727/98523138/
16:15 glusterbot Title: #42727 Fedora Project Pastebin (at fpaste.org)
16:16 semiosis phox: glusterbot is trolling you!
16:16 samu60 http://fpaste.org/42728/38029856/
16:16 glusterbot Title: #42728 Fedora Project Pastebin (at fpaste.org)
16:16 phox don't feed the trolls.
16:16 semiosis phox: in gluster irc bots troll *you*
16:16 * phox glues glusterbot's GI tract shut
16:16 samu60 well that's the main data of the volume
16:16 phox in Soviet pseudo-filesystem?
16:17 phox that "in Soviet Russia, spam deletes YOU!" thing was still the best evar.
16:18 samu60 is there any manual way to copy the missing data from one peer to the other?
16:18 semiosis samu60: iirc there was a bug in 3.3.0 where sometimes connections would not automatically restart when a server came back online.  maybe that is affecting you
16:18 semiosis since volume status shows all bricks are up
16:18 samu60 uops
16:19 samu60 but the connection messages appear on the other peer's log....
16:19 semiosis can you restart the other peer, with the connection errors?
16:19 SpeeR joined #gluster
16:20 samu60 since it's the only working node, I'll have to wait because it's on production and don't want to affect users
16:21 samu60 as a side question, would it be safe, from the gluster client point of view, to restart the only node that has some part of the information?
16:21 semiosis client will error when operations try to access a file that is unavailable
16:22 semiosis but you have replication
16:22 semiosis so... ?
16:22 samu60 the replication is on the failed node, with no information on it
16:23 samu60 so, eventually, no information is available
16:23 samu60 I'd like to propagate the information prior to restarting the node with the "stopping crawl" message
16:24 semiosis oh i see
16:25 semiosis ok here's another idea
16:25 semiosis on the "good" server, make a new client mount, and run the crawl on that
16:25 semiosis i suspect the existing client mount is affected by the bug and not reconnecting to the server that went down
16:25 FilipeCifali Installing : glusterfs-server-3.4.1-1.el6.x86_64                                                                                                                          1/1  error reading information on service glusterfsd: No such file or directory
16:25 FilipeCifali that's odd
16:26 samu60 that's an option...i'll give it a try
16:27 samu60 but the client mount will be "general", it can't mount on the "good" server
16:27 semiosis i dont understand
16:27 mohankumar joined #gluster
16:28 samu60 I don't understand what exactly you meant with " on the "good" server, make a new client mount"
16:29 samu60 what I understood is to mount the volume on a new client and crawl from it...am I right?
16:29 semiosis any server will do actually
16:29 semiosis just make a new client mount and run the find/crawl through that
16:29 semiosis yes you're right
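A sketch of that suggestion, assuming the volume is named storage (as the log prefix above suggests) and a scratch mountpoint; the find/stat crawl is the usual way to walk the volume and trigger self-heal on 3.3:

    mount -t glusterfs localhost:/storage /mnt/heal
    find /mnt/heal -noleaf -print0 | xargs --null stat > /dev/null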
16:29 FilipeCifali Am I the only one getting xlator.c:185:xlator_dynload] 0-xlator: /usr/lib64/glusterfs/3.4.0/xlator/mgmt/glusterd.so: cannot open shared object file: No such file or directory
16:29 samu60 ok, that's what I tried to explain
16:29 samu60 i'll work on it
16:30 samu60 thanks a lot!
16:30 semiosis yw
16:32 Shdwdrgn joined #gluster
16:36 * phox wanders off to install ZFS on his testing hardware :l
16:36 Peanut Ah, zfs is fun, good luck
16:36 phox good luck?
16:36 phox it's installed in production
16:36 phox it's just not on this box because this box is older and its current purpose is EOL
16:36 phox so I'm installing shiny stuff on it so I can play with production-equiv s/w :)
16:37 Peanut Ah ok. On Linux or OpenIndiana?
16:37 phox ZoL
16:37 phox hm, 0.6.2 is out, maybe I should care... heh
16:37 l0uis phox: gluster on top of zfs?
16:37 phox l0uis: very much so
16:37 l0uis phox: nice. any strangeness ?
16:37 phox nope
16:37 phox but use xattr=sa
16:37 samu60 semiosis: it seems to be working...the data is being moved
16:37 phox makes it less sluggish
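The ZFS tweak phox mentions is a per-dataset property; a minimal sketch, assuming the bricks live on a hypothetical dataset called tank/bricks:

    # store xattrs in the dnode rather than in hidden directories,
    # which speeds up gluster's heavy extended-attribute traffic
    zfs set xattr=sa tank/bricks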
16:38 semiosis samu60: thats great!
16:38 l0uis phox: performance on par w/ xfs?
16:38 samu60 semiosis: thanks a lot indeed!
16:38 semiosis samu60: you should probably upgrade to 3.3.1 when you have a chance.  i think the reconnect bug was fixed in that release
16:38 kkeithley Too bad about the license on ZFS; RHEL and Fedora won't touch it because of that.
16:39 phox l0uis: listing directories sucks severely.  and even local mounts are slower than native ZFS or XFS because Gluster is only able to hold down like 200MB/sec ATM
16:39 l0uis phox: hm. that's unfortunate. but the zfs awesomeness makes up for it, I guess?
16:39 phox kkeithley: I presume you can add separate repos, though, and DKMS handles it nicely..
16:39 phox kkeithley: it's not in Debian either, and if it was it would be 18 months out of date... but whatever
16:40 semiosis phox: is it in gentoo?
16:40 phox l0uis: yeah having crap like reservations and snapshots and snapshot diffs and file checksumming and other crap makes up for everything
16:40 phox semiosis: debian
16:40 phox oh ZFS?  yes.
16:40 phox richard yao maintains that
16:40 samu60 semiosis: there was a bug in 3.3.1 with striped volumes that highly degraded performance and we were waiting for 3.4
16:40 phox so it's a bit better-integrated but not particularly in any way I care about
16:40 semiosis samu60: oh ok
16:41 phox semiosis: I am running ZFS on Gentoo, but at home w/o Gluster
16:41 phox I don't have IB at home so I don't care about Gluster there :P
16:41 semiosis cool!
16:41 semiosis oh wait, that means without gluster
16:41 l0uis phox: nice. well i was debating whether to do some zfs benchmarking when i go to upgrade my cluster. maybe i'll visit it since at least one person is using it w/ success in production on gluster
16:41 * phox has the oldest gentoo/amd64 installation in existence :)
16:41 semiosis well still cool, but not cool!
16:42 phox l0uis: one caveat is that we're running fairly bleeding-edge kernels because ZoL is being targetted at very new kernels as relevant features are added etc
16:42 l0uis phox: ahhh
16:42 phox so we were running out-of-tree 3.6.7; now we're running out-of-tree 3.9.1, etc...
16:42 l0uis phox: not sure if i'm up for the maintenance hassle. i'll see where things are when i have time for an experiment i guess.
16:42 l0uis afk
16:42 phox and I've forgotten now if I needed to change anything in the kernel config for it to be happy... :l
16:42 phox there's not a lot
16:42 phox kernel stays still for quite a while
16:43 phox use the DKMS packages for ZoL
16:43 phox then you don't really have to think or care
16:43 phox also kernel.org's make deb-pkg is broken and they haven't fixed it because they're lazy.  it leaves out one of the source or whatever symlinks in lib/modules/...
16:44 phox -if- you're wanting to do roughly what I did and on Debian :l
16:44 Mo__ joined #gluster
16:45 DV joined #gluster
16:47 LoudNoises joined #gluster
16:51 shylesh joined #gluster
17:01 B21956 joined #gluster
17:06 jbrooks left #gluster
17:06 jbrooks joined #gluster
17:13 mibby- joined #gluster
17:17 BigRed_ joined #gluster
17:18 BigRed_ Greetings.
17:21 BigRed_ I'm looking for a way to keep a pair of LAMP servers synchronized and ran across gluster...  I have read all of the "Getting Started" stuff on the website but am still a bit unsure about the best approach in setting it up for my needs. Do my disks need to be mounted in /export to be used with gluster?
17:22 JoeJulian No
17:22 BigRed_ ...and if so, if I set up a symlink to somewhere else in the file system and write to it locally, does that get replicated to the other node or do I have to mount via NFS and access it that way?
17:22 JoeJulian GlusterFS is not about keeping servers in sync. It's about providing a fault tolerant clustered filesystem.
17:23 JoeJulian There are some issues with using network filesystems for things like ,,(php). See that article for details.
17:23 glusterbot (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
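As a concrete (purely illustrative) version of glusterbot's second suggestion, with 600-second timeouts standing in for "HIGH" and placeholder server/volume names:

    glusterfs --attribute-timeout=600 --entry-timeout=600 --negative-timeout=600 --fopen-keep-cache \
        --volfile-server=server1 --volfile-id=webvol /var/www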
17:24 jbrooks joined #gluster
17:24 PatNarciso joined #gluster
17:24 BigRed_ ...so on my web servers, I keep all website content in  /content.  I can just point gluster to that location and have the webservers read/write locally to that location?  ...and Gluster will keep the 2 nodes in sync?
17:25 JoeJulian nope
17:26 JoeJulian Your filesystem that you provide to GlusterFS to use as its backend storage belongs to GlusterFS. If you want to use that clustered filesystem, you then have to mount your volume and read/write through that.
17:26 BigRed_ Ah.
17:26 JoeJulian I've got to head out to Bellevue. Read up and feel free to ask questions. Fridays are sometimes a bit slower but there's usually someone that can answer questions.
17:26 BigRed_ Thxu for the help.
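In other words, a hypothetical two-node sketch of what JoeJulian describes (hostnames and paths are made up): the brick directories belong to gluster, and the web servers use the mounted volume instead:

    # on one server: build a 2-way replicated volume from the brick paths
    gluster volume create content replica 2 web1:/bricks/content web2:/bricks/content
    gluster volume start content
    # on each web server: mount the volume and point the site at the mount, not at the brick
    mount -t glusterfs web1:/content /content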
17:31 jbrooks joined #gluster
17:37 lpabon joined #gluster
17:37 hagarth joined #gluster
17:37 jbrooks joined #gluster
17:48 qu joined #gluster
17:48 qu Anyone know what happened to beowulf?
17:48 qu There's nothing for it in the Arch repo.
17:52 qu Oh no.  Everybody's leeching.
17:55 johnmark qu: leeching?
17:55 qu (listening/learning)
17:55 qu (not pitching in)
17:57 jbrooks left #gluster
17:57 jbrooks joined #gluster
17:57 johnmark ah... well, there was quite a bit of sharing going on earlier
17:57 johnmark but then friday afternoon happened :)
17:58 phox JoeJulian: is there a place I can read about all of those various mount options mentioned in the PHP blurb?
17:59 rotbeard joined #gluster
18:00 qu It's nt afternoon... it's morning.
18:00 * phox nods
18:01 * phox needs to go put jeans on :l
18:02 FilipeCifali can gluster cli be more specific about error messages?
18:02 FilipeCifali like, /export/sdc1/brick or a prefix of it is already part of a volume, in where? I have to check everywhere :(
18:02 glusterbot FilipeCifali: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
18:02 phox FilipeCifali: ahahaha you're funny :)
18:02 FilipeCifali I know it's my fault, but still...
18:03 FilipeCifali I already checked them all
18:03 phox and yeah I hate software being all opaque and obstinate
18:03 kPb_in_ joined #gluster
18:04 FilipeCifali I really think I don't have these .glusterfs files
18:04 semiosis phox: obtuse?
18:05 phox hm?
18:05 phox right now I'm dealing with Debian, which I want to kill
18:05 semiosis opaque, obstinate, and obtuse?
18:05 phox so other things with crappy, useless diagnostics annoy me more so right now
18:05 qu phox: Manjaro.  Debian is ancient.
18:06 phox qu: and then it goes and grows things like aptitude, which pretend to make it do sensible things, but in fact do not implement "sensible" at all
18:06 phox or "repeatable"
18:06 qu I ran Debian exclusively for 14 years, but it stagnated.
18:06 phox it's never properly implemented a mechanism for installing a mixed stable-unstable system
18:07 FilipeCifali oh finally working
18:07 FilipeCifali let's see about that replica upgrade...
18:07 qu I wish I could get some perspective on clustering...
18:07 phox either you let it install unstable and it'll install _everything_ from unstable whether necessary or not, or you don't and then it won't install dependencies because it's a broken pile of crap
18:07 phox neither of which falls under "sane" or "correct"
18:07 phox :|
18:08 FilipeCifali damn, still the same message
18:08 FilipeCifali volume add-brick: failed:
18:08 FilipeCifali gluster> volume add-brick gv0 brick3:/export/sdc1/brick brick4:/export/sdc1/brick volume add-brick: failed:  gluster>
18:08 qu Anyone know how to join the #ovirt channel?
18:09 phox qu: I can join it so I'm going to assume maybe it's +R
18:09 phox wish people would set +M instead of +R
18:09 qu On OFTC?
18:09 phox ?
18:09 phox qu: FYI: /mode #ovirt
18:09 qu ... or Freenode?
18:10 phox oh.
18:10 phox here.
18:10 phox and yeah it's just +cnt
18:10 johnmark qu: ugh. #ovirt is on efnet
18:10 johnmark bleh
18:10 johnmark at least it used ot be...
18:10 qu No wonder.  A redhat presentation has it wrong.
18:10 johnmark haha! fantastic
18:10 FilipeCifali [cli-rl.c:106:cli_rl_process_line] 0-glusterfs: failed to process line
18:11 FilipeCifali still the same message as 3.4.0
18:11 pdrakeweb joined #gluster
18:12 FilipeCifali just to be sure, when I have a replica 2 volume, I need to add-brick 2 new brick points / nodes / guests right?
18:13 semiosis FilipeCifali: you need to add bricks in multiples of the replica count, yes
18:13 semiosis idk what points/nodes/guests means
18:13 semiosis those are not gluster terms
18:13 semiosis @glossary
18:13 glusterbot semiosis: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
18:13 FilipeCifali it's bricks
18:13 FilipeCifali too many terms from everywhere haha
18:14 semiosis :)
18:14 FilipeCifali then Gluster hates me
18:15 FilipeCifali cause I have a replica 2 and he refuses to be clear with me, like perl used to do...
18:15 sprachgenerator joined #gluster
18:15 rickytato joined #gluster
18:16 FilipeCifali he even says he did but fails: [glusterd-brick-ops.c:256:gd_addbr_validate_replica_count] 0-management: Changing the replica count of volume gv0 from 2 to 4  - cli_rl_process_line] 0-glusterfs: failed to process line
18:16 l0uis FilipeCifali: shouldnt it be: gluster volume add-brick gv0 replica 2 ....
18:16 l0uis ?
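(For a replica 2 volume, bricks have to be added as a full replica pair; a sketch using the brick3/brick4 names above. Whether the explicit "replica 2" is accepted or required when the count isn't changing varies by release, so both forms are shown.)

    # add one complete replica set (two bricks), keeping replica 2
    gluster volume add-brick gv0 brick3:/export/sdc1/brick brick4:/export/sdc1/brick

    # same thing with the count spelled out
    gluster volume add-brick gv0 replica 2 brick3:/export/sdc1/brick brick4:/export/sdc1/brick

    # afterwards, spread existing data onto the new bricks
    gluster volume rebalance gv0 start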
18:16 rickytato Hi, I've a problem with replica 2 volume in RDMA:  E [rdma.c:4485:init] 0-rdma.test-server: Failed to initialize IB Device
18:17 l0uis FilipeCifali: I am very much not an expert, so feel free to ignore me :)
18:17 rickytato It's first time I try to use rdma.. I've already GlusterFS with tcp transport...
18:17 FilipeCifali it shouldn't now, it doesn't require the replica 2 if I'm not changing it
18:17 FilipeCifali but doesn't work for me w/ and w/o, I'm trying to debug, but have a meeting now, brb
18:18 rickytato (and I'm not an expert with infiniband :( )
18:19 l0uis rickytato: can you ibping between the nodes?
18:20 rickytato l0uis, mmm no :( with hostname: ibping: iberror: failed: can't resolve destination port ares2
18:20 rickytato with IP: ibwarn: [12234] mad_rpc_rmpp: _do_madrpc failed; dport (Lid 192)
18:21 phox does ibhosts show you the other nodes at least?
18:21 rickytato but the normal ping works...
18:21 phox ibping I believe wants the adapter UUIDs or whatever
18:21 phox offhand...
18:22 l0uis rickytato: step 1 would be to verify, w/o gluster in the mix, that your IB is connected and working properly
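(A rough checklist for that, using the stock infiniband-diags tools; the LID value 18 is only a placeholder.)

    ibstat                    # port state should be Active, physical state LinkUp
    ibhosts                   # both HCAs should appear on the fabric
    sminfo                    # confirms a subnet manager is actually running

    # point-to-point test without gluster: server mode on one node, ping its LID from the other
    ibping -S                 # on server1
    ibping -L 18              # on server2, using server1's port LID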
18:23 rickytato l0uis, ok, tnx, I've to contact the network administrator.. this is the output of ibhosts:
18:23 rickytato Ca: 0x0002c9030009166c ports 2 "WIN-V8IK3BE8U2F"
18:23 rickytato Ca: 0x0002c903000913c4 ports 2 "WIN-VIJ26ADIQIJ"
18:23 rickytato Ca: 0x0002c90300091530 ports 2 "MT25408 ConnectX Mellanox Technologies"
18:23 rickytato Ca: 0x0002c903000b3c46 ports 2 "MT25408 ConnectX Mellanox Technologies"
18:23 rickytato Ca: 0x0002c90300091354 ports 2 "MT25408 ConnectX Mellanox Technologies"
18:23 rickytato Ca: 0x0002c903000b2f82 ports 2 "MT25408 ConnectX Mellanox Technologies"
18:23 rickytato Ca: 0x0002c90300091434 ports 2 "managed145 HCA-1"
18:23 rickytato Ca: 0x0002c9030009154c ports 2 "managed135 HCA-1"
18:23 rickytato Ca: 0x002590ffff174ef4 ports 1 "WIN-PO00HNM54L0"
18:23 rickytato Ca: 0x0002c90300091468 ports 2 "WIN-E08OL6KDISR"
18:23 rickytato Ca: 0x002590ffff174f3c ports 1 "WIN-1T82J284UG9"
18:23 rickytato Ca: 0x002590ffff174f78 ports 1 "WIN-C8ASDB8ALIE"
18:23 phox agh.
18:23 rickytato Ca: 0x002590ffff174ed8 ports 1 "WIN-0OT2Q4UO77A"
18:23 rickytato Ca: 0x0002c9030057f0f4 ports 1 "MT25408 ConnectX Mellanox Technologies"
18:23 rickytato Ca: 0x0002c9030057f150 ports 1 "MT25408 ConnectX Mellanox Technologies"
18:24 rickytato Ca: 0x0002c9030057f188 ports 1 "MT25408 ConnectX Mellanox Technologies"
18:24 rickytato Ca: 0x0002c90300578eda ports 1 "MT25408 ConnectX Mellanox Technologies"
18:24 rickytato Ca: 0x0002c9030057f100 ports 1 "nuovola1 mlx4_0"
18:24 rickytato Ca: 0x0002c9030057f128 ports 1 "cloudstorage2 HCA-1"
18:24 rickytato Ca: 0x0002c9030057f13c ports 1 "cloudstorage1 HCA-1"
18:24 rickytato Ca: 0x0002c9030009152c ports 2 "MT25408 ConnectX Mellanox Technologies"
18:24 l0uis yeah no please dont do that :)
18:24 PatNarciso ohh 0x0002c90300578eda; thats the worst.
18:25 fkautz i recommend pastebin or one of its many derivatives ;)
18:26 rickytato ops sorry
18:38 FilipeCifali hmmm
18:38 FilipeCifali my peers are connected, but it still fails to add, is there a better way to get a verbose error?
18:43 qu joined #gluster
18:46 rickytato l0uis, if I start on server1 "ibping -S" and from server2 "ibping 0x12" I've response...
18:47 qu joined #gluster
18:47 rickytato Pong from server1.(none) (Lid 18): time 0.323 ms
18:51 FilipeCifali sigh... I can't even make this work from scratch in CentOS 6.4
18:51 FilipeCifali any1 running RHEL/CentOS 6.4?
18:59 Technicool joined #gluster
19:04 davinder joined #gluster
19:06 FilipeCifali gluster> volume create gv0 replica 4 brick1:/export/sdc1/brick brick2:/export/sdc1/brick brick3:/export/sdc1/brick brick4:/export/sdc1/brick volume create: gv0: failed
19:13 sac__ joined #gluster
19:14 sac`away joined #gluster
19:14 cekstam joined #gluster
19:16 dan__ joined #gluster
19:23 BigRed_ left #gluster
19:28 cekstam joined #gluster
19:30 qu joined #gluster
19:31 sac__ joined #gluster
19:31 qu left #gluster
19:32 sac`away joined #gluster
19:38 DV joined #gluster
19:40 khushildep joined #gluster
19:43 sac__ joined #gluster
19:43 sac`away joined #gluster
19:43 mooperd joined #gluster
19:43 StarBeast joined #gluster
19:56 LoudNoises joined #gluster
20:06 sac`away joined #gluster
20:07 piotrektt joined #gluster
20:07 sac__ joined #gluster
20:10 khushildep joined #gluster
20:14 sac`away joined #gluster
20:16 sac__ joined #gluster
20:20 StarBeas_ joined #gluster
20:29 sac__ joined #gluster
20:34 sac__ joined #gluster
20:35 sac`away joined #gluster
20:40 purpleidea johnmark: any idea if the gluster videos are posted yet? i want to see the semiosis video and to know how goofy i looked.
20:41 Technicool joined #gluster
20:42 phox heh
20:42 phox I should check this out if/when it's available :P
20:44 sac`away joined #gluster
20:44 sac__ joined #gluster
20:45 semiosis purpleidea: i heard my main talk wasn't fully recorded, only the second half :/
20:53 sac__ joined #gluster
20:55 daMaestro joined #gluster
21:04 ujjain2 joined #gluster
21:05 phox heh, because people running conventions always fsck up A/V?
21:05 phox :D
21:05 phox and wifi
21:05 phox always wifi too
21:11 semiosis phox: no this was another community member, not staff or organizers
21:11 ctria joined #gluster
21:11 semiosis who was recording
21:12 phox ah
21:12 FilipeCifali Guys, thanks for the help today, I discovered that my xfs wasn't clean on bricks 3 and 4 and that was the reason I wasn't able to add them
21:13 phox would be nice if organizers were, you know, organized :)
21:13 phox ugh XFS
21:13 FilipeCifali now the add-brick is working properly for 2 of them as expected (replica 2 / 4 bricks)
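(For the record, a sketch of preparing a brick filesystem cleanly before add-brick; the device name is hypothetical, and -i size=512 is the inode size commonly recommended so gluster's xattrs fit in the inode.)

    # wipe and re-create the brick filesystem, then mount it under the export path
    mkfs.xfs -f -i size=512 /dev/sdc1
    mount /dev/sdc1 /export/sdc1
    mkdir -p /export/sdc1/brick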
21:13 phox Ex-FS
21:13 FilipeCifali what you use? JFS?
21:13 phox frequently found containing Ex-Files, thus it being an Ex-File System
21:13 phox ZFS
21:14 FilipeCifali better?
21:14 phox this sums up my feelings on ZFS and my experience with other people using it after I recommend it:  http://static.fjcdn.com/pictures/Cocaine_0b07c1_1447465.jpg
21:14 glusterbot <http://goo.gl/TRv0k9> (at static.fjcdn.com)
21:14 FilipeCifali I'm still in the mood for more ReiserFS :(
21:15 phox what, wanting some more cargo room in your CRX?
21:16 FilipeCifali haha
21:16 phox you can't bring up ReiserFS without me making jokes about it
21:16 phox them's the rules.
21:16 FilipeCifali but, but
21:17 FilipeCifali why do you think ReiserFS is good w/ small files?
21:17 phox "I don't know, why?"
21:17 FilipeCifali Pieces, all about the pieces :D
21:18 phox speaking of pieces
21:18 FilipeCifali but, 3.6 is still nice, I don't really like btrfs and ext3/4...
21:18 phox what's funny is what the first autocomplete result you get is when you type "richard stallman" into youtube's search box
21:18 phox well, first one that's not just "richard stallman"
21:18 phox btrfs is considering becoming good and happy and stuff
21:18 phox XFS seriously annoys the hell out of me
21:19 phox it's good at losing crap and not telling me about it
21:19 FilipeCifali why? bad stuff happening?
21:19 phox ZFS OTOH has checksums
21:19 FilipeCifali HMMMM
21:19 phox yes, data corruption
21:19 phox a popular topic for XFS
21:19 FilipeCifali and if it's not sensitive data, like httpd sessions, it's ok to use right?
21:19 phox what's OK to use?
21:20 phox ZFS?
21:20 FilipeCifali I don't really care about losing 1 session file, but some can be troublesome
21:20 phox you can run ZFS on top of dm-crypt if you want.
21:20 phox yeah we have online archival data among other things
21:20 phox so it's kind of important that our data not randomly change or disappear or whatever.
21:20 phox haha
21:20 FilipeCifali Oh I got it, your data is sensitive, mine is not
21:20 phox my friend responded about the pieces
21:20 phox "he did pack the CRX with tail."
21:20 FilipeCifali ROFL
21:20 phox it's important, yes
21:21 FilipeCifali I want to use glusterfs to store sessions for webmails, so I can have a webmail grid w/ HA and the user doesn't care about which one he's accessing
21:21 phox does whatever webmail system you're using have lots of little files?
21:22 phox if so I should point out that gluster is semi-bad at that
21:22 FilipeCifali yeah, that's something I was reading about
21:22 phox although not THAT bad if the system already knows what the files are called; if it needs to readdir it's going to suck a fair bit
21:22 phox but yeah, FUSE is not fantastic for responsiveness
21:23 FilipeCifali even if I throw SSDs for it? :(
21:23 FilipeCifali this may not be the best scenario then...
21:24 phox it'd help probably
21:25 phox you're just storing sessions though?
21:25 phox cache, let that just disappear as necessary
21:25 phox the point of cache is locality anyways
21:25 phox as far as sessions, assuming they're just session data consider a database
21:26 FilipeCifali yeah, sessions only
21:27 compbio joined #gluster
21:27 FilipeCifali but the scenario is to make a HA cluster so, if one webmail server goes down, the user can access another and "doesn't" lose the session (doesn't have to log in again)
21:29 a2 phox, fuse with readdirplus is not too bad actually
21:30 FilipeCifali god I hate cp -i...
21:30 FilipeCifali done now, gonna try this anyway
21:30 FilipeCifali I already went this far, but I'll check ZFS out too
21:31 compbio when I was playing with ZFS a few months ago, writes were happening at about 3MB/s through gluster, but 800MB/s+ through the local mount
21:31 compbio would the xattr=sa alleviate that?
21:32 phox compbio: it helps quite a bit
21:32 phox although that still seems like too much of a discrepancy
21:32 phox we have been getting 200MB/sec just fine without xattr=sa
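(Enabling it is one property on the dataset backing the brick; the pool/dataset name here is made up. It only applies to newly written xattrs, so it is best set before loading data.)

    # store xattrs as system attributes in the dnode instead of hidden xattr directories
    zfs set xattr=sa tank/gluster-brick
    zfs get xattr tank/gluster-brick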
21:32 phox is this with lots of little files, or...?
21:32 compbio just a big dd write
21:32 phox also SLOG might help for writes.
21:32 phox weird, then.
21:32 compbio it was on top of dm-crypt though
21:33 compbio come to think of it, I had a fairly ancient kernel too, whatever comes with CentOS
21:33 phox heh, yeah, dm-crypt is going to be slowish
21:33 compbio so that may have been an issue too
21:33 phox not sure, I doubt that's that big of an issue
21:33 phox what dd block size, or did you set one?
21:33 compbio 1M block size
21:33 phox hm, yeah, should have been ok
21:33 phox 3MB/sec at 1MB block size is insanely bad
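(A comparison along the lines compbio describes, with placeholder paths; conv=fdatasync makes dd flush before reporting, so the rates aren't inflated by the page cache.)

    # raw brick vs. gluster mount, 1 GiB in 1 MiB blocks
    dd if=/dev/zero of=/tank/brick/ddtest bs=1M count=1024 conv=fdatasync
    dd if=/dev/zero of=/mnt/gv0/ddtest    bs=1M count=1024 conv=fdatasync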
21:33 compbio (I didn't know about the #gluster channel back then, otherwise would have come here for help!)
21:33 phox heh
21:34 compbio yeah, it was very strange, it may also be some interaction with our raid card driver
21:34 compbio it would be better if we had an HBA, but that has to wait for the next build
21:35 compbio though that somebody has gotten ZFS+Gluster going makes me want to go back and try again on some spare hardware
21:35 compbio because I really think that's the future
21:35 phox you can't tell your "raid card" to be an HBA?
21:35 phox most let you just bypass their BS
21:35 phox Gluster needs to get out of userspace... that'd be useful
21:35 compbio i just created one Raid0 per disk, it's an LSI MegaRAID
21:35 phox heh
21:36 * JoeJulian beats phox with a dead horse.
21:36 phox heh
21:36 phox JoeJulian: at least the client
21:36 phox FUSE = necessarily slow
21:36 JoeJulian Then bypass fuse and write directly to the api.
21:36 phox once it's in RDMA-land the server can be wherever it needs to be, although page faults kinda fail too
21:36 phox JoeJulian: then rewrite all of my apps for me :)
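(One app that already talks libgfapi instead of going through FUSE is QEMU, assuming a build with gluster support (1.3 or newer); the server, volume, and image names here are invented.)

    # create and boot a VM image over libgfapi, no fuse mount involved
    qemu-img create -f qcow2 gluster://server1/gv0/vm1.qcow2 20G
    qemu-system-x86_64 -drive file=gluster://server1/gv0/vm1.qcow2,if=virtio -m 2048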
21:36 compbio oh, that's on infiniband too?
21:37 phox compbio: IPoIB, not native RDMA
21:37 JoeJulian You want some magic kernel module that handles plug-in translators... :P
21:37 phox so just fast IP
21:37 phox JoeJulian: really what I want is NFS/RDMA that isn't a broken pile :)
21:37 phox but that doesn't exist ;)
21:37 JoeJulian Isn't that in kernel already?
21:37 phox NFS/Linux is a pile =\
21:38 phox JoeJulian: not with the "that isn't a broken pile" requirement implemented
21:38 phox JoeJulian: NFS on Linux even shits the bed severely if you give it significantly more than 1Gbit of network bandwidth
21:38 purpleidea semiosis: oh, that sucks. i guess that means they have to send you somewhere else to give another talk
21:38 compbio NFS's brokenness has made me appreciate FUSE filesystems
21:38 purpleidea compbio: did you say megaraid? https://github.com/purpleidea/puppet-lsi
21:38 glusterbot Title: purpleidea/puppet-lsi · GitHub (at github.com)
21:39 phox seems to be something to do with it running out of send/receive buffer and then it just locks up and requires a restart of the machine.  fail.
21:39 a2 phox, "FUSE = necessarily slow" <-- not quite as blanket.. for some use cases it is
21:39 kPb_in_ joined #gluster
21:39 phox a2: context switches are expensive
21:39 phox sucks when you have to CS 4 times for a page fault
21:39 compbio purpleidea: you just made our devops guy very happy
21:39 a2 some of the recent enhancements happening in upstream FUSE makes it no less performant than NFS
21:40 phox or 6 in the case of non-RDMA
21:40 phox a2: hm
21:40 a2 context switches can hurt, sure.. but with the right optimizations the network should be the bigger overhead
21:40 purpleidea compbio: it would do a lot more, but i don't have any lsi hardware to test on anymore. donations welcome
21:41 a2 you want to exercise the in kernel caching intelligently and avoid all the context switches
21:41 compbio i too accept all storage hardware donations :) we don't have any spares ATM
21:41 phox a2: with RDMA it's not
21:41 phox :P
21:42 phox purpleidea: what LSI hardware are you after and what benefits will people see from your increased availability of said hardware?
21:42 phox we have a couple of 1068e cards floating around that need to be excessed.
21:42 a2 phox, i have some patches in my personal tree to allow 0-copy RDMA from page cache directly to remote server for a fuse filesystem
21:42 phox a2: heh :)
21:42 a2 needs a little more work before pushing it out upstream for reviews
21:42 purpleidea phox: depends on if there is interest really... take a look at the puppet module, and see if there are features you're missing first
21:42 phox ah
21:43 * phox would like it if LSI firmware did what it claims to do
21:43 phox like activating drive locate lights reliably :l
21:43 purpleidea phox: the truth is i'm not desperate for LSI hw, but am happy to hack on things for fun and profit
21:43 purpleidea phox: oh you had issues with that?
21:43 a2 you can do intelligent mmap()'ing with a custom page fault handler which treats the filesystem process differently and gives it unconditional access without recursive faults
21:43 phox yeah, doesn't work on our cards that go directly to backplanes
21:43 phox works fine on expander chassis
21:44 phox people seem to suggest it's dependent on firmware version etc but I haven't been able to substantiate that
21:44 phox I guess I could test again now that I have pretty bleeding-edge firmware on some of the cards (soon to be more/all of them)
21:44 a2 purpleidea, i plan to respond to your questions on the replace-brick phase-out.. though i have a question: how important a use case is it, really, to support a server count that isn't a multiple of the replica count? i haven't seen a single customer do that in a real world deployment
21:44 phox buncha 9207 and 9300 variants
21:45 purpleidea phox: yeah test it!
21:46 purpleidea a2: i don't know if *i* personally care about that, but i do think it's an important feature to have (sadly)
21:46 purpleidea a2: i know of people doing it...
21:47 purpleidea (does a2 == anand ? )
21:54 _ilbot joined #gluster
21:54 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
22:10 _ilbot joined #gluster
22:10 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
22:11 purpleidea hehe yeah. when i was younger i used to rack 5U servers by myself :) properly too. i mean it was at the top of the rack, but i could heavy lift them :P
22:11 phox heh.
22:11 phox these don't work so well that way when they're loaded
22:11 phox 45 disks in 4U
22:11 purpleidea phox: yeah, take the disks out first
22:11 phox so easily upwards of 150lb of box
22:11 phox that makes the chassis all flexible and stuff :P
22:12 phox there's a lift in the datacentre though
22:12 purpleidea phox: what kind of flexible chassis is this?
22:12 phox semiosis: so I guess no 3.4.1 before Monday? :P
22:12 phox purpleidea: supermicro
22:12 phox they've had to minimize cross-bracing because the whole front and back are disk bays
22:12 phox so they flex enough to mess up lining up rails etc
22:13 purpleidea phox: they always worked for me :) they have some nice gear that you can put 2x2.5" drives in the back (for os)
22:13 phox yeah
22:13 phox just bought 2 of those
22:13 phox not sure exactly when I'll have them in-hand
22:13 phox I've decided the bay module you have to buy is a "suppository adapter"
22:13 purpleidea phox:  http://www.supermicro.com/products/system/4u/6047/ssg-6047r-e1r24n.cfm ?
22:13 glusterbot <http://goo.gl/D5tWt> (at www.supermicro.com)
22:13 phox the ones we got w/ 2x2.5 in back are 826 variants
22:14 phox the 45-disk ones are 847JBODs
22:14 phox all 2x1280W 80plus platinum, etc.
22:14 purpleidea ah. i like the supermicro a lot, but their IPMI is so so so so so so so SHIT.
22:14 phox heh
22:14 phox sometimes it works better than other times
22:14 phox but yeah I'm generally running Tyan boards because of that
22:14 purpleidea it's buggy as hell.
22:14 phox not that theirs is spectacular either
22:15 phox yeah I have one box that keeps deciding to not let me log in until I reboot the stupid BMC
22:15 purpleidea phox: yep!
22:15 phox also bought a bunch of 24-pin connectors so I can make permanent supermicro-chassis-to-tyan-board breakouts :P
22:15 phox doing otherwise = suck
22:15 purpleidea what for ?
22:16 phox for putting Tyan boards in SM tin cans?
22:16 purpleidea oh okay
22:16 purpleidea you don't just buy it stock already done
22:16 phox SM makes a breakout thing with separate 2-pin connectors on it
22:16 phox and I'm lazy
22:16 phox does anyone do -that-?
22:16 purpleidea yeah
22:16 phox places I've had build servers tend to bugger shit up
22:16 phox it gets kinda old
22:16 purpleidea pick better places
22:16 phox and it doesn't take that much time that it can justify the cost
22:17 purpleidea depends how many servers you've got ;)
22:17 purpleidea we are in #gluster you know...
22:17 phox even then, I'd hire someone and train them
22:17 phox I don't trust dumb shops to not lose someone and hire some dumb kid to replace them because they need to for continuity
22:18 purpleidea phox: i found it was more useful for the contacts. i'm not best friends with whoever the relevant supermicro engineers who have new firmware were, for example
22:18 JoeJulian I have tyan boards in SM 2u.
22:18 phox JoeJulian: heh
22:18 phox yeah I'm moving towards 2U + expanders
22:18 phox makes my life easier
22:18 StarBeast joined #gluster
22:19 phox basically the entire world is dependent on LSI
22:19 phox HBAs, expanders, PHYs on drives :P
22:19 purpleidea phox: which is why you need puppet-lsi although i kind of wish software raid could do SGPIO somehow and blink lights
22:20 phox purpleidea: in theory sas2ircu should be able to do it
22:20 phox you have to dig around in sysfs to find the card/channel to the drive, and/or sas2ircu can let you see the disks by serial #
22:20 phox but yeah, didn't work for me even though it reported it was enabling them
22:20 phox but it DOES work with expanders
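(For reference, the sas2ircu incantation in question; controller 0 and enclosure:slot 2:5 are placeholders.)

    sas2ircu LIST                # find the controller index
    sas2ircu 0 DISPLAY           # maps enclosure:slot to drive serial numbers
    sas2ircu 0 locate 2:5 ON     # blink the locate LED on enclosure 2, slot 5
    sas2ircu 0 locate 2:5 OFF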
22:21 purpleidea phox: tell me more! make it happen! integrate it with mdadm so that it's magic!
22:21 phox MDADM can go jump off a bridge
22:22 phox I'm pretty sure at some point i read about being able to change its default behaviour as far as punting drives for dumb reasons and keeping them around if they only have a few read errors etc
22:22 phox but I can't find that again
22:22 phox mdraid has pretty bad error handling
22:22 phox zfs is nice and accountable that way and generally manages to punt the drives I would want it to punt, and not others
22:22 fidevo joined #gluster
22:22 phox having your array go critical because of a Seagate firmware bug = fail
22:24 phox heh, we just bought a whole pile of disks and one manufacturer bitched about the RFQ we put out because we didn't ask for their drives, and it's like... you don't make 4T drives in this price ballpark... anyways I gave them a totally legit coverage of what we'd have to test to justify switching... and then they didn't bother bidding
22:25 purpleidea phox: ZFS can go jump off a bridge
22:25 phox oh?
22:25 purpleidea (until it cleans up its act and learns to have proper licensing)
22:25 phox it's made my arrays go critical for no good reason far less
22:26 phox whatever, I'll work around that to not have to deal with everything else being broken and sketchy
22:26 purpleidea phox: i won't have a technical discussion about it until i can hack on it without legal issues :( sadly
22:26 phox heh
22:27 phox fair enough.  but operationally it solves an ENORMOUS pile of issues for us.
22:27 phox snapshots.  snapshot diffs.  checksums.  actual journalling.  reservations.
22:28 phox those alone make it functionally so far ahead of every other possible option (other than something like say Hadoop that we'd have to adapt everything else to) that it's not even a question around here
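(Those features map onto everyday commands; pool and dataset names here are hypothetical.)

    zfs snapshot tank/data@before-upgrade          # instant snapshot
    zfs diff tank/data@before-upgrade tank/data    # what changed since the snapshot
    zpool scrub tank                               # re-verify every block against its checksum
    zfs set reservation=500G tank/data             # guarantee space for a dataset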
22:29 phox I'm surprised RH or someone doesn't just invest in a cleanroom implementation
22:29 phox I'd happily chip in to their costs
22:29 purpleidea phox: it's called the purpleidea fund. i erm will uhh make it happen :P
22:31 phox heh.
22:36 phox that supposed to be Bobby/Quimby? :P
22:41 purpleidea got to go! later
22:41 purpleidea a2: looking forward to discussing the algorithm with you. any help i can provide, let me know. bbl
22:51 * phox resumed secure erasing disks
22:51 phox s/d/s/
22:51 glusterbot What phox meant to say was: * phox resumes secure erasing disks
22:53 phox s/.*/system("uname -a")/e
22:53 phox no?
22:53 phox :(
22:53 B21956 joined #gluster
22:53 phox YOUR BOT IS NO FUN
22:57 B21956 left #gluster
23:00 khushildep joined #gluster
23:04 phox 12345678910
23:04 phox s/1/01/g
23:04 glusterbot phox: Error: u's/1/01/g 12345678910' is not a valid regular expression.
23:04 phox pff, no global?  lame
23:05 phox I suppose it's trying to compile the string as a regexp to avoid injections as above :l
23:14 zerick joined #gluster
23:29 StarBeast joined #gluster
23:29 jiqiren I'm going from 2 bricks per server on 2 servers to 1 brick per server on 4 servers. ok to use replace-brick to move a brick?
23:32 jporterfield joined #gluster
23:57 a2 jiqiren, suggest using add-brick + remove-brick
23:57 a2 the plan is to phase out replace-brick for moving data
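(A sketch of that migration for jiqiren's layout, assuming a replica 2 distribute-replicate volume named myvol and invented server/brick paths: add a brick pair on the new servers, then drain and drop the second pair from the old ones.)

    # add one replica pair on the two new servers
    gluster volume add-brick myvol server3:/export/brick1 server4:/export/brick1

    # drain the second brick off each original server, watch progress, then commit
    gluster volume remove-brick myvol server1:/export/brick2 server2:/export/brick2 start
    gluster volume remove-brick myvol server1:/export/brick2 server2:/export/brick2 status
    gluster volume remove-brick myvol server1:/export/brick2 server2:/export/brick2 commit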
