
IRC log for #gluster, 2014-01-20


All times shown according to UTC.

Time Nick Message
00:03 psyl0n_ joined #gluster
00:11 gdubreui joined #gluster
00:12 gdubreui joined #gluster
00:23 Technicool joined #gluster
00:24 TrDS left #gluster
00:35 robo joined #gluster
01:08 jporterfield joined #gluster
01:14 jporterfield joined #gluster
01:26 dbruhn joined #gluster
01:31 mattappe_ joined #gluster
01:32 jporterfield joined #gluster
01:34 mattapperson joined #gluster
01:35 mattapperson joined #gluster
01:38 harish joined #gluster
01:44 robo joined #gluster
01:52 jporterfield joined #gluster
01:52 psyl0n joined #gluster
02:20 jporterfield joined #gluster
02:26 overclk joined #gluster
02:32 jporterfield joined #gluster
02:33 raghug joined #gluster
02:38 mattappe_ joined #gluster
02:49 harish joined #gluster
02:52 SFLimey joined #gluster
02:59 bharata-rao joined #gluster
03:11 jporterfield joined #gluster
03:19 smellis I am trying to figure out why self heal is slow, but normal read write is super fast, can anyone point me in the right direction?
03:28 jporterfield joined #gluster
03:40 jporterfield joined #gluster
03:41 itisravi joined #gluster
03:42 ppai joined #gluster
04:06 shubhendu joined #gluster
04:16 jporterfield joined #gluster
04:21 shylesh joined #gluster
04:27 saurabh joined #gluster
04:29 RameshN joined #gluster
04:40 meghanam joined #gluster
04:47 vpshastry joined #gluster
04:50 ndarshan joined #gluster
04:54 MiteshShah joined #gluster
05:02 _pol joined #gluster
05:04 rjoseph joined #gluster
05:08 _pol joined #gluster
05:13 nshaikh joined #gluster
05:24 hagarth joined #gluster
05:24 raghu joined #gluster
05:26 psharma joined #gluster
05:30 wgao joined #gluster
05:32 ndarshan joined #gluster
05:33 kanagaraj joined #gluster
05:34 prasanth joined #gluster
05:41 CheRi joined #gluster
05:46 davinder joined #gluster
05:47 msvbhat joined #gluster
05:48 aravindavk joined #gluster
05:50 XpineX joined #gluster
05:53 hagarth joined #gluster
05:54 CheRi joined #gluster
06:00 satheesh1 joined #gluster
06:20 hagarth joined #gluster
06:22 jporterfield joined #gluster
06:22 ngoswami joined #gluster
06:25 satheesh4 joined #gluster
06:27 eclectic joined #gluster
06:30 kshlm joined #gluster
06:31 anands joined #gluster
06:33 davinder2 joined #gluster
06:34 vpshastry left #gluster
06:42 rastar joined #gluster
06:45 lalatenduM joined #gluster
06:50 hagarth joined #gluster
06:53 jporterfield joined #gluster
07:02 vpshastry joined #gluster
07:06 shubhendu_ joined #gluster
07:14 andy____ joined #gluster
07:14 andy____ hi everyone
07:16 andy____ does glusterfs work with aufs?
07:19 andy____ does anyone actually have a working glusterfs + aufs?
07:21 jtux joined #gluster
07:22 hagarth joined #gluster
07:26 codex joined #gluster
07:31 dneary joined #gluster
07:42 ekuric joined #gluster
07:48 satheesh1 joined #gluster
08:05 jporterfield joined #gluster
08:06 eseyman joined #gluster
08:07 franc joined #gluster
08:09 franc joined #gluster
08:09 franc joined #gluster
08:11 keytab joined #gluster
08:16 keytab joined #gluster
08:27 bala joined #gluster
08:31 TrDS joined #gluster
08:39 andreask joined #gluster
08:54 aravindavk joined #gluster
08:57 blook joined #gluster
09:11 dusmantkp_ joined #gluster
09:12 mgebbe_ joined #gluster
09:18 shubhendu_ joined #gluster
09:18 cyberbootje joined #gluster
09:22 ProT-0-TypE joined #gluster
09:27 dusmant joined #gluster
09:35 itonexag joined #gluster
09:43 harish joined #gluster
09:47 itonexag Hello everyone, I have a problem with write performance for small files (~20KB). We have 2 GlusterFS servers in replicated mode. For example, we have one folder on a Windows server with 86,593 items and a size of 575MB. When I copy it to the Samba share on the GlusterFS server, which is backed by the GlusterFS mount, the estimated time is over 20 hours.
09:47 itonexag When I copy it to a Netgear NAS it takes only 30 minutes. The copy speed from Windows to GlusterFS is 20-90KB/second and from Windows to the Netgear NAS it's 200-500KB/second. I found some performance options for GlusterFS, but they don't seem to help. Maybe I didn't set the options right...
09:48 shylesh joined #gluster
09:49 hagarth joined #gluster
09:52 samppah itonexag: what glusterfs version are you using?
09:53 itonexag 3.4.1
09:54 samppah using glusterfs over fuse or libgfapi?
09:54 itonexag fuse
09:57 hybrid512 joined #gluster
09:59 dusmant joined #gluster
10:02 gdubreui joined #gluster
10:07 tryggvil joined #gluster
10:07 F^nor joined #gluster
10:10 spandit joined #gluster
10:11 shubhendu_ joined #gluster
10:25 jporterfield joined #gluster
10:32 tryggvil joined #gluster
10:36 khushildep joined #gluster
10:37 bala joined #gluster
10:48 shubhendu_ joined #gluster
10:51 RameshN joined #gluster
10:51 codex joined #gluster
10:52 satheesh1 joined #gluster
10:53 ndarshan joined #gluster
10:59 nocturn joined #gluster
11:01 RameshN joined #gluster
11:04 hagarth joined #gluster
11:04 ababu joined #gluster
11:08 codex joined #gluster
11:23 kanagaraj joined #gluster
11:28 diegows joined #gluster
11:33 aravindavk joined #gluster
11:34 itonexag Hello everyone, I have a problem with write performance for small files (~20KB). We have 2 GlusterFS servers in replicated mode. For example, we have one folder on a Windows server with 86,593 items and a size of 575MB. When I copy it to the Samba share on the GlusterFS server, which is backed by the GlusterFS mount, the estimated time is over 20 hours.
11:34 itonexag When I copy it to a Netgear NAS it takes only 30 minutes. The copy speed from Windows to GlusterFS is 20-90KB/second and from Windows to the Netgear NAS it's 200-500KB/second. I found some performance options for GlusterFS, but they don't seem to help. Maybe I didn't set the options right... GlusterFS version: 3.4.1 with fuse
11:37 ndk joined #gluster
11:40 ells joined #gluster
11:51 abyss^ itonexag: http://gluster.org/community/documentation/index.php/GlusterFS_Technical_FAQ#How_can_I_improve_the_performance_of_reading_many_small_files.3F
11:51 glusterbot Title: GlusterFS Technical FAQ - GlusterDocumentation (at gluster.org)
11:53 itonexag is it the same for writing or just reading?
11:54 kshlm joined #gluster
11:54 ndarshan joined #gluster
11:55 ababu joined #gluster
11:56 sahina joined #gluster
11:56 drscream joined #gluster
11:59 drscream hello, i've a problem with geo-replication (it does not synchronize the data completely but the status displays OK), described here http://linux.web.cern.ch/linux/ssa/3.2/User_Guide/#id1151074 - but i'm not sure how to reset the index?!
11:59 glusterbot Title: User Guide (at linux.web.cern.ch)
12:00 purpleidea itonexag: different... do you understand why reading many small files can be slow with php for example?
12:00 drscream should i just stop the geo-replication, then `gluster volume set data geo-replication.indexing off`, and start the geo-repl again?
12:01 hagarth joined #gluster
12:01 shubhendu_ joined #gluster
12:01 s2r2_ joined #gluster
12:01 psyl0n joined #gluster
12:02 abyss^ itonexag: "Note that for a write-heavy load the native client will perform better.". You should wait for a more experienced gluster user here to get a comprehensive answer ;)... But in my experience gluster has worse outcomes than nfs for small files (write and read).
12:02 itonexag purpleidea: No, but my problem is writing. The webserver goes over the nfs client, so that should not be a problem
12:02 psyl0n joined #gluster
12:02 kanagaraj joined #gluster
12:02 purpleidea itonexag: you didn't answer my question... i'm trying to explain the reasoning by helping you understand...
12:03 itisravi joined #gluster
12:03 itonexag purpleidea: no I don't know why..
12:03 purpleidea itonexag: okay, have you heard of the elastic hash?
12:04 itonexag purpleidea: No I haven't
12:05 purpleidea itonexag: okay... so the cool thing about glusterfs is that it doesn't have or need a central metadata server... the trick is that there is some magic hashing algorithm that takes a filename, and turns that into a brick location. i won't explain how it works, but that's the basic idea. so if you have filename /foo/bar that might resolve to host1:brick2 get it so far?
12:06 satheesh2 joined #gluster
12:07 itonexag purpleidea: Okay I get the basic idea..
12:07 purpleidea so let's say you have some beast like php that wants to read a file... well, gluster will calculate the hash and look for it on that brick. if it's not there, it might be because a rebalance is happening or something, so then it has to send out a request to _all_ the servers to look for the file... this extra lookup makes the process take longer. multiply this by x1000 because php looks for the existence of multiple files, and you might have something slow
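
To make the elastic hash lookup described above concrete, here is a toy sketch in Python. It is not GlusterFS's real algorithm (DHT uses its own hash over the filename plus per-directory layout ranges), and the brick names are made up; it only illustrates the two points purpleidea makes: the brick is computed from the name alone, and a miss falls back to asking every brick.

    # Toy illustration only -- not GlusterFS's actual hash or layout logic.
    import hashlib

    BRICKS = ["host1:brick1", "host1:brick2", "host2:brick1", "host2:brick2"]

    def brick_for(path):
        """Pick a brick purely from the file name -- no metadata server."""
        digest = hashlib.md5(path.encode()).hexdigest()
        return BRICKS[int(digest, 16) % len(BRICKS)]

    def lookup(path, files_on_brick):
        """Try the hashed brick first; on a miss (e.g. during a rebalance)
        fall back to asking every brick -- the expensive broad lookup."""
        primary = brick_for(path)
        if path in files_on_brick.get(primary, set()):
            return primary, 1                        # one round trip
        for brick, files in files_on_brick.items():  # broadcast lookup
            if path in files:
                return brick, 1 + len(BRICKS)
        return None, 1 + len(BRICKS)

    print(brick_for("/foo/bar"))   # e.g. host2:brick1
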
12:07 RameshN joined #gluster
12:08 purpleidea the thing that makes glusterfs _fast_ and scale linearly, is the combination of many many (even thousands+) of clients all accessing different things at the same time. the aggregate io is huge.
12:08 purpleidea the reason using the nfs mount might make the small files fast is because the nfs server does some in kernel caching... all good so far?
12:09 itonexag purpleidea: yes :)
12:10 purpleidea so if your application is making lots of small writes, one after another, this might be "normal speed" (i won't say slow) because you aren't benefitting from the parallel nature of glusterfs, but if you're making many writes from multiple clients (and maybe multiple threads) you'll get a fast parallel workload.
12:11 purpleidea if your application depends on small non-parallel writes, there are a few things you can do to speed things up: (am i right about this being a question you're interested in?)
12:11 itonexag purpleidea: exactly! :D
12:11 purpleidea itonexag: okay:
12:11 abyss^ I am too:D
12:12 purpleidea 1) fix your application. (sorry but it's true) you want to write things that are parallel if you want to benefit from distributed storage. you can't really get around this.
12:12 dusmant joined #gluster
12:13 purpleidea 2) make your hardware way faster so that individual requests are faster. this might involve a faster network (10gE) faster io (better raid levels like raid10 and/or ssd's) or whatever your bottleneck is. profile it to know.
12:14 purpleidea note that in the future glusterfs will probably support some sort of tiering so that you can have ssd's+spindles,etc all in the same pool and will balance things appropriately. for now you can probably build this yourself by caching with bcache or dm-cache (i think that's what they're called)
12:15 purpleidea 3) have your app use libgfapi to talk directly to glusterfs instead of going through a fuse mount. the fuse overhead probably isn't significant, but it's non-zero. use this as a last resort, or if you're serious about squeezing performance.
12:15 purpleidea abyss^: itonexag: and most importantly... TEST your app, so you know what you need to plan for.
12:16 purpleidea hope this answers your questions
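
A minimal sketch of the serial-versus-parallel point above, assuming a FUSE mount at /mnt/gluster (hypothetical) and ~20KB files like the ones discussed earlier: the same files are written one at a time and then from a thread pool, so the two timings can be compared on a real volume.

    # Sketch: compare serial vs parallel small-file writes on a mounted volume.
    # MOUNT, COUNT and the payload size are assumptions for illustration.
    import os, time, concurrent.futures

    MOUNT = "/mnt/gluster"            # hypothetical GlusterFS FUSE mount
    COUNT = 1000
    PAYLOAD = os.urandom(20 * 1024)   # ~20KB per file

    def write_one(i):
        with open(os.path.join(MOUNT, "small-%05d.bin" % i), "wb") as f:
            f.write(PAYLOAD)

    def timed(label, fn):
        start = time.time()
        fn()
        print("%s: %.1fs" % (label, time.time() - start))

    timed("serial", lambda: [write_one(i) for i in range(COUNT)])

    def parallel():
        with concurrent.futures.ThreadPoolExecutor(max_workers=16) as pool:
            list(pool.map(write_one, range(COUNT)))

    timed("parallel, 16 threads", parallel)
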
12:16 itonexag purpleidea: Thanks a lot! I will change to libgfapi first and check my application.
12:17 purpleidea itonexag: probably not the first thing i'd do :P
12:17 abyss^ purpleidea: Can I read about more about "that things" somewhere? Do you have a blog?;p
12:17 purpleidea abyss^: i write The Technical Blog of James: https://ttboj.wordpress.com/ i'm @purpleidea
12:17 purpleidea on tweeter
12:17 abyss^ itonexag: but you wrote you just copy files, so you use cp?
12:17 purpleidea abyss^: what
12:17 purpleidea what's "that things" ?
12:18 itonexag purpleidea: the hardware is good enough.
12:19 purpleidea itonexag: think about parallelizing your workload before you look at libgfapi. even with it, you still need a parallel workload.
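
For reference, a rough sketch of what going through libgfapi (suggestion 3 above) looks like, calling the public C API from Python via ctypes. The volume name "content", the server name "server1" and the file path are assumptions; the libgfapi-python bindings would normally be the more convenient route.

    # Sketch: write a file through libgfapi instead of a FUSE mount.
    # Volume/host/path below are hypothetical placeholders.
    import ctypes, os

    api = ctypes.CDLL("libgfapi.so.0", use_errno=True)
    api.glfs_new.restype = ctypes.c_void_p
    api.glfs_new.argtypes = [ctypes.c_char_p]
    api.glfs_set_volfile_server.argtypes = [ctypes.c_void_p, ctypes.c_char_p,
                                            ctypes.c_char_p, ctypes.c_int]
    api.glfs_init.argtypes = [ctypes.c_void_p]
    api.glfs_creat.restype = ctypes.c_void_p
    api.glfs_creat.argtypes = [ctypes.c_void_p, ctypes.c_char_p,
                               ctypes.c_int, ctypes.c_uint]
    api.glfs_write.argtypes = [ctypes.c_void_p, ctypes.c_char_p,
                               ctypes.c_size_t, ctypes.c_int]
    api.glfs_close.argtypes = [ctypes.c_void_p]
    api.glfs_fini.argtypes = [ctypes.c_void_p]

    fs = api.glfs_new(b"content")                       # volume name (assumed)
    api.glfs_set_volfile_server(fs, b"tcp", b"server1", 24007)
    if api.glfs_init(fs) != 0:
        raise OSError(ctypes.get_errno(), "glfs_init failed")

    fd = api.glfs_creat(fs, b"/hello.txt", os.O_WRONLY | os.O_CREAT, 0o644)
    data = b"written without a FUSE mount\n"
    api.glfs_write(fd, data, len(data), 0)
    api.glfs_close(fd)
    api.glfs_fini(fs)
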
12:19 abyss^ purpleidea: about gluster, how it works exactly and in depth, because on gluster.org there is only a manual that doesn't go deep into gluster:)
12:19 CheRi joined #gluster
12:19 itonexag purpleidea: okay I will do so
12:19 purpleidea itonexag: good luck!
12:20 purpleidea abyss^: well the best way is to test and test and keep trying different things and breaking them. for this i use ,,(vagrant) and ,,(puppet)
12:20 glusterbot abyss^: (#1) Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/, or (#3) Part 3 @ https://ttboj.wordpress.com/2014/01/02/vagrant-clustered-ssh-and-screen/, or (#4) Part 4 @
12:20 glusterbot https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/, or (#5) https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/
12:20 purpleidea ~puppet | abyss^
12:20 glusterbot abyss^: (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
12:21 purpleidea i'm the author of that stuff, so try it out and let me know if you have any issues. it's great (imo) for learning about glusterfs and prototyping a cluster
12:21 abyss^ purpleidea: btw: I have a problem with gluster too, could you look at this: http://www.gluster.org/pipermail/gluster-users/2013-December/038264.html
12:21 glusterbot Title: [Gluster-users] Error after crash of Virtual Machine during migration (at www.gluster.org)
12:21 edward1 joined #gluster
12:22 ppai joined #gluster
12:23 purpleidea abyss^: without look at that post in too much depth, if you hit the 3.3.0 bug, then upgrade :P other than that, post your question in channel or on the ml and maybe someone can answer. if it's from december, maybe time to send a new message.
12:23 purpleidea s/look/looking/
12:23 glusterbot What purpleidea meant to say was: abyss^: without looking at that post in too much depth, if you hit the 3.3.0 bug, then upgrade :P other than that, post your question in channel or on the ml and maybe someone can answer. if it's from december, maybe time to send a new message.
12:23 abyss^ purpleidea: What I've done so far: I just moved gluster manually this way: http://gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server and modified the gluster config files, and now heal and the options that didn't work (as I described on the mailing list) do work, but I still have an error with migration
12:23 glusterbot Title: Gluster 3.4: Brick Restoration - Replace Crashed Server - GlusterDocumentation (at gluster.org)
12:23 abyss^ purpleidea: I have 3.3.1 version:)
12:24 purpleidea abyss^: sounds like you should write a new (clear) message to the ml with your migration errors...
12:24 abyss^ purpleidea: I asked here but nobody could help me:( So I tried now :)))
12:24 abyss^ (again)
12:25 purpleidea your ml message didn't seem 100% clear, try sending a new one, and keeping it simple. eg: i have this setup, i did x, and i see y. here are the logs...
12:25 abyss^ purpleidea: I saw the same problem a half year earlier on maillist and nobody helped, so I lost hope:)
12:26 purpleidea your ml message didn't seem 100% clear, try sending a new one, and keeping it simple. eg: i have this setup, i did x, and i see y. here are the logs...
12:26 klaas_ joined #gluster
12:26 abyss^ purpleidea: yeah, maybe because my english is not so clear;)
12:26 purpleidea abyss^: support is volunteer based. i'm a volunteer, most people here are. if you need professional support, you should contact redhat, they sell gluster as "red hat storage" and you'll get great support.
12:27 purpleidea abyss^: your english is not bad, but improving is never bad either.
12:27 abyss^ purpleidea: yeah, but they asked: do you use red hat? if answer is: yes then help else not help
12:27 abyss^ ;)
12:28 purpleidea abyss^: redhat charges for support of course... that's the way they pay the bills, and is the reason you get glusterfs for free as in beer
12:28 abyss^ :)
12:29 purpleidea itonexag: any other questions?
12:30 purpleidea did my answers solve your problem/answer your questions?
12:30 abyss^ purpleidea: I gave up on this problem because nobody could help me (I asked many times here and on gluster-meeting:)) so I'm going to try something else... I want to set up new gluster servers and detach the disks from the vm's and attach them to new vm's...
12:30 hagarth1 joined #gluster
12:30 purpleidea abyss^: gluster-meeting is not for support.
12:31 abyss^ purpleidea: yeah, but there's a lot of gluster programmers :D
12:31 purpleidea this channel is for support, if you try and bug people there, you'll just be kicked or ignored :P
12:32 abyss^ so I hoped they could help me, and I got one answer: he gave me his e-mail to send the problem to, but then no answer:)
12:32 itonexag purpleidea: Could be the problem by samba? Because my windows machine goes over samba not nfs
12:32 purpleidea itonexag: i don't use samba (or windows)
12:33 abyss^ purpleidea: but you think it would work (set up new gluster and detach disks from old glusters to new ones?)
12:34 abyss^ of course I'm going to test this on my test environment, but I just asked:)
12:34 purpleidea abyss^: you aren't explaining your problem or your question clearly enough for me to understand and comment on, sorry
12:34 abyss^ purpleidea: you mean that post on maillist is not clear?
12:35 purpleidea abyss^: correct, and the above question about your test env also.
12:35 abyss^ ok ok:D
12:35 abyss^ I can't solve the issue from the mailing list (and nobody has been able to so far). You understand this, yes?
12:36 purpleidea abyss^: i think my earlier comments have answered what you should do
12:36 abyss^ yes, obviously :)
12:37 abyss^ but I lost my hope (I asked here as well and nobody can help) - ok never mind
12:37 abyss^ Now I'd like to do: set up new glusters
12:37 abyss^ on VM (Xen)
12:38 abyss^ then detach disks from old glusters and attach to new ones, and make new brick etc...
12:39 purpleidea abyss^: have a look at ,,(vagrant)
12:39 glusterbot abyss^: (#1) Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/, or (#3) Part 3 @ https://ttboj.wordpress.com/2014/01/02/vagrant-clustered-ssh-and-screen/, or (#4) Part 4 @
12:39 glusterbot https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/, or (#5) https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/
12:39 purpleidea to build a new cluster that works
12:40 abyss^ yes, build new gluster but with the old data (100TB)
12:40 abyss^ ok thank you:D I just will try this
12:41 abyss^ sorry for so much noise from me, there's a lot of fuss here (a lot of people talking loudly) and it's hard to focus on what I am writing :/
12:42 RameshN joined #gluster
12:42 abyss^ so please forgive me;)
12:45 abyss^ writing in english still requires a lot of focus from me
12:48 purpleidea i'll be gone for about an hour. happy hacking!
12:49 abyss^ :) Thank you for new knowledge:)
12:55 jporterfield joined #gluster
13:00 purpleidea no worries. cheers
13:00 MiteshShah joined #gluster
13:02 samppah abyss^: afaik it's possible to use existing data when creating gluster.. if you have used those disks in gluster before then you probably need to remove the extended attributes.. i'm sorry but i haven't tested this myself and i'm not aware of any documentation on what to do exactly :(
13:02 samppah i should test it but currently i lack time to do it :(
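
For the record, the extended attributes samppah mentions are the volume-id and gfid markers on the brick root; the commonly documented way to reuse such a path is to clear them and remove the internal .glusterfs directory. This is a hedged sketch only, with a made-up brick path; verify the procedure for your version before touching real data.

    # Sketch: clear GlusterFS marker xattrs so an old brick path can be
    # reused. BRICK is hypothetical; test on scratch data first.
    import os, errno, shutil

    BRICK = "/export/brick1"   # assumed old brick root

    for attr in ("trusted.glusterfs.volume-id", "trusted.gfid"):
        try:
            os.removexattr(BRICK, attr)
            print("removed", attr)
        except OSError as e:
            if e.errno != errno.ENODATA:   # already absent is fine
                raise

    # Drop the old volume's internal hard-link namespace (keeps user data).
    shutil.rmtree(os.path.join(BRICK, ".glusterfs"), ignore_errors=True)
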
13:06 mattapperson joined #gluster
13:16 shubhendu joined #gluster
13:22 dusmant joined #gluster
13:37 jag3773 joined #gluster
13:38 ppai joined #gluster
13:45 blook joined #gluster
13:47 sroy_ joined #gluster
13:53 mattapperson joined #gluster
14:00 mattappe_ joined #gluster
14:09 japuzzo joined #gluster
14:11 codex joined #gluster
14:13 mattappe_ joined #gluster
14:18 prasanth joined #gluster
14:23 jag3773 joined #gluster
14:29 rjoseph joined #gluster
14:31 mattappe_ joined #gluster
14:36 dblack joined #gluster
14:42 dbruhn joined #gluster
14:48 jobewan joined #gluster
14:51 abyss^ samppah: ok. Thank you.
14:52 abyss^ btw: Can I check which clients are connected to gluster 3.2? (not by netstat but the glusterfs way)?
14:53 jobewan joined #gluster
14:56 jobewan joined #gluster
14:57 vpshastry joined #gluster
14:58 vpshastry left #gluster
15:01 aixsyd joined #gluster
15:01 aixsyd dbruhn: i found the issue to my server problem
15:01 aixsyd unbelievably it was RAM.
15:01 aixsyd RAM caused slow IB performance, slow CPU benchmarks, slow VM performance, everything
15:06 dbruhn aixsyd, that doesn't surprise me at all
15:08 aixsyd dbruhn: im doing the RAM dance now to find out which ones the problem. so far i'm using 6 of 8 sticks, and the problems gone away
15:09 plarsen joined #gluster
15:09 aixsyd watch the problem only shows up when 8/8 are used.
15:10 purpleidea aixsyd: memtest it!
15:10 aixsyd purpleidea: i did. ran it for 5 days, i forget how many passes, 0 errors
15:10 purpleidea also, check that one chip isn't a lower frequency than all the rest
15:11 purpleidea are they the same manufacturer and size, etc...
15:11 aixsyd 100% the same
15:11 purpleidea and if you're really desperate, check that the brand and chip size are on the motherboards certified compatibility list HCL or whatever it's called
15:11 aixsyd its Dell memory on a Dell server :P
15:11 dbruhn Were they all in the same slots as in the working servers?
15:11 purpleidea aixsyd: aha! that's the problem
15:12 aixsyd LOLL
15:12 dbruhn Dell doesn't make their own memory ;)
15:12 aixsyd oh i know
15:12 * purpleidea is just trolling on dell for no particular reason
15:12 aixsyd :P
15:12 bennyturns joined #gluster
15:13 dbruhn I have a mix of HP/Dell/Super Micro, I will say I like the dell chassis better, but they all work just fine. Price tag is less attractive in dell land though.
15:13 purpleidea actually, i think it's kind of shitty that their design doesn't have a single server that supports, say, 24-36 drives. i don't like their "buy separate jbod's to attach to heads" approach
15:13 aixsyd ^^^^^^^^^^
15:13 aixsyd HP has the right idea for storage
15:13 aixsyd and super micro
15:14 purpleidea i _do_ like the supermicro 24 or 26 3.5" drive chassis, with the 2x2.5" drives in the back (for os)! although I can't stand their garbage bios and ipmi crapware!! someone give me an open source solution!
15:14 purpleidea 36*
15:14 dbruhn purple idea, I'll take super micro's $15 IPMI over dell or hp anyway...
15:14 dbruhn on the cost alone
15:15 dbruhn I end up ordering all my dell and hp stuff without
15:15 purpleidea dbruhn: i want an open source bios and open source ipmi firmware! this way we can start some project to hack on it and fix it and unify it across different machines
15:15 japuzzo left #gluster
15:16 purpleidea i can't understand why redhat or some big company doesn't implement that or spend a bunch of resources. i'd pay $x/year to "upgrade" to the "RedHat UEFI/BIOS"
15:20 wushudoin joined #gluster
15:24 LoudNoises joined #gluster
15:25 d-fence joined #gluster
15:28 dbruhn that would be awesome
15:29 dbruhn maybe this is a starting point? http://sourceforge.net/apps/mediawiki/tianocore/index.php?title=Welcome
15:29 glusterbot Title: SourceForge.net: tianocore (at sourceforge.net)
15:52 jclift joined #gluster
15:55 mattapperson joined #gluster
15:56 rwheeler joined #gluster
16:04 jskinner_ joined #gluster
16:06 failshell joined #gluster
16:07 failshell hello. im in a geo-replicated setup, should both nodes be active in the status? right now i have one active and one passive. replication works fine.
16:14 saurabh joined #gluster
16:17 benjamin__ joined #gluster
16:24 ira joined #gluster
16:24 mattappe_ joined #gluster
16:27 zaitcev joined #gluster
16:28 samppah failshell: geo replication is currently active-passive so it sounds right to me
16:28 failshell ah cool.
16:29 failshell i recall testing it in the past, but didnt remember if that was ok or not
16:31 mattappe_ joined #gluster
16:32 SpeeR joined #gluster
16:34 msvbhat failshell: Which version of glusterfs are you using?
16:34 failshell 3.4.0 (RHS 2.1)
16:35 msvbhat failshell: If you have a replicate master volume then one of them will be active and other will be passive
16:35 failshell so the passive is just used for failover?
16:36 s2r2_ joined #gluster
16:37 msvbhat failshell: Yes, When the brick which is active goes down, passive brick takes over
16:37 failshell thanks msvbhat :)
16:37 failshell we're starting to test our CMS with gluster
16:37 failshell HQ has the backend, that will write to 2 different datacenters
16:37 failshell hopefully, we get the performance we're hoping for
16:39 msvbhat failshell: There's a distributed-geo-rep on the way :)
16:39 failshell how is that different than the current georep?
16:40 msvbhat failshell: With some of the workloads we saw 8x perf improvement. If everything goes well it should be out in 3.5 version.
16:40 failshell nice
16:40 robo joined #gluster
16:40 failshell my only beef with gluster is initial bulk loading of data
16:41 failshell in my case, we have over 500GB of images to import, that took a long long time over several parallel rsync jobs
16:41 msvbhat failshell: Well, each of the distributed nodes shares the load of syncing the data to the slave
16:41 msvbhat failshell: And we have some tunables and other option to sync the initial data to slave faster
16:42 failshell i guess using RHS  i wont have these features for a while. but that's what you get for using supported software i guess :)
16:42 msvbhat failshell: Depending on your type of data and workload you can choose either rsync or tar+ssh to sync the data.
16:42 msvbhat failshell: Oh, You're using RHS?
16:42 failshell yes
16:42 failshell and redhat
16:43 failshell my employer prefers to have support
16:43 failshell im fine with this, it helps the OSS community :)
16:44 msvbhat failshell: Then you should have it already. The new geo-rep. But some perf patches would be missing I guess
16:44 failshell ok
16:44 failshell cool :)
16:45 failshell i recall one of our RH people telling me about this a while back
16:45 failshell i have a call with him on Thursday to talk geo-repl best practices
16:47 msvbhat failshell: Great... There are some tunables which can be configured to suit your workload
16:47 mattapperson joined #gluster
16:52 ndk joined #gluster
16:53 zerick joined #gluster
16:54 mattappe_ joined #gluster
16:57 eseyman left #gluster
16:57 jbrooks joined #gluster
17:01 spstarr_work joined #gluster
17:01 spstarr_work hmmmmm
17:01 spstarr_work I have very high CPU with gluster :(
17:02 spstarr_work this just with apache reading /var/www
17:02 spstarr_work why though :(
17:02 spstarr_work READs vs Write
17:03 spstarr_work flood of [2014-01-20 16:16:00.227374] I [client-handshake.c:1468:client_setvolume_cbk] 0-content-client-1: Server and Client lk-version numbers are not same, reopening the fds
17:03 glusterbot spstarr_work: This is normal behavior and can safely be ignored.
17:03 spstarr_work hmm
17:04 vafina joined #gluster
17:04 tbaror hi, what type of disk controller do you usually recommend? i am planning on using 12x seagate 4tb sshd on an ADAPTEC 71605Q + 2x 128gb ssd for cache - is that a good choice if i'm expecting at least 600MBytes/s per brick?
17:07 jag3773 joined #gluster
17:09 samppah tbaror: sounds good, redhat recommends 12 disks in raid 6 configuration per brick
17:10 chirino joined #gluster
17:11 samppah tbaror: i have a similar configuration, although currently with 6 sata disks and ssd cache with an lsi controller
17:12 samppah tbaror: you probably need multiple threads / parallel io to max out your setup
17:26 tbaror thanks, my choice would be lsi but i saw that the only model supporting 16 drives with the new ssd cache feature is that adaptec i mentioned; lsi only has the 8 port MegaRAID SAS 9271-8iCC with that feature
17:26 tbaror yes, i need a total of around 70 sessions of 120Mbit/s and another 30 of 4~6Mbit/s, which sums to around 1.5 GBytes/s in total
17:26 tbaror also, in terms of what is more suggested, stripe/replicate or distributed/replicate, if i have lots of big video files (1 to 20gb) and also lots of small files (1k to 512k)?
17:29 aixsyd dbruhn:  any idea why my writes to gluster seem to be bursty? good speed, drop to zero, then back up, then back down: https://i.imgur.com/D3Edask.jpg
17:30 purpleidea aixsyd: what is that a screenshot of?
17:30 aixsyd HDTune on a VM whos storage is on GlusterFS
17:32 purpleidea i feel it might be tricky to rule out some sort of windows caching or magic causing the bursts
17:32 * purpleidea windows debugging skills of these kinds of things are weak
17:32 samppah aixsyd: using qcow?
17:35 samppah aixsyd: i'm seeing same effect with qcow2 images while raw images seem to be fine.. not sure if qcow metadata is the reason to that
17:35 samppah disabling write-behind translator helps somewhat
17:40 aixsyd samppah: nope, RAW
17:41 samppah aixsyd: does it help if you disable write-behind translator?
17:41 aixsyd whats the command for that
17:41 aixsyd ill try it
17:42 samppah gluster vol set volName performance.write-behind disable
17:42 semiosis see also ,,(undocumented options)
17:42 glusterbot Undocumented options for 3.4: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
17:42 aixsyd semiosis: semiosis spanks
17:43 aixsyd er
17:43 aixsyd samppah:
17:43 aixsyd :P
17:43 samppah :D
17:44 sputnik1_ joined #gluster
17:45 sputnik1_ joined #gluster
17:51 aixsyd whoa, something happened. my reads went from 33MB/s down to 5
17:51 daMaestro joined #gluster
17:51 aixsyd not changing anything
17:52 aixsyd somethigns up. brb
17:58 JoeJulian spstarr_work: glusterbot's answer wasn't entirely accurate. If you're getting a "flood" of client_setvolume_cbk messages, that means you're getting a flood of clients connecting. If the clients are reconnecting often enough to create a flood of those messages, I would probably look for a network problem.
17:59 spstarr_work i fixed some tuneups
17:59 spstarr_work changed the XFS filesystem block size, as per the glusterfs.org aws section
17:59 spstarr_work testing...
18:00 JoeJulian ~stripe | tbaror
18:00 glusterbot tbaror: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
18:01 failshel_ joined #gluster
18:02 ira joined #gluster
18:03 ira joined #gluster
18:05 aixsyd samppah: wow, my VM sure did not like changing performance.write-behind disable whilst it was running. LOL
18:09 aixsyd weird, i get better read and write performance writing random than all zeros
18:09 samppah aixsyd: what happened to it when you changed that while it was running?
18:10 aixsyd VM took a crap and froze
18:10 samppah ouch
18:10 aixsyd its okay, test environment
18:10 aixsyd semiosis: samppah performance.write-behind disable seemed to make the jitteryness go away though
18:10 aixsyd my IOPS are pretty good, too
18:11 aixsyd read was at 33MB/s, write at a crappy 8MB/s, now im getting 32 read, 39 write. i'm happy :)
18:13 JoeJulian Lol... An actual support email that came in to our info@ address today: "I can't log on to spend money, cause your log on is putting in the wrong email and will NOT let me fix it!!!! So if you want me to buy stuff online, fix the stupid thing!"
18:13 JoeJulian Off topic but funny.
18:13 aixsyd LOL
18:14 samppah JoeJulian :D
18:14 aixsyd JoeJulian: you should probably do as he asks and fix the stupid thing. just sayin :P
18:15 samppah aixsyd: i better check how's my test windows vm doing if it froze during write-behind disabling :P
18:17 s2r2_ joined #gluster
18:21 Technicool joined #gluster
18:23 sforsyt joined #gluster
18:23 spstarr_work oh MUCH better
18:23 spstarr_work wow
18:23 spstarr_work its like night and day
18:24 erik49_ joined #gluster
18:25 spstarr_work glusterfs is not using 40-50% CPU now only  < 20% and not often
18:28 s2r2_ joined #gluster
18:28 spstarr_work hmmm
18:28 spstarr_work 0-content-replicate-0: getxattr failed on content-client-1
18:28 spstarr_work they reconnected all fine
18:31 sroy_ joined #gluster
18:32 lalatenduM joined #gluster
18:34 rotbeard joined #gluster
18:36 mattappe_ joined #gluster
18:37 neofob joined #gluster
18:37 psyl0n joined #gluster
18:37 mattappe_ joined #gluster
18:37 chirino joined #gluster
18:38 dbruhn Glad you got it fixes aixsyd! Feeling better about your testing vs last week?
18:44 aixsyd dbruhn: youve no idea
18:44 aixsyd and i just found out management bought me an infiniband switch! :D :D :D
18:44 mattapperson joined #gluster
18:44 dbruhn oh nice, which one?
18:44 aixsyd 10gbps from VM hypervisor to storage! =OOOO
18:44 aixsyd one sec
18:44 mattapp__ joined #gluster
18:45 aixsyd Voltaire ISR-9024m
18:45 pravka joined #gluster
18:45 s2r2__ joined #gluster
18:47 dbruhn Sweet, I have the DDR version of that switch in my lab
18:47 dbruhn jclift swears by those for a reliable cheap switch
18:47 aixsyd now with an internally managed switch, i dont need opensm running, do i?
18:47 dbruhn Nope, the switch will act as your subnet manager. You will of course want to make sure the SM is actually enabled.
18:48 aixsyd surely
18:54 jclift dbruhn: Yeah, I swear by the DDR version (in a good way)
18:54 jclift dbruhn: The non-DDR version I'm unsure about.  I suspect they use an older version of the Infiniband chipset, which runs a bunch hotter.
18:55 andreask joined #gluster
18:55 jclift dbruhn: And the non-managed version of the switch I'd tend to avoid too, as there doesn't seem to be a way to ask it what temperature it is, so is hard to remotely detect overheating if it's run fanless.
18:55 jclift dbruhn: But, that's just me... ;)
18:55 dbruhn I really like my Mellenox QDR switch, had a PS fan go bad. but otherwise it's been rock solid.
18:56 jclift Cool :)
18:56 diegows joined #gluster
18:56 DV joined #gluster
18:57 jclift Oops, I have to run.  London Hackspace meeting to attend tonight.  :)
18:57 jclift 'nite guys
19:00 neofob left #gluster
19:01 SpeeR joined #gluster
19:04 SpeeR__ joined #gluster
19:05 SpeeR__ joined #gluster
19:08 codex joined #gluster
19:08 sforsyt I am having a problem where my peers are not seeing each other
19:09 sforsyt is there a way I can cleanly remove/readd them?
19:12 aixsyd peer probe detach?
19:12 dbruhn Are you having another issue that's going to be present once you try and re-add them?
19:14 sforsyt This is a 4 node cluster, from Node A, shows the other 3 "State: Peer in Cluster (Disconnected)
19:14 sforsyt If I try to detach, it says "peer detach: failed: One of the peers is probably down. check with peer status"
19:15 sforsyt but If I do peer status from Node C, it says all are connected
19:20 sforsyt Seeing [2014-01-20 19:17:59.020823] E [glusterd-handshake.c:1074:__glusterd_peer_dump_version_cbk] 0-: Error through RPC layer, retry again later
19:21 SpeeR joined #gluster
19:22 jclift joined #gluster
19:23 aixsyd sforsyt: i think you can do peer detach peer force
19:30 JoeJulian sforsyt: Do you have any volumes created that use those peers?
19:31 blook joined #gluster
19:31 sforsyt ok, force detach worked, but when try to readd I get
19:31 sforsyt peer probe: failed: Error through RPC layer, retry again later
19:32 sforsyt yes, I have a volume across the 4 peers that I'd hopefully like to not lose
19:32 sforsyt ~8TB
19:33 sforsyt We had the gluster mount turned off for a few weeks, did an upgrade from Centos 6.4 -> 6.5, and tried to restart the volume and began to see the problem
19:34 sforsyt firewall is off on all nodes
19:34 JoeJulian Are they all running the same version? (glusterd --version)
19:35 sforsyt Yes, all on most recent glusterfs 3.4.2 built on Jan  3 2014 12:38:05
19:35 JoeJulian Check the glusterd log
19:35 JoeJulian ... on both the one you're probing from and the one you're probing to.
19:36 aixsyd sforsyt: be sure you have 3.4.2 glusterfs-server as well as glusterfs-common at the same.
19:36 aixsyd *as the same
19:36 JoeJulian Oh, right!
19:37 JoeJulian also just restart all glusterd
19:37 sforsyt I do not see a glusterfs-common
19:38 sforsyt http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/RHEL/epel-6Server/x86_64/
19:38 glusterbot Title: Index of /pub/gluster/glusterfs/3.4/LATEST/RHEL/epel-6Server/x86_64 (at download.gluster.org)
19:38 sforsyt you mean the base glusterfs-3.4.2-1.el6 ?
19:38 aixsyd yes, i suppose. not sure why its called -common in the debian world
19:39 aixsyd or NOT called -common in the RPM world
19:40 sforsyt yes all are at 3.4.2-1
19:42 aixsyd silly question, but you can ping and iperf each node from another?
19:42 aixsyd by IP and by DNS name?
19:42 JoeJulian aixsyd: standard naming conventions. One more thing that exposes why they're not both just fedora. :D
19:42 sforsyt The odd thing is, if I manually delete /var/lib/glusterd/[peers|vols], then use rsync to repopulate the peers dir ... start glusterd, the vols info is copied over (so it must be communicating with the other peers)
19:42 sforsyt yes, can ping , using ip addresses only
19:42 aixsyd not DNS name?
19:42 sforsyt gluster peer probe 10.10.6.94
19:42 aixsyd AFAIK, you should be using names
19:42 sforsyt no dns name
19:43 JoeJulian It's best practice, but not absolutely necessary.
19:43 aixsyd figured that
19:43 JoeJulian You'll kick yourself later if you don't, though.
19:43 aixsyd could he be kicking himself now?
19:43 JoeJulian Nah
19:43 JoeJulian It's when you make changes.
19:44 JoeJulian sforsyt: If you rsynced from another peer, did you delete the peer for this particular server?
19:44 sforsyt yes
19:44 sforsyt rsync'd from 2 peers (so would have the UUID for the other rsync), and deleted the file for current peer
19:46 JoeJulian restart ALL glusterd (which can be done safely without affecting your volumes) then try again.
19:47 sforsyt So I have A,B,C,D ... C is the only one that shows peer status (connected)  to all the others, I'm assuming start that one first?
19:48 dbruhn sforsyt, this sounds like your peer files are referencing different UUID's across all the servers
19:49 JoeJulian I'm not sure it matters which one is started first.
19:51 ricky-ticky joined #gluster
19:51 SpeeR joined #gluster
19:52 dbruhn JoeJulian, remember that time I had my UUID's all a mess, and some servers were seeing connected, and others weren't. Sounds like the same thing. And if he copied the vol/peer files from another server, they wouldn't lineup.
19:52 dbruhn Because the server it comes from, doesn't have a peer file for itself
19:53 SpeeR_ joined #gluster
19:53 aixsyd dbruhn: makes sense
19:54 JoeJulian dbruhn: yep, but he accounted for that by copying from two different servers.
19:55 dbruhn Not if his UUID is different on C now from what was copied
19:55 JoeJulian True
19:55 SpeeR joined #gluster
19:56 dbruhn C would be seeing itself and the rest properly though, so it would see them all as connected
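
A small sketch of the cross-check being discussed: each server's own UUID lives in /var/lib/glusterd/glusterd.info, and its view of the others lives in /var/lib/glusterd/peers/. Running something like this on every node and comparing the output makes a UUID mismatch obvious. Paths are the usual glusterd locations; adjust if yours differ.

    # Sketch: print this node's UUID and the peers it knows about.
    import glob, os

    def kv(path):
        """Parse a glusterd key=value state file into a dict."""
        out = {}
        with open(path) as f:
            for line in f:
                if "=" in line:
                    key, val = line.strip().split("=", 1)
                    out[key] = val
        return out

    print("local uuid:", kv("/var/lib/glusterd/glusterd.info").get("UUID"))

    for peer_file in sorted(glob.glob("/var/lib/glusterd/peers/*")):
        info = kv(peer_file)
        print("peer file %s -> uuid=%s host=%s state=%s" % (
            os.path.basename(peer_file), info.get("uuid"),
            info.get("hostname1"), info.get("state")))
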
19:57 SpeeR joined #gluster
19:59 SpeeR_ joined #gluster
19:59 sforsyt let me show you peer status from two sides
19:59 sforsyt Node C (10.10.6.96)
19:59 sforsyt Hostname: 10.10.6.94
19:59 sforsyt Uuid: 9b49195c-f61e-4269-8c68-067fa797d5a5
19:59 sforsyt State: Peer in Cluster (Connected)
19:59 sforsyt Node A (10.10.6.94)
19:59 sforsyt Hostname: 10.10.6.97
19:59 sforsyt Uuid: 21ee8bd9-8102-473d-87d0-c84adf7f91b0
19:59 sforsyt State: Peer in Cluster (Disconnected)
20:00 JoeJulian @paste
20:00 glusterbot JoeJulian: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
20:00 sforsyt and the UUIDs match what is in /var/lib/glusterd/glusterd.info on the corresponding node
20:00 sforsyt k, sorry for that then
20:00 emoginov joined #gluster
20:00 SpeeR__ joined #gluster
20:01 gdubreui joined #gluster
20:01 JoeJulian Hmm, that factoid isn't as detailed as I thought I remembered...
20:01 dbruhn sforsyt, so the /var/lib/glusterd/glusterd.info file shows Uuid: 9b49195c-f61e-4269-8c68-067fa797d5a5 on server c?
20:01 JoeJulian And are the uuids different on each server?
20:02 sforsyt sorry, I pasted the wrong one (the .97) instead of .96
20:02 SpeeR joined #gluster
20:02 sforsyt From node A http://paste.fedoraproject.org/70106/48148139
20:02 glusterbot Title: #70106 Fedora Project Pastebin (at paste.fedoraproject.org)
20:03 sforsyt from node C  http://paste.fedoraproject.org/70107/13902481
20:03 aixsyd now lets see node c
20:03 glusterbot Title: #70107 Fedora Project Pastebin (at paste.fedoraproject.org)
20:03 aixsyd dafuq?
20:03 gdubreui joined #gluster
20:03 sforsyt from glusterd.info , nodec, UUID=6d5625bd-5acb-42bd-93ac-403663868f0a
20:04 sforsyt (which is 96)
20:04 SpeeR_ joined #gluster
20:04 aixsyd can we see the same output as A from C? your C paste looked a little truncated
20:04 dbruhn his c paste is just the info in that file
20:05 aixsyd ah yeah, lets see the peer status from C
20:05 sforsyt http://paste.fedoraproject.org/70108/48291139/
20:05 glusterbot Title: #70108 Fedora Project Pastebin (at paste.fedoraproject.org)
20:05 dbruhn and the peers don't show themselves in that output
20:05 aixsyd thats fine - the uuids all match
20:06 dbruhn Yep, so that's not the issue
20:06 SpeeR__ joined #gluster
20:06 sforsyt but how does one show as connected to the other, but not vice-versa ... firewall is down on all nodes, routing is the same, netmask, etc
20:06 aixsyd sforsyt: can you rerun the disable iptables command on all nodes again, just for luck?
20:06 aixsyd ya never know
20:06 aixsyd xD
20:07 sroy joined #gluster
20:07 JoeJulian That answer /should/ be in one of both peers log files.
20:07 sforsyt same output
20:07 sforsyt which log file is the relevant one?
20:08 SpeeR__ joined #gluster
20:08 dbruhn if I remember right the gluster peer status command runs from the server you are on, and doesn't actually communicate for status from all nodes like the other gluster cli commands
20:08 JoeJulian /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
20:09 JoeJulian What would be best would be to stop all glusterd. Truncate that log file. Bring them all back up again. fpaste the logs.
20:09 sforsyt Last 200 lines , from NodeA (shows as disconnected )
20:09 sforsyt http://paste.fedoraproject.org/70111/02485261
20:09 glusterbot Title: #70111 Fedora Project Pastebin (at paste.fedoraproject.org)
20:09 SpeeR joined #gluster
20:09 dbruhn +1 JoeJulian
20:09 sforsyt ok, I tried the whole thing and was rejected as to large, will stop. clear log, start, then give from two nodes
20:11 SFLimey joined #gluster
20:11 SpeeR_ joined #gluster
20:11 aixsyd dbruhn: which of these outputs would you prefer? https://i.imgur.com/irOseHg.jpg or https://i.imgur.com/qVDxKaK.jpg
20:12 sforsyt http://paste.fedoraproject.org/70112/24871413
20:12 sforsyt http://paste.fedoraproject.org/70113/90248720
20:12 glusterbot Title: #70112 Fedora Project Pastebin (at paste.fedoraproject.org)
20:12 glusterbot Title: #70113 Fedora Project Pastebin (at paste.fedoraproject.org)
20:13 JoeJulian Request received from non-privileged port. Failing request
20:13 SpeeR joined #gluster
20:13 JoeJulian Are you trying to run glusterd as a non-root?
20:13 blook joined #gluster
20:13 sforsyt no, I am root
20:14 JoeJulian Looks like all ports <1024 must be in use.
20:14 nocturn joined #gluster
20:14 aixsyd firewall could disable ports <1024, no?
20:14 JoeJulian No
20:14 aixsyd seems like a lot of blanket use of ports
20:15 SpeeR__ joined #gluster
20:15 JoeJulian You'd get a different error if it were firewall.
20:16 SpeeR joined #gluster
20:16 JoeJulian I suspect netstat -t will show a lot of CLOSE_WAIT.
20:17 sforsyt These are 10Gig cards; in the past I had some issues where it was just automatically trying to use rdma (which we're not using) and failed. I fixed that by changing transport-type to JUST socket. Changed it back to socket, rdma today while debugging
20:19 sforsyt no, no CLOSE_WAIT
20:19 aixsyd sforsyt is the new me.
20:21 sforsyt ok, so just found something, 2 new pastebins, first from Node that shows connected, second from node that fails
20:22 sforsyt output from netstat -tn |grep 24007
20:22 sforsyt http://paste.fedoraproject.org/70122/24931013
20:22 glusterbot Title: #70122 Fedora Project Pastebin (at paste.fedoraproject.org)
20:22 sforsyt http://paste.fedoraproject.org/70121/39024930/
20:22 glusterbot Title: #70121 Fedora Project Pastebin (at paste.fedoraproject.org)
20:22 sforsyt so the BAD node, has a bunch of connections to itself?
20:23 andreask joined #gluster
20:23 sforsyt and yes that is taking it right up to port 1023
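
To illustrate the diagnosis above, a small sketch that reads /proc/net/tcp and counts sockets holding privileged local ports (below 1024) plus connections whose remote end is glusterd's port 24007, grouped by state. It's just a programmatic version of the netstat check; IPv4 only for brevity.

    # Sketch: spot privileged-port exhaustion and pileups on port 24007.
    from collections import Counter

    STATES = {"01": "ESTABLISHED", "06": "TIME_WAIT",
              "08": "CLOSE_WAIT", "0A": "LISTEN"}

    def port_of(hex_addr):            # "0100007F:5DC7" -> 24007
        return int(hex_addr.split(":")[1], 16)

    privileged, to_glusterd = 0, Counter()
    with open("/proc/net/tcp") as f:
        next(f)                       # skip the header line
        for line in f:
            fields = line.split()
            local, remote, state = fields[1], fields[2], fields[3]
            if port_of(local) < 1024:
                privileged += 1
            if port_of(remote) == 24007:
                to_glusterd[STATES.get(state, state)] += 1

    print("sockets using privileged local ports:", privileged)
    print("connections to port 24007 by state:", dict(to_glusterd))
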
20:25 dbruhn aixsyd... weird, same system? Obviously my answer would be are you more concerned about read speed or write speed.
20:27 dbruhn the second one seems more controlled though
20:27 dbruhn do you know why that first one is so peaky?
20:27 aixsyd not really sure.
20:28 aixsyd I disabled performance.write-behind and it seems to be steady now
20:29 aixsyd and dbruhn im equally concerned
20:39 sforsyt well I fixed it
20:39 JoeJulian excellent, what was it?
20:39 sforsyt was an error in iproute2, we have multiple networks
20:39 JoeJulian Interesting.
20:39 sforsyt all the .6 network is on our 3rd network that has 10gig switches
20:40 sforsyt the primary 'default' gateways are through .1
20:40 sforsyt need to explicitly set it in /etc/rc.local (with corresponding tables set up in /etc/iproute2/rt_tables)
20:40 sforsyt ip rule add from 10.10.4.0/22 table z51
20:40 sforsyt ip route add to 10.10.4.0/22 dev p5p1 table z51
20:40 sforsyt ip route add default via 10.10.4.4 table z51
20:42 sforsyt someone may have made a loop in switches
20:44 aixsyd =O
20:45 sforsyt ya, fun day
20:48 mattappe_ joined #gluster
20:53 mattappe_ joined #gluster
20:55 lpabon joined #gluster
20:58 zapotah joined #gluster
20:58 zapotah joined #gluster
20:59 chirino joined #gluster
20:59 glusterbot New news from newglusterbugs: [Bug 1055747] CLI shows another transaction in progress when one node in cluster abruptly shutdown <https://bugzilla.redhat.com/show_bug.cgi?id=1055747>
21:03 primechuck joined #gluster
21:07 sforsyt left #gluster
21:12 qdk joined #gluster
21:19 mattappe_ joined #gluster
21:22 johnmark @chanstats
21:22 JMWbot johnmark: @3 purpleidea reminded you to: thank purpleidea for an awesome JMWbot (please report any bugs) [3120100 sec(s) ago]
21:22 JMWbot johnmark: @5 purpleidea reminded you to: remind purpleidea to implement a @harass action for JMWbot  [3048864 sec(s) ago]
21:22 JMWbot johnmark: @6 purpleidea reminded you to: get semiosis article updated from irc.gnu.org to freenode [2953394 sec(s) ago]
21:22 JMWbot johnmark: @8 purpleidea reminded you to: git.gluster.org does not have a valid https certificate [1024701 sec(s) ago]
21:22 JMWbot johnmark: Use: JMWbot: @done <id> to set task as done.
21:23 johnmark blarg
21:23 johnmark glusterbot: @chanstats
21:23 semiosis purpleidea: please make it stop!
21:23 johnmark LOL
21:23 * semiosis considers activating glusterbot powers
21:24 semiosis johnmark: did you see my response about "scripting" langs on the jvm friday?
21:24 johnmark semiosis: no
21:24 johnmark semiosis: oh wait
21:24 semiosis i'll get a link to the logs
21:24 johnmark didn't you say that any scripting language can be used?
21:24 johnmark although I wasn't sure how
21:24 semiosis https://botbot.me/freenode/gluster/msg/9895891/
21:24 glusterbot Title: Logs for #gluster | BotBot.me [o__o] (at botbot.me)
21:25 semiosis not any scripting language, any language that runs on the jvm
21:25 johnmark semiosis: ah. I didn't realize that all of those languages run on the jvm
21:26 semiosis for python there's jython, for ruby there's jruby, then there are many other langs that only run on the jvm, groovy, scala, & several others
21:26 johnmark semiosis: does java 7 include some translation layer "out of the box"?
21:26 JoeJulian glusterbot:chanstats
21:26 johnmark ok
21:26 JoeJulian ...
21:26 johnmark glusterbot: meh
21:26 glusterbot johnmark: I'm not happy about it either
21:26 semiosis johnmark: someone had to implement an interpreter/compiler for the language
21:26 johnmark ok
21:27 johnmark and how compatible are jython, jruby, et al with their native cousins?
21:27 JoeJulian glusterbot: channelstats
21:27 glusterbot JoeJulian: On #gluster there have been 237807 messages, containing 9802569 characters, 1622234 words, 6238 smileys, and 865 frowns; 1326 of those messages were ACTIONs. There have been 97478 joins, 2961 parts, 94519 quits, 24 kicks, 187 mode changes, and 7 topic changes. There are currently 188 users and the channel has peaked at 239 users.
21:27 johnmark JoeJulian: thanks :)
21:28 semiosis johnmark: depends on the app.  just like with C code -- glusterfs for example -- the C code is portable, but it might make calls to things that are only on one platform (fuse, for ex)
21:28 JoeJulian Wow... 24 kicks... That number has skyrocketed...
21:28 johnmark semiosis: ok
21:28 semiosis same thing with these langs, some things are available on one interpreter & not another, most things are portable
21:28 primechuck Don't forget perljvm :)
21:28 semiosis primechuck: bwahhahahahah
21:28 johnmark primechuck: lol... oh... right
21:28 semiosis only Perl can parse perl
21:29 johnmark semiosis: ok, so how can we start to get some serious actoin around scripting languages?
21:30 semiosis srsly!
21:30 johnmark semiosis: does this depend on your project?
21:30 johnmark lol
21:30 semiosis now we have two problems ;)
21:31 semiosis 1. building a bridge from Java to Glusterfs (my code)
21:31 semiosis 2. getting people to walk across that bridge (without fear of falling into the rapids below)
21:32 primechuck semiosis: Love your java bindings, must play with it in my next project.
21:32 JoeJulian WRT perljvm, the Vitalogy release had Bugs.
21:33 semiosis primechuck: cool!
21:33 JoeJulian ... no Vedder fans?
21:33 semiosis JoeJulian: ?
21:33 primechuck Perl Jam pun
21:34 JoeJulian Tried anyway...
21:34 dbruhn And no Vedder inspired a generation of boring music
21:34 dbruhn lol
21:34 dbruhn Granted Perl Jam had a good record
21:36 semiosis oh ha
21:43 johnmark JoeJulian: lol... took me a while :)
21:44 johnmark radez_g0n3: hey, you should get together with kkeithley and plot your nagios world domination together
21:44 johnmark kkeithley: ^^^
21:46 dbruhn Didn't the nagios folks just do something weird by taking over the plugins domain that was community property?
21:52 tryggvil joined #gluster
21:57 johnmilton joined #gluster
22:17 fidevo joined #gluster
22:36 robo joined #gluster
22:54 jobewan joined #gluster
23:01 mattappe_ joined #gluster
23:15 sprachgenerator joined #gluster
23:24 mattappe_ joined #gluster
23:28 gdubreui joined #gluster
23:51 dbruhn joined #gluster
23:53 mattappe_ joined #gluster
