IRC log for #gluster, 2012-11-06


All times shown according to UTC.

Time Nick Message
00:01 kevein joined #gluster
00:12 njTAP joined #gluster
00:28 TSM2 joined #gluster
00:30 njTAP In a 2x replicated scenario, if a node is down, can I find out that the Gluster client was only able to write to 1 node and the 2nd one failed?
00:31 semiosis njTAP: i think there's a 'gluster volume status' command since 3.3.0 that does something like that, but havent used it myself.  check it out, maybe it's what you're looking for
00:32 semiosis anyway, gotta run
00:32 semiosis good luck!
00:33 njTAP thanks semiosis
00:33 semiosis yw
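A sketch of the checks semiosis points at, assuming a 3.3.x CLI and a hypothetical volume named myvol:

    # per-brick status; a down brick shows "N" in the Online column
    gluster volume status myvol
    # 3.3 can also list files still pending self-heal to a returned brick
    gluster volume heal myvol info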
00:44 stefanha joined #gluster
01:33 nodots joined #gluster
02:14 purpleidea semiosis: hehe, darnit! competing puppet modules! fwiw, my code is all agpl so if there are parts that you'd want to blend into yours to make a single all awesome puppet module, i won't be offended!
02:25 ika2810 joined #gluster
03:06 tg2 joined #gluster
03:06 tg2 hey guys, is there an easy way to "decommission" a node in a distributed setup?
03:07 tg2 ie: I have 10 nodes, want to take 1 out, is there a way to issue a "redistribute my files" command so that it can stop accepting new files and start displacing its current files onto its remaining peers?
03:07 tg2 from what I understand the 'levelling' that happens when adding a new node to a cluster should have the same logic
03:11 sunus joined #gluster
03:15 bharata joined #gluster
03:16 JZ_ joined #gluster
03:16 seanh-ansca joined #gluster
03:43 shylesh joined #gluster
03:46 sunus joined #gluster
03:56 Guest12444 joined #gluster
04:03 usrlocalsbin1 joined #gluster
04:06 vpshastry joined #gluster
04:07 m0zes @stripe
04:07 glusterbot m0zes: Please see http://goo.gl/5ohqd about stripe volumes.
04:38 ngoswami joined #gluster
04:53 vimal joined #gluster
04:57 Humble joined #gluster
05:03 rubbs joined #gluster
05:12 dec joined #gluster
05:14 faizan joined #gluster
05:18 vimal joined #gluster
05:42 mdarade1 joined #gluster
05:44 ramkrsna joined #gluster
05:51 jays joined #gluster
05:56 zwu joined #gluster
06:05 raghu joined #gluster
06:16 ngoswami joined #gluster
06:39 mtanner_ joined #gluster
06:40 usrlocalsbin joined #gluster
06:45 vpshastry joined #gluster
06:51 vimal joined #gluster
06:51 seanh-ansca joined #gluster
06:51 rgustafs joined #gluster
06:59 vikumar joined #gluster
07:03 bala1 joined #gluster
07:04 pkoro joined #gluster
07:04 guigui3 joined #gluster
07:09 sunus joined #gluster
07:09 ankit9 joined #gluster
07:10 lkoranda joined #gluster
07:39 vimal joined #gluster
07:39 puebele joined #gluster
07:41 crashmag joined #gluster
07:44 sshaaf joined #gluster
07:45 ekuric joined #gluster
07:45 ctria joined #gluster
07:48 vikumar joined #gluster
07:58 puebele joined #gluster
08:09 tjikkun_work joined #gluster
08:11 vpshastry joined #gluster
08:13 mgebbe_ joined #gluster
08:16 badone joined #gluster
08:17 andreask joined #gluster
08:17 ndevos stefanha: thanks for the wireshark bug report, but I cant reproduce it yet... do those steps always cause a crash for you?
08:23 manik joined #gluster
08:24 * ndevos downloads Fedora 18 and will test with that later
08:28 Nr18 joined #gluster
08:31 TheHaven joined #gluster
08:35 hagarth joined #gluster
08:47 Jippi joined #gluster
08:50 DaveS joined #gluster
08:51 Triade joined #gluster
08:57 OTNexus joined #gluster
08:57 OTNexus ne1 online?
08:58 ankit9 joined #gluster
09:01 manik joined #gluster
09:05 Humble joined #gluster
09:05 redsolar_office joined #gluster
09:10 gbrand_ joined #gluster
09:10 rz__ joined #gluster
09:15 NuxRo OTNexus: hi, just ask and wait for a reply
09:25 dobber joined #gluster
09:31 manik joined #gluster
09:45 dastar joined #gluster
09:48 SteveCooling Hi. Is there a recommended way to change IPs for a set of gluster nodes? Specifically I have a 4 node gluster that i need to move to another subnet.
09:48 ngoswami joined #gluster
10:00 tryggvil joined #gluster
10:18 hagarth joined #gluster
10:25 duerF joined #gluster
10:36 balunasj joined #gluster
10:37 * jdarcy o_O
10:39 fitzdsl joined #gluster
10:45 fitzdsl Is there any work to make glusterfs able to expose blockstorage in openstack ?
10:50 samppah fitzdsl: http://gluster.org/community/documentation/index.php/Planning34 something "block device translator" ?
10:50 glusterbot <http://goo.gl/VoH8X> (at gluster.org)
10:54 rcheleguini joined #gluster
10:54 bala joined #gluster
10:54 fitzdsl samppah: thx
10:54 shireesh joined #gluster
11:24 vikumar joined #gluster
11:25 ngoswami joined #gluster
11:32 overclk joined #gluster
11:38 morse joined #gluster
11:44 ika2810 left #gluster
11:54 nueces joined #gluster
11:56 shireesh joined #gluster
11:57 bala joined #gluster
12:05 hagarth joined #gluster
12:06 edward1 joined #gluster
12:06 z00dax has anyone tested glusterfs on ipv6 ?
12:11 balunasj joined #gluster
12:11 balunasj joined #gluster
12:13 shireesh joined #gluster
12:19 kkeithley1 joined #gluster
12:42 SteveCooling jdarcy: did you o_O my ip change question?
12:53 lkoranda joined #gluster
12:57 matsekl joined #gluster
12:58 jdarcy SteveCooling: No, that's just my way of saying hi at 5:30am.  ;)
12:59 SteveCooling :) was afraid i had missed something obvious. couldn't find any hints in any documentation, you see..
12:59 SteveCooling and obviously the nodes like to know where the others are :)
13:00 shireesh joined #gluster
13:07 deckid joined #gluster
13:17 davdunc joined #gluster
13:57 mohankumar joined #gluster
14:04 nueces joined #gluster
14:06 ika2810 joined #gluster
14:18 puebele joined #gluster
14:20 deckid joined #gluster
14:26 dstywho joined #gluster
14:29 dstywho joined #gluster
14:34 nueces joined #gluster
14:36 puebele joined #gluster
14:43 plarsen joined #gluster
14:49 UnixDev are there any advantages/drawbacks to gluster over infiniband ?
14:50 nueces joined #gluster
14:58 raghu joined #gluster
15:05 bulde joined #gluster
15:06 manik joined #gluster
15:10 oscailt joined #gluster
15:14 UnixDev anyone familiar with gluster and rdma?
15:18 wushudoin joined #gluster
15:23 stopbit joined #gluster
15:25 shireesh joined #gluster
15:26 semiosis purpleidea: thanks!  though i dont see us as competing :)
15:27 semiosis SteveCooling: i recommend using ,,(hostnames) -- specifically using dedicated FQDNs for gluster servers, these can be easily remapped to different IP/hostnames as needed without having to mess with gluster's config
15:27 glusterbot SteveCooling: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
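A minimal sketch of the remapping glusterbot describes, with hypothetical dedicated FQDNs; once peers are known by name, a later subnet move only requires updating DNS, not gluster's config:

    # from any other peer, re-probe an IP-addressed peer by its name
    gluster peer probe gluster1.storage.example.com
    # verify the pool now lists the hostname instead of the IP
    gluster peer status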
15:30 pdurbin is it dumb for my bricks to be on LVM volumes?
15:30 semiosis pdurbin: no, my impression is lots of people do that (joejulian for one)
15:31 pdurbin semiosis: ok, thanks. that's what i'm planning on, at least for my first go at it
15:31 semiosis i thought you already had PBs on glusterfs
15:32 pdurbin PBs?
15:32 semiosis petabytes
15:32 pdurbin oh. sure. terabytes at least. i didn't set it up personally, though
15:37 bulde joined #gluster
15:46 ascii0x3F joined #gluster
15:47 daMaestro joined #gluster
15:56 erik49 joined #gluster
15:57 NcA^ joined #gluster
15:57 atrius joined #gluster
16:00 NcA^__ joined #gluster
16:04 seanh-ansca joined #gluster
16:21 OTNexus joined #gluster
16:22 manik joined #gluster
16:23 ankit9 joined #gluster
16:25 JoeJulian Aight... I'm awake...
16:28 samppah good morning joe
16:31 shireesh joined #gluster
16:32 UnixDev JoeJulian: morning :)
16:36 semiosis heyo
16:39 UnixDev semiosis: any experience with rdma?
16:40 semiosis none
16:40 UnixDev :( does gluster have any interfaces that work over rdma? who would know about that?
16:41 nueces joined #gluster
16:41 semiosis there's people here who use infiniband/rdma
16:46 mohankumar joined #gluster
16:52 atrius joined #gluster
17:03 manik joined #gluster
17:06 mistermario joined #gluster
17:10 JoeJulian Gah! Why am I answering the mailing list... I must be losing it.
17:10 OTNexus julian u there?
17:11 JoeJulian Only partly.
17:11 OTNexus u got some time
17:11 JoeJulian Hey there neighbor.
17:11 OTNexus to kill
17:11 OTNexus :P
17:11 JoeJulian Are you coming on Thursday?
17:11 OTNexus waT? lol
17:11 mistermario Is it a bad idea to use a few VMs, on a few separate hosts, to create a volume for gluster?
17:12 JoeJulian OTNexus: http://www.joejulian.name/blog/introduction-to-glusterfs-at-sasag-nov-8/
17:12 glusterbot <http://goo.gl/Bb7cA> (at www.joejulian.name)
17:12 OTNexus oh nope
17:12 OTNexus i just came here to ask if you wanted to check out my new forum software/site if u got some extra time
17:13 JoeJulian Sure, why not. :D
17:13 semiosis mistermario: i only use glusterfs on vms... testing on a single host & prod in ec2
17:13 OTNexus okay @ JoeJulian
17:13 OTNexus http://otnexus.com/
17:13 glusterbot Title: OTNexus (at otnexus.com)
17:13 semiosis mistermario: doesn't seem like a bad idea to me
17:13 OTNexus It's about 48 hours old, so not that many members
17:13 mistermario thank you semiosis
17:14 OTNexus If u could make an account, post a bit, it'd be great if u got time, no rush ;)
17:14 semiosis JoeJulian: click some ads
17:14 semiosis hehe
17:15 JoeJulian Yeah, I guess. So does the OT stand for Off Topic?
17:15 OTNexus Online Trading Nexus actually, mybad I know I need to expose that somewhere
17:15 OTNexus But right now we trying to figure out what games to add under the trading grounds too
17:15 semiosis OT clearly means Off Topic
17:15 JoeJulian And is this in any way on topic?
17:16 semiosis ...clearly
17:19 OTNexus was kicked by glusterbot: JoeJulian
17:20 GrayD joined #gluster
17:20 GrayD hi!
17:20 glusterbot GrayD: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:21 GrayD cloud somebody please help me with setting up striped replicated volume
17:21 GrayD i have two physical servers with two bricks in each
17:21 JoeJulian @stripe
17:21 glusterbot JoeJulian: Please see http://goo.gl/5ohqd about stripe volumes.
17:27 ctria joined #gluster
17:27 ankit9 joined #gluster
17:28 GrayD I'm just a little bit confused by the official guide. In the corresponding section there is a diagram with a configuration matching the one I have. But while two servers are shown in the diagram, the sample commands use four (or more) servers with one brick each.
17:29 rodlabs_ joined #gluster
17:30 rodlabs joined #gluster
17:30 JoeJulian I've been fighting those inconsistencies in the docs since 2.0. :/
17:30 JoeJulian GrayD: Do you want fault tolerance?
17:30 GrayD Yes
17:31 JoeJulian Then with 2 servers you're just going to want a replicated volume.
17:31 rodlabs left #gluster
17:32 JoeJulian ie. gluster volume create $volname replica 2 ${server1}:${brick_path} ${server2}:${brick_path}
17:32 saz joined #gluster
17:32 GrayD I just want two striped volumes (each on its own server with many bricks for more storage space) combined into one replicated volume (across network).
17:33 JoeJulian Why do you want striping?
17:33 JoeJulian Do you have files that will exceed the size of a single brick?
17:34 GrayD I plan to use this as network storage for private cloud to host VM images.
17:34 JoeJulian IMHO if you haven't tested the stripe translator on your specific workload and determined empirically that it's the right fit for your specific need, then you don't want it.
17:35 JoeJulian It's not going to help with VM images, unless you somehow have multiple VM's all reading and writing to/from the same image.
17:35 DaveS_ joined #gluster
17:35 GrayD But how could I make continuous replicated storage.
17:36 GrayD ?
17:36 JoeJulian The command above. If you want it to be bigger, add more bricks. That makes a distributed replicated volume.
17:36 GrayD But what about fault tolerance?
17:37 JoeJulian That's the replication.
17:37 GrayD As I understand it, in a replicated-only volume a file is copied to each brick in the volume, isn't it?
17:38 morse joined #gluster
17:38 JoeJulian Each brick in the replica subvolume. So with that "replica 2" in the command, each 2 bricks forms a replica set. Those are passed to the distribute translator to produce a single coherent namespace.
17:38 GrayD As I understand there is no way to make something similar to network distributed RAID3 or RAID5 storage.
17:39 JoeJulian Correct.
17:39 faizan joined #gluster
17:39 GrayD Ah. I see now.
17:40 GrayD So, in my case brick order should be like this: server1:/brick1 server2:/brick1 server1:/brick2 server2:/brick2 am I correct?
17:40 JoeJulian Absolutely. :D
17:40 semiosis @brick names
17:40 glusterbot semiosis: I do not know about 'brick names', but I do know about these similar topics: 'brick naming'
17:40 semiosis @brick naming
17:40 glusterbot semiosis: http://goo.gl/l3iIj
17:40 semiosis ^^^
17:41 GrayD Ah. Thanks!
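Putting the thread together, the create command for GrayD's layout would be along these lines (volume name hypothetical); with replica 2, each consecutive pair of bricks forms a replica set, so alternating servers keeps both copies on different machines:

    gluster volume create myvol replica 2 \
        server1:/brick1 server2:/brick1 \
        server1:/brick2 server2:/brick2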
17:45 nueces joined #gluster
17:50 mohankumar joined #gluster
18:02 Mo__ joined #gluster
18:19 mohankumar joined #gluster
18:19 tg2 hey guys if I have a distributed setup (no replication) and want to take a storage node offline
18:19 tg2 what is the best way to go about that
18:21 semiosis best would probably be to stop all clients first so they don't try to access files on the down server
18:22 semiosis but what do you mean take a storage node offline?  like reboot for a kernel upgrade?  permanently decommission?
18:23 tg2 either
18:23 tg2 if I have 10 storage servers
18:23 tg2 each is a block in a distributed setup
18:23 tg2 each server is R6 under its glusterfs brick
18:23 tg2 say I want to take one offline
18:23 tg2 to do a hardware update or whatever the case
18:24 tg2 if I know in advance I want to take it offline
18:24 tg2 is there a way to have it put its files on the remaining nodes and stop accepting new ones
18:24 semiosis "either" is not a good answer
18:24 semiosis it matters a whole lot whether or not that server will come back
18:24 tg2 permanently decomission then
18:24 tg2 its files need to be made available while it is offline
18:24 pranithk joined #gluster
18:24 tg2 sort of "flagged for removal" during which point it offloads its local files onto the remaining nodes
18:25 semiosis remove-brick
18:25 tg2 will its files be unavailable to the volume?
18:25 semiosis i think remove-brick can migrate data since 3.3.0 tho i've not tried
18:25 semiosis not sure the exact procedure for that
18:25 tg2 i think it's basically the same logic as the levelling when you add a new brick
18:25 tg2 ie: migrate some files to the new node
18:25 tg2 but in reverse
18:26 tg2 stop accepting new ones to your brick and offload the ones you have to the existing nodes
18:26 tg2 ok i'll look up remove-brick and see if this was added
18:26 tg2 if not, is it possible to request this feature from the devs and put a bounty on it?
18:26 semiosis one step at a time
18:26 semiosis (and idk)
18:26 tg2 http://gluster.org/pipermail/gluster-users/2011-February/006664.html
18:26 glusterbot <http://goo.gl/CsH3x> (at gluster.org)
18:26 tg2 there is obviously some demand for it
18:27 semiosis so that was like 18 months ago
18:27 joeto joined #gluster
18:27 semiosis should work now iirc
18:29 tg2 as of 3.1 it is still destructive, I'll check 3.3 features specifically
18:31 tg2 3.3: Remove-brick can migrate data to remaining bricks
18:31 tg2 good
18:31 tg2 thx
18:32 semiosis when you find out exactly how, please let me know :)
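For the record, the 3.3 remove-brick workflow is roughly this three-step sequence (volume and brick names hypothetical):

    # start draining: DHT migrates the brick's files to the remaining bricks
    gluster volume remove-brick myvol server10:/export/brick start
    # poll until the migration reports completed
    gluster volume remove-brick myvol server10:/export/brick status
    # then detach the brick from the volume
    gluster volume remove-brick myvol server10:/export/brick commit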
18:32 Gilbs1 joined #gluster
18:39 Gilbs1 Hey guys, looking to upgrade to 3.3.1 tonight from 3.3, is this nothing more than a rolling update or do I need to reboot/remount anything?
18:41 elyograg Gilbs1: I did a rolling update of the servers in my testbed with no problems.  I don't think you can upgrade the clients without losing access to the volume on each client briefly.  Probably best to unmount, kill any gluster client processes, and remount.
18:42 semiosis +1 clients need to be unmounted/remounted
18:42 semiosis also it's usually a good idea to upgrade all servers before any clients
18:42 Gilbs1 gotcha, unmount clients, patch servers, clients then remount?
18:43 elyograg i didn't have to unmount my clients to upgrade the servers.  Replication saved the day there.
18:45 semiosis i would (though haven't actually tried) update package & reboot each server, one at a time.  then once they're all done, on each client, unmount, update package, remount
18:46 elyograg It's a good idea to wait between each server upgrade for self-healing to synchronize the replicas, especially if it's a very busy volume.
18:46 semiosis +1
18:46 semiosis of course if you unmounted all clients before doing the server upgrades you'd know there were no changes to heal
18:47 gbrand_ joined #gluster
18:48 Gilbs1 We're not too busy just yet on production so i'll just ask for some downtime for patching.  Thanks guys.
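The procedure discussed above, condensed into a sketch (package and volume names are assumptions; use your distro's package manager):

    # on each server, one at a time:
    yum update glusterfs glusterfs-server
    reboot                               # or restart glusterd and the brick processes
    gluster volume heal myvol info      # wait until nothing is left to heal
    # once all servers are upgraded, on each client:
    umount /mnt/gluster
    yum update glusterfs glusterfs-fuse
    mount /mnt/gluster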
18:52 wN joined #gluster
19:01 DaveS joined #gluster
19:05 DaveS joined #gluster
19:05 NcA^__ joined #gluster
19:10 hattenator joined #gluster
19:14 raghu joined #gluster
19:30 NcA^ joined #gluster
19:35 y4m4 joined #gluster
19:36 Nr18 joined #gluster
19:40 atrius joined #gluster
19:43 Gilbs1 left #gluster
19:51 Bullardo joined #gluster
20:04 mohankumar joined #gluster
20:07 badone joined #gluster
20:12 mohankumar joined #gluster
20:20 Psi-Jack Wow...
20:20 Psi-Jack My first performance testing with GlusterFS is showing huge latency issues. Website response time going from 600ms to 5s right from the start.
20:20 semiosis ,,(php)
20:21 glusterbot php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
20:22 Psi-Jack Varnish is a definite no-go, and we're already using APC.
20:22 Psi-Jack And Memcached. :)
20:22 Psi-Jack Hmmm, apc.stat=0 though, one sec. ;)
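For reference, the APC setting in question disables the per-include stat() check; a sketch, with a distro-dependent config path (code changes then require an APC cache clear or server restart):

    echo 'apc.stat=0' >> /etc/php.d/apc.ini
    service httpd restart    # APC only picks the change up on restart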
20:24 semiosis another thing which isnt mentioned (afaict) on joe's blog is that you should also optimize your include_path
20:24 Psi-Jack I'm thinking a major portion of it's our network infrastructure. It's only a 2x1Gb Ethernet, physically, shared across each physical hypervisor server.
20:25 semiosis ZF docs talk about optimizing include_path: http://framework.zend.com/manual/1.12/en/performance.classloading.html
20:25 glusterbot <http://goo.gl/sIaxc> (at framework.zend.com)
20:25 samppah Psi-Jack: you might want to test glusterfs over nfs if that's possible in your use case
20:26 Psi-Jack Hmm, that completely destroys the idea of HA, samppah. heh.
20:26 semiosis Psi-Jack: what php framework are you using, if any?
20:27 Psi-Jack It's all Zend Framework,.
20:28 Psi-Jack And yeah, include_path is already specifically tailored to this application's needs.
20:28 semiosis then definitely see that page i just linked
20:28 purpleidea semiosis: no of course not! i just mean that i don't care which module is "used" more as long as they're all free. anyways, happy hacking, and feel free to ping me if you want to work on the code.
20:28 semiosis follow the instructions there to strip out all the require/include calls
20:29 purpleidea semiosis: (sorry, i go afk a lot...)
20:29 semiosis using autoloading w/out the require/include calls will give you a huuuuge perf. boost
20:29 semiosis purpleidea: :)
20:29 Psi-Jack semiosis: Already is. But I'm also not the R&D team working on code. I'm the system engineer setting up the server infrastructure.
20:30 semiosis already is, what?  i'd find it hard to believe you're getting 5s page load times after stripping out the require/include calls
20:31 Psi-Jack Already is optimized.
20:31 semiosis not your include path
20:31 Psi-Jack Yes it is.
20:31 semiosis i'm sure it is but that's not what i'm talking about anymore :)
20:31 Psi-Jack I know. :)
20:31 semiosis ok i'm lost
20:32 Psi-Jack We're using pretty much near 100% autoloading without the require/include calls (though autoloading DOES include, no matter what you actually think.) ;)
20:33 semiosis did you run the command using find+sed to strip out the require/include calls from the entire ZF library, or not?
20:36 Psi-Jack I just checked, searching for anything relevant to what that would replace, and yeah, every single require_once is commented out specifically in their incorporated ZF library.
20:37 semiosis awesome
20:37 Psi-Jack So, they did it, sometime, for sure. ;)
20:37 Psi-Jack Instead of removing, they commented. ;)
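The find+sed pass semiosis means is the one from the ZF performance guide linked above; roughly (run from the library root, sparing the autoloader files), and it comments the calls out just as Psi-Jack observed:

    find . -name '*.php' -not -wholename '*/Loader/Autoloader.php' \
        -not -wholename '*/Loader/ClassMapAutoloader.php' -print0 \
      | xargs -0 sed --regexp-extended --in-place 's/(require_once)/\/\/ \1/g'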
20:40 Psi-Jack Like I said, I'm guessing the bigger problem is ethernet latency somewhere. Since we only have 1Gb between each hypervisor server, and some kind of vswitch somewhere in there, our ethernet's getting bottlenecked like no tomorrow and latency is increased.
20:41 Psi-Jack Native storage, not bad at all. Direct FCx4 to FC disks in the SAN.
20:41 Bullardo joined #gluster
20:44 sagenate joined #gluster
20:45 sagenate hey
20:46 semiosis hi
20:46 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
20:46 semiosis sagenate: ^^
20:46 semiosis JoeJulian: watch this...
20:46 semiosis google glusterfs
20:47 semiosis ok apparently glusterbot doesnt behave like the other supybots in my life
20:47 sagenate thanks semiosis, it's been ages since I've actually used IRC. I'm unfamiliar with protocols :)
20:49 sagenate Simply put, I am trying to build a MS Windows Failover Cluster in which to install MS SQL Server in Amazon Web Services. I am having trouble with the lack of shared storage and I was wondering if Gluster could solve this problem.
20:55 plarsen joined #gluster
21:07 TSM2 joined #gluster
21:10 Bullardo joined #gluster
21:11 Psi-Jack sagenate: Ummm... No.
21:12 blendedbychris joined #gluster
21:12 blendedbychris joined #gluster
21:12 sagenate Psi-Jack: thanks, i didn't think so.
21:13 sagenate i think i need something with an iscsi interface
21:15 andreask joined #gluster
21:15 JoeJulian @google glusterfs
21:15 glusterbot JoeJulian: Frontpage | Gluster Community Website: <http://www.gluster.org/>; Documentation - Gluster: <http://www.gluster.org/docs/>; GlusterFS Concepts - GlusterDocumentation: <http://goo.gl/LvkRA>; GlusterFS - Wikipedia, the free encyclopedia: <http://en.wikipedia.org/wiki/GlusterFS>; Glusterfs, Glusterfs | SlideShare:
21:15 glusterbot JoeJulian: <http://www.slideshare.net/Gluster>; gluster/glusterfs · GitHub: <https://github.com/gluster/glusterfs>; Storage Clustering Part 2: GlusterFS | RimuHosting Blog: <http://goo.gl/75LGv>; Anatomy Of An Open Source Acquisition: From GlusterFS To Red ...: <http://goo.gl/wV0Mf (1 more message)
21:16 JoeJulian semiosis: Was that what you were expecting?
21:16 Psi-Jack sagenate: No, you need a full blown SAN, or a hardware battery backed RAID server, simply put.
21:16 Psi-Jack sagenate: "Shared Storage" + "Database" =  Extremely bad, period.
21:17 JoeJulian YEAH! 'Cause mine hasn't worked reliably for 2 years!
21:17 JoeJulian Oh, wait... yes it has.
21:17 sagenate shared storage is how you create a sql server failover cluster
21:19 JoeJulian sagenate: The real problem with your design is Windows. It doesn't have FUSE and, therefore, could only access a glusterfs volume via nfs.
21:21 sagenate okay thanks
21:21 JoeJulian What Psi-Jack  and everyone else seems to mean when poo-pooing the idea of putting a database on shared storage is that it's going to be much slower. Too slow for some applications.
21:21 mohankumar joined #gluster
21:21 sagenate I'm trying to fork lift an existing solution/architecture into AWS as a proof of concept
21:22 Psi-Jack sagenate: Incorrect. Replication is how you do failover clusters.
21:22 JoeJulian Both correct. Either one works.
21:24 JoeJulian I have a single mysql (actually MariaDB) server running with the data stored in innodb tables on a gluster volume. If that mysql vm fails, I can quickly kill it and switch over to another standby.
21:24 JoeJulian I've found that innodb is much more efficient than isam for gluster volumes (not that you have much of a choice with MS).
21:25 Psi-Jack heh yikes
21:25 JoeJulian Like I say, Psi-Jack, it's been working reliably since 2.0.2.
21:26 JoeJulian On replica 3 volumes, btw.
21:26 * m0zes uses postgres, mysql and rrd databases on shared/replicated glusterfs volumes. also vms.
21:27 m0zes of course I do very few writes to the postgres/mysql databases, and use rrdcached to batch update the rrd files.
21:30 eightyeight joined #gluster
21:32 semiosis JoeJulian: the other supys (which aren't limnoria) basically do a @lucky if you just start a message with "google ..."
21:33 m0zes @google supybot
21:33 glusterbot m0zes: Supybot | Free Communications software downloads at - SourceForge: <http://sourceforge.net/projects/supybot/>; SourceForge.net: Supybot Resources - gribble: <http://goo.gl/0jXrk>; Supybot : Project Summary - Ohloh: <http://www.ohloh.net/p/supybot>; Supybot: <http://www.supybot.org/>; Supybot - IRC Wiki: <http://www
21:33 glusterbot m0zes: .irc-wiki.org/Supybot>; Supybot – Freecode: <http://freecode.com/projects/supybot>; mmueller/supybot-git · GitHub: <https://github.com/mmueller/supybot-git>; ProgVal/Supybot-plugins · GitHub: <https://github.com/ProgVal/Supybot-plugins>
21:34 semiosis @lucky m0zes
21:34 glusterbot semiosis: http://www.facebook.com/m0zes
21:34 semiosis ;)
21:34 m0zes hah. not me
21:34 semiosis lol
21:35 sagenate @lucky sagenate
21:35 glusterbot sagenate: http://www.facebook.com/sagenate
21:35 sagenate got me! ha
21:35 sagenate neat bot
21:36 gatuus joined #gluster
21:37 gatuus please help: What are the disadvantages or recommendations if I install GlusterFS without any RAID??
21:38 semiosis gatuus: you would probably want to use a replicated volume with two or more servers
21:38 semiosis just a guess
21:39 mgebbe_ joined #gluster
21:39 gatuus semiosis: yes, I'm considering using 3 replicas... but what if I install it without RAID??
21:39 semiosis sounds fine to me, i dont use raid
21:40 gatuus ohh ok... that's a vote for not using it thanks
21:40 semiosis though i do use ebs, which i've been told has some redundancy built in
21:43 JoeJulian I don't use raid, but I do replica 3
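A replica-3-without-RAID layout like JoeJulian's could be created along these lines (names hypothetical); each file then has a full copy on a plain disk in three different servers, so replication stands in for RAID redundancy:

    gluster volume create myvol replica 3 \
        server1:/export/disk1 server2:/export/disk1 server3:/export/disk1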
21:45 gatuus what about performance?? is a redundant RAID recommended?
21:57 JoeJulian Your biggest performance hurdle is probably going to be your network. You'll need to look at how you're going to use it and decide from that what performance issues you need to focus on. Like everything, there are tradeoffs to be had.
21:58 blendedbychris semiosis: so i go back to my question originally… why not just use the mount resource heh :)
21:59 blendedbychris instead of your puppet module
21:59 blendedbychris I'm gonna make a new one though
22:00 semiosis :)
22:00 semiosis sounds great to me
22:02 semiosis i put my glusterfs module out there but never advocated that anyone else should use it :D
22:02 semiosis i sure don't want to have to support people using it!
22:07 semiosis @puppet
22:07 glusterbot semiosis: (#1) https://github.com/semiosis/puppet-gluster, or (#2) https://github.com/purpleidea/puppet-gluster
22:28 tryggvil joined #gluster
22:39 JoeJulian mp add "^google (.*)$" "echo $nick: [lucky $1]"
22:40 JoeJulian google Joe Julian
22:42 semiosis idk if that's something to be desired, seemed more like a bug imho
22:42 semiosis a bug that glusterbot didnt have
22:44 JoeJulian well okay then. :D
22:56 stefanha joined #gluster
22:57 snarkyboojum joined #gluster
23:13 plarsen joined #gluster
23:14 lh joined #gluster
23:14 lh joined #gluster
23:16 davdunc joined #gluster
23:16 davdunc joined #gluster
23:22 blendedbychris JoeJulian: do you use hosts file to specify your hosts?
23:24 JoeJulian No, I use bind9
23:25 blendedbychris out of curiosity how do you handle instances where you might want to return a lan address vs wan address?
23:27 JoeJulian Currently, I don't. I'm considering doing that, though, and will probably use hosts to provide that split-horizon name resolution.
23:27 JoeJulian It can also be done with bind, but I'm not ambitious enough to figure that out.
23:28 JoeJulian Actually... now that I've said that...
23:29 JoeJulian If you just use a subdomain for the local addresses, ie server1.local.domain.dom and don't use fqdn for hostnames, you can set the search parameter in resolv.conf to local.domain.dom.
23:29 JoeJulian The client search would remain domain.dom so you'd have your split-horizon lookups.
23:30 JoeJulian booyah!
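A sketch of the scheme JoeJulian just described, using his example names: both belinda.local.domain.dom (LAN address) and belinda.domain.dom (client-facing address) exist in DNS, and only the resolver search path differs:

    # /etc/resolv.conf on the servers: "belinda" resolves via local.domain.dom
    search local.domain.dom
    # /etc/resolv.conf on the clients: the same short name resolves via domain.dom
    search domain.dom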
23:31 daMaestro joined #gluster
23:31 semiosis @lucky split-horizon dns
23:31 glusterbot semiosis: http://en.wikipedia.org/wiki/Split-horizon_DNS
23:32 semiosis glusterbot: awesome
23:32 glusterbot semiosis: ohhh yeeaah
23:35 blendedbychris thanks
23:37 blendedbychris JoeJulian: do you really have to do local.domain? could it just assume if it's missing the fqdn look somewhere and then use search?
23:38 atrius joined #gluster
23:39 JoeJulian If a hostname is missing the domain on lookup, unless you specifically configured it otherwise, it's going to look in /etc/hosts for that hostname. If it doesn't find it, it's going to query dns. If it doesn't find that, it's going to add the first search domain to the hostname and start over again. It will repeat that for as many domains as you have listed after the search parameter in resolv.conf.
23:41 blendedbychris JoeJulian: so ideally really what you'd do is setup bind9 just for your local addresses?
23:41 blendedbychris and have that before your ISP dns entries?
23:41 blendedbychris (or even just include entries without the fqdn, if that's even possible)
23:41 blendedbychris or a fake "fqdn"
23:43 JoeJulian I always define my own fqdns for my hosts. I don't want them querying an external host to determine who each other is.
23:45 blendedbychris hrm
23:45 blendedbychris what do you mean?
23:45 semiosis one.gluster.joe?
23:46 blendedbychris here's another question…. is it possible for bind9 (or some other service) to allow the host to assign its own hostname?
23:47 semiosis yes but we're getting pretty OT for #gluster
23:48 semiosis see dynamic dns & dns-sd/bonjour for two different things that fit that description
23:48 semiosis s/bonjour/bonjour\/avahi\/zeroconf/
23:48 semiosis heh
23:48 JoeJulian Here at Ed Wyse, all my servers are in the domain ewcs.com. It's strictly an internal domain name, even though it's registered and hosted. If I ever get around to doing the "local" network for the servers, I'll call those (for instance) belinda.local.ewcs.com for the host whose "public" address is belinda.ewcs.com. (We don't actually have any "public" addresses to any of our servers)
