
IRC log for #gluster, 2013-12-10


All times shown according to UTC.

Time Nick Message
00:06 jiqiren joined #gluster
00:07 saltsa joined #gluster
00:17 harish joined #gluster
00:35 saltsa joined #gluster
00:36 _Spiculum joined #gluster
00:38 morse_ joined #gluster
00:40 micu joined #gluster
00:40 ultrabizweb joined #gluster
00:42 mkzero_ joined #gluster
00:43 jbd1_ joined #gluster
00:45 katka_ joined #gluster
00:45 lanning_ joined #gluster
00:45 REdOG_ joined #gluster
00:48 davidbierce joined #gluster
00:52 wgao__ joined #gluster
00:57 andreask joined #gluster
00:57 cyberbootje joined #gluster
01:02 yinyin joined #gluster
01:04 bala joined #gluster
01:15 saltsa joined #gluster
01:16 psyl0n joined #gluster
01:20 Oneiroi joined #gluster
01:35 Alex Morning. I think I may misunderstand how a gluster rebalance works. I had a volume (distribute-replicate, two nodes), added four new bricks (2 per node), and ran a rebalance. One of my bricks still shows as 100% full even after that. The others definitely have data on them, though. Is there something I'm missing?
01:43 purpleidea Alex: what commands did you run? fix-layout or not?
01:43 purpleidea @rebalance
01:43 glusterbot purpleidea: I do not know about 'rebalance', but I do know about these similar topics: 'replace'
01:44 Alex purpleidea: I didn't run fix-layout, but my understanding from the docs (and from the output of the log) was that fix-layout was implied with rebalance
01:44 purpleidea Alex: here's how it works:
01:44 Alex (volume rebalance <vol> start)
01:47 purpleidea when you "fix layout" it changes where the elastic hashes will put files. this is useful if you add/remove bricks so that the hashes are approximately even across all bricks...
01:47 purpleidea this operation is typically "fast-ish"
01:47 purpleidea when you do the rebalance (not the fix-layout part) this actually moves all the files to their correct places... this can take a while... this is what you probably didn't do.
01:47 purpleidea so after a fix layout, files that don't move will be where they are, and files that should move (after a rebalance) will be pointed to from their correct place. get it?
01:47 Alex Right, so what I did was just run volume rebalance <vol> start, which according to the log did both: "fixing the layout of /" and... "0-shared-dht: migrate data called on /" (and for the N thousand sub directories)
01:47 Alex so is it a separate operation that I now need to call?
01:47 Alex Just for ref, after my rebalance the output of status was: https://gist.github.com/3050ebdb1a3d33768300
01:47 glusterbot Title: gist:3050ebdb1a3d33768300 (at gist.github.com)
01:47 Alex (ps, thank you for the help as always :))
01:48 JoeJulian No, the plain rebalance is supposed to do both, it's not an extra step.
01:48 purpleidea Alex: you typically run a fix layout first. then you run a rebalance.
01:48 JoeJulian Check the rebalance logs for clues what might have happened.
01:48 daMaestro joined #gluster
01:49 JoeJulian purpleidea: Nope. The separate fix-layout option was for some users that didn't want to move their files.
01:49 purpleidea JoeJulian: hrm, i thought when you did it with rebalance start .. commit it only moved files, but rebalance (without the start/commit) it did both. I'm okay being wrong :)
01:53 _BryanHm_ joined #gluster
01:53 JoeJulian purpleidea: You're thinking of remove-brick/replace-brick. There's no commit for rebalance.
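For reference, the 3.4-era rebalance commands being discussed look roughly like this (the volume name is a placeholder; a plain "start" walks the layout fix first, so the separate fix-layout step is optional):

    gluster volume rebalance <volname> fix-layout start   # recompute the layout only, no data is moved
    gluster volume rebalance <volname> start              # fix the layout, then migrate files
    gluster volume rebalance <volname> status             # per-node progress, including failure counts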
01:53 Alex I definitely see some failures, but no real info about why. It's a chunky logfile, so will spend a bit of time digging...
01:53 purpleidea JoeJulian: oh, you're right!
01:53 JoeJulian The most important parts should be " E "
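A quick way to pull just those error lines out of the rebalance log, assuming the default log location and a placeholder volume name:

    grep " E " /var/log/glusterfs/<volname>-rebalance.log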
01:53 purpleidea JoeJulian: karma++
01:53 JoeJulian Hehe
01:53 purpleidea glusterbot: !
01:53 JoeJulian I don't have that enabled.
01:53 purpleidea JoeJulian: are you the glusterbot op?
01:53 JoeJulian yep
01:53 JoeJulian Well, owner. There are several with ops rights.
01:53 purpleidea JoeJulian: ah. now i know why you're so good at it. enable .py and .karma and etc :P
01:53 purpleidea @teach jamesisawesome james is awesome
01:53 JoeJulian @meh
01:53 glusterbot JoeJulian: I'm not happy about it either
01:53 purpleidea apparently i can't teach it anything
01:53 JoeJulian @learn
01:54 glusterbot JoeJulian: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
01:54 purpleidea @learn jamesisawesome as james is awesome
01:54 glusterbot purpleidea: The operation succeeded.
01:54 purpleidea @jamesisawesome
01:54 glusterbot purpleidea: james is awesome
01:54 purpleidea @forget jamesisawesome
01:54 glusterbot purpleidea: The operation succeeded.
01:54 purpleidea @jamesisawesome
01:54 JoeJulian @kick...
01:54 purpleidea JoeJulian: aha
01:54 JoeJulian lol
01:54 purpleidea @kick ?
01:54 yinyin joined #gluster
01:55 purpleidea (,,kick)
01:55 JoeJulian <at>kick purpleidea
01:55 purpleidea JoeJulian: oh, you get your bot to do your dirty work :P
01:55 daMaestro yeah, but usually you do that out of channel ;-)
01:56 kickme joined #gluster
01:56 purpleidea @kick kickme
01:56 glusterbot purpleidea: Error: You don't have the #gluster,op capability. If you think that you should have this capability, be sure that you are identified before trying again. The 'whoami' command can tell you if you're identified.
01:56 purpleidea @whoami
01:56 glusterbot purpleidea: I don't recognize you.
01:56 JoeJulian Not I. It's just faster than gaining ops. Usually I use a kick for spamming the channel. And twice a kban for trolling.
01:57 purpleidea JoeJulian: i've wanted to kick people for pasting in channel. then again i have no op. however i've only got 24 lines of scroll :P
01:57 daMaestro there are antiflooding plugins we use in #fedora-* JoeJulian
01:57 daMaestro we also run supybot
01:57 JoeJulian Ok, take the glusterbot playing to /msg.
01:58 purpleidea i'm done
01:58 JoeJulian Yeah, I started looking at those, and it locked the channel due to a netsplit.
02:00 daMaestro yeah, you have to be specific on what you want to take action on
02:00 JoeJulian I'm sure it must have been my own configuration failure, but I've been too busy lately to play.
02:00 purpleidea JoeJulian: fwiw I got vagrant working on fedora with libvirt/qemu
02:00 kickme left #gluster
02:00 JoeJulian nice
02:00 JoeJulian Did you write it up somewhere?
02:00 purpleidea (my memory tells me you were a fedora user too)
02:00 JoeJulian yep
02:00 purpleidea JoeJulian: just did: https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/
02:00 purpleidea save yourself some time and have a look. took some knob turning and fiddling. going to use this to do glusterfs of course
02:00 JoeJulian That's also something that's bugged me. I don't like installing kmods if I can avoid it.
02:00 purpleidea kmods? for what?
02:00 JoeJulian For virtualbox...
02:01 daMaestro JoeJulian, http://www.scrye.com/~kevin/fedora/FloodProtect/plugin.py ... not sure if it's the latest version or not
02:01 purpleidea Oh, vbox needs that? yikes! another reason to avoid it. It was never getting installed on my machine, but i wanted to try vagrant. so far, i like it a lot.
02:02 JoeJulian Thanks. Maybe I'll have time after Christmas. With a 3 1/2 year old, there's no extra time this year.
02:02 daMaestro sure, understand
02:02 purpleidea JoeJulian: np. ping me if you get stuck or if i messed up something in my post or there's a better way. After xmas i'll probably have Vagrantfile's for gluster, and puppet-gluster (of course)
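For anyone following along, the vagrant-libvirt setup purpleidea is describing boils down to roughly the following; this is only a sketch, and the blog post above covers the Fedora-specific knob turning:

    vagrant plugin install vagrant-libvirt   # add the libvirt provider to vagrant
    vagrant up --provider=libvirt            # boot the box on libvirt/qemu instead of virtualbox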
02:03 JoeJulian actually, that's not the same one. This is much simpler. Maybe I will play with it later.
02:03 daMaestro yeah, it's just for channel msg spam, kevin wrote it a while ago
02:04 psyl0n joined #gluster
02:04 sarkis joined #gluster
02:16 d-fence_ joined #gluster
02:17 [o__o] joined #gluster
02:18 sysconfig joined #gluster
02:18 nullck joined #gluster
02:37 bharata-rao joined #gluster
02:55 NeatBasis joined #gluster
02:56 lkoranda joined #gluster
02:58 mattap___ joined #gluster
02:59 rjoseph joined #gluster
03:06 Alex Yeah, can't see much in terms of errors past just something like [2013-12-06 05:26:21.211765] E [dht-rebalance.c:1202:gf_defrag_migrate_data] 0-shared-dht: migrate-data failed for /lfd/networktest/_logs/2013-07-28.log.gz -- I'll dig on the debug modes now :)
03:07 DV__ joined #gluster
03:11 jag3773 joined #gluster
03:14 ccha4 joined #gluster
03:21 hagarth joined #gluster
03:21 mikedep333 joined #gluster
03:23 dusmant joined #gluster
03:38 satheesh joined #gluster
03:38 RameshN joined #gluster
03:50 vpshastry joined #gluster
03:52 badone joined #gluster
04:01 saurabh joined #gluster
04:06 kanagaraj joined #gluster
04:08 sgowda joined #gluster
04:11 yinyin joined #gluster
04:12 sarkis joined #gluster
04:13 johnbot1_ joined #gluster
04:20 itisravi joined #gluster
04:24 spandit joined #gluster
04:27 ppai joined #gluster
04:29 ngoswami joined #gluster
04:30 mattapp__ joined #gluster
04:30 vpshastry joined #gluster
04:37 kshlm joined #gluster
04:37 kshlm joined #gluster
04:40 mohankumar joined #gluster
04:42 MiteshShah joined #gluster
04:48 ndarshan joined #gluster
04:54 shruti joined #gluster
04:54 sahina joined #gluster
04:54 aravindavk joined #gluster
04:55 johnbot11 joined #gluster
04:59 mattapp__ joined #gluster
05:07 satheesh joined #gluster
05:13 bala joined #gluster
05:13 johnbot11 joined #gluster
05:14 raghu joined #gluster
05:22 bala1 joined #gluster
05:25 aravinda_ joined #gluster
05:26 vpshastry joined #gluster
05:26 meghanam joined #gluster
05:26 meghanam_ joined #gluster
05:27 mattapp__ joined #gluster
05:28 psharma joined #gluster
05:30 timothy joined #gluster
05:30 MiteshShah joined #gluster
05:34 johnbot1_ joined #gluster
05:36 sgowda joined #gluster
05:40 mattappe_ joined #gluster
05:40 amukherj_ joined #gluster
05:42 vkoppad joined #gluster
05:45 satheesh joined #gluster
05:46 shylesh joined #gluster
05:50 johnbot11 joined #gluster
05:57 ababu joined #gluster
06:03 bulde joined #gluster
06:04 CheRi joined #gluster
06:09 anands joined #gluster
06:10 sonicrose joined #gluster
06:11 sonicrose would this be the right place to come for advice on a gluster configuration?
06:11 badone joined #gluster
06:13 sonicrose i have a two node cluster running, sharing a very small amount of data (from SSD drives). The two nodes each have 165GB of space with 1 brick each, single volume, replica count = 2
06:14 sonicrose glusterfs 3.4.1 btw, seems to be working great right now.  I'm considering a way to use this for VM service with a hypervisor
06:15 yinyin joined #gluster
06:16 zeittunnel joined #gluster
06:20 micu3 joined #gluster
06:22 mattapp__ joined #gluster
06:25 harish joined #gluster
06:25 glusterbot New news from newglusterbugs: [Bug 928656] nfs process crashed after rebalance during unlock of files. <https://bugzilla.redhat.com/show_bug.cgi?id=928656>
06:40 zwu joined #gluster
06:45 FarbrorLeon joined #gluster
06:48 sgowda sonicrose: are these for hosting vm images? or to attach storage?
06:54 hagarth joined #gluster
06:54 aravindavk joined #gluster
06:54 aravinda_ joined #gluster
06:55 timothy joined #gluster
06:58 yinyin joined #gluster
07:12 ngoswami joined #gluster
07:15 sonicrose i need a solution for inexpensive, redundant, fast shared storage for vm images
07:17 sonicrose i want to use large 3.5" platter drives in a pair of storage nodes, and i want to have a few SSD drives in my hypervisor nodes. i'd like the running VMs to read and write from the local SSD storage, while gluster replicates the data elsewhere so that if that hypervisor goes down i can still mount the vm images on a different hypervisor
07:19 hagarth joined #gluster
07:23 jtux joined #gluster
07:24 mattappe_ joined #gluster
07:25 sgowda joined #gluster
07:50 geewiz joined #gluster
07:56 timothy joined #gluster
07:57 adsaman joined #gluster
07:58 pkoro joined #gluster
07:59 eseyman joined #gluster
08:09 vpshastry joined #gluster
08:10 adsaman Hi, I was setting up a glusterfs cluster to store users' home directories, but it seems that gluster doesn't support user quotas. Does anybody have a similar configuration? How do you manage user quotas?
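GlusterFS 3.4 only offers directory-level quotas, not per-user quotas, so a common workaround is to cap each user's home directory instead; a rough sketch with placeholder volume, path, and size:

    gluster volume quota homes enable                   # turn quotas on for the volume
    gluster volume quota homes limit-usage /alice 10GB  # cap one user's home directory
    gluster volume quota homes list                     # show configured limits and usage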
08:15 prasanth joined #gluster
08:17 sticky_afk joined #gluster
08:18 stickyboy joined #gluster
08:26 keytab joined #gluster
08:29 andreask joined #gluster
08:29 mattapp__ joined #gluster
08:30 mattappe_ joined #gluster
08:32 harish joined #gluster
08:50 DV__ joined #gluster
08:50 RameshN joined #gluster
08:51 sahina joined #gluster
08:54 prasanth joined #gluster
08:54 dusmant joined #gluster
08:56 timothy joined #gluster
08:59 aravinda_ joined #gluster
08:59 kanagaraj joined #gluster
08:59 ababu joined #gluster
08:59 ndarshan joined #gluster
09:02 mgebbe_ joined #gluster
09:04 bala1 joined #gluster
09:05 FarbrorLeon joined #gluster
09:06 calum_ joined #gluster
09:09 ctria joined #gluster
09:13 shylesh joined #gluster
09:27 timothy joined #gluster
09:27 ndarshan joined #gluster
09:28 ababu joined #gluster
09:28 RameshN joined #gluster
09:28 aravinda_ joined #gluster
09:28 sahina joined #gluster
09:28 kanagaraj joined #gluster
09:29 Guest71544 joined #gluster
09:29 dusmant joined #gluster
09:31 alexcos20 joined #gluster
09:31 mattapp__ joined #gluster
09:31 alexcos20 Hello
09:31 glusterbot alexcos20: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:31 mohankumar joined #gluster
09:31 bala1 joined #gluster
09:32 alexcos20 i'm not sure how to ask, but has anybody seen this before:
09:32 alexcos20 [2013-12-09 13:08:36.788765] I [afr-common.c:2120:afr_discovery_cbk] 0-stor1-replicate-0: selecting local read_child stor1-client-1
09:32 alexcos20 [2013-12-09 13:08:36.789257] E [afr-self-heal-common.c:197:afr_sh_print_split_brain_log] 0-stor1-replicate-0: Unable to self-heal contents of '/' (possible split-brain). Please delete the file from all but the preferred subvolume.- Pending matrix:  [ [ 0 2 ] [ 2 0 ] ]
09:32 alexcos20 [2013-12-09 13:08:36.789648] E [afr-self-heal-common.c:2212:afr_self_heal_completion_cbk] 0-stor1-replicate-0: background  meta-data self-heal failed on /
09:32 alexcos20 i'm unable to build a volume of 2 replica bricks, containing no files
09:36 Remco Which version of gluster is this about?
09:36 ngoswami joined #gluster
09:37 atrius joined #gluster
09:37 alexcos20 glusterfs-3.4.1-3.el5
09:38 alexcos20 (rpm installed)
09:39 alexcos20 Volume info:
09:39 alexcos20 Volume Name: stor1
09:39 alexcos20 Type: Replicate
09:39 alexcos20 Volume ID: 6bd88164-86c2-40f6-9846-b21e90303e73
09:39 alexcos20 Status: Started
09:39 alexcos20 Number of Bricks: 1 x 2 = 2
09:39 alexcos20 Transport-type: tcp
09:39 alexcos20 Bricks:
09:39 alexcos20 Brick1: blade7.xen.simplus.ro:/gluster/stor1
09:39 alexcos20 Brick2: blade6.xen.simplus.ro:/gluster/stor1
09:39 alexcos20 Options Reconfigured:
09:39 alexcos20 nfs.port: 2049
09:40 vimal joined #gluster
09:41 ninkotech joined #gluster
09:41 ninkotech_ joined #gluster
09:46 ninkotech joined #gluster
09:46 ninkotech_ joined #gluster
09:56 ninkotech joined #gluster
09:56 ninkotech_ joined #gluster
09:57 alexcos joined #gluster
09:57 abyss^ I have a question about the mailing list: I sent a mail to gluster-users@ two days ago and it didn't reach the list. Did I do something wrong?
09:59 alexcos20 joined #gluster
10:01 alexcos20 abyss - did you subscribe to the list?
10:02 hagarth abyss^: what was the subject line of your email?
10:04 abyss^ alexcos20: yes
10:05 abyss^ hagarth: Error after crash of Virtual Machine during migration
10:05 muhh joined #gluster
10:06 hagarth abyss^: can you please try sending it out again?
10:09 social how does one get rid of old georeplication after updating to 3.5 ?
10:10 social after I updated I got defunct state and I'm unable to stop it as I get Passwordless ssh login has not been setup
10:15 abyss^ hagarth: I sorted it out myself (I just moved gluster via: http://gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server). Or do you want to check what's wrong? Should I resend it? In summary, this problem is still unresolved (I think it's a bug ;))
10:15 glusterbot Title: Gluster 3.4: Brick Restoration - Replace Crashed Server - GlusterDocumentation (at gluster.org)
10:20 vpshastry2 joined #gluster
10:29 lanning joined #gluster
10:35 bulde joined #gluster
10:35 dylan joined #gluster
10:38 shylesh joined #gluster
10:43 hagarth joined #gluster
10:43 gdubreui joined #gluster
10:46 lava joined #gluster
10:48 hagarth abyss^: it might be good to still send it across :)
10:49 Guest71544 joined #gluster
10:51 ppai joined #gluster
10:52 shyam joined #gluster
10:53 aravinda_ joined #gluster
10:54 mattapp__ joined #gluster
10:59 calum_ joined #gluster
11:00 abyss^ hagarth: OK. I resent it :) And now it's available ;)
11:01 abyss^ thx
11:26 FarbrorL_ joined #gluster
11:32 psyl0n joined #gluster
11:37 alexcos20 this is driving me insane:  0-stor1-replicate-0: Unable to self-heal contents of '/' (possible split-brain). Please delete the file from all but the preferred subvolume.- Pending matrix:  [ [ 0 2 ] [ 2 0 ] ]
11:37 alexcos20 but there are no files on any of the subvolumes !!
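For a metadata split-brain on the root directory like this, the usual 3.4-era approach is to clear the pending-changelog xattrs on the brick root of the copy you are willing to discard and then trigger a heal. This is only a sketch using the brick paths from the volume info above; the client index (client-0 vs client-1) and the choice of which brick to discard are assumptions to confirm with getfattr first:

    # inspect the afr changelog attributes on each brick root
    getfattr -d -m trusted.afr -e hex /gluster/stor1
    # on the brick whose metadata you want to discard, zero the attribute blaming the other side
    setfattr -n trusted.afr.stor1-client-0 -v 0x000000000000000000000000 /gluster/stor1
    # then stat the mount point from a client to kick off self-heal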
11:40 VeggieMeat_ joined #gluster
11:44 fyxim joined #gluster
11:44 hybrid5121 joined #gluster
11:45 edward1 joined #gluster
11:49 baul joined #gluster
11:52 hagarth joined #gluster
11:56 baul http://review.gluster.org/#/c/5807/ has patch set 6; its commit id is 3438475cfda277fac0de6758ea9f85e3587596b5. Why did someone finally cherry-pick this change as 3438475cfda277fac0de6758ea9f85e3587596b5? What does that mean?
11:56 glusterbot Title: Gerrit Code Review (at review.gluster.org)
11:57 baul git log only finds the cherry-picked commit id?
11:57 baul someone cherry-picked it
12:00 stickyboy joined #gluster
12:00 hagarth1 joined #gluster
12:01 baul can someone explain the cherry-pick commit policy in glusterfs?
12:05 kkeithley1 joined #gluster
12:09 aravinda_ joined #gluster
12:11 stickyboy joined #gluster
12:11 stickyboy joined #gluster
12:15 dusmant joined #gluster
12:16 timothy joined #gluster
12:16 FarbrorLeon joined #gluster
12:19 ppai joined #gluster
12:25 mattappe_ joined #gluster
12:28 glusterbot New news from newglusterbugs: [Bug 1039954] replace-brick command should warn it is broken <https://bugzilla.redhat.com/show_bug.cgi?id=1039954>
12:34 shyam joined #gluster
12:34 vpshastry1 joined #gluster
12:44 itisravi joined #gluster
12:44 bala joined #gluster
12:54 chirino joined #gluster
12:55 vkoppad joined #gluster
12:56 hagarth joined #gluster
12:57 DV__ joined #gluster
12:58 glusterbot New news from newglusterbugs: [Bug 1039965] Merge in the Fedora spec changes to build one single unified spec <https://bugzilla.redhat.com/show_bug.cgi?id=1039965>
13:00 CheRi joined #gluster
13:04 bala joined #gluster
13:04 glusterbot New news from resolvedglusterbugs: [Bug 918917] 3.4 Alpha3 Tracker <https://bugzilla.redhat.com/show_bug.cgi?id=918917> || [Bug 952693] 3.4 Beta1 Tracker <https://bugzilla.redhat.com/show_bug.cgi?id=952693> || [Bug 819130] Merge in the Fedora spec changes to build one single unified spec <https://bugzilla.redhat.com/show_bug.cgi?id=819130>
13:08 badone joined #gluster
13:09 giannello joined #gluster
13:20 vkoppad joined #gluster
13:20 zeittunnel joined #gluster
13:23 dusmant joined #gluster
13:34 ababu joined #gluster
13:44 mattappe_ joined #gluster
13:47 B21956 joined #gluster
13:49 mkzero joined #gluster
13:51 skered- If I stop glusterd and glusterfsd I still see two glusterfs processes running.  Something to do with nfs.log and glustershd.log.  Any idea what those two are all about?
13:51 FarbrorLeon joined #gluster
13:52 skered- If I don't kill them and do the same thing as above (stop glusterd/glusterfsd), I'll have four of these processes (two old, two new)
13:54 kkeithley_ glusterfs is a client. Sometimes it's both a client and a server: e.g. on a brick it's the gluster NFS server, i.e. both an NFS server and a client of gluster. On client machines it's the fuse bridge client. On clients it goes away when you unmount a gluster volume. On the server it should go away when you stop the volume, e.g. with `gluster volume stop $volname`
13:56 Alpinist joined #gluster
14:03 bala joined #gluster
14:07 pawkor joined #gluster
14:08 skered- kkeithley_: Ok that makes sense. I forgot we're using glusterfs in /etc/fstab on the glusterfs server itself
14:08 skered- We have a glusterfs volume for the ctdb lock file.
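That kind of local mount on the server is typically just an fstab entry pointing back at the node itself; a sketch with a made-up volume name and mount point:

    localhost:/ctdb-lock  /mnt/ctdb-lock  glusterfs  defaults,_netdev  0 0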
14:10 foster_ joined #gluster
14:10 B21956 joined #gluster
14:11 msvbhat joined #gluster
14:12 haakon joined #gluster
14:12 mgebbe_ joined #gluster
14:12 nixpanic joined #gluster
14:13 nixpanic joined #gluster
14:13 basic` joined #gluster
14:16 bdperkin joined #gluster
14:17 nhm joined #gluster
14:21 ngoswami joined #gluster
14:22 dbruhn joined #gluster
14:23 edward1 joined #gluster
14:24 vkoppad joined #gluster
14:28 japuzzo joined #gluster
14:33 dusmant joined #gluster
14:33 hagarth joined #gluster
14:38 matta____ joined #gluster
14:40 mattapp__ joined #gluster
14:48 mattapp__ joined #gluster
14:49 guix joined #gluster
14:57 bennyturns joined #gluster
15:01 mohankumar joined #gluster
15:04 ctria joined #gluster
15:09 gmcwhistler joined #gluster
15:11 kshlm joined #gluster
15:12 MrNaviPacho joined #gluster
15:13 bugs_ joined #gluster
15:15 mattapp__ joined #gluster
15:15 _BryanHm_ joined #gluster
15:17 wushudoin joined #gluster
15:18 bsaggy joined #gluster
15:20 ndk joined #gluster
15:21 badone joined #gluster
15:25 shyam joined #gluster
15:26 daMaestro joined #gluster
15:27 harish joined #gluster
15:34 ctria joined #gluster
15:37 jag3773 joined #gluster
15:41 jbrooks joined #gluster
15:47 vpshastry joined #gluster
15:49 Technicool joined #gluster
15:55 ira joined #gluster
15:55 ira joined #gluster
15:58 ProT-0-TypE joined #gluster
16:01 theron joined #gluster
16:09 kaptk2 joined #gluster
16:18 shyam joined #gluster
16:19 mattapp__ joined #gluster
16:31 vpshastry left #gluster
16:36 japuzzo joined #gluster
16:40 theron joined #gluster
16:44 bdperkin joined #gluster
16:45 wushudoin| joined #gluster
16:46 avati joined #gluster
16:47 Technicool joined #gluster
16:48 a2_ joined #gluster
16:49 jbautista- joined #gluster
16:52 mattapp__ joined #gluster
16:58 zaitcev joined #gluster
16:58 theron joined #gluster
16:59 thogue joined #gluster
17:01 jag3773 joined #gluster
17:02 mattapp__ joined #gluster
17:03 theron joined #gluster
17:03 jbd1 joined #gluster
17:04 johnbot11 joined #gluster
17:12 sarkis joined #gluster
17:14 vpshastry joined #gluster
17:16 mattapp__ joined #gluster
17:17 LoudNoises joined #gluster
17:21 mattapp__ joined #gluster
17:23 mattapp__ joined #gluster
17:24 japuzzo joined #gluster
17:24 mattapp__ joined #gluster
17:27 mattapp__ joined #gluster
17:32 psyl0n joined #gluster
17:32 mattap___ joined #gluster
17:35 mattappe_ joined #gluster
17:36 mattappe_ joined #gluster
17:38 vpshastry left #gluster
17:39 Mo__ joined #gluster
17:39 mattapp__ joined #gluster
17:40 mattappe_ joined #gluster
17:43 johnbot11 joined #gluster
17:43 mattappe_ joined #gluster
17:48 mattappe_ joined #gluster
18:05 sroy__ joined #gluster
18:06 mattapp__ joined #gluster
18:16 ctria joined #gluster
18:17 theron joined #gluster
18:18 mattapp__ joined #gluster
18:26 DV__ joined #gluster
18:40 rotbeard joined #gluster
19:19 mattapp__ joined #gluster
19:22 dewey joined #gluster
19:25 DV joined #gluster
19:25 mattapp__ joined #gluster
19:31 thogue_ joined #gluster
19:35 sonicrose is it possible to georeplicate to a second volume?
19:37 thogue joined #gluster
19:39 thogue joined #gluster
19:46 samppah sonicrose: what do you mean exactly?
19:57 jag3773 joined #gluster
19:58 DV joined #gluster
20:00 gdubreui joined #gluster
20:02 mattapp__ joined #gluster
20:04 matta____ joined #gluster
20:08 zerick joined #gluster
20:08 Gilbs1 joined #gluster
20:08 andreask joined #gluster
20:29 mattappe_ joined #gluster
20:36 mattapp__ joined #gluster
20:39 mattapp__ joined #gluster
20:45 Alpinist joined #gluster
20:50 mattapp__ joined #gluster
21:02 bsaggy joined #gluster
21:04 badone joined #gluster
21:08 sroy__ joined #gluster
21:13 _Bryan_ joined #gluster
21:14 jag3773 joined #gluster
21:23 Gilbs1 left #gluster
21:26 sonicrose what i have is a 4 node, replica 2 stripe 2 volume on regular CentOS 6. I want to do XenServer VMs. The XenServer hosts have local SSD storage I want to use as read and write cache while having the data ultimately stored in gluster
21:27 sonicrose i would like to avoid NFS entirely by telling XenServer to use a local directory for it's VM storage, but I want the glusterFS to appear in the local directory
21:28 sonicrose is there a way to hook in easy clientside local SSD caching for a gluster volume?
21:28 sonicrose right now it takes gluster too long to acknowledge the write completion
21:28 sonicrose I realize the risk of having the local cache and using write behind
21:30 RedShift2 joined #gluster
21:39 sonicrose can i tweak the performance settings of the volume to get it to OK the write before it completely syncs?
21:40 sonicrose i'm seeing about the same results as this guy: http://gluster.org/pipermail/gluster-users/2011-August/008656.html and the presented solution was to use geo-replication, but I don't think that will allow me to live migrate VMs
21:40 glusterbot Title: [Gluster-users] write-behind / write-back caching for replicated storage (at gluster.org)
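The settings sonicrose is asking about are the write-behind translator options; a minimal sketch of how they are tuned (values are illustrative only, and acknowledging writes before they are replicated trades durability for latency):

    gluster volume set <volname> performance.write-behind on
    gluster volume set <volname> performance.flush-behind on
    gluster volume set <volname> performance.write-behind-window-size 4MB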
21:49 primechuck joined #gluster
22:11 ctria joined #gluster
22:17 badone joined #gluster
22:23 theron joined #gluster
22:47 semiosis ,,(undocumented options)
22:47 glusterbot Undocumented options for 3.4: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
22:48 semiosis sonicrose: check that doc out
22:48 semiosis might be helpful
23:21 theron joined #gluster
23:55 mattappe_ joined #gluster
23:57 theron joined #gluster
