IRC log for #gluster, 2014-01-25

All times shown according to UTC.

Time Nick Message
00:29 jporterfield joined #gluster
00:43 _pol joined #gluster
00:44 MacWinner joined #gluster
00:53 diegows joined #gluster
01:01 SpeeR joined #gluster
01:10 _pol joined #gluster
01:21 SpeeR joined #gluster
01:37 geewiz joined #gluster
02:00 TrDS left #gluster
02:05 SpeeR joined #gluster
02:11 purpleidea @fileabug
02:11 glusterbot purpleidea: Please file a bug at http://goo.gl/UUuCq
02:23 jporterfield joined #gluster
02:25 purpleidea glusterbot: thanks
02:25 glusterbot purpleidea: you're welcome
02:30 KyleG joined #gluster
02:30 KyleG joined #gluster
02:31 geewiz joined #gluster
02:34 glusterbot New news from newglusterbugs: [Bug 1057818] gluster volume status has unusual nested elements in output <https://bugzilla.redhat.com/show_bug.cgi?id=1057818>
02:34 KyleG joined #gluster
02:34 KyleG joined #gluster
02:35 SpeeR joined #gluster
02:46 SpeeR joined #gluster
03:02 SpeeR joined #gluster
03:21 jporterfield joined #gluster
03:27 geewiz joined #gluster
03:43 purpleidea semiosis: https://gluster.org/pipermail/gluster-users/2014-January/038794.html
03:43 glusterbot Title: [Gluster-users] Gluster Volume Properties (at gluster.org)
03:52 SpeeR joined #gluster
03:54 SpeeR joined #gluster
03:56 SpeeR joined #gluster
03:58 SpeeR joined #gluster
04:00 SpeeR joined #gluster
04:01 SpeeR joined #gluster
04:03 SpeeR joined #gluster
04:05 SpeeR joined #gluster
04:07 SpeeR_ joined #gluster
04:09 SpeeR joined #gluster
04:10 SpeeR joined #gluster
04:12 SpeeR joined #gluster
04:14 SpeeR_ joined #gluster
04:16 SpeeR joined #gluster
04:18 SpeeR_ joined #gluster
04:19 SpeeR joined #gluster
04:21 SpeeR joined #gluster
04:23 SpeeR_ joined #gluster
04:28 geewiz joined #gluster
04:45 kshlm joined #gluster
05:21 jporterfield joined #gluster
05:29 geewiz joined #gluster
05:43 blook joined #gluster
06:15 jporterfield joined #gluster
06:21 jporterfield joined #gluster
06:22 vpshastry joined #gluster
06:24 _pol joined #gluster
06:29 geewiz joined #gluster
06:57 vpshastry joined #gluster
07:01 vpshastry left #gluster
07:17 khushildep joined #gluster
07:20 jporterfield joined #gluster
07:27 geewiz joined #gluster
07:32 kshlm joined #gluster
07:36 kaushal_ joined #gluster
07:44 Dave2_ joined #gluster
07:50 vpshastry joined #gluster
08:05 glusterbot New news from newglusterbugs: [Bug 1021686] refactor AFR module <https://bugzilla.redhat.com/show_bug.cgi?id=1021686>
08:19 delhage joined #gluster
08:25 vpshastry joined #gluster
08:28 geewiz joined #gluster
08:45 jkroon joined #gluster
08:52 jkroon hi all, after starting glusterd it seems the nfs mounts can take quite a bit of time to become available, so if glusterd starts *just prior* to netmount, the attempt to mount the nfs exports fails (no, unfortunately I don't have dedicated hosts for running the gluster daemon on). does anybody have an intelligent way of waiting for nfs to start up before proceeding with netmount?
09:01 shyam joined #gluster
09:12 bala joined #gluster
09:15 purpleidea jkroon: i actually do have such a solution which does this with my ,,(puppet) #1 module, however it won't be ready for about a week... it uses puppet scripts to only mount once it sees the volume in question is "started". if you want to implement something on your own, look in my source dir at files/xml.py which will show you a way to detect such things...
09:15 glusterbot jkroon: (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
09:18 jkroon purpleidea, basically does a gluster volume status and check the NFS value to be Y ?
09:20 purpleidea jkroon: i'm currently only interested in the fuse-based mounts (which i would recommend in case you haven't seen them), but something similar, yeah.
09:20 jkroon i love the fuse mounts, but performance is nowhere near what I need.
09:21 purpleidea jkroon: yeah no worries, i just figured i'd mention it
09:21 jkroon ok well, here I think is a simple solution:  while ! gluster volume status | awk 'BEGIN{ offline=0 } $0~"^NFS Server on localhost" && $(NF-1) != "Y" { offline=1 } END { exit(offline); }'; do sleep 1; done
09:22 jkroon it's very naïve in that it assumes ALL volumes MUST have NFS available eventually, and will stall indefinitely if NFS doesn't eventually become available, so definitely needs some tweaking.
09:22 purpleidea jkroon: NO!
09:22 purpleidea use --xml
09:22 jkroon xml is harder to parse in bash.
09:22 purpleidea "that's the point" (of it)
09:23 jkroon but yea, then I guess I can reduce that whole mess to a simpler grep -q
09:23 purpleidea harder in terms of ease of use in bash, but, you know, it's a stable interface
09:23 jkroon ah, that much is true
09:23 purpleidea read my first comment again
09:24 jkroon well, thanks, I'll build some solution for me out of the information at some point, but at least there is one.
09:24 purpleidea jkroon: files/xml.py <-- i already wrote all the xml parsing stuff
09:24 jkroon thinking I'll need to make the xml plan (ie, grab your code)
09:25 purpleidea jkroon: a long time ago i started with a bash grep style solution, and i quickly replaced it with a naive xml.py option. and then i replaced it with a much more advanced xml.py
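Sketching the approach discussed above: the loop below waits for the gluster NFS server to report itself online via "gluster volume status --xml" before proceeding with the NFS mounts, and adds a timeout so it cannot stall indefinitely (jkroon's own caveat about the awk version). The XML element names (node, hostname, status), the convention that status "1" means online, and the availability of xmllint's --xpath option are assumptions about a gluster 3.4-era setup, not something confirmed in this log; verify them against your own --xml output before relying on this.

    #!/bin/bash
    # Wait until every "NFS Server" entry in `gluster volume status --xml`
    # reports status "1" (assumed to mean online), with a timeout.
    TIMEOUT=120
    INTERVAL=2
    deadline=$(( $(date +%s) + TIMEOUT ))

    while [ "$(date +%s)" -lt "$deadline" ]; do
        # glusterd may not be ready to answer yet; retry on failure.
        xml=$(gluster volume status --xml 2>/dev/null) || { sleep "$INTERVAL"; continue; }

        # Count the NFS Server entries, and how many are not yet online.
        total=$(printf '%s' "$xml" | xmllint --xpath \
            'count(//node[hostname="NFS Server"])' - 2>/dev/null)
        offline=$(printf '%s' "$xml" | xmllint --xpath \
            'count(//node[hostname="NFS Server"][status!="1"])' - 2>/dev/null)

        if [ "${total:-0}" -gt 0 ] && [ "${offline:-1}" -eq 0 ]; then
            exit 0   # all NFS servers report online; safe to run netmount
        fi
        sleep "$INTERVAL"
    done
    exit 1   # timed out; leave the decision to the caller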
09:29 geewiz joined #gluster
09:30 jkroon cloned, thanks :)
09:34 TrDS joined #gluster
09:38 vpshastry joined #gluster
09:40 jporterfield joined #gluster
09:52 ricky-ticky joined #gluster
09:56 nshaikh joined #gluster
10:03 ngoswami joined #gluster
10:04 hagarth :O
10:05 purpleidea hagarth: hey
10:05 glusterbot New news from newglusterbugs: [Bug 1057846] Data loss in replicate self-heal <https://bugzilla.redhat.com/show_bug.cgi?id=1057846>
10:05 purpleidea hagarth: afaict, i see you on here at these weird hours because you're in india (is this right?) and i have a weird sleeping schedule :P or maybe we both have weird sleeping schedules
10:12 samppah purpleidea: naah, this is just perfect time for finland ;)
10:12 vpshastry joined #gluster
10:12 purpleidea samppah: ah, hello finland!
10:13 samppah hey hey :)
10:13 purpleidea samppah: i know someone from finland. also, i really liked my n900, and i'm sad that it's basically dead :(
10:14 samppah purpleidea: i liked it too.. unfortunately the screen died a couple of years ago :(
10:14 purpleidea oh no :( more tears
10:15 samppah but life goes on.. just bought Jolla this morning :)
10:15 hagarth purpleidea: I am in India, perfectly sane hours to be up :)
10:15 purpleidea samppah: oh yeah? cool... i'll still miss gtk+ on a mobile device though
10:15 purpleidea hagarth: this confirms that i'm broken. afternoon!
10:17 hagarth purpleidea: morning! and I probably should be wishing you 'night soon :)
10:18 purpleidea hagarth: in a few hours :) i'm hacking on something cool! (well, at least, i think it's cool!)
10:19 hagarth purpleidea: ok, sounds like fun :)
10:27 vpshastry left #gluster
10:29 leochill joined #gluster
10:29 geewiz joined #gluster
10:30 bala joined #gluster
10:42 bala joined #gluster
11:02 tryggvil joined #gluster
11:03 jkroon joined #gluster
11:06 marley joined #gluster
11:16 bala joined #gluster
11:30 geewiz joined #gluster
12:04 rwheeler joined #gluster
12:31 geewiz joined #gluster
12:34 AtAM1 joined #gluster
12:38 AtAM1 Hello! I need help troubleshooting repeated failures in creating volumes, distributed or replicas, with version 3.4.2 - the error relates to bug 812214 - https://bugzilla.redhat.com/show_bug.cgi?id=812214
12:38 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=812214 medium, high, 3.3.0beta, kparthas, CLOSED CURRENTRELEASE, [b337b755325f75a6fcf65616eaf4467b70b8b245]: add-brick should not be allowed for a directory which already has a volume-id
12:38 glusterbot Bug 812214: medium, high, 3.3.0beta, kparthas, CLOSED CURRENTRELEASE, [b337b755325f75a6fcf65616eaf4467b70b8b245]: add-brick should not be allowed for a directory which already has a volume-id
12:40 AtAM1 all attempts have been made on a new installation with no previous volumes or bricks created and with new directories, clean mounts and namespaces
12:43 AtAM1 I am able to successfully create volumes with v. 3.2.7 (debian/wheezy) but not with 3.4.2
12:44 AtAM1 Any help would be great, thanks
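The bug glusterbot links above concerns brick directories that still carry a GlusterFS volume-id extended attribute. A way to check whether a supposedly clean brick directory really is clean is to inspect its trusted.* xattrs; the cleanup afterwards is the workaround commonly suggested for genuinely leftover brick directories, it is not given anywhere in this log, and it should only be run when the directory's contents are disposable. The brick path is a hypothetical placeholder.

    # Hypothetical brick path, for illustration only.
    BRICK=/export/brick1

    # Show any GlusterFS metadata still attached to the directory.
    getfattr -d -m . -e hex "$BRICK"

    # Commonly suggested cleanup for a leftover brick directory
    # (destructive; only when its contents are disposable):
    setfattr -x trusted.glusterfs.volume-id "$BRICK"
    setfattr -x trusted.gfid "$BRICK"
    rm -rf "$BRICK/.glusterfs"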
13:19 ninkotech joined #gluster
13:25 ninkotech joined #gluster
13:27 leochill joined #gluster
13:29 davinder joined #gluster
13:32 geewiz joined #gluster
13:47 vpshastry joined #gluster
13:48 eryc joined #gluster
13:48 kaushal_ joined #gluster
14:00 jkroon joined #gluster
14:11 VerboEse joined #gluster
14:26 SpeeR joined #gluster
14:28 SpeeR joined #gluster
14:29 SpeeR joined #gluster
14:31 SpeeR joined #gluster
14:33 geewiz joined #gluster
14:33 SpeeR_ joined #gluster
14:33 vpshastry joined #gluster
14:35 SpeeR joined #gluster
14:37 SpeeR joined #gluster
14:38 SpeeR joined #gluster
14:40 SpeeR joined #gluster
14:42 SpeeR joined #gluster
14:44 SpeeR joined #gluster
14:46 SpeeR joined #gluster
14:46 SpeeR_ joined #gluster
14:49 leochill joined #gluster
14:54 CheRi joined #gluster
14:55 jkroon joined #gluster
14:59 andreask joined #gluster
15:05 ninkotech_ joined #gluster
15:08 vpshastry left #gluster
15:12 ninkotech_ joined #gluster
15:18 kaushal_ joined #gluster
15:33 geewiz joined #gluster
15:44 jkroon joined #gluster
16:14 nshaikh joined #gluster
16:19 abyss^ JoeJulian: yes, I'm not the only person that has this issue (I found something similar on the mailing list roughly half a year ago). But the issue above sounds like something just a little different. But of course it's an issue with replace-brick :)
16:24 dbruhn joined #gluster
16:31 nshaikh left #gluster
16:34 geewiz joined #gluster
16:34 jkroon joined #gluster
16:37 AtAM1 hi abyss^
16:37 AtAM1 were you referring to my issue?
16:41 abyss^ AtAM1: no:)
16:42 AtAM1 abyss^: oh ok, sorry.
16:50 Wiss joined #gluster
17:01 sroy joined #gluster
17:19 glusterbot New news from resolvedglusterbugs: [Bug 862082] build cleanup <https://bugzilla.redhat.com/show_bug.cgi?id=862082>
17:35 geewiz joined #gluster
17:37 glusterbot New news from newglusterbugs: [Bug 1057881] geo-rep/gfid-access : Lookup fails on /.gfid/ /bname <https://bugzilla.redhat.com/show_bug.cgi?id=1057881>
17:52 dbruhn joined #gluster
18:28 davinder joined #gluster
18:36 geewiz joined #gluster
18:58 rotbeard joined #gluster
19:00 TrDS left #gluster
19:37 geewiz joined #gluster
19:58 RobertLaptop joined #gluster
20:37 geewiz joined #gluster
21:34 Gugge joined #gluster
21:38 geewiz joined #gluster
22:02 tryggvil joined #gluster
22:39 geewiz joined #gluster
22:41 jporterfield joined #gluster
22:45 flrichar joined #gluster
23:00 skarpa joined #gluster
23:02 skarpa Hello, I have a question that might seem trivial, but I had a hard time finding the answer - can I have multiple bricks residing on a single mount point and participating in multiple volumes?
23:05 NuxRo skarpa: yes
23:06 NuxRo i also use this setup, one big raid mounted in /bricks and subdirs used for volume bricks
23:07 skarpa So if I have a single LVM volume (RAID60 underneath), I can have as many bricks as I please then?
23:08 NuxRo yes
23:08 skarpa Great! Thanks a lot NuxRo!
23:08 NuxRo np
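A minimal sketch of the layout NuxRo describes: one large filesystem mounted at /bricks on each server, with one subdirectory per volume acting as that volume's brick. The hostnames, volume names and replica count below are hypothetical placeholders, not taken from this log.

    # On each server: one big RAID/LVM filesystem mounted at /bricks,
    # with a subdirectory per volume to serve as that volume's brick.
    mkdir -p /bricks/vol1 /bricks/vol2

    # Two volumes sharing the same underlying mount point via subdirectories.
    gluster volume create vol1 replica 2 \
        server1:/bricks/vol1 server2:/bricks/vol1
    gluster volume create vol2 replica 2 \
        server1:/bricks/vol2 server2:/bricks/vol2

    gluster volume start vol1
    gluster volume start vol2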
23:11 skarpa left #gluster
23:16 jporterfield joined #gluster
23:40 geewiz joined #gluster
23:46 TrDS joined #gluster