IRC log for #gluster, 2015-01-05


All times shown according to UTC.

Time Nick Message
00:22 elico joined #gluster
01:12 dataio left #gluster
01:13 bala joined #gluster
01:20 livelace joined #gluster
01:39 harish joined #gluster
01:49 ro78 joined #gluster
01:49 ro78 Hi all
01:51 ro78 I have a question about the security level of the "peer probe" command: on a default installation of a dedicated server or VPS, anyone can probe my device from anywhere on the Internet and use it, and I can't find a way to secure that (except by putting firewall rules in place)?
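    (glusterd accepts peer and management traffic on TCP 24007, so absent any built-in probe authentication, firewall rules are the usual answer; a minimal iptables sketch, with 192.0.2.10 standing in for a trusted peer's address:)
        # allow the trusted peer to reach glusterd's management port
        iptables -A INPUT -p tcp --dport 24007 -s 192.0.2.10 -j ACCEPT
        # drop management traffic from everyone else
        iptables -A INPUT -p tcp --dport 24007 -j DROP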
01:52 sputnik13 joined #gluster
01:53 nangthang joined #gluster
01:56 mtanner joined #gluster
02:08 haomaiwa_ joined #gluster
02:51 kshlm joined #gluster
02:52 kshlm joined #gluster
02:53 kshlm joined #gluster
02:59 kshlm joined #gluster
03:00 calisto joined #gluster
03:11 glusterbot News from newglusterbugs: [Bug 1178590] Enable quota(default) leads to heal directory's xattr failed. <https://bugzilla.redhat.com/show_bug.cgi?id=1178590>
03:24 mtanner joined #gluster
03:32 rejy joined #gluster
03:35 suman_d joined #gluster
03:43 fandi joined #gluster
03:48 bharata-rao joined #gluster
03:49 atinmu joined #gluster
03:50 itisravi joined #gluster
03:51 kanagaraj joined #gluster
03:57 shubhendu joined #gluster
04:04 badone joined #gluster
04:06 spandit joined #gluster
04:06 RameshN_ joined #gluster
04:08 RameshN joined #gluster
04:09 ppai joined #gluster
04:11 vimal joined #gluster
04:12 badone_ joined #gluster
04:27 iPancreas joined #gluster
04:29 nbalacha joined #gluster
04:34 lalatenduM joined #gluster
04:38 rafi1 joined #gluster
04:45 plarsen joined #gluster
04:45 kumar joined #gluster
04:50 B21956 joined #gluster
04:53 hagarth joined #gluster
04:56 anoopcs joined #gluster
04:56 B21956 left #gluster
04:57 ndarshan joined #gluster
05:00 bala joined #gluster
05:11 jiffin joined #gluster
05:13 lalatenduM joined #gluster
05:29 iPancreas joined #gluster
05:30 nshaikh joined #gluster
05:32 soumya joined #gluster
05:35 DV joined #gluster
05:36 nrcpts joined #gluster
05:36 nrcpts joined #gluster
05:39 kaushal_ joined #gluster
05:39 meghanam joined #gluster
05:41 prasanth_ joined #gluster
05:42 RameshN joined #gluster
05:52 atinmu joined #gluster
05:53 nbalacha joined #gluster
06:08 kshlm joined #gluster
06:12 rjoseph joined #gluster
06:19 atalur joined #gluster
06:20 atinmu joined #gluster
06:24 dusmant joined #gluster
06:29 iPancreas joined #gluster
06:31 saurabh joined #gluster
06:32 rafi joined #gluster
06:32 overclk joined #gluster
06:44 anil joined #gluster
06:45 hagarth joined #gluster
06:47 raghu joined #gluster
07:04 api984 joined #gluster
07:12 glusterbot News from newglusterbugs: [Bug 1178619] Statfs is hung because of frame loss in quota <https://bugzilla.redhat.com/show_bug.cgi?id=1178619>
07:17 quydo joined #gluster
07:20 rgustafs joined #gluster
07:21 nrcpts joined #gluster
07:21 LebedevRI joined #gluster
07:23 jtux joined #gluster
07:30 iPancreas joined #gluster
07:34 kovshenin joined #gluster
07:40 mtanner joined #gluster
07:44 nangthang joined #gluster
08:04 vimal joined #gluster
08:04 soumya joined #gluster
08:06 quydo joined #gluster
08:09 hagarth joined #gluster
08:14 soumya joined #gluster
08:19 deniszh joined #gluster
08:19 deniszh left #gluster
08:24 fsimonce joined #gluster
08:25 Slashman joined #gluster
08:26 vimal joined #gluster
08:26 sahina joined #gluster
08:30 iPancreas joined #gluster
08:41 quydo joined #gluster
08:45 cultav1x joined #gluster
08:52 quydo joined #gluster
08:56 rafi1 joined #gluster
08:58 api984 joined #gluster
08:59 mbukatov joined #gluster
09:05 Norky joined #gluster
09:13 glusterbot News from resolvedglusterbugs: [Bug 1138385] [DHT:REBALANCE]: Rebalance failures are seen with error message  " remote operation failed: File exists" <https://bugzilla.redhat.com/show_bug.cgi?id=1138385>
09:15 morse joined #gluster
09:17 rafi joined #gluster
09:18 atalur joined #gluster
09:20 deniszh joined #gluster
09:20 dusmant joined #gluster
09:22 bala joined #gluster
09:24 api984 joined #gluster
09:31 iPancreas joined #gluster
09:36 verboese|sleep joined #gluster
09:37 quydo joined #gluster
09:39 lalatenduM joined #gluster
09:47 quydo joined #gluster
09:48 fandi joined #gluster
09:52 ppai joined #gluster
09:53 meghanam joined #gluster
10:02 rafi1 joined #gluster
10:07 rafi1 joined #gluster
10:13 SmithyUK joined #gluster
10:14 quydo joined #gluster
10:16 nangthang joined #gluster
10:20 atinmu joined #gluster
10:23 badone joined #gluster
10:27 RaSTar joined #gluster
10:27 meghanam joined #gluster
10:27 _shaps_ joined #gluster
10:27 ppai joined #gluster
10:31 lalatenduM joined #gluster
10:31 DV joined #gluster
10:31 iPancreas joined #gluster
10:31 vimal joined #gluster
10:36 quydo joined #gluster
10:39 dusmant joined #gluster
10:43 glusterbot News from newglusterbugs: [Bug 1168809] logging improvement in glusterd/cli <https://bugzilla.redhat.com/show_bug.cgi?id=1168809>
10:51 bala joined #gluster
10:54 calum_ joined #gluster
11:05 badone joined #gluster
11:06 prasanth_ joined #gluster
11:09 masterzen_ joined #gluster
11:14 quydo joined #gluster
11:27 atinmu joined #gluster
11:32 iPancreas joined #gluster
11:36 tobias-_ joined #gluster
11:36 ndevos_ joined #gluster
11:36 ndevos_ joined #gluster
11:36 lanning joined #gluster
11:36 vimal joined #gluster
11:37 mikedep333 joined #gluster
11:37 kumar joined #gluster
11:37 prasanth_ joined #gluster
11:37 JonathanD joined #gluster
11:38 jiffin joined #gluster
11:38 ckotil_ joined #gluster
11:38 eightyeight joined #gluster
11:38 nshaikh joined #gluster
11:39 Champi joined #gluster
11:39 AaronGr joined #gluster
11:41 verboese|sleep joined #gluster
11:47 bfoster joined #gluster
11:53 quydo joined #gluster
12:01 ira joined #gluster
12:04 ubungu joined #gluster
12:05 dusmant joined #gluster
12:13 ubungu joined #gluster
12:13 itisravi joined #gluster
12:15 masterzen joined #gluster
12:16 ppai joined #gluster
12:25 ubungu joined #gluster
12:26 hagarth joined #gluster
12:27 jiffin joined #gluster
12:27 RameshN joined #gluster
12:28 nbalacha joined #gluster
12:32 iPancreas joined #gluster
12:33 ppai joined #gluster
12:35 Pupeno joined #gluster
12:36 ubungu joined #gluster
13:00 edwardm61 joined #gluster
13:01 rtalur_ joined #gluster
13:11 B21956 joined #gluster
13:13 Fen1 joined #gluster
13:14 rjoseph joined #gluster
13:22 ubungu joined #gluster
13:25 calisto joined #gluster
13:33 iPancreas joined #gluster
13:34 chirino joined #gluster
13:36 ubungu joined #gluster
13:37 harish joined #gluster
13:49 ubungu joined #gluster
13:49 pcaruana joined #gluster
14:00 ndk joined #gluster
14:00 ubungu joined #gluster
14:00 dblack joined #gluster
14:00 calisto joined #gluster
14:01 ppai joined #gluster
14:04 msvbhat joined #gluster
14:04 virusuy joined #gluster
14:04 virusuy joined #gluster
14:04 radez_g0n3 joined #gluster
14:05 scuttle` joined #gluster
14:06 soumya joined #gluster
14:09 nbalacha joined #gluster
14:15 mdavidson joined #gluster
14:19 jvandewege joined #gluster
14:19 ckotil joined #gluster
14:28 PatNarciso Happy 2015 y'all.
14:29 sickness tnx, to you too :)
14:29 NuxRo joined #gluster
14:33 iPancreas joined #gluster
14:36 msmith_ joined #gluster
14:42 Dw_Sn joined #gluster
14:46 Dw_Sn when i generate the docs, none of the command lines are wrapped, any idea?
14:48 coredump joined #gluster
14:49 sage_ joined #gluster
14:50 ubungu joined #gluster
14:51 plarsen joined #gluster
14:52 jobewan joined #gluster
14:55 tdasilva joined #gluster
15:00 ubungu joined #gluster
15:00 fandi joined #gluster
15:01 _Bryan_ joined #gluster
15:02 mbukatov joined #gluster
15:02 lmickh joined #gluster
15:13 lkoranda joined #gluster
15:14 shubhendu joined #gluster
15:14 hagarth joined #gluster
15:17 bennyturns joined #gluster
15:17 ubungu joined #gluster
15:21 georgeh-LT2 joined #gluster
15:26 plarsen joined #gluster
15:26 plarsen joined #gluster
15:29 suman_d joined #gluster
15:30 Fen1 joined #gluster
15:31 jbrooks joined #gluster
15:31 ubungu joined #gluster
15:34 iPancreas joined #gluster
15:34 lpabon joined #gluster
15:37 jmarley joined #gluster
15:41 bene joined #gluster
15:57 sputnik13 joined #gluster
16:05 iPancreas joined #gluster
16:07 tdasilva joined #gluster
16:09 meghanam joined #gluster
16:10 roost joined #gluster
16:17 sputnik13 joined #gluster
16:24 ubungu joined #gluster
16:25 sputnik13 joined #gluster
16:34 rotbeard joined #gluster
16:34 ubungu joined #gluster
16:37 n-st joined #gluster
16:39 RameshN joined #gluster
16:42 T3 joined #gluster
16:42 tetreis joined #gluster
16:43 shubhendu joined #gluster
16:48 ubungu joined #gluster
16:54 mbukatov joined #gluster
16:56 ubungu joined #gluster
16:56 tdasilva joined #gluster
17:09 shubhendu joined #gluster
17:19 sputnik13 joined #gluster
17:25 ubungu joined #gluster
17:47 jackdpeterson joined #gluster
17:56 ubungu joined #gluster
17:57 fubada joined #gluster
17:57 fubada hi purpleidea, i think you tried to help me with the gluster module and vrrp changes on every agent run. do you have a second?
17:58 nage joined #gluster
17:58 PeterA joined #gluster
17:58 semiosis best to just get on with your question.  when purpleidea has a second i'm sure he'll reply
17:58 purpleidea fubada: yeah :P ... hang on...
17:59 purpleidea fubada: want to debug the gluster volume create thing?
17:59 bennyturns joined #gluster
17:59 fubada basically any host that either manages the gluster service or mounts with puppet has this happening on every agent run: https://gist.github.com/aamerik/64e210384a0e4c9cbaa1
17:59 fubada purpleidea: i think the create is working fine now, i sent a PR
18:00 fubada there was a typo
18:00 purpleidea fubada: right!
18:00 purpleidea fubada: which PR?
18:00 fubada now Im just getting https://gist.github.com/aamerik/64e210384a0e4c9cbaa1, which is no big deal but my puppet reports show changes
18:00 fubada for those items
18:00 fubada which is misleading
18:00 purpleidea fubada: we'll fix the vrrp issue, i just wanted to fix the create one first if it's okay :)
18:00 fubada purpleidea: i think you already merged it in
18:01 fubada i think i fixed the create issue and you accepted my pr
18:01 purpleidea fubada: you sure? commit id ?
18:01 fubada https://github.com/purpleidea/puppet-gluster/pull/24
18:01 purpleidea fubada: nope, that's different
18:02 fubada hmm, i dont have any issues creating volumes...i dont think
18:02 purpleidea fubada: you sure? i thought volume create didn't run automatically for a new cluster?
18:02 purpleidea on 3.6
18:03 fubada hmm how can i test? I can remove a volume and rerun puppet?
18:03 purpleidea fubada: yeah sure
18:05 fubada ah purpleidea, my apologies, I must step away to a meeting
18:05 fubada ill be back :)
18:05 purpleidea fubada: okay no worries.
18:05 purpleidea fubada: leave me your results, i might be in and out the next few days
18:07 fubada thank you
18:08 purpleidea yw
18:10 lalatenduM joined #gluster
18:13 sage_ joined #gluster
18:16 DV joined #gluster
18:16 n-st joined #gluster
18:17 JordanHackworth joined #gluster
18:17 owlbot joined #gluster
18:19 prg3 joined #gluster
18:21 Bosse joined #gluster
18:21 sadbox joined #gluster
18:23 masterzen joined #gluster
18:25 TealS joined #gluster
18:27 siel joined #gluster
18:27 siel joined #gluster
18:33 TealS Hi there: One of my clients is asking for a glusterfs volume across 16 nodes that is equivalent to raid-5: striped for performance with redundancy (in case a host goes down).  Is there a volume configuration that makes this possible?
18:35 mbukatov joined #gluster
18:36 rjoseph joined #gluster
18:37 semiosis TealS: erasure coding is coming soon to glusterfs; that will provide raid-5-like data redundancy
18:37 semiosis for now your best bet is distributed-replicated, which mirrors files across the bricks within each replica set and distributes files over several replica sets
18:37 semiosis files are stored whole, not chunked
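    (A sketch of a distributed-replicated create across 16 nodes; the volume name, hostnames, and brick paths are placeholders, and the brace expansion assumes bash. With replica 2, consecutive bricks pair into 8 mirrored sets and files are distributed across them:)
        gluster volume create myvol replica 2 host{1..16}:/bricks/myvol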
18:38 fubada purpleidea: do you know how I can check why some peers are flagged as rejected https://gist.github.com/aamerik/7b90ac21bcee1fca6157
18:38 TealS semiosis: Ah, I thought I had read something about that.  Thank you for the quick response.  This helps!
18:38 semiosis yw
18:43 calisto joined #gluster
18:49 fubada purpleidea: nevermind, fixed
18:53 fubada purpleidea: https://gist.github.com/aamerik/1ab19505ca95692e7449 volume create seems to be working
18:53 fubada glusterfs-server-3.6.1-1.el6.x86_64
18:54 fubada this seems to be the only issue now: Notice: /Stage[main]/Gluster::Vardir/File[/var/lib/puppet/tmp/gluster/vrrp]/ensure: removed
19:12 purpleidea fubada: sweet... then we'll solve the vrrp issue... one sec (small meeting)
19:15 Philambdo joined #gluster
19:17 _br_ joined #gluster
19:39 jackdpeterson Hey all, what's the recommended way to migrate from one gluster cluster to another (3.5 on Ubuntu to 3.6.1 on CentOS 7)? I'm debating between plain ol' rsync vs configuring geo-replication and then going from there. Any thoughts/suggestions based on experience?
19:39 jackdpeterson I can't have an outage on the existing 3.5 boxes as they are serving production traffic
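    (Hedged sketches of both routes, with volume and host names as placeholders; geo-replication needs the slave volume created and passwordless SSH in place first, and running a session between 3.5 and 3.6 ends is worth testing before relying on it:)
        # one-shot copy via client mounts of the old and new volumes
        rsync -aHAX /mnt/oldvol/ /mnt/newvol/
        # or set up continuous replication from the 3.5 volume to the new one
        gluster volume geo-replication oldvol newserver::newvol create push-pem
        gluster volume geo-replication oldvol newserver::newvol start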
19:54 Pupeno_ joined #gluster
20:04 _dist joined #gluster
20:23 calisto joined #gluster
20:30 iPancreas joined #gluster
20:46 mbelaninja joined #gluster
20:51 DV joined #gluster
21:06 Pupeno joined #gluster
21:07 mbelaninja Hi everyone.  I've got a gluster (3.4.3) cluster that is in service and performing as expected.  I'm trying to mount the volumes with a new client and not having a ton of luck.  It looks like i'm having issues getting the volume file and I'm not sure where to go from here.  This node has the same gluster* packages and versions as a working client, it has the same /etc/glusterfs/glusterd.vol info as a working machine.  Both machines are trying to mou
21:09 mbelaninja The working machine has been serving content (mostly flawlessly) for months, I cannot get this new client to mount.  Ideas?  Neither machine is part of the gluster cluster.
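    (When the volfile fetch fails, the client-side log is the first place to look; a sketch assuming a volume named myvol and a mount point of /mnt/gluster:)
        mount -t glusterfs server1:/myvol /mnt/gluster
        # the native client logs volfile-fetch errors to a file named after the mount point
        tail /var/log/glusterfs/mnt-gluster.log
        # verify the client can reach glusterd's management port at all
        telnet server1 24007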
21:10 purpleidea fubada: i've got a bit more work to do, any chance you're free in an hour or less maybe?
21:12 CyrilPeponnet Hey guys :) Happy new year !
21:14 CyrilPeponnet I have a setup with 2 nodes running 3.5.2; I set up a third node but it installed 3.6.1, and it seems I cannot put them together (Peer Rejected)
21:14 CyrilPeponnet What is best: downgrade my 3.6.1 node, or upgrade the whole setup.... which is in production...
21:14 CyrilPeponnet (CentOS 7)
21:21 mbelaninja seems like downgrading the new peer would be less impactful than upgrading production
21:21 sputnik13 joined #gluster
21:22 CyrilPeponnet that's what I'm starting to think...
21:32 iPancreas joined #gluster
21:33 badone joined #gluster
21:45 Dw_Sn joined #gluster
21:48 Alpinist joined #gluster
21:51 Pupeno_ joined #gluster
21:56 fubada purpleidea: hi yes im free
21:56 fsimonce joined #gluster
22:02 CyrilPeponnet hmm, since I tried to add a 3.6 node to my 3.5 cluster, one of my nodes is in a bad state
22:02 CyrilPeponnet E [xlator.c:403:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
22:02 CyrilPeponnet I don't have any management volume defined...
22:03 CyrilPeponnet I successfully downgraded the new node and it fits perfectly with the second node, but my third old node doesn't respond anymore and refuses to restart
22:03 CyrilPeponnet any clues?
22:05 CyrilPeponnet I understand, that's the management volume..
22:07 CyrilPeponnet E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
22:21 CyrilPeponnet ok, somehow the peers files in /var/lib/gluster/peers/ got corrupted
22:21 CyrilPeponnet I fixed them by hand and it restarted
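    (For reference, those peer files are small key/value records, one per peer, named by the peer's UUID; an illustrative entry, with a made-up UUID and hostname, where state=3 means the peer is in the cluster:)
        # /var/lib/glusterd/peers/7e3c3f9d-1db0-4b8e-9f40-f1a2b3c4d5e6
        uuid=7e3c3f9d-1db0-4b8e-9f40-f1a2b3c4d5e6
        state=3
        hostname1=server2.example.com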
22:29 B21956 left #gluster
22:32 iPancreas joined #gluster
22:34 purpleidea fubada: hey fubada
22:34 purpleidea fubada: okay, so let's fix your issue
22:35 purpleidea fubada: can you confirm if it stops "fluttering" after the cluster has been fully built and puppet has run on each node for a few runs in round robin fashion?
22:35 aulait joined #gluster
22:36 CyrilPeponnet hey purpleidea Hayppy new year :)
22:39 purpleidea CyrilPeponnet: hey, you too!! bonne annee!
22:39 mbelaninja left #gluster
22:40 purpleidea CyrilPeponnet: we're going to hack on puppet-keepalived actually ;)
22:44 CyrilPeponnet purpleidea what kind of hack :)
22:44 purpleidea CyrilPeponnet: just fixing a small, not very harmful bug :P
22:44 purpleidea CyrilPeponnet: but other hacks are welcome!
22:45 CyrilPeponnet ;p
22:46 purpleidea fubada: lmk. and i'll try and look at the code later and see what's up. i think i have an idea, but not confirmed yet
22:48 CyrilPeponnet purpleidea by the way puppet-gluster still working like a charm in our setup :)
22:48 CyrilPeponnet we plan to use geo-replication, is it handled by your class?
22:48 purpleidea CyrilPeponnet: i am glad to hear it! Are you running the git-master version?
22:49 purpleidea CyrilPeponnet: btw, if you ever feel like writing me a private email with some info about your setup, i'd love to hear it. if it can be public, that's fine too :)
22:49 CyrilPeponnet purpleidea no, because I had to make some changes by hand (I don't use your repo management, and a few little things)
22:49 purpleidea CyrilPeponnet: actually no support for geo-rep, i started hacking on it at one point, but no time to finish. if you want to add it, we can discuss and i can help
23:06 TealS left #gluster
23:06 sputnik13 joined #gluster
23:08 sputnik13 joined #gluster
23:10 sputnik13 joined #gluster
23:15 CyrilPeponnet purpleidea I will see if our setup architecture can be published, and I will keep you posted regarding geo-rep
23:16 purpleidea CyrilPeponnet: sounds good, if it can't be published, feel free to send it to me privately for my personal interest. there may be a way to improve the code base to benefit your architecture
23:16 CyrilPeponnet Sure :)
23:24 fubada purpleidea: hi
23:25 fubada purpleidea: so i can confirm, cluster was built and puppet runs every 30min
23:25 fubada i see the vrrp remove notice every run
23:25 roost joined #gluster
23:25 fubada on all the boxes
23:25 fubada and even those that just use the module to mount
23:26 roost left #gluster
23:33 iPancreas joined #gluster
23:36 calisto joined #gluster
23:37 ira joined #gluster
23:46 purpleidea fubada: (looking at the code)
23:47 iPancreas joined #gluster
23:47 purpleidea fubada: can you confirm 1) you're using gluster::simple, and 2) that you've set $vrrp => vrrp
23:47 purpleidea err $vrrp => true
23:50 bene joined #gluster
23:52 DurzoAU joined #gluster
23:53 DurzoAU has anyone seen semiosis lately? i've been waiting for updated ubuntu debs for gluster 3.5 or even 3.6 but he hasn't made a change since 2014-08
23:53 purpleidea DurzoAU: he's a volunteer! keep that in mind :)
23:54 fubada purpleidea: checking
23:54 DurzoAU purpleidea, i know
23:54 purpleidea fubada: k
23:55 fubada purpleidea: im using gluster::server
23:55 fubada and Im not setting vrrp anywhere
23:55 DurzoAU we are having geo-replication issues with 3.5.1; as of Dec 13th it just stopped replicating, there are thousands of files in /run/gluster/blahblahblah/.processing, and the logs are filled with Rsync [errcode: 23]... any ideas?
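    (rsync's exit code 23 means a partial transfer; the session status and the geo-replication worker logs usually narrow down which files are failing. A sketch with placeholder volume and slave names:)
        gluster volume geo-replication mastervol slavehost::slavevol status detail
        # per-session worker logs
        tail /var/log/glusterfs/geo-replication/mastervol/*.log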
23:55 purpleidea fubada: how are you using puppet-keepalived ? are you at all?
23:55 fubada i dont think i am
23:56 fubada im using a quad A record
23:56 fubada gluster.foo.com = 4 gluster instance IPs
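    (With a round-robin record the name only matters for the initial volfile fetch; the native client can also be given an explicit fallback server, e.g. the following, where gluster2.foo.com and myvol are placeholders:)
        mount -t glusterfs -o backupvolfile-server=gluster2.foo.com gluster.foo.com:/myvol /mnt/gluster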
23:57 purpleidea fubada: hmmm okay, more code reading
