IRC log for #gluster, 2014-06-06

All times shown according to UTC.

Time Nick Message
00:14 recidive joined #gluster
00:22 primechuck joined #gluster
00:56 pdrakewe_ joined #gluster
00:59 JoeJulian n0de: that setting wasn't in there
01:08 mjsmith2 joined #gluster
01:12 vimal joined #gluster
01:15 velladecin joined #gluster
01:24 gildub joined #gluster
01:29 harish joined #gluster
01:34 chirino_m joined #gluster
01:37 Ark joined #gluster
02:05 chirino joined #gluster
02:09 jcsp1 joined #gluster
02:12 lpabon joined #gluster
02:23 harish joined #gluster
02:25 bharata-rao joined #gluster
02:29 recidive joined #gluster
02:31 bharata_ joined #gluster
02:38 chirino joined #gluster
02:50 hagarth joined #gluster
02:54 kkeithley1 joined #gluster
02:55 prasanthp joined #gluster
03:03 gildub purpleidea, ping - in puppet-gluster is data/tree/hiera.yaml supposed to be merge with /etc/puppet/hiera.yaml?
03:03 gildub ^merged
03:08 gildub purpleidea, actually that's data/hiera.yaml
03:14 lyang0 joined #gluster
03:19 JoeJulian ndevos: @puppet
03:19 JoeJulian oops, not sure where ndevos came from in that...
03:20 JoeJulian @puppet
03:20 glusterbot JoeJulian: https://github.com/purpleidea/puppet-gluster
03:21 JoeJulian gildub: Looks like an example to me.
03:21 JoeJulian That
03:21 JoeJulian That's a structure you can use for overriding the params.
03:23 gildub JoeJulian, well, it seems the puppet agents are not even using hiera at all, weird; from puppet 3.x hiera is not even needed on the clients
03:23 gildub JoeJulian, thanks - will use that once there :)
03:27 JoeJulian gildub: No, hiera is not "needed"; it's just another way of overriding values passed to parameterized classes.
03:27 JoeJulian It's nice because it separates your settings from the modules.
03:28 gildub JoeJulian, yeah, I like the concept but never used/troubleshot it in detail. It seems the latest puppet-gluster version needs it to support rhel7/centos...
03:29 Ark check out http://garylarizza.com/blog/2014/02/17/puppet-workflow-part-1/ for some nice tips
03:29 glusterbot Title: Building a Functional Puppet Workflow Part 1: Module Structure - Shit Gary Says (at garylarizza.com)
03:30 JoeJulian btw, hiera has been able to override default class parameters since puppet 3.3. It /could/ use it before that, but you had to specifically design your module to do that.
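A minimal sketch of the hiera override JoeJulian describes, assuming Puppet >= 3.3 automatic data binding; the datadir, hierarchy level, and the class parameter name are illustrative, not taken from this conversation:

# Assuming /etc/puppet/hiera.yaml already defines a 'common' level with
# :datadir: /etc/puppet/hieradata, a class parameter can be overridden
# without editing the module: data binding looks up "classname::paramname".
mkdir -p /etc/puppet/hieradata
cat > /etc/puppet/hieradata/common.yaml <<'EOF'
---
gluster::server::shorewall: false   # hypothetical class parameter override
EOF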
03:30 rejy joined #gluster
03:35 JoeJulian hagarth: During a migration, an open file was moved. A lsof shows that glusterfsd still has the deleted file open. Is this expected behavior?
03:36 itisravi joined #gluster
03:37 gildub JoeJulian, my issue at this stage is to validate hiera is working fine on puppet clients
03:38 JoeJulian hiera works on the server.
03:47 ramteid joined #gluster
03:53 hagarth joined #gluster
03:59 hagarth JoeJulian: does the rebalance crash happen consistently for you?
04:00 gildub JoeJulian, of course, but it's used by the client catalogue. I think it's working fine, but nothing is happening because the data structure is not kicking in.
04:01 kanagaraj joined #gluster
04:01 haomaiwa_ joined #gluster
04:01 gildub purpleidea, is puppet-gluster using module_data for hiera? There is no documentation on how to use your yaml
04:02 JoeJulian hagarth yes
04:04 hagarth JoeJulian: am wondering if you can attempt reproducing the problem by disabling a few performance translators
04:04 JoeJulian hagarth: Sure
04:05 gildub purpleidea, well it looks like it's in module_data structure :)
04:05 hagarth since this is a bad fd, the following xlators are likely suspects: io-cache, md-cache, open-behind & write-behind
04:06 hagarth JoeJulian: disabling one at a time might be useful to see if any of them indeed is the cause for the crash
04:06 JoeJulian I was wondering if those would even be at issue since the segfault happens invalidating the fuse cache.
04:08 hagarth JoeJulian: right, but the fd_t structure seems to have gone corrupt (likely because of an additional fd_unref) and the manifestation of the problem seems to be seen in fuse
04:08 hagarth we could at least rule out these translators if the crash continues to happen after disabling them
04:08 JoeJulian Makes sense
04:10 JoeJulian fyi, io-cache was already off
04:12 hagarth ok
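In command form, disabling the suspect translators one at a time as hagarth suggests would look roughly like this; the volume name is a placeholder, and the md-cache mapping to the stat-prefetch option key is to the best of my knowledge:

# Toggle one performance xlator at a time, re-testing after each change
gluster volume set myvol performance.io-cache off
gluster volume set myvol performance.stat-prefetch off   # controls the md-cache xlator
gluster volume set myvol performance.open-behind off
gluster volume set myvol performance.write-behind off

# Put a single option back to its default once tested
gluster volume reset myvol performance.open-behind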
04:13 shubhendu joined #gluster
04:15 vpshastry joined #gluster
04:18 haomaiwa_ joined #gluster
04:25 vimal joined #gluster
04:25 aravindavk joined #gluster
04:29 sjm joined #gluster
04:33 ndarshan joined #gluster
04:34 haomai___ joined #gluster
04:38 chirino_m joined #gluster
04:39 gildub joined #gluster
04:39 gildub joined #gluster
04:45 [o__o] joined #gluster
04:48 kdhananjay joined #gluster
04:52 deepakcs joined #gluster
04:53 meghanam joined #gluster
04:53 meghanam_ joined #gluster
04:58 XpineX_ joined #gluster
04:59 ppai joined #gluster
05:02 dusmant joined #gluster
05:03 hagarth JoeJulian: https://bugzilla.redhat.com/show_bug.cgi?id=961615 seems similar in nature to your report
05:03 hagarth the fix for this bug is in 3.5.x not in 3.4.
05:04 purpleidea gildub: in response to your first question, no you aren't supposed to merge data/hiera.yaml. leave it as is in the puppet-gluster module. i responded with the solution to your issue on the bug. for more info see: https://ttboj.wordpress.com/2014/06/04/hiera-data-in-modules-and-os-independent-puppet/
05:04 purpleidea to understand how the hiera data in module works.
05:04 marcoceppi_ joined #gluster
05:04 purpleidea JoeJulian: https://ttboj.wordpress.com/2014/06/04/hiera-data-in-modules-and-os-independent-puppet/ <-- he didn't include the module for this to work which is why it didn't work :P
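The missing piece purpleidea points at is the puppet-module-data module itself; one hedged way to pull it onto a puppet master (the target directory name is an assumption, see the blog post above for the authoritative setup):

# Make hiera "data in modules" available by installing ripienaar's module
git clone https://github.com/ripienaar/puppet-module-data \
    /etc/puppet/modules/module_data   # directory name assumed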
05:04 glusterbot` joined #gluster
05:05 delhage joined #gluster
05:06 psharma joined #gluster
05:06 spandit joined #gluster
05:09 gildub purpleidea, thanks, I was reading/following ripienaar's module.
05:09 purpleidea yw
05:11 rastar joined #gluster
05:14 atinmu joined #gluster
05:15 gildub joined #gluster
05:21 lalatenduM joined #gluster
05:21 hflai joined #gluster
05:25 gildub purpleidea, adding the module_data module effectively helps; also an initial hiera.yaml file is needed. openstack-puppet-modules doesn't, we're the first :)
05:25 gildub purpleidea, but strangely it seems it's now regressing, investigating this: Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Puppet::Parser::AST::Resource failed with error NameError: uninitialized constant Puppet::DataBinding::LookupError at /usr/share/openstack-foreman-installer/puppet/modules/quickstack/manifests/gluster/server.pp:41 on node f2-glu1.os.tst
05:26 purpleidea puppet --version ?
05:26 Intensity joined #gluster
05:26 dblack joined #gluster
05:26 glusterbot joined #gluster
05:26 kdhananjay joined #gluster
05:26 chirino_m joined #gluster
05:26 aravindavk joined #gluster
05:26 kanagaraj joined #gluster
05:26 hagarth joined #gluster
05:26 rejy joined #gluster
05:26 kkeithley_ joined #gluster
05:26 bharata_ joined #gluster
05:26 harish joined #gluster
05:26 velladecin joined #gluster
05:26 systemonkey joined #gluster
05:26 yosafbridge joined #gluster
05:26 DV__ joined #gluster
05:26 DV joined #gluster
05:26 GabrieleV joined #gluster
05:26 juhaj joined #gluster
05:26 osiekhan1 joined #gluster
05:26 ackjewt joined #gluster
05:26 necrogami joined #gluster
05:26 anotheral joined #gluster
05:26 Dave2 joined #gluster
05:26 semiosis joined #gluster
05:26 eryc joined #gluster
05:26 brosner joined #gluster
05:26 SteveCooling joined #gluster
05:26 stigchristian joined #gluster
05:26 ndevos joined #gluster
05:26 tziOm joined #gluster
05:26 weykent joined #gluster
05:26 johnmwilliams__ joined #gluster
05:26 eightyeight joined #gluster
05:26 JoeJulian joined #gluster
05:26 Licenser joined #gluster
05:26 ron-slc joined #gluster
05:26 RobertLaptop joined #gluster
05:26 basso joined #gluster
05:26 nixpanic_ joined #gluster
05:26 foster joined #gluster
05:26 sadbox joined #gluster
05:26 sauce joined #gluster
05:26 jiqiren joined #gluster
05:26 m0zes joined #gluster
05:26 pasqd joined #gluster
05:26 coreping joined #gluster
05:26 ultrabizweb joined #gluster
05:26 d3vz3r0 joined #gluster
05:26 lanning joined #gluster
05:26 Georgyo joined #gluster
05:26 cyber_si joined #gluster
05:26 mjrosenb joined #gluster
05:26 y4m4 joined #gluster
05:26 SpeeR joined #gluster
05:26 siel joined #gluster
05:26 eshy joined #gluster
05:26 _jmp_ joined #gluster
05:26 rturk|afk joined #gluster
05:26 silky joined #gluster
05:26 n0de joined #gluster
05:26 brad[] joined #gluster
05:26 NCommander joined #gluster
05:26 masterzen joined #gluster
05:26 radez_g0n3 joined #gluster
05:26 wgao joined #gluster
05:26 gmcwhistler joined #gluster
05:26 \malex\ joined #gluster
05:27 purpleidea hagarth: how's it going :)
05:27 dusmant joined #gluster
05:27 dblack joined #gluster
05:28 dusmant joined #gluster
05:28 johnmark joined #gluster
05:29 primusinterpares joined #gluster
05:30 kkeithley1 joined #gluster
05:30 gildub purpleidea, agent: 3.4.3, master: 3.3.2
05:31 purpleidea gildub: hmph, i dunno. something's broken with your puppet. if you can reproduce the error with the clean vagrant-libvirt environment, let me know so i can fix the bug. otherwise i don't know what that error means :(
05:32 purpleidea what's at line 41?
05:32 samkottler joined #gluster
05:32 gildub purpleidea, my issue with vagrant, besides my desire to use it, is that quickstack (astapor) doesn't use it, so it's not helping me at the moment
05:33 lalatenduM joined #gluster
05:33 purpleidea gildub: i understand, but here is the issue:
05:33 purpleidea you have two problems to solve...
05:33 purpleidea 1) testing puppet-gluster on RHEL7/F20
05:33 purpleidea 2) integrating that with astapor/etc...
05:33 Intensity joined #gluster
05:33 dblack joined #gluster
05:33 glusterbot joined #gluster
05:33 kdhananjay joined #gluster
05:33 chirino_m joined #gluster
05:33 aravindavk joined #gluster
05:33 kanagaraj joined #gluster
05:33 hagarth joined #gluster
05:33 rejy joined #gluster
05:33 kkeithley_ joined #gluster
05:33 bharata_ joined #gluster
05:33 harish joined #gluster
05:33 velladecin joined #gluster
05:33 systemonkey joined #gluster
05:33 yosafbridge joined #gluster
05:33 DV__ joined #gluster
05:33 DV joined #gluster
05:33 GabrieleV joined #gluster
05:33 juhaj joined #gluster
05:33 \malex\ joined #gluster
05:33 gmcwhistler joined #gluster
05:33 wgao joined #gluster
05:33 radez_g0n3 joined #gluster
05:33 masterzen joined #gluster
05:33 NCommander joined #gluster
05:33 brad[] joined #gluster
05:33 n0de joined #gluster
05:33 silky joined #gluster
05:33 rturk|afk joined #gluster
05:33 _jmp_ joined #gluster
05:33 eshy joined #gluster
05:33 siel joined #gluster
05:33 SpeeR joined #gluster
05:33 y4m4 joined #gluster
05:33 mjrosenb joined #gluster
05:33 cyber_si joined #gluster
05:33 Georgyo joined #gluster
05:33 lanning joined #gluster
05:33 d3vz3r0 joined #gluster
05:33 ultrabizweb joined #gluster
05:33 coreping joined #gluster
05:33 pasqd joined #gluster
05:33 m0zes joined #gluster
05:33 jiqiren joined #gluster
05:33 sauce joined #gluster
05:33 sadbox joined #gluster
05:33 foster joined #gluster
05:33 nixpanic_ joined #gluster
05:33 basso joined #gluster
05:33 RobertLaptop joined #gluster
05:33 ron-slc joined #gluster
05:33 Licenser joined #gluster
05:33 JoeJulian joined #gluster
05:33 eightyeight joined #gluster
05:33 johnmwilliams__ joined #gluster
05:33 weykent joined #gluster
05:33 tziOm joined #gluster
05:33 ndevos joined #gluster
05:33 osiekhan1 joined #gluster
05:33 ackjewt joined #gluster
05:33 necrogami joined #gluster
05:33 anotheral joined #gluster
05:33 Dave2 joined #gluster
05:33 semiosis joined #gluster
05:33 eryc joined #gluster
05:33 brosner joined #gluster
05:33 SteveCooling joined #gluster
05:33 stigchristian joined #gluster
05:33 gildub purpleidea,  https://github.com/gildub/astapor/blob/master/puppet/modules/quickstack/manifests/gluster/server.pp#L41
05:33 purpleidea the problem is that you're combining these two... if you tested first #1, and then did #2, you would be able to isolate your bugs better :) :) get it?
05:34 glusterbot Title: astapor/puppet/modules/quickstack/manifests/gluster/server.pp at master · gildub/astapor · GitHub (at github.com)
05:35 gildub purpleidea, nope, what's #1/#2?
05:35 purpleidea 01:36 < purpleidea> 1) testing puppet-gluster on RHEL7/F20
05:35 purpleidea 01:36 < purpleidea> 2) integrating that with astapor/etc...
05:35 sputnik13 joined #gluster
05:37 kkeithley1 joined #gluster
05:37 dblack joined #gluster
05:37 gildub purpleidea, yeah agreed, the problem is my job is to do #2. I'm sort of assuming #1 works. Look, I'm not saying I don't want to help with #1, but I don't really have the time either
05:38 gildub purpleidea, and if #1 works well, which so far it does, it's really the integration part that needs to be wrapped up
05:38 Intensity joined #gluster
05:40 purpleidea gildub: as previously disclosed, i haven't test puppet-gluster on RHEL7 or F20. not. even. once. i don't plan to in the next 30 days either.
05:40 purpleidea s/test/tested/
05:40 glusterbot What purpleidea meant to say was: gildub: as previously disclosed, i haven't tested puppet-gluster on RHEL7 or F20. not. even. once. i don't plan to in the next 30 days either.
05:41 purpleidea gildub: i've got to go back to other hacking. if you find bugs that i can patch, let me know. good luck!
05:41 gildub purpleidea, I understand, that's fine, I did the test anyway, through my integration; actually #1 works just fine on RHEL7 besides the python-argparse issue.
05:41 purpleidea night!
05:41 gildub purpleidea, yeah, getting darker now, almost the weekend.
05:42 hchiramm_ joined #gluster
05:43 gildub purpleidea, just one more question, why can't we use ripienaar/puppet-module-data directly?
05:44 purpleidea ^ idk what this means. "directly" that's the code. you *have* to use it :P (did you see BZ comment i made?)
05:46 davinder6 joined #gluster
05:47 sputnik1_ joined #gluster
05:48 primusinterpares joined #gluster
05:49 gildub purpleidea, I did read the BZ; I'm asking why you have a fork?
05:49 gildub purpleidea, also, as I said, in the meantime before you came back to me I realized I had to use module_data, which I did, but I used the original one. Makes sense?
05:50 * purpleidea faceplants
05:50 gildub purpleidea, ^?
05:50 purpleidea i'm not using a fork :P i import it as a submodule so that you know which sha1 i'm tracking and so that it's available for vagrant :)
05:50 shruti joined #gluster
05:52 gildub purpleidea, I understand, yeah, I'm just realizing that. Ok, openstack-puppet-modules will have to do the same anyway. Thanks :)
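A sketch of the submodule approach purpleidea describes; the path inside the repository is illustrative:

# Vendor puppet-module-data at a known commit instead of forking it
git submodule add https://github.com/ripienaar/puppet-module-data modules/module_data
git submodule status          # shows the exact sha1 being tracked
git commit -m 'track puppet-module-data as a submodule'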
05:53 purpleidea yw, goodluck!!
05:53 purpleidea bonne chance! (good luck!)
05:53 aravindavk joined #gluster
05:53 gildub purpleidea, Merci, j'en ai besoin! (Thanks, I need it!) :)
05:54 dusmant joined #gluster
05:55 hagarth joined #gluster
05:58 gildub purpleidea, https://github.com/ripienaar/puppet-module-data/issues/6
05:58 glusterbot Title: Updating to 0.0.2 results in uninitilized constant · Issue #6 · ripienaar/puppet-module-data · GitHub (at github.com)
06:00 raghu joined #gluster
06:07 RameshN joined #gluster
06:08 bala joined #gluster
06:08 haomaiwa_ joined #gluster
06:10 haomaiw__ joined #gluster
06:21 ProT-0-TypE joined #gluster
06:26 vimal joined #gluster
06:29 nshaikh joined #gluster
06:30 bala joined #gluster
06:36 aravindavk joined #gluster
06:37 hagarth joined #gluster
06:39 dusmant joined #gluster
06:40 chirino joined #gluster
06:50 kdhananjay joined #gluster
06:55 Philambdo joined #gluster
06:56 bala joined #gluster
06:59 glusterbot New news from newglusterbugs: [Bug 1105439] [USS]: RPC to retrieve snapshot list from glusterd and enable snapd (snapview-server) to refresh snap list dynamically <https://bugzilla.redhat.com/show_bug.cgi?id=1105439>
07:02 rgustafs joined #gluster
07:06 zero_ark joined #gluster
07:08 ekuric joined #gluster
07:13 bharata-rao joined #gluster
07:16 haomaiwa_ joined #gluster
07:17 eseyman joined #gluster
07:25 glusterbot New news from resolvedglusterbugs: [Bug 868792] baseurl in the yum repo file is incorrect. <https://bugzilla.redhat.com/show_bug.cgi?id=868792> || [Bug 989038] backupvolfile-server does not work with automount <https://bugzilla.redhat.com/show_bug.cgi?id=989038>
07:26 fsimonce joined #gluster
07:27 ProT-0-TypE joined #gluster
07:29 glusterbot New news from newglusterbugs: [Bug 1093594] Glfs_fini() not freeing the resources <https://bugzilla.redhat.com/show_bug.cgi?id=1093594> || [Bug 1077516] [RFE] :- Move the container for changelogs from /var/run to /var/lib/misc <https://bugzilla.redhat.com/show_bug.cgi?id=1077516> || [Bug 1075417] Spelling mistakes and typos in the glusterfs source <https://bugzilla.redhat.com/show_bug.cgi?id=1075417>
07:31 haomaiw__ joined #gluster
07:40 zerick joined #gluster
07:41 Philambdo joined #gluster
07:44 bala joined #gluster
07:45 dusmant joined #gluster
07:51 andreask joined #gluster
07:53 saurabh joined #gluster
07:53 lalatenduM joined #gluster
07:53 kkeithley1 joined #gluster
08:01 Philambdo joined #gluster
08:01 jiku joined #gluster
08:06 RameshN joined #gluster
08:19 mbukatov joined #gluster
08:25 hagarth joined #gluster
08:26 ctria joined #gluster
08:29 glusterbot New news from newglusterbugs: [Bug 1105466] Dist-geo-rep : geo-rep history crawl consumes even zero-byte changelogs and accumulates in .history/.processed directory of working_dir <https://bugzilla.redhat.com/show_bug.cgi?id=1105466>
08:30 liquidat joined #gluster
09:04 ngoswami joined #gluster
09:04 Pupeno joined #gluster
09:26 vdrandom seems like restarting glusterd and remounting nfs didn't change a thing for me, still getting the same http://fpaste.org/107728/40204677/ errors
09:26 glusterbot Title: #107728 Fedora Project Pastebin (at fpaste.org)
09:35 vipulnayyar joined #gluster
09:35 vipulnayyar left #gluster
09:38 ktosiek joined #gluster
09:42 \malex\ left #gluster
09:43 chirino_m joined #gluster
09:52 qdk_ joined #gluster
09:58 kkeithley1 joined #gluster
10:07 harish joined #gluster
10:10 firemanxbr joined #gluster
10:11 qdk_ joined #gluster
10:13 lalatenduM joined #gluster
10:14 chirino joined #gluster
10:29 spiekey joined #gluster
10:30 spiekey hello!
10:30 glusterbot spiekey: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:30 spiekey what does this mean? E [afr-self-heald.c:1479:afr_find_child_position] 0-raidvolb-replicate-0: getxattr failed on raidvolb-client-0 - (Transport endpoint is not connected)
10:32 spiekey oh, maybe because one brick is offline
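A quick way to confirm that guess, using the volume name from the error message:

# Is every brick (and self-heal daemon) online?
gluster volume status raidvolb
gluster peer status   # and is the peer hosting that brick connected?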
10:37 zero_ark joined #gluster
10:40 swebb joined #gluster
10:42 nbalachandran joined #gluster
10:42 nbalachandran left #gluster
10:57 rwheeler joined #gluster
11:00 glusterbot New news from newglusterbugs: [Bug 1105524] Disable nfs.drc by default <https://bugzilla.redhat.com/show_bug.cgi?id=1105524>
11:15 chirino_m joined #gluster
11:21 capri joined #gluster
11:22 capri is there actually a release date for gluster 3.5?
11:22 ndevos capri: 3.5 has been released already...
11:22 capri ndevos, really, i didn't get that
11:22 capri thanks :)
11:23 ndevos capri: 3.5.1 has a 1st beta out too already, depending on some patches a beta2 should get out this weekend or early next week
11:24 capri ndevos, thanks a lot. cause im really interested in testing the new brick failure detection
11:25 hchiramm_ joined #gluster
11:26 ndevos capri: note that there is an issue with the brick failure detection on ext* filesystems (bug 1100204), but xfs works as planned
11:26 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1100204 medium, high, ---, lmohanty, ASSIGNED , brick failure detection does not work for ext4 filesystems
11:26 capri ndevos, thanks for that hint
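For reference, brick failure detection is driven by the posix health checker; to the best of my knowledge it is tuned like this (volume name is a placeholder, verify the option against the 3.5 docs):

# Probe the brick's underlying filesystem every 30 seconds; 0 disables the check
gluster volume set myvol storage.health-check-interval 30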
11:29 bene2 joined #gluster
11:39 kkeithley1 joined #gluster
11:51 edward1 joined #gluster
12:00 glusterbot New news from newglusterbugs: [Bug 962169] prove ./tests/basic/rpm.t fails on non x86_64 architectures <https://bugzilla.redhat.com/show_bug.cgi?id=962169>
12:01 itisravi_ joined #gluster
12:01 haomaiwa_ joined #gluster
12:02 vpshastry joined #gluster
12:07 haomai___ joined #gluster
12:08 jmarley joined #gluster
12:08 jmarley joined #gluster
12:25 ricky-ti1 joined #gluster
12:27 bene2 joined #gluster
12:31 vikhyat joined #gluster
12:36 vikhyat joined #gluster
12:40 vikhyat joined #gluster
12:50 spiekey can anyone help me with this? http://fpaste.org/107765/20590101/
12:50 glusterbot Title: #107765 Fedora Project Pastebin (at fpaste.org)
12:50 spiekey Mount failed. Please check the log file for more details.
12:51 spiekey i wonder why it resolves to names and to IPs when i run gluster volume status all
12:51 spiekey hostname -f and the /etc/hosts entries are correct and pinging each other works, too
12:57 B21956 joined #gluster
12:59 sjm joined #gluster
13:02 plarsen joined #gluster
13:05 shyam1 joined #gluster
13:08 sputnik1_ joined #gluster
13:11 sroy_ joined #gluster
13:11 sputnik1_ joined #gluster
13:14 dusmant joined #gluster
13:16 spandit joined #gluster
13:16 chirino joined #gluster
13:21 brad_mssw joined #gluster
13:27 glusterbot New news from resolvedglusterbugs: [Bug 1091777] Puppet module gluster (purpleidea/puppet-gluster) to support RHEL7/Fedora20 <https://bugzilla.redhat.com/show_bug.cgi?id=1091777>
13:31 glusterbot New news from newglusterbugs: [Bug 1075611] [FEAT] log: enhance gluster log format with message ID and standardize errno reporting <https://bugzilla.redhat.com/show_bug.cgi?id=1075611>
13:37 mjsmith2 joined #gluster
13:39 japuzzo joined #gluster
13:47 bennyturns joined #gluster
13:47 bennyturns joined #gluster
13:48 jobewan joined #gluster
13:48 gmcwhist_ joined #gluster
13:51 primechuck joined #gluster
13:59 ekuric joined #gluster
13:59 hagarth joined #gluster
14:03 recidive joined #gluster
14:04 davinder6 joined #gluster
14:06 plarsen joined #gluster
14:08 _dist joined #gluster
14:08 zero_ark joined #gluster
14:10 wushudoin joined #gluster
14:11 _dist Morning, I was wondering if anyone knows where the bottlenecks might be in a heal operation. I've had to do some maintenance over the past couple of days and am finding a brick outage of 5-6 minutes takes that brick 3-4 days to heal. I don't expect this is normal? (these are VM images on a replicate volume)
14:11 vikumar joined #gluster
14:18 jbd1 joined #gluster
14:22 in joined #gluster
14:26 diegows joined #gluster
14:27 shubhendu joined #gluster
14:30 plarsen joined #gluster
14:32 spiekey anyone? :-/
14:32 spiekey E [cli-xml-output.c:2822:cli_xml_output_vol_info_end] 0-cli: Returning 0
14:33 LoudNoises joined #gluster
14:39 lpabon joined #gluster
14:39 lmickh joined #gluster
14:42 JoeJulian _dist: check logs
14:43 JoeJulian @probe
14:43 glusterbot JoeJulian: I do not know about 'probe', but I do know about these similar topics: 'paste', 'ports', 'process'
14:44 Slashman joined #gluster
14:45 ctria joined #gluster
14:45 JoeJulian spiekey: I'd point you at the documentation but apparently my correction has still not been merged. You have to probe the server you initiated your peer probes on from one other server to set its hostname.
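In command form, what JoeJulian describes is roughly the following, with placeholder hostnames; server1 is the box the original probes were run from:

# Run from any OTHER peer so server1 becomes known by hostname instead of its IP
gluster peer probe server1.example.com
gluster peer status    # server1 should now be listed by hostname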
14:47 _dist JoeJulian: I should look at the shd logs?
14:47 JoeJulian Oh, right... I forgot that there was a reason it's not merged yet. D'oh!
14:47 JoeJulian _dist: yep
14:48 _dist does it matter on which brick host?
14:48 spiekey JoeJulian:  okay, the names are okay now. But i still can not mount it :-/
14:52 JoeJulian spiekey: fpaste.org your client log
14:52 Alpinist joined #gluster
14:53 JoeJulian ... and I'll look at it in about a half hour. I need to check out of this hotel, get some coffee, and get over to the datacenter.
14:53 ekuric joined #gluster
14:53 spiekey JoeJulian: http://fpaste.org/107806/40206637/
14:53 spiekey thanks!
14:53 glusterbot Title: #107806 Fedora Project Pastebin (at fpaste.org)
14:53 JoeJulian did you start your volume?
14:58 theron joined #gluster
15:00 spiekey JoeJulian: argh!!! i F** used the wrong mount syntax…
15:00 spiekey mount -t glusterfs 192.168.41.94:/raidvol/volb /mnt/  BUT it should be mount -t glusterfs 192.168.41.94:/volb /mnt/
15:00 spiekey i used the full physical path instead of the vol name
15:00 spiekey it frickin works now :D
15:02 coredump joined #gluster
15:18 _dist JoeJulian: nothing but info stuff in the log so far
15:18 cvdyoung left #gluster
15:24 andreask joined #gluster
15:26 bala joined #gluster
15:26 daMaestro joined #gluster
15:28 harish joined #gluster
15:30 bala1 joined #gluster
15:31 spiekey left #gluster
15:36 jag3773 joined #gluster
15:42 theron joined #gluster
15:43 [o__o] joined #gluster
15:43 sputnik1_ joined #gluster
15:46 social joined #gluster
15:49 chirino_m joined #gluster
16:03 pdrakeweb joined #gluster
16:11 bala joined #gluster
16:12 _dist Is it normal that the glustershd.log is filled with "I: Another crawl is in progress" ? an entry every 2-10 min
16:23 ndk joined #gluster
16:29 bala joined #gluster
16:32 pdrakewe_ joined #gluster
16:37 MeatMuppet joined #gluster
16:56 zaitcev joined #gluster
17:10 daMaestro joined #gluster
17:10 daMaestro joined #gluster
17:13 zero_ark joined #gluster
17:31 sputnik1_ joined #gluster
17:35 jbd1 Anyone upgraded to 3.4.4 yet?
17:40 japuzzo joined #gluster
17:57 theron joined #gluster
18:00 haomaiw__ joined #gluster
18:01 tdasilva joined #gluster
18:08 bet_ joined #gluster
18:14 sjm left #gluster
18:20 shyam1 left #gluster
18:20 shyam1 joined #gluster
18:20 shyam1 left #gluster
18:21 chirino joined #gluster
18:35 shyam joined #gluster
18:47 haomaiwa_ joined #gluster
18:52 chirino_m joined #gluster
19:07 calum_ joined #gluster
19:07 theron joined #gluster
19:11 Matthaeus joined #gluster
19:15 Matthaeus joined #gluster
19:17 davinder6 joined #gluster
19:21 SFLimey joined #gluster
19:22 SFLimey Hello all, I'm trying to migrate data from a gluster node to a new node.
19:22 SFLimey I tried using
19:22 SFLimey gluster volume replace-brick gv0 oldserver:/data/gv0/brick1 newserver:/data/gv0/brick1 commit force
19:22 SFLimey but it says brick1 does not exist in volume: gv0
19:23 SFLimey When I do a gluster vol info it shows it's there so I'm confused.
19:23 SFLimey Brick1: oldserver/brick1
19:24 SFLimey Any idea what I'm missing?
19:34 bennyturns joined #gluster
19:35 plarsen joined #gluster
19:37 ackjewt joined #gluster
19:40 semiosis SFLimey: both old & new servers are online & peers in the pool?
19:42 SFLimey Yup
19:43 semiosis SFLimey: ,,(pasteinfo)
19:43 glusterbot SFLimey: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
19:46 SFLimey http://ur1.ca/hgvly
19:46 glusterbot Title: #107913 Fedora Project Pastebin (at ur1.ca)
20:00 SFLimey Any thoughts semiosis?
20:01 dblack joined #gluster
20:02 systemonkey joined #gluster
20:02 semiosis a one-brick volume?
20:03 SFLimey Yeah this is our staging environment so just the single node.
20:03 Intensity joined #gluster
20:03 semiosis why don't you just create a new volume & cp the data from one to the other?
20:03 SFLimey Production we have four.
20:03 semiosis oh
20:03 SFLimey Can't have down time.
20:03 semiosis hm
20:03 SFLimey At least at the moment.
20:04 SFLimey I can schedule it for another day and just do the cp.
20:04 semiosis humor me & try setting up a staging volume that more closely resembles your prod vol
20:04 SFLimey Once I did that how would I replace one of the bricks?
20:05 semiosis with the replace-brick command, i presume?
20:05 semiosis am i missing something?
20:05 SFLimey This is my production environment
20:05 SFLimey http://ur1.ca/hgvs1
20:05 glusterbot Title: #107918 Fedora Project Pastebin (at ur1.ca)
20:05 SFLimey Well I guess my thought process is that the replace brick command wasnt working.
20:06 SFLimey For me at least.
20:06 semiosis that one-brick volume worries me.  most people use more than one brick
20:07 semiosis anyway, if your servers are really named gluster-N, then you could just remap that hostname
20:07 semiosis to another server
20:07 semiosis copy over the gluster config in /var/lib/glusterd
20:07 semiosis and be on your way
20:07 semiosis see ,,(replace) for same hostname
20:07 glusterbot Useful links for replacing a failed server... if replacement server has different hostname: http://web.archive.org/web/20120508153302/http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/ ... or if replacement server has same
20:07 glusterbot hostname: http://goo.gl/rem8L
20:08 SFLimey The data is disposable in staging so not really worried about losing data. Gluster is in staging just to use a similar configuration from the servers to access the gluster volume.
20:08 semiosis since you have replica 4 you can do this without downtime, you'll just have one replica down for a bit (though you could minimize that time to just a few seconds if you're careful)
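An outline of the same-hostname replacement semiosis describes, hedged against the linked guide; hostnames, the config source, and the volume name are placeholders:

# 1. Point the old hostname (e.g. gluster-1) at the replacement box via DNS or /etc/hosts.
# 2. Restore the gluster config (including the old server's UUID) onto the new box,
#    e.g. from a backup of /var/lib/glusterd, then restart glusterd.
rsync -a backuphost:/var/lib/glusterd/ /var/lib/glusterd/
service glusterd restart
# 3. Let self-heal repopulate the brick from the surviving replicas.
gluster volume heal gv0 full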
20:09 zero_ark joined #gluster
20:11 SFLimey So looking at that guide semiosis, that's what I originally tried but it complained that the brick did not exist.
20:12 semiosis http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server
20:12 glusterbot Title: Gluster 3.2: Brick Restoration - Replace Crashed Server - GlusterDocumentation (at www.gluster.org)
20:20 semiosis that guide?
20:20 semiosis maybe if we back up a bit.  what problem are you trying to solve?
20:20 rotbeard joined #gluster
20:21 natgeorg joined #gluster
20:22 _dist same question from earlier today :) : Is it normal that the glustershd.log is filled with "I: Another crawl is in progress" ? an entry every 2-15 min
20:22 _dist can someone maybe cat theirs and see?
20:24 chirino joined #gluster
20:26 mwoodson_ joined #gluster
20:30 meridion_ joined #gluster
20:37 shyam left #gluster
20:46 semiosis _dist: dont see that in my shd log
20:53 tdasilva left #gluster
20:53 Matthaeus joined #gluster
20:55 theron joined #gluster
20:55 _dist semisos: it only does it during healing, I think only for replicas (sorry I definitely should have mentioned that)
20:56 doekia only one occurrence in mine ./glustershd.log.1:[2014-05-30 20:26:41.876580] I [afr-self-heald.c:1180:afr_dir_exclusive_crawl] 0-www-replicate-0: Another crawl is in progress for www-client-0
20:58 semiosis _dist: if you dont spell my nick right, i dont get the notification that you're talking to me.  protip: use tab completion (sem-TAB) in your irc client instead of typing it all out
21:00 zero_ark joined #gluster
21:12 _dist semiosis: sorry :) thanks doekia, does anyone know what it means?
21:13 semiosis got that one :)
21:16 doekia _dist, seems pretty obvious I guess, the routine afr_dir_exclusive_crawl gets triggered ... from the file afr-self-heald.c; basically the node crawls a directory tree / branch in order to do an integrity check/repair ... more detail certainly in the source ... line 1100
21:16 7F1AAUR7F joined #gluster
21:19 doekia ok, went through ... the routine is about to crawl a directory tree and realizes that another crawl is running at the same time ... logs the message and leaves
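To see what those crawls are actually doing, the heal status commands are the usual starting point; the volume name is a placeholder, and the healed/heal-failed sub-commands are available on the 3.4/3.5 series:

gluster volume heal myvol info            # entries still pending heal per brick
gluster volume heal myvol info healed     # what the crawls have healed recently
gluster volume heal myvol info heal-failed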
21:23 andreask joined #gluster
21:24 XpineX joined #gluster
21:39 _dist doekia: thanks, it is probably happening when a self heal is running, but it does the check every so many minutes
21:39 _dist sound reasonable ?
21:40 doekia _dist, I'm not a gluster internalist ... just reading the portion of code that raise this log message
21:49 gmcwhist_ joined #gluster
21:51 calum_ joined #gluster
22:03 mortuar joined #gluster
22:05 JoeJulian SFLimey: I would schedule downtime. There's currently a migration bug that can cause all the clients to crash when doing a replace-brick/remove-brick/rebalance.
22:05 SFLimey In which version JoeJulian?
22:06 JoeJulian I found it in 3.4.3. Still working on isolating where the problem originates.
22:06 SFLimey In production we run 3.4.2 still.
22:07 JoeJulian if it's in 3.4.3, it's in all the 3.4 series, maybe more.
22:07 SFLimey Gotcha, really good to know.
22:07 SFLimey thxs
22:08 JoeJulian bug 1022510
22:08 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1022510 urgent, urgent, ---, rgowdapp, NEW , GlusterFS client crashes during add-brick and rebalance
22:12 JoeJulian JustinClift: I can see good and bad to the consultants list idea. The downside could be a perceived endorsement. Unless we have some sort of certification process, I would suggest leaving it to Google.
22:28 Matthaeus joined #gluster
22:45 MeatMuppet left #gluster
22:53 JoeJulian @ppa
22:53 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 stable: http://goo.gl/cVPqEH -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
22:53 JoeJulian hmm...
22:55 glusterbot joined #gluster
23:10 jcsp joined #gluster
23:26 glusterbot New news from resolvedglusterbugs: [Bug 1091777] Puppet module gluster (purpleidea/puppet-gluster) to support RHEL7/Fedora20 <https://bugzilla.redhat.com/show_bug.cgi?id=1091777> || [Bug 868792] baseurl in the yum repo file is incorrect. <https://bugzilla.redhat.com/show_bug.cgi?id=868792> || [Bug 989038] backupvolfile-server does not work with automount <https://bugzilla.redhat.com/show_bug.cgi?id=989038> || [Bug 961615
23:26 glusterbot New news from newglusterbugs: [Bug 1105415] [SNAPSHOT]: Auto-delete should be user configurable <https://bugzilla.redhat.com/show_bug.cgi?id=1105415> || [Bug 962169] prove ./tests/basic/rpm.t fails on non x86_64 architectures <https://bugzilla.redhat.com/show_bug.cgi?id=962169> || [Bug 1093594] Glfs_fini() not freeing the resources <https://bugzilla.redhat.com/show_bug.cgi?id=1093594> || [Bug 1075611] [FEAT] log: enhance
23:50 mortuar joined #gluster
23:55 ProT-0-TypE joined #gluster
