
IRC log for #gluster, 2014-12-23


All times shown according to UTC.

Time Nick Message
00:09 MattJ_EC Anyone able to assist with a "0-mgmt: failed to fetch volume file" error?
00:12 partner_ did the /var fill up? or perhaps a firewall issue?
00:20 abyss^ joined #gluster
00:21 MattJ_EC Neither :(
00:21 MattJ_EC Other servers in same subnet with same config connect fine
00:30 harish joined #gluster
00:32 T3 joined #gluster
00:57 DV joined #gluster
00:58 MattJ_EC Now all my PHP processes are hanging on FUSE requests to Gluster files - anyone able to assist? I can provide traces etc
01:07 shubhendu joined #gluster
01:28 bala joined #gluster
01:40 elico MattJ_EC: can you start from 0?
01:40 MattJ_EC Start from 0?
01:40 elico what is the issue?(I do not have the whole history)
01:44 MattJ_EC A million things :P. Basically, I have 3-5 web servers with three directories mounted to two replicated gluster servers
01:44 MattJ_EC Performance has suddenly gotten incredibly bad in the last day or so, to the point where it's hanging on every single read
01:45 MattJ_EC ls on a small directory taking >15 minutes
01:45 MattJ_EC I believe partial cause might have been recent addition of the replica server
01:46 elico MattJ_EC: and is there any specific reason why you are using GlusterFS for this environment?
01:46 elico How much space is being used from the GlusterFS cluster?
01:47 MattJ_EC Couldn't find a better alternative? Although starting to think there probably isn't one
01:47 elico From what I understood the mounted FS are FUSE and not NFS?
01:47 MattJ_EC Yep
01:48 MattJ_EC My requirement is multiple directories able to be served by a redundant set of storage servers with real-time updates
01:48 elico Hmm ok seems reasonable enough.
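
A minimal sketch of the kind of setup MattJ_EC describes (two replicated Gluster servers backing a directory for several web nodes); hostnames, volume name, and paths below are hypothetical:

    # on server1, with glusterfs-server installed on both nodes
    gluster peer probe server2
    gluster volume create webvol replica 2 \
        server1:/export/webvol/brick server2:/export/webvol/brick
    gluster volume start webvol
    # on each web node (FUSE client)
    mount -t glusterfs server1:/webvol /var/www/shared
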
01:48 elico is it on private servers or cloud based?
01:48 MattJ_EC For now I've just migrated my web hosts to point to a single NFS share (not gluster) to resolve outage
01:48 MattJ_EC AWS
01:49 elico OK.. @_@
01:50 elico I would say that if you are running on top of AWS I would go one step at a time and not run a fully redundant system.
01:51 danku joined #gluster
01:51 elico I think that using NFS as a starting point is good enough if the mounted FS is being backed by a GlusterFS peering.
01:52 elico how busy are these web servers?
01:53 elico MattJ_EC:
01:53 MattJ_EC Currently there's no backing - I've temp switched it to NFS straight onto a mounted EBS volume
01:54 MattJ_EC Quite busy - serving a number of large eCommerce sites
01:54 MattJ_EC I'm tempted to do just a cold backup server in the other zone - copy of NFS files, then it's simply a matter of remounting if a (rare) outage occurs
01:55 MattJ_EC After a couple days of intense firefighting with this stuff I'm thinking a full DFS is a bit too much management :P
01:56 elico MattJ_EC: Well since you are running on top of AWS I assume you would not need all the GlusterFS glory, since their system is stable...
01:56 _Bryan_ joined #gluster
01:57 MattJ_EC In theory yeah - although they only guarantee full uptime if you use multiple availability zones (so >1 server)
01:58 MattJ_EC Although I guess the chance of a full failure that would require that is pretty low
01:58 elico I would go with GlusterFS servers peering while using only one with an NFS mount, so you could benefit from GlusterFS replication and not lose performance in case the two GlusterFS servers are far from each other and the client nodes.
01:59 MattJ_EC Yeah, that sounds like a good approach - will look into doing it that way :)
01:59 elico MattJ_EC: have you used GlusterFS nfs export? it's not the nfs-common one...
02:00 MattJ_EC No but seen the docs for it
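
The Gluster NFS export elico mentions is the NFS server built into glusterd; it speaks NFSv3, so a client mount would look roughly like this (hostname and volume name are placeholders):

    mount -t nfs -o vers=3,mountproto=tcp server1:/webvol /var/www/shared
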
02:00 elico Great. I hope you will get all fixed up.
02:00 elico by the way what OS? what version of GlusterFS?
02:01 MattJ_EC Ubuntu 14.04 with latest repo version of glusterfs (3.4.2)
02:02 elico I am using 14.04 also in most of my servers.
02:02 elico (not that much just a bunch)
02:03 MattJ_EC Only thing I don't like about 14.04 is the lack of PHP5.4 packages (only 5.5 available), but that's only a minor annoyance
02:04 elico If you have specific code/applications that use 5.4 then you will have an issue, but the developers need to handle any issues that are found.
02:07 MattJ_EC Yeah, we've yet to find any issues with compatibility but the devs keep complaining that there might be :P
02:08 elico To keep their job intact..
02:11 haomaiwa_ joined #gluster
02:12 MattJ_EC Yep I guess so :P
03:16 lalatenduM joined #gluster
03:16 danku joined #gluster
03:19 MattJ_NZ joined #gluster
03:29 shubhendu joined #gluster
03:32 nishanth joined #gluster
03:33 MattJ_EC joined #gluster
03:43 _Bryan_ joined #gluster
03:46 kshlm joined #gluster
03:48 itisravi joined #gluster
03:50 meghanam joined #gluster
03:51 MattJ_NZ joined #gluster
03:55 kanagaraj joined #gluster
04:03 elico joined #gluster
04:15 RameshN joined #gluster
04:15 glusterbot News from newglusterbugs: [Bug 1152956] duplicate entries of files listed in the mount point after renames <https://bugzilla.redhat.com/show_bug.cgi?id=1152956>
04:16 shubhendu joined #gluster
04:22 nbalacha joined #gluster
04:25 ndarshan joined #gluster
04:30 prasanth_ joined #gluster
04:30 anoopcs joined #gluster
04:34 rafi joined #gluster
04:34 aravindavk joined #gluster
04:35 atinmu joined #gluster
04:42 calisto joined #gluster
04:45 atalur joined #gluster
04:45 glusterbot News from newglusterbugs: [Bug 1176756] glusterd: remote locking failure when multiple synctask transactions are run <https://bugzilla.redhat.com/show_bug.cgi?id=1176756>
04:47 T3 joined #gluster
04:51 aravindavk joined #gluster
04:52 jiffin joined #gluster
04:58 jaank joined #gluster
04:59 poornimag joined #gluster
04:59 ppai joined #gluster
05:03 raghu joined #gluster
05:04 aravindavk joined #gluster
05:08 aravindavk joined #gluster
05:12 atinmu joined #gluster
05:12 aravinda_ joined #gluster
05:17 jaank joined #gluster
05:18 aravindavk joined #gluster
05:20 aravinda_ joined #gluster
05:22 jaank_ joined #gluster
05:23 poornimag joined #gluster
05:30 hagarth joined #gluster
05:30 hchiramm joined #gluster
05:38 sahina joined #gluster
05:44 prasanth_ joined #gluster
05:48 coredump|br joined #gluster
05:48 T3 joined #gluster
05:52 ramteid joined #gluster
05:55 bala joined #gluster
05:59 sputnik13 joined #gluster
06:03 atalur joined #gluster
06:03 anil_ joined #gluster
06:11 nshaikh joined #gluster
06:17 kdhananjay joined #gluster
06:18 lalatenduM joined #gluster
06:22 hagarth joined #gluster
06:31 atinmu joined #gluster
06:36 overclk joined #gluster
06:37 lalatenduM joined #gluster
06:40 kshlm joined #gluster
06:46 glusterbot News from newglusterbugs: [Bug 1176770] glusterd: Remove cruft in code base incrementally, improve readability <https://bugzilla.redhat.com/show_bug.cgi?id=1176770>
06:46 soumya joined #gluster
06:53 anil_ joined #gluster
06:57 ppai joined #gluster
06:58 meghanam joined #gluster
06:59 DV joined #gluster
07:16 ppai joined #gluster
07:16 DV joined #gluster
07:16 glusterbot News from resolvedglusterbugs: [Bug 1147953] Enabling Quota on existing data won't create pgfid xattrs <https://bugzilla.redhat.com/show_bug.cgi?id=1147953>
07:18 kaushal_ joined #gluster
07:23 T3 joined #gluster
07:27 lalatenduM joined #gluster
07:30 kovshenin joined #gluster
07:32 SOLDIERz joined #gluster
07:34 SOLDIERz joined #gluster
07:37 rgustafs joined #gluster
07:46 glusterbot News from resolvedglusterbugs: [Bug 1032122] glusterd getting oomkilled <https://bugzilla.redhat.com/show_bug.cgi?id=1032122>
07:51 Philambdo joined #gluster
07:56 hagarth joined #gluster
08:33 fsimonce joined #gluster
08:35 coredump joined #gluster
08:38 RaSTar joined #gluster
08:47 anoopcs joined #gluster
08:47 mator joined #gluster
08:48 anil joined #gluster
08:53 saurabh joined #gluster
08:56 Fen2 joined #gluster
08:57 poornimag joined #gluster
08:57 raghu joined #gluster
08:59 vogon1 joined #gluster
09:04 nshaikh joined #gluster
09:09 ppai joined #gluster
09:18 deepakcs joined #gluster
09:21 lalatenduM joined #gluster
09:32 poornimag joined #gluster
09:36 anoopcs joined #gluster
09:39 anoopcs joined #gluster
09:46 vogon1 Hi
09:46 glusterbot vogon1: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:46 vogon1 Has anyone a Centos 7 with glusterfs 3.6.1 fully working?
09:46 vogon1 I got it up and running, but snapshot does not work
09:47 vogon1 snapshot create: failed: Snapshot is supported only for thin provisioned LV. Ensure that all bricks of gv01 are thinly provisioned LV.
09:47 vogon1 Doing exactly the same on CentOS 6 + glusterfs 3.6.1 _does_ work
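
For context, the error above expects each brick to live on an LVM thin-provisioned logical volume. A hedged sketch of preparing one such brick (device, volume group, and sizes are made up, and this does not explain why the same steps behave differently on CentOS 7):

    pvcreate /dev/sdb
    vgcreate vg_gluster /dev/sdb
    lvcreate -L 100G -T vg_gluster/thinpool          # thin pool
    lvcreate -V 90G -T vg_gluster/thinpool -n brick1 # thin LV inside the pool
    mkfs.xfs /dev/vg_gluster/brick1
    mount /dev/vg_gluster/brick1 /bricks/brick1
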
09:48 harish joined #gluster
09:51 soumya joined #gluster
09:54 deniszh joined #gluster
09:54 RaSTar joined #gluster
09:57 huleboer joined #gluster
10:15 edward1 joined #gluster
10:18 hagarth joined #gluster
10:34 LebedevRI joined #gluster
10:35 Debloper joined #gluster
10:36 coredump|br joined #gluster
10:46 glusterbot News from newglusterbugs: [Bug 1175617] Glusterd gets killed by oom-killer because of memory consumption <https://bugzilla.redhat.com/show_bug.cgi?id=1175617>
11:05 anoopcs joined #gluster
11:11 johndescs left #gluster
11:18 diegows joined #gluster
11:24 kkeithley1 joined #gluster
11:43 hagarth joined #gluster
11:45 vogon1 joined #gluster
11:46 ndevos REMINDER: Gluster Bug Triage meeting start in 15 minutes in #gluster-meeting
11:49 soumya_ joined #gluster
11:49 meghanam_ joined #gluster
11:53 rjoseph joined #gluster
11:56 RameshN joined #gluster
12:04 edward1 left #gluster
12:08 morse joined #gluster
12:14 itisravi joined #gluster
12:23 soumya_ joined #gluster
12:25 lalatenduM joined #gluster
12:25 lalatenduM joined #gluster
12:25 morse joined #gluster
12:25 fsimonce joined #gluster
12:25 overclk joined #gluster
12:25 abyss^ joined #gluster
12:25 cyberbootje joined #gluster
12:25 doekia joined #gluster
12:25 hybrid512 joined #gluster
12:25 XpineX joined #gluster
12:25 ninkotech joined #gluster
12:25 Intensity joined #gluster
12:25 sac`away joined #gluster
12:26 rafi joined #gluster
12:26 social joined #gluster
12:26 lanning joined #gluster
12:26 Bosse_ joined #gluster
12:26 uebera|| joined #gluster
12:26 glusterbot joined #gluster
12:26 Andreas-IPO joined #gluster
12:26 RobertLaptop joined #gluster
12:26 strata joined #gluster
12:26 sadbox joined #gluster
12:26 johnnytran joined #gluster
12:26 JordanHackworth joined #gluster
12:26 siel joined #gluster
12:26 Telsin joined #gluster
12:26 weykent joined #gluster
12:26 ndevos joined #gluster
12:26 deepakcs joined #gluster
12:26 Philambdo joined #gluster
12:26 wgao joined #gluster
12:26 mikedep333 joined #gluster
12:26 eightyeight joined #gluster
12:26 jonybravo30 joined #gluster
12:26 rjoseph joined #gluster
12:26 RaSTar joined #gluster
12:26 sauce joined #gluster
12:26 chirino joined #gluster
12:26 Arminder joined #gluster
12:26 eryc joined #gluster
12:26 yoavz joined #gluster
12:26 dastar joined #gluster
12:26 SmithyUK joined #gluster
12:26 DJClean joined #gluster
12:26 jbrooks joined #gluster
12:26 quydo joined #gluster
12:26 Ramereth joined #gluster
12:26 y4m4 joined #gluster
12:26 partner_ joined #gluster
12:26 tessier_ joined #gluster
12:26 stigchri1tian joined #gluster
12:26 stickyboy joined #gluster
12:26 UnwashedMeme joined #gluster
12:26 churnd joined #gluster
12:26 Guest75764 joined #gluster
12:26 rastar_afk joined #gluster
12:26 nixpanic_ joined #gluster
12:26 NuxRo joined #gluster
12:26 atrius` joined #gluster
12:27 samsaffron___ joined #gluster
12:27 ndevos joined #gluster
12:28 meghanam_ joined #gluster
12:28 coredump|br joined #gluster
12:28 badone joined #gluster
12:28 AaronGr joined #gluster
12:28 fleducquede joined #gluster
12:28 frankS2 joined #gluster
12:28 jvandewege joined #gluster
12:28 dockbram_ joined #gluster
12:28 _br_- joined #gluster
12:29 meghanam_ joined #gluster
12:30 elico joined #gluster
12:30 jonybravo30 please help deleting volumes
12:30 jonybravo30 i exec
12:30 jonybravo30 gluster volume stop gluster_volume
12:30 jonybravo30 gluster volume delete gluster_volume
12:30 jonybravo30 Volume gluster_volume does not exist
12:30 jonybravo30 then i exec gluster volume info
12:30 jonybravo30 and volume persists
12:31 jonybravo30 some ideas
12:31 jonybravo30 ?
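
The sequence jonybravo30 is running, plus one guess at a fix: a stale volume entry on a peer can sometimes be cleared by restarting glusterd on that peer. This is an assumption, not a confirmed diagnosis:

    gluster volume stop gluster_volume
    gluster volume delete gluster_volume
    gluster volume info              # check that every peer agrees the volume is gone
    # if one peer still lists the volume:
    service glusterd restart         # on the peer holding the stale entry
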
12:36 SOLDIERz joined #gluster
12:45 warci joined #gluster
12:47 glusterbot News from newglusterbugs: [Bug 1175742] [USS]: browsing .snaps directory with CIFS fails with "Invalid argument" <https://bugzilla.redhat.com/show_bug.cgi?id=1175742>
12:47 glusterbot News from newglusterbugs: [Bug 1175711] os.walk() vs scandir.walk() performance <https://bugzilla.redhat.com/show_bug.cgi?id=1175711>
12:55 meghanam_ joined #gluster
12:55 cultav1x joined #gluster
13:00 Fen1 joined #gluster
13:03 hagarth joined #gluster
13:06 fandi joined #gluster
13:07 rjoseph joined #gluster
13:18 RaSTar joined #gluster
13:23 elico left #gluster
13:43 mator ndevos, what is USS bugs?
13:47 tdasilva joined #gluster
13:47 RameshN joined #gluster
13:54 kanagaraj joined #gluster
13:54 kkeithley_ mator: USS = User Serviceable Snapshots
13:57 mator thanks
14:05 elico joined #gluster
14:12 shubhendu joined #gluster
14:17 kshlm joined #gluster
14:20 msmith joined #gluster
14:20 virusuy joined #gluster
14:20 virusuy joined #gluster
14:23 squizzi joined #gluster
14:24 bala joined #gluster
14:28 meghanam joined #gluster
14:28 prasanth_ joined #gluster
14:29 bala1 joined #gluster
14:37 coredump joined #gluster
14:37 doekia joined #gluster
14:39 vogon1 joined #gluster
14:44 coredump joined #gluster
14:46 rjoseph joined #gluster
14:55 aravindavk joined #gluster
15:01 delhage joined #gluster
15:02 kshlm joined #gluster
15:07 bala joined #gluster
15:14 nbalacha joined #gluster
15:16 plarsen joined #gluster
15:17 nbalacha joined #gluster
15:26 lalatenduM joined #gluster
15:31 kshlm joined #gluster
15:41 bala joined #gluster
15:43 afics joined #gluster
15:57 UnwashedMeme left #gluster
16:00 calisto joined #gluster
16:00 rjoseph joined #gluster
16:01 tdasilva joined #gluster
16:01 pdurbin joined #gluster
16:03 pdurbin semiosis and JoeJulian: a quote about the GPL: https://botbot.me/freenode/positivepython/2014-12-23/?msg=28210137&page=1 :)
16:07 coredump joined #gluster
16:17 semiosis pdurbin: ugh. not another gpl debate. i feel like everything that can be said about the gpl has already been said, on both sides.
16:17 semiosis pdurbin: gpl debates no longer introduce new ideas, they only declare alliances
16:18 glusterbot News from newglusterbugs: [Bug 1176948] Geo-replication no longer preserves ownership group <https://bugzilla.redhat.com/show_bug.cgi?id=1176948>
16:24 and` joined #gluster
16:26 pdurbin hmm. well, I'm not quite ready to declare an alliance. trying to keep an open mind.
16:30 fubada purpleidea: hi, trying to get more warnings/details from master side
16:31 purpleidea fubada: do you see this warning: https://github.com/purpleidea/puppet-gluster/blob/master/manifests/params.pp#L74
16:32 lmickh joined #gluster
16:32 purpleidea you'll see that text on your puppet MASTER
16:32 purpleidea i'm guessing that's your issue to the first question you asked
16:32 coredump joined #gluster
16:35 fubada purpleidea: https://gist.github.com/aamerik/fae9b65a0cc73d99cec4
16:35 purpleidea fubada: yep, you got it
16:35 fubada yes i see that warning
16:35 purpleidea fubada: easy fix though
16:35 purpleidea fubada: https://github.com/purpleidea/puppet-gluster/blob/master/DOCUMENTATION.md#how-do-i-get-the-os-independent-aspects-of-this-module-to-work
16:36 purpleidea fubada: once you're done that, ping me and we'll debug your #2 problem
16:36 fubada so just install the data module
16:37 purpleidea fubada: yep
16:37 purpleidea requires puppet 3.x or greater
16:37 harish_ joined #gluster
16:38 fubada do you know if I want the latest form github? or one on the forge is from last march
16:38 fubada 0.0.4 vs 0.0.3
16:38 purpleidea fubada: just use whatever's on github
16:39 fubada thanks
16:39 purpleidea yw
16:40 sputnik13 joined #gluster
16:40 fubada purpleidea: can you tell me the exact way it appears under your modules dir
16:40 fubada not sure how to put it in r10k
16:40 purpleidea semiosis taught me 'yw' it's pretty useful. semiosis is a gluster wizard
16:40 purpleidea semiosis ++
16:40 purpleidea semiosis: ++
16:40 glusterbot purpleidea: semiosis's karma is now 2000009
16:40 fubada mod 'module-data'?
16:40 purpleidea fubada: me neither :P
16:42 squizzi joined #gluster
16:44 * semiosis just an irc wizard
16:48 vimal joined #gluster
16:48 fubada purpleidea: installed
16:49 purpleidea fubada: your warning should go away, and so should your service restart issue
16:50 lpabon joined #gluster
16:50 fubada purpleidea: checking this now
16:51 fubada purpleidea: unfortunately still there
16:51 purpleidea fubada: what about the warning
16:52 purpleidea fubada: if you still see it, then: ls /your/puppet/modules/directory
16:52 purpleidea usually /etc/puppet/modules/
16:53 fubada you want to see the list?
16:54 purpleidea fubada: you can 'grep module' if you prefer to be more private
16:55 fubada still see [puppet-server] Scope(Class[Gluster::Params]) Unable to load yaml data/ directory!
16:55 purpleidea ls  /your/puppet/modules/directory
16:55 purpleidea 11:57 < purpleidea> usually /etc/puppet/modules/
16:55 fubada from my modules:
16:55 fubada 4 drwxr-xr-x  4 root root 4096 Dec 23 11:47 module_data
16:55 fubada heres the entire list
16:55 purpleidea fubada: restart puppetmaster
16:56 fubada https://gist.github.com/aamerik/9262d978aa016ad969af
16:56 purpleidea fubada: that's the issue...
16:57 purpleidea fubada: you need to name it 'module-data' not 'module_data' ... rename, restart pm, and try again
16:57 purpleidea afaict
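
A sketch of what purpleidea suggests, assuming a stock puppet master layout under /etc/puppet; the clone target name is the point, everything else is a placeholder:

    cd /etc/puppet/modules
    git clone https://github.com/ripienaar/puppet-module-data.git module-data
    service puppetmaster restart
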
16:57 fubada thanks
16:58 purpleidea yw
17:00 ama joined #gluster
17:00 ama Hello.
17:00 glusterbot ama: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:03 ama I need a pointer to some documentation I can't find.  I have reinstalled a server and need to reconstruct my GlusterFS volumes.  I have all bricks and I kept a copy of the /etc dir of the server.  I need to rebuild the whole thing without losing the data.  I remember I read how to do it some time ago, but I don't seem to be able to find the doc again.  Any help finding it, please?
17:03 calisto1 joined #gluster
17:06 coredump joined #gluster
17:09 semiosis ama: what version of glusterfs did you have?  do you intend to keep that same version?
17:10 fubada purpleidea: r10k refuses to deploy a module with a dash in its name
17:10 fubada truncates module-data to just 'data'
17:10 fubada ;/
17:10 ama Not necesarilly, semiosis, I'm running Debian GNU/Linux stable, I'd rather use GlusterFS from it, if possible.
17:10 semiosis fubada: even without r10k i've had problems with hyphenated modules.
17:11 fubada semiosis: yes hyphens arent technically allowed in class names or module names
17:11 fubada underscores
17:11 wushudoin joined #gluster
17:12 semiosis ama: your glusterfs config was most likely in /var/lib/gluster, not /etc.  but regardless, you can just create a new volume reusing the same bricks, as long as you connect the bricks in the same order as they were before
17:13 semiosis you may need to tweak some xattrs to avoid path or a prefix error
17:13 ama What do you mean by "the same order", semiosis?   Any documentation I can read to create the volume again, please?
17:13 semiosis you may need to tweak some xattrs to avoid path or a prefix of it error
17:13 glusterbot semiosis: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
17:13 semiosis ^^^
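
The linked article amounts to clearing the Gluster extended attributes on each brick directory that will be reused, roughly as follows (brick path is a placeholder, run per brick):

    setfattr -x trusted.glusterfs.volume-id /bricks/disk1/vol1
    setfattr -x trusted.gfid /bricks/disk1/vol1
    rm -rf /bricks/disk1/vol1/.glusterfs
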
17:13 semiosis how many bricks do you have?
17:14 ama I have 3 volumes with 6 bricks.
17:14 purpleidea fubada: sounds like an r10k bug to me
17:14 purpleidea semiosis: this is a 'special' module
17:14 semiosis ama: could you please be less ambiguous
17:15 semiosis is that 3 volumes, with two bricks each, in 2-way replication?
17:15 ama Less ambiguous how, semiosis?
17:15 fubada purpleidea: hyphens arent allowed in module names :P
17:15 semiosis or is that 18 bricks?
17:16 ama I have six (6) hard drives.  Each drive has three (3) directories.  Each of the directories on each of the drives was part of a different Gluster volume, semiosis.
17:16 ama So, I had three (3) different volumes, each of them using a directory on each of the 6 HDDs.
17:17 semiosis ok, so 18 bricks
17:17 ama They're all striped, no replication.
17:17 purpleidea fubada: it works for me, and ripienaar isn't an idiot, so if you're really having a problem getting it to work, i'd open an issue at https://github.com/ripienaar/puppet-module-data/
17:17 semiosis you'll need to run the gluster volume create command with the brick paths in the same order as they were previously, otherwise all the files will be in the "wrong" place
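
A hypothetical sketch of recreating one of ama's volumes from its existing brick directories, assuming a plain distribute layout (adjust if the originals really used stripe); the volume name and paths are placeholders, and the bricks must be listed in their original order:

    gluster volume create vol1 \
        server1:/bricks/disk1/vol1 server1:/bricks/disk2/vol1 \
        server1:/bricks/disk3/vol1 server1:/bricks/disk4/vol1 \
        server1:/bricks/disk5/vol1 server1:/bricks/disk6/vol1
    gluster volume start vol1
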
17:18 fubada purpleidea: youre right, but ya im having trouble deploying it with r10k, ill raise it with those folks
17:18 fubada thanks for your help
17:19 ama What do you mean by "the wrong place", semiosis?  It's going to be very difficult for me to know the order in which they were added before.
17:19 purpleidea fubada: sorry, it never occured to me that that would be an issue... just dump it on your server manually for now, and test. it should bring magic.
17:19 semiosis distribute places a file on a brick by hashing the filename.  if you shuffle the bricks, then files will not be on the brick determined by the hash of the filename
17:19 fubada purpleidea: if you use 'puppet module install ripienaar-module_data'
17:19 fubada it will install as module_data
17:20 purpleidea fubada: well maybe it works that way too... idk
17:20 fubada as per https://github.com/ripienaar/puppet-module-data/blob/master/Modulefile
17:20 fubada purpleidea: when I deployed it as module_data it didnt work with the gluster module
17:20 fubada the fix didnt work
17:20 semiosis ama: glusterfs will cope with this but it's not ideal.  you'll take a slight performance hit, and you'll get "linkfiles" everywhere on your bricks
17:21 ama I see, semiosis.  I'll find out the UUIDs from the old /etc/fstab file, then.
17:21 semiosis ama: that should work.  good thinking!
17:21 purpleidea fubada: getting module data to work is one issue. it works for me, but if it doesn't then you need to fix that. the only fix i know i already told you. after that works, everything else should be fine.
17:21 ama But how do I create the new volume, semiosis?
17:21 fubada purpleidea: how did you install the module? just copy it in place?
17:21 purpleidea fubada: yep
17:21 semiosis ,,(rtfm)
17:21 glusterbot Read the fairly-adequate manual at https://github.com/gluster/glusterfs/tree/master/doc/admin-guide/en-US/markdown
17:22 ama Are you talking to me, semiosis?
17:22 fubada purpleidea: i suspect thats where the problem is, if you used 'puppet module install' or anything like that it would change the hyphen to _
17:22 semiosis ama: https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md
17:22 semiosis ama: i was getting the link to the index so i could find the page you needed, ^^^
17:22 fubada purpleidea: so you may not be using it right :(
17:22 fubada inside gluster
17:22 ama Am I supposed to create the volumes as if it were the first time? As if the bricks were empty?
17:22 purpleidea fubada: it works just great for me
17:23 fubada purpleidea: i dont doubt that, but you copied the dir with a hyphen into the modules/ dir
17:23 purpleidea yep
17:23 fubada thats not how people typically deploy modules
17:23 semiosis ama: yes, although you'll probably need to fiddle with xattrs to avoid the path or a prefix of it error vvvvv
17:23 glusterbot semiosis: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
17:23 purpleidea fubada: if you say so :)
17:24 fubada purpleidea: aw :)
17:24 ama What do you mean by "fiddle with xattrs to avoid the path or a prefix of it error vvvvv", semiosis?
17:24 glusterbot ama: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
17:24 semiosis read that article glusterbot just linked (not the bug report)
17:27 ama Si, semiosis, am I supposed to setfattr and then build the volumes as if the bricks were empty?
17:27 ama s/Si/So
17:28 semiosis well you never told me what version of glusterfs you're using, so i'm hedging my advice.  you *may* encounter an error, and if you do, setfattr will resolve it.
17:28 semiosis but in general, yes, you'll need to create new volumes using your existing bricks
17:29 semiosis essentially you need to run the exact same gluster volume create command as you did before (assuming this is >= 3.1.0)
17:29 aulait joined #gluster
17:29 ama Would it be an alternative to create totally new bricks and rsync over the contents of individual old bricks to the new volumes?
17:29 semiosis and have your bricks mounted at the same paths as before
17:29 semiosis ama: yes you could do that too
17:29 semiosis although it would take longer
17:30 semiosis and require 2x the space
17:30 ama I don't mind how much time it takes, as long as it works.
17:30 semiosis and you might as well upgrade to the ,,(latest) version while you're at it
17:30 glusterbot The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
17:31 semiosis er, maybe not the *latest* but a more recent one than is in debian stable
17:31 ama Debian jessie?
17:31 semiosis we make packages for debian
17:31 semiosis at the download link above
17:31 ama Ok, thanks.
17:32 semiosis ama: your feedback is welcomed... ,,(ubuntu) -- applies to debian packages as well
17:32 glusterbot ama: semiosis is gearing up to improve the ubuntu (and debian) packages. if you have an interest in glusterfs packages for ubuntu please ping semiosis. if you have an issue or bug to report regarding the ubuntu packages (even if you've already told semiosis about it) please open an issue on github: https://github.com/semiosis/glusterfs-debian
17:32 semiosis note, that's not for glusterfs bugs, only for .deb package bugs
17:32 ama I don't use Ubuntu, I use Debian.
17:32 semiosis whatever :)
17:33 semiosis pretty much the same package, and i build em all
17:33 semiosis at the very least, if you run into any trouble with the packages from download.g.o please let me know here
17:34 ama Sure, I will.
17:34 semiosis thanks
17:34 ama But before I do it I need to make sure what I'm going to do will (probably) work.
17:35 semiosis well i've already given you all the info you need.  now you get to try it out :)
17:36 semiosis i suppose the rsync is the absolute safest way to proceed
17:36 semiosis since you can mount your old bricks read-only
17:36 ama I'm not sure I've understood 100%.  My question would be, if I create new volumes with new bricks, how should I proceed to get the files synced from the old bricks to the new volumes exactly?
17:37 semiosis you suggested rsync yourself
17:37 semiosis what dont you understand about that?
17:37 fubada purpleidea: i copied the module into the modules dir using 'module-data' as the name, but same issue
17:37 ama Sorry, but my English sucks and I'm not sure whether or not you said it was Ok to do it the way I said or not, semiosis.
17:37 purpleidea fubada: and you restarted the puppet master ?
17:38 semiosis yes, rsync is OK
17:38 ama Yes, semiosis, I suggested rsync, but I need to rsync each old brick to the new volume or how?  I'm just guessing here :(
17:38 fubada purpleidea: that i did not, one sec
17:39 semiosis ama: after you create & start the volume you need to mount it, mount -t glusterfs server:volume /mount/point, then you can rsync data from the old bricks into /mount/point
17:39 semiosis https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_settingup_clients.md#manual-mount
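
Putting semiosis's steps together as a sketch (names and paths are placeholders; the .glusterfs directory on each old brick is internal metadata and should not be copied into the new volume):

    mount -t glusterfs server1:/vol1 /mnt/vol1
    rsync -aHX --exclude=.glusterfs /old-bricks/disk1/vol1/ /mnt/vol1/
    # repeat the rsync for each old brick belonging to the same volume
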
17:40 ama Is that URL relevant to me, semiosis?
17:40 semiosis ama: probably
17:42 semiosis ama: at this point you're still very new to glusterfs.  you should start playing around with it.  create a simple test volume, mount it, create & delete some data, to get familiar with glusterfs
17:46 fubada purpleidea: confirmed, same issue with module-data, module_data, doesnt matter
17:46 ama I already did all that when I started using it, semiosis.  I'm familiar with it.  I've been using it for some months already.
17:47 purpleidea fubada: then, unfortunately, i don't know what the issue is, but it's not a puppet-gluster one :(
17:47 fubada purpleidea: the /var/lib/glusterd/glusterd.info keep changing
17:48 fubada with -operating-version=1
17:48 fubada causing a service restart
17:48 purpleidea fubada: yes i know, and it keeps changing because your puppet master for whatever reason is unable to read the data/ directory which needs the module-data module
17:49 fandi joined #gluster
17:49 purpleidea maybe perms on data/ are wrong? idk, but you should ask in #puppet why you can't get hiera 'data in modules' working
17:49 B21956 joined #gluster
17:50 fubada maybe it needs some gems in jruby
17:50 fubada thanks
17:52 fubada purpleidea: do you have /etc/puppet/hiera/backend/module_data_backend
17:52 fubada im missing this path and see it as a warning in the logs
17:56 msmith joined #gluster
17:56 purpleidea fubada: i don't have a puppet master up at the moment, sorry
17:56 purpleidea fubada: i imagine that hiera has to be installed though
17:57 purpleidea fubada: i use this: https://github.com/purpleidea/puppet-puppet to set it up. i don't think i do anything specific at that path
17:58 fubada thanks
17:59 purpleidea yw
18:05 fubada purpleidea: i guess this is the issue Puppet hiera(): Cannot load backend module_data: no such file to load -- hiera/backend/module_data_backend
18:23 elico joined #gluster
18:33 vogon1 joined #gluster
18:53 PeterA joined #gluster
18:54 PeterA anyone having quota mismatching?
18:54 PeterA i have an NFS export from gluster having quota mismatch
18:54 PeterA wonder how to reclaim the space…
18:55 PeterA even though du shows only using 110G, the df still showing 900G used.
18:55 PeterA unmount and remount doesn't help...
18:56 JonathanD joined #gluster
18:59 pdurbin left #gluster
19:00 JoeJulian @quota
19:00 * JoeJulian pokes glusterbot
19:00 JoeJulian @factoid quota
19:02 JoeJulian PeterA: Set features.quota-deem-statfs on
19:02 JonathanD joined #gluster
19:02 JoeJulian @learn quota mismatch as To have df match du, set features.quota-deem-statfs on for your volume.
19:02 glusterbot JoeJulian: The operation succeeded.
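
The setting JoeJulian points to, as it would be applied and checked (volume name is a placeholder):

    gluster volume set myvol features.quota-deem-statfs on
    gluster volume quota myvol list
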
19:03 PeterA i already have features.quota-deem-statfs on
19:04 PeterA and no hard link on the mount
19:06 deniszh joined #gluster
19:06 msmith joined #gluster
19:09 JoeJulian kill and restart the nfs service, maybe?
19:09 PeterA on the client?
19:09 PeterA i unmounted and remounted and still the same
19:09 JoeJulian That would be a server thing.
19:09 PeterA u mean gluster server?
19:10 PeterA this only happening on some of the NFS exports
19:10 JoeJulian pkill -f nfs.log ; systemctl restart glusterd (or whatever your restart is for your distro)
19:11 PeterA tried...
19:11 JoeJulian It was just a guess anyway. The only bug I was aware of was the improper default to quota-deem-statfs.
19:12 PeterA the volume is for a data base dump dir
19:12 PeterA seems like files that got deleted are still counted against the quota...
19:16 l0uis semiosis: ping
19:16 glusterbot l0uis: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
19:17 l0uis semiosis: there are still stale references to your ppa, notably the Ubuntu.README in the download dir
19:17 l0uis semiosis: http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.3/Ubuntu/Ubuntu.README
19:18 glusterbot News from newglusterbugs: [Bug 1023134] Used disk size reported by quota and du mismatch <https://bugzilla.redhat.com/show_bug.cgi?id=1023134>
19:18 glusterbot News from newglusterbugs: [Bug 917901] Mismatch in calculation for quota directory <https://bugzilla.redhat.com/show_bug.cgi?id=917901>
19:18 squizzi joined #gluster
19:22 calisto joined #gluster
19:37 l0uis semiosis: I opened a bug on the github repo.
19:39 coredump joined #gluster
19:39 msmith joined #gluster
19:42 squizzi joined #gluster
19:53 calisto joined #gluster
19:56 fubada purpleidea: i got the module-data to load, but im still getting 'Unable to load yaml data/ directory!', any other suggestions or debug tricks you can think of?
20:02 deniszh joined #gluster
20:08 ekuric joined #gluster
20:13 ekuric left #gluster
20:38 fubada purpleidea: could you show me your hiera.yaml
20:50 fubada purpleidea: https://gist.github.com/aamerik/82273219b8f3e65d9ee5 check out this debug output from puppetmaster and hiera on gluster::versions
21:01 al joined #gluster
21:03 edwardm61 joined #gluster
21:36 badone joined #gluster
21:48 badone joined #gluster
21:56 m0zes joined #gluster
21:58 fubada purpleidea: just did notice ($operating_version) and got INFO  [puppet-server] Scope(Class[Gluster::Versions]) operating_version = 30600. foo
21:58 fubada but still the config file gets -1
21:59 fubada i think theres a bug in the puppet-gluster module
22:00 badone joined #gluster
22:04 fubada purpleidea: the bug is on https://github.com/purpleidea/puppet-gluster/blob/master/manifests/host.pp#L103
22:04 fubada you have $operating_version = "${::gluster::versions::operating_versions}"
22:04 fubada and it should be $operating_version = "${::gluster::versions::operating_version}"
22:04 fubada working now
22:05 plarsen joined #gluster
22:05 plarsen joined #gluster
22:11 fubada purpleidea: PR sent
22:37 purpleidea fubada: look at that! good catch, merged, thanks!
22:37 fubada nice thanks for your help
22:38 fubada do you know what this Notice: /Stage[main]/Gluster::Vardir/File[/var/lib/puppet/tmp/gluster/vrrp]/ensure: removed
22:38 fubada this happens every run as well
22:40 purpleidea fubada: hmmm, it shouldn't keep happening once your cluster has converged...
22:40 purpleidea brb
22:58 badone joined #gluster
23:08 ama Is there anybody with writing access to the web server (gluster.org) around?  The "Gluster Architecture and Concepts" (/architetucre.html) seems to be broken at /documentation.
23:40 verdurin_ joined #gluster
23:40 verdurin_ joined #gluster
23:42 daMaestro joined #gluster
23:44 doekia joined #gluster
