
IRC log for #gluster, 2015-09-10


All times shown according to UTC.

Time Nick Message
00:11 Mr_Psmith joined #gluster
00:20 ghenry joined #gluster
00:21 bennyturns joined #gluster
00:22 dlambrig joined #gluster
00:22 Mr_Psmith joined #gluster
00:32 jalljo joined #gluster
00:32 Mr_Psmith joined #gluster
00:38 gildub joined #gluster
00:47 gem joined #gluster
00:50 Mr_Psmith_ joined #gluster
00:53 Mr_Psmith joined #gluster
01:02 johndescs_ joined #gluster
01:04 Mr_Psmith joined #gluster
01:17 baojg joined #gluster
01:19 baojg joined #gluster
01:23 julim joined #gluster
01:25 nangthang joined #gluster
01:26 Lee1092 joined #gluster
01:29 haomaiwa_ joined #gluster
01:45 stickyboy joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:49 badone__ joined #gluster
01:56 dlambrig joined #gluster
01:58 harish_ joined #gluster
02:01 haomaiwa_ joined #gluster
02:12 amye joined #gluster
02:32 ir8 joined #gluster
02:32 ir8 Can someone help me with an issue I am having on my servers?
02:33 ir8 I have gluster configured and replicated across two servers. The CPUs are spiking rather high.
02:33 ir8 23722 root      20   0 1519m 1.0g 3340 R 151.9  3.2   1300:17 glusterfs
02:33 ir8 23640 root      20   0 1560m  84m 2676 S 142.0  0.3 605:27.74 glusterfsd
02:34 ir8 is there any reason for this?
02:34 ir8 the web directory for the website virtual container is a glusterfs mount.
02:42 bharata-rao joined #gluster
02:45 Mr_Psmith joined #gluster
02:46 ir8 Hello guys.
02:57 JoeJulian ir8: Not enough information to answer that question.
02:59 skoduri joined #gluster
02:59 ir8 JoeJulian: what information did you need?
03:00 ir8 JoeJulian: which would you like to see to provide more assistance, by chance?
03:01 haomaiwang joined #gluster
03:11 [7] joined #gluster
03:26 hchiramm_home joined #gluster
03:36 overclk joined #gluster
03:37 jbautista- joined #gluster
03:40 vmallika joined #gluster
03:41 shubhendu joined #gluster
03:42 auzty joined #gluster
03:42 jbautista- joined #gluster
03:46 wushudoin| joined #gluster
03:54 wushudoin| joined #gluster
04:00 itisravi joined #gluster
04:01 haomaiwa_ joined #gluster
04:02 sakshi joined #gluster
04:05 jcastill1 joined #gluster
04:06 atinm joined #gluster
04:11 jcastillo joined #gluster
04:20 ppai joined #gluster
04:22 poornimag joined #gluster
04:24 atalur joined #gluster
04:27 calavera joined #gluster
04:29 jbautista- joined #gluster
04:31 nishanth joined #gluster
04:34 jbautista- joined #gluster
04:38 kanagaraj joined #gluster
04:42 nbalacha joined #gluster
04:49 aravindavk joined #gluster
04:50 rafi joined #gluster
04:50 poornimag joined #gluster
04:51 ramteid joined #gluster
04:52 raghu joined #gluster
04:56 jiffin joined #gluster
05:01 haomaiwang joined #gluster
05:03 ndarshan joined #gluster
05:04 pppp joined #gluster
05:09 yazhini joined #gluster
05:16 woakes070048 joined #gluster
05:19 neha joined #gluster
05:20 hgowtham joined #gluster
05:24 vmallika joined #gluster
05:24 rafi joined #gluster
05:27 atalur joined #gluster
05:28 Bhaskarakiran joined #gluster
05:28 kotreshhr joined #gluster
05:30 skoduri joined #gluster
05:31 Bhaskarakiran joined #gluster
05:33 cabillman joined #gluster
05:33 vimal joined #gluster
05:33 yazhini joined #gluster
05:34 badone__ joined #gluster
05:34 lalatenduM joined #gluster
05:34 TheSeven joined #gluster
05:35 JoeJulian ir8: To identify your problem, you should start by knowing what kind of load you're expecting and what version of the software you're using. Look in the client and brick logs for clues. ,,(pasteinfo)
05:35 glusterbot ir8: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
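A minimal sketch of the diagnostics being asked for above, run on one of the servers (the volume name "myvol" is a placeholder):
    gluster --version
    gluster volume info
    gluster volume status
    # per-brick FOP/latency counters can help explain CPU spikes
    gluster volume profile myvol start
    gluster volume profile myvol info
    # client (mount) and brick logs live under /var/log/glusterfs/ on most installs
    less /var/log/glusterfs/bricks/*.log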
05:40 Manikandan joined #gluster
05:43 raghu` joined #gluster
05:43 ashiq joined #gluster
05:46 PatNarciso joined #gluster
05:47 yazhini joined #gluster
05:47 PatNarciso_also joined #gluster
05:47 kdhananjay joined #gluster
05:47 kshlm joined #gluster
05:49 hchiramm_home joined #gluster
05:51 zhangjn joined #gluster
05:52 zhangjn joined #gluster
05:52 beeradb joined #gluster
06:01 haomaiwa_ joined #gluster
06:01 kalzz joined #gluster
06:03 karnan joined #gluster
06:07 beeradb joined #gluster
06:19 maveric_amitc_ joined #gluster
06:21 mhulsman joined #gluster
06:26 jtux joined #gluster
06:28 mhulsman1 joined #gluster
06:31 hagarth joined #gluster
06:32 TvL2386 joined #gluster
06:34 Bhaskarakiran joined #gluster
06:40 PatNarciso joined #gluster
06:42 rafi joined #gluster
06:43 rgustafs joined #gluster
06:47 maveric_amitc_ joined #gluster
06:49 hagarth joined #gluster
06:54 spalai joined #gluster
06:55 mhulsman joined #gluster
06:57 ramky joined #gluster
06:58 PatNarciso_also joined #gluster
07:01 haomaiwa_ joined #gluster
07:04 shubhendu joined #gluster
07:06 nishanth joined #gluster
07:21 rafi joined #gluster
07:22 baojg joined #gluster
07:23 Bhaskarakiran joined #gluster
07:30 fsimonce joined #gluster
07:31 shubhendu joined #gluster
07:33 deepakcs joined #gluster
07:33 DV joined #gluster
07:33 PatNarciso joined #gluster
07:36 yazhini joined #gluster
07:37 rjoseph joined #gluster
07:42 [Enrico] joined #gluster
07:51 sac joined #gluster
07:51 baojg joined #gluster
07:58 baojg joined #gluster
08:01 haomaiwa_ joined #gluster
08:05 sripathi joined #gluster
08:05 necrogami joined #gluster
08:07 yangfeng joined #gluster
08:18 muneerse joined #gluster
08:19 arcolife joined #gluster
08:22 itisravi joined #gluster
08:23 mattmulhern joined #gluster
08:26 rjoseph joined #gluster
08:27 itisravi_ joined #gluster
08:31 PatNarciso joined #gluster
08:40 anil joined #gluster
08:52 Saravana_ joined #gluster
08:53 hagarth joined #gluster
08:56 jwd joined #gluster
09:01 haomaiwa_ joined #gluster
09:06 necrogami joined #gluster
09:08 kshlm joined #gluster
09:21 harish_ joined #gluster
09:24 PatNarciso joined #gluster
09:27 baojg joined #gluster
09:38 skoduri joined #gluster
09:39 jvn joined #gluster
09:43 Bhaskarakiran joined #gluster
09:46 jvn Hi there! Any tips for controlling glfsheal processes that overload a gluster node? They're currently suffocating the node and raising the load on it quite high at times
09:47 jvn we're using gluster 3.6
09:47 LebedevRI joined #gluster
09:47 jvn I've tried disabling self healing on the volume temporarily, but gluster still spawns glfsheal processes
09:49 jvn also I've tried using ionice to control the load, but it's not much help unfortunately
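A hedged aside on the situation above: glfsheal is typically spawned by "gluster volume heal <vol> info" invocations (often from monitoring or cron), so checking what launches it and toggling the self-heal daemon per volume are the usual knobs ("gvol" is a placeholder volume name):
    # see which parent process keeps launching glfsheal
    ps -o pid,ppid,etime,cmd -C glfsheal
    # turn the self-heal daemon off/on for one volume
    gluster volume set gvol cluster.self-heal-daemon off
    gluster volume set gvol cluster.self-heal-daemon on
    # this command itself spawns a glfsheal process each time it runs
    gluster volume heal gvol info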
09:49 hchiramm_home joined #gluster
09:50 itisravi joined #gluster
09:55 kshlm joined #gluster
10:01 haomaiwa_ joined #gluster
10:05 Manikandan joined #gluster
10:16 PatNarciso joined #gluster
10:18 nishanth joined #gluster
10:34 yazhini joined #gluster
10:35 yazhini joined #gluster
10:36 Manikandan joined #gluster
10:43 kkeithley1 joined #gluster
10:46 Bhaskarakiran joined #gluster
10:53 kokopelli joined #gluster
10:54 overclk joined #gluster
10:57 mhulsman1 joined #gluster
10:57 mhulsman2 joined #gluster
11:01 haomaiwa_ joined #gluster
11:03 mhulsman joined #gluster
11:06 jcastill1 joined #gluster
11:08 kshlm joined #gluster
11:09 ira joined #gluster
11:10 PatNarciso joined #gluster
11:11 jcastillo joined #gluster
11:13 arcolife joined #gluster
11:14 morse joined #gluster
11:16 Bhaskarakiran joined #gluster
11:25 yazhini joined #gluster
11:36 yazhini joined #gluster
11:38 ninkotech joined #gluster
11:38 ninkotech_ joined #gluster
11:42 kotreshhr left #gluster
11:50 aravindavk joined #gluster
11:53 _Bryan_ joined #gluster
11:55 squaly joined #gluster
11:56 kanagaraj joined #gluster
11:59 Mr_Psmith joined #gluster
12:01 haomaiwa_ joined #gluster
12:06 PatNarciso joined #gluster
12:12 unclemarc joined #gluster
12:12 zhangjn joined #gluster
12:12 shubhendu joined #gluster
12:12 jtux joined #gluster
12:13 zhangjn joined #gluster
12:14 pdrakeweb joined #gluster
12:21 ppai joined #gluster
12:22 nishanth joined #gluster
12:27 TvL2386 joined #gluster
12:29 plarsen joined #gluster
12:32 hagarth joined #gluster
12:36 haomaiwang joined #gluster
12:38 shyam joined #gluster
12:42 RayTrace_ joined #gluster
12:44 kotreshhr joined #gluster
12:45 atinm joined #gluster
12:46 Slashman joined #gluster
12:47 bennyturns joined #gluster
12:49 PatNarciso joined #gluster
12:50 plarsen joined #gluster
12:56 B21956 joined #gluster
12:56 firemanxbr joined #gluster
12:58 calisto joined #gluster
13:01 DV__ joined #gluster
13:03 DV joined #gluster
13:07 Mr_Psmith joined #gluster
13:09 Manikandan joined #gluster
13:17 maveric_amitc_ joined #gluster
13:18 ppai joined #gluster
13:19 shubhendu joined #gluster
13:19 plarsen joined #gluster
13:21 TvL2386 joined #gluster
13:26 shaunm joined #gluster
13:30 atinm joined #gluster
13:35 harold joined #gluster
13:37 LebedevRI joined #gluster
13:37 Manikandan joined #gluster
13:37 dgandhi joined #gluster
13:42 maveric_amitc_ joined #gluster
13:43 zhangjn joined #gluster
13:44 plarsen joined #gluster
13:44 rwheeler joined #gluster
13:49 jobewan joined #gluster
13:52 hagarth joined #gluster
13:53 hchiramm_home joined #gluster
13:54 shubhendu joined #gluster
13:56 LebedevRI joined #gluster
13:57 cae_ joined #gluster
13:58 cae_ hi, www-admin here? The https://www.gluster.org/ certificate seems to have expired some hours ago and should be renewed
13:58 zhangjn joined #gluster
13:59 hagarth cae_: thanks for pointing that out. csim - can you please check this? ^^
13:59 spalai left #gluster
14:00 cae_ hagarth: you're welcome. Bye all.
14:00 cae_ left #gluster
14:01 jvn left #gluster
14:03 RedW joined #gluster
14:04 kotreshhr left #gluster
14:05 jcastill1 joined #gluster
14:10 jcastillo joined #gluster
14:13 nishanth joined #gluster
14:13 Manikandan joined #gluster
14:14 shubhendu joined #gluster
14:16 haomaiwa_ joined #gluster
14:17 neofob joined #gluster
14:17 rafi joined #gluster
14:18 cyberbootje joined #gluster
14:23 papamoose left #gluster
14:40 LebedevRI joined #gluster
14:42 ekuric joined #gluster
14:54 DJClean joined #gluster
14:57 DJClean joined #gluster
14:58 LebedevRI joined #gluster
15:00 haomaiwa_ joined #gluster
15:08 aravindavk joined #gluster
15:14 cholcombe joined #gluster
15:18 _maserati joined #gluster
15:21 kanarip joined #gluster
15:22 squizzi joined #gluster
15:24 kshlm joined #gluster
15:26 CyrilPeponnet https://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.4/CentOS/epel-6/x86_64/ https cert just expired
15:26 CyrilPeponnet leading to yum errors
15:26 CyrilPeponnet any plans to renew this cert ?
15:27 CyrilPeponnet (*.gluster.org Not valid after Thursday, September 10, 2015 at 5:00:00 AM Pacific Daylight Time)
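For reference, one way to check the expiry date quoted above (plain openssl, nothing gluster-specific):
    echo | openssl s_client -connect download.gluster.org:443 2>/dev/null \
      | openssl x509 -noout -enddate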
15:27 csim crap, gonna renew it
15:27 jiffin joined #gluster
15:27 CyrilPeponnet :)
15:29 CyrilPeponnet I was a bit scared this morning when I saw that all my puppet-managed servers using glusterfs as backend storage were reporting puppet run failures :p
15:30 csim yeah, we should monitor that, when we have a monitoring system :/
15:32 CyrilPeponnet actually putting the end date into a shared calendar is a simple way :)
15:33 csim provided someone look
15:34 ramky joined #gluster
15:34 CyrilPeponnet sure
15:34 CyrilPeponnet or a bot plugin to check the end date and report here :p
15:35 csim I do not like bot, they might try to take over humanity
15:36 [Enrico] joined #gluster
15:36 CyrilPeponnet :p
15:36 nbalacha joined #gluster
15:37 rafi joined #gluster
15:44 LebedevRI joined #gluster
15:48 LebedevRI joined #gluster
15:50 LebedevRI joined #gluster
15:50 jwaibel joined #gluster
15:54 shyam joined #gluster
15:55 kshlm joined #gluster
15:56 kokopelli joined #gluster
15:59 plarsen joined #gluster
16:10 rich0dify joined #gluster
16:16 rich0dify hi guys, just been reading about glusterfs, is it possible to do a 3-node stripe/distribute, essentially creating some form of network raid5-like redundancy?
16:18 calavera joined #gluster
16:21 _maserati sort of. first misconception, glusterfs does not stripe. it distributes.
16:21 _maserati say you write two files to a 2-node glusterfs, there's a high probability that 1 file will go to 1 node, the other file will go to the other node.
16:22 _maserati But, yes, you can achieve raid5 (and BETTER) security with glusterfs with great throughput
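A sketch of the distinction _maserati is describing; hostnames and brick paths are hypothetical:
    # distributed: each whole file lands on exactly one of the bricks
    gluster volume create gvol-dist server1:/bricks/dist1 server2:/bricks/dist1
    # replicated: every file is stored in full on both bricks
    gluster volume create gvol-rep replica 2 server1:/bricks/rep1 server2:/bricks/rep1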
16:24 wonko joined #gluster
16:24 rich0dify what model/setting would I use to get a raid5-like build?
16:24 wonko ok, so I'm trying to wrap my brain around Gluster and the docs aren't being outwardly helpful (at least not that I've found)
16:25 _maserati_ joined #gluster
16:26 hagarth rich0dify: you can try the erasure coded volume type for raid 5 over the network
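A sketch of the erasure-coded (dispersed) volume hagarth suggests, with hypothetical hosts and brick paths — 3 bricks with 1 redundancy brick survives a single failure, roughly like RAID5:
    gluster volume create gvol-ec disperse 3 redundancy 1 \
      server1:/bricks/ec1 server2:/bricks/ec1 server3:/bricks/ec1
    gluster volume start gvol-ec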
16:26 wonko is the idea that I setup keepalived and have a VIP floating between all my servers? If so, what does that do, exactly? Are the all able to access everything everywhere?
16:26 wonko so: mount -t glusterfs VIP_IP:/vol /vol
16:27 wonko is the best way to do things?
16:28 _maserati_ yes
16:28 _maserati_ its what i do
16:29 _maserati_ just as long as your data is sufficiently available (replicated)
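The server named at mount time is only used to fetch the volfile; a mount sketch with hypothetical names, including the backup-volfile-servers option that can stand in for a VIP at that step:
    # via a keepalived VIP
    mount -t glusterfs VIP_IP:/vol /vol
    # or list fallback servers for the initial volfile fetch
    mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/vol /vol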
16:30 wonko oh, reading a bit more
16:30 wonko there is no separate metadata service, so they all can do it?
16:30 wonko nifty
16:30 wonko ok, so one more (possibly odd?) question
16:31 wonko can I set priority on servers?
16:31 wonko could I for instance put a couple nodes in a remote location for off-site
16:31 wonko but not use them by default?
16:31 dijuremo wonko: read about georeplication
16:32 rafi joined #gluster
16:32 wonko dijuremo: sweet, thanks!
16:32 wonko that's perfect
16:32 dijuremo http://blog.gluster.org/category/geo-replication/
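A hedged outline of the geo-replication setup covered in that article, with hypothetical master volume and slave host names:
    # on the master: generate and distribute the pem keys
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status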
16:33 anil_ joined #gluster
16:43 cc1 joined #gluster
16:54 kokopelli joined #gluster
16:54 rafi joined #gluster
16:54 bennyturns joined #gluster
16:55 jiffin joined #gluster
17:01 shyam joined #gluster
17:09 coreping joined #gluster
17:09 chirino joined #gluster
17:10 Rapture joined #gluster
17:19 kokopelli joined #gluster
17:21 RayTrace_ joined #gluster
17:29 shyam joined #gluster
17:40 skoduri joined #gluster
17:45 Manikandan joined #gluster
18:14 ayma_ joined #gluster
18:19 shyam joined #gluster
18:32 coreping joined #gluster
18:32 ghenry joined #gluster
18:37 julim joined #gluster
18:37 cc1 joined #gluster
18:39 julim_ joined #gluster
19:04 shaunm joined #gluster
19:05 calisto joined #gluster
19:10 _maserati joined #gluster
19:12 RayTrace_ joined #gluster
19:18 _maserati_ joined #gluster
19:19 surya joined #gluster
19:24 kanagaraj joined #gluster
19:31 shyam joined #gluster
19:32 hagarth joined #gluster
19:46 johndescs joined #gluster
19:57 ghatty joined #gluster
19:59 cc1 @JoeJulian: you pointed me to the geo rep script yesterday. Will it honor ssh keys or do we have to use root pw?
19:59 dijuremo cc1: see: http://blog.gluster.org/category/geo-replication/
20:00 dijuremo Talks about ssh keys... ;)
20:02 plarsen joined #gluster
20:02 calisto joined #gluster
20:05 cc1 @dijuremo: "It prompts for the Root's Password of Slave node specified in the command." Because of our security policy, the best I can do is PermitRootLogin by ssh key only, so that would fail if it requires password. I guess I could set PermitRootLogin, get keys generated then change back
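A sketch of the key-only root login cc1 mentions (assumes /etc/ssh/sshd_config and an older OpenSSH where the value is spelled "without-password"):
    sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin without-password/' /etc/ssh/sshd_config
    service sshd reload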
20:11 purpleidea wonko: o hai
20:12 wonko purpleidea: hola!
20:12 jwd joined #gluster
20:13 rotbeard joined #gluster
20:16 purpleidea so ?
20:17 purpleidea JoeJulian: do you know what semiosis has been up to lately? Still glustering? I haven't heard from him in a while...
20:17 wonko purpleidea: well I fixed the fact errors
20:17 wonko now on to why it's not actually doing anything. :)
20:17 purpleidea wonko: sweet... patch?
20:18 purpleidea 2 liner i think :)
20:18 wonko yeah
20:18 wonko I'll submit it
20:20 wonko travis is running now
20:25 calavera joined #gluster
20:28 wonko purpleidea: what is puppet::facter, does that come from puppet-puppet?
20:29 purpleidea wonko: yes
20:29 purpleidea patch merged btw
20:30 purpleidea g2g meeting, bbl
21:09 w00_ left #gluster
21:10 anuj joined #gluster
21:12 gildub joined #gluster
21:14 badone__ joined #gluster
21:22 _maserati wonko: sorry I couldn't answer earlier, had to go to a meeting. Just be aware of one thing: Geo-replication is UNIDIRECTIONAL. So no writing from the other side and expecting that data to come back over
21:23 _maserati It's the ONE thing I wish gluster had right now :)
21:23 _maserati it's on their to-do list, but i don't think anyone is tackling it right yet
21:24 _maserati I am however testing adding replicated bricks over the WAN, could always let you know how that performs (not expecting awesome performance)
21:38 sghatty joined #gluster
21:39 wonko _maserati: this is for OMGWTFBBQ disaster recovery though, so I guess that's ok. Can it be re-configured to push the other way in the case that failover were needed?
21:39 wonko _maserati: replicated bricks meaning just "normal" bricks?
21:39 wonko not some sort of geolocation thing?
21:39 _maserati 1. Not sure. You'll have to ask a more knowledgable source
21:40 _maserati 2. Replicated bricks means multiple bricks that share the same data
21:40 wonko right, so just the normal concept of a cluster only with a WAN link in between the "main" cluster and some remote bits
21:40 _maserati i.e. if i have node1:brick1 and node2:brick1 replicated, they have the same data
21:41 _maserati yeah, but the problem introduced here is... gluster does not ACK the write until all bricks have acknowledged the write... so by putting a replicated brick across the wan... my writes are going to have to wait for that other site to report back that the write was successful... every time...
21:41 wonko purpleidea: I think your docs need some tweaking. most of the modules marked 'optional' really aren't. :)
21:41 wonko for my use case that's probably ok
21:41 purpleidea wonko: could be, if so, lmk which
21:42 wonko purpleidea: i'll make a list as I get through it
21:42 purpleidea but i think the list is accurate
21:42 wonko _maserati: my only concern is reads. Can you exclude or somehow prioritize which nodes/bricks are read from?
21:42 _maserati i do believe the read will come from the fastest source, so it's a non-issue
21:43 wonko hmm, that *might* work
21:43 wonko I'll consider it
21:43 purpleidea wonko: i think the list of which are optional is correct. it's true that you might not have some functionality without those modules, but they can be optional
21:43 _maserati that's what im hoping for too
21:43 cc1 left #gluster
21:43 _maserati my writes don't need to be super fast, but my reads do
21:43 purpleidea wonko: for example, shorewall is recommended, but totally optional. to have it act that way you need to set shorewall => false in the gluster::simple or gluster::server class (depending on how you use the module)
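A one-off sketch of the parameter purpleidea is describing, applied from the shell (gluster::simple may need other parameters in a real setup):
    puppet apply -e "class { '::gluster::simple': shorewall => false }"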
21:43 _maserati i plan to have this all tested by tomorrow so drop by and ask me then
21:49 MisterFix joined #gluster
21:49 MisterFix Hey guys.  :)
21:51 _maserati hola
21:54 MisterFix Quick (hopefully) question...  :)  So I've got a simple distributed setup here... 6 bricks.  I want to add snapshot capabilities, so I've got another brick that I've done the whole thin-provisioned LVS thing with for snapshots. So. I add the 7th node in, and then do a remove-brick on the 6th node, as once it's out, I'll format it with LVS, and get it back on. Things go wrong ALL the time, though, through the remove-brick, as this
21:55 MisterFix There were growing pains with it, and there are problems in the .gluster directory, I think...  Broken links, etc.
21:55 _maserati Did you fix the extended attributes on the volume after the format!?
21:56 _maserati i learned about this yesterday!
21:56 MisterFix So after going through some files with remove-brick, it gets the error(s)  Fix layout failed for xxxxx and stops the remove-brick.
21:56 MisterFix I'm not sure.  :)  I don't know how to do that.  :P
21:56 JoeJulian Yeah, I've never yet been completely satisfied with the remove-brick implementation.
21:56 MisterFix I think I have to have it regenerate a bad .glusterfs directory...
21:57 badone_ joined #gluster
21:57 MisterFix Anyhow, when that happens, I have to go and look at that directory, and I really don't know how to fix it.  :P  Should I delete the .glusterfs directory with everything shut down?  I'm not sure what the implications of that are, though.  :D
21:57 MisterFix Basically, I just would like an easy way to get the files off of brick 6 (and all the others, frankly, but I'm doing one at a time).  :)
21:58 JoeJulian So the theory is that it creates a new volume definition, one in which the brick being removed is added but marked as retired. Then it does a rebalance. That should create a new layout, then any files that need to be moved to match that new layout get moved.
21:58 wonko purpleidea: then maybe better docs on what needs to be set to not have a module required. I'm going with a very basic setup and so far it's requiring puppet-puppet and puppet-yum.
21:59 JoeJulian Not sure why that seemingly simple process always seems to fail.
21:59 MisterFix Right, and the fix-layout is part of the rebalancing, right?
21:59 shyam joined #gluster
21:59 JoeJulian right
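The remove-brick sequence that drives the rebalance JoeJulian describes; volume and brick names are placeholders:
    gluster volume remove-brick gvol server6:/bricks/b1 start
    gluster volume remove-brick gvol server6:/bricks/b1 status
    # commit only once status reports the data migration as completed
    gluster volume remove-brick gvol server6:/bricks/b1 commit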
22:00 _maserati What exactly is the .glusterfs directory storing?
22:00 JoeJulian @lucky what is this new .glusterfs directory
22:00 glusterbot JoeJulian: https://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
22:00 MisterFix Hrm. I'm sure over the last few weeks of furious googling, I remember someone saying to shut all the bricks down, and delete the .glusterfs directory, and it would rebuild it after...  Is that right?  :P
22:00 JoeJulian It will, yes.
22:01 MisterFix And that would maybe fix the problem with all the fix-layout errors killing my remove-brick?  :D
22:01 * MisterFix crosses his fingers.
22:02 JoeJulian The layout is stored on the directory extended attributes.
22:02 JoeJulian @lucky dht misses are expensive
22:02 glusterbot JoeJulian: https://joejulian.name/blog/dht-misses-are-expensive/
22:02 JoeJulian You can see how they work in that article.
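To see the layout ranges that article describes, read the extended attributes directly on a brick directory (the brick path here is a placeholder; run as root on the brick, not the mount):
    getfattr -n trusted.glusterfs.dht -e hex /bricks/b1/some/dir
    # or dump all xattrs on that directory
    getfattr -m . -d -e hex /bricks/b1/some/dir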
22:03 MisterFix Haha awesome!  :)  I've been using your site for reference, didn't even realize I was talking to you!  :D
22:03 MisterFix Nice, thank you.  :)
22:03 wonko purpleidea: Invalid resource type shorewall::rule
22:03 JoeJulian You're welcome. :)
22:03 wonko i set shorewall false
22:03 wonko is there something else that's being missed?
22:03 _maserati ahh just a more efficient way to track balance
22:04 _maserati or consistency rather
22:05 _maserati should i be considered at all that I have 10x the amount of gfid's than actual files in my glusterfs? :X
22:06 _maserati s/considered/concerned/
22:06 glusterbot What _maserati meant to say was: should i be concerned at all that I have 10x the amount of gfid's than actual files in my glusterfs? :X
22:07 MisterFix Is there a script, out of curiosity, that would fix the "Fix layout failed" problem? :P
22:07 wonko (bots that do regex)++
22:07 glusterbot wonko: regex)'s karma is now 1
22:07 wonko (bots that don't do inclusion for factoids)--
22:07 glusterbot wonko: factoids)'s karma is now -1
22:07 wonko :)
22:07 _maserati you done messed up kid.....
22:07 _maserati wonko--
22:07 glusterbot _maserati: wonko's karma is now -1
22:07 wonko boo!
22:07 _maserati wonko++   =P
22:07 glusterbot _maserati: wonko's karma is now 0
22:08 wonko glusterbot is what bot?
22:08 _maserati dunno
22:08 JoeJulian supybot
22:08 wonko not familiar with that one
22:08 JoeJulian _maserati: if your /files/ in .glusterfs have < 2 links, then they're stale and can be removed.
22:09 _maserati ok
22:09 JoeJulian As in find .glusterfs -type f -links 1
22:10 MisterFix Yeah, I tried that...  But it didn't work.
22:10 JoeJulian MisterFix: I don't even know how it can fail, so I don't know any way to fix it.
22:10 purpleidea wonko: you have to use shorewall => false as i explained
22:11 wonko purpleidea: i had done that. maybe it's not reading hiera properly
22:11 sghatty Hi All: A question. I have trusted nodes, and two volumes gvol1 and gvol2 set up. I am running into the following error with the gluster volume nfs.export-dir option. Any pointers?
22:11 purpleidea for puppet-puppet to be optional you need to include_puppet_facter => false
22:11 sghatty [root@qint-tor01-c9 ~]# gluster volume set gvol2 nfs.export-dir '/share-8fcaa3be-8dfa-46ed-b05b-6f2848c61085(10.0.0.11)' volume set: failed: One or more connected clients cannot support the feature being set. These clients need to be upgraded or disconnected before running this command again
22:11 purpleidea wonko: print out the vars to be sure, and lmk
22:11 wonko purpleidea: yeah, let me get some debugging in there
22:12 purpleidea wonko: and once it all works, maybe you can add a docs patch about documenting how to make those two things optional by adding an FAQ entry :)
22:12 wonko absolutely!
22:12 MisterFix Alright, I'm going to hammer at this again tomorrow.  Take care, guys.  :)
22:12 MisterFix left #gluster
22:13 _maserati JoeJulian: well look at that, only 2 stale links
22:15 JoeJulian purpleidea: A docs patch about documenting?
22:15 purpleidea JoeJulian: yeah, nope, i suck at english...
22:15 purpleidea i meant an FAQ docs patch about the thing :)
22:15 _maserati Well clearly. We need a standard standard on standards
22:16 purpleidea _maserati: and a committee to elect the standards committee about committee standard
22:16 purpleidea s
22:17 _maserati circular logic, brain frying, send help
22:39 _Bryan_ joined #gluster
22:42 calavera joined #gluster
22:42 necrogami joined #gluster
22:47 wushudoin| joined #gluster
22:52 wushudoin| joined #gluster
23:20 calavera joined #gluster
23:35 Mr_Psmith joined #gluster
23:59 Mr_Psmith joined #gluster
