IRC log for #gluster, 2013-02-25


All times shown according to UTC.

Time Nick Message
00:03 raven-np joined #gluster
00:10 partner joined #gluster
00:29 yinyin joined #gluster
00:56 vex hi #gluster friends. Anyone have any smart ideas for mounting a gluster share via ssh? Forwarding ports and trying to mount localhost gives me DNS resolution errors
00:56 vex (there's a good chance I'm forgetting something)
01:04 kevein joined #gluster
02:37 hagarth joined #gluster
02:38 raven-np joined #gluster
02:38 Humble joined #gluster
02:46 raven-np joined #gluster
03:02 pipopopo joined #gluster
03:08 pipopopo_ joined #gluster
03:11 eryc joined #gluster
03:11 eryc joined #gluster
03:33 bulde joined #gluster
03:49 vshankar joined #gluster
03:55 flrichar joined #gluster
03:59 vigia joined #gluster
04:01 hagarth joined #gluster
04:11 sgowda joined #gluster
04:17 sahina joined #gluster
04:20 JoeJulian vex: Can't be done. Well, it's always possible, but it would be extremely difficult. If you want a secure remote connection you'll probably have to use ipsec.
04:21 sripathi joined #gluster
04:49 anmol joined #gluster
05:01 _pol joined #gluster
05:09 ramkrsna joined #gluster
05:09 ramkrsna joined #gluster
05:13 deepakcs joined #gluster
05:16 vpshastry joined #gluster
05:31 lala joined #gluster
05:35 test_ joined #gluster
05:36 bala joined #gluster
05:43 rotbeard joined #gluster
05:44 glusterbot New news from newglusterbugs: [Bug 915153] auth.allow fails to resolve hostname <http://goo.gl/iPUR5>
05:45 rastar joined #gluster
05:49 mohankumar joined #gluster
05:55 overclk joined #gluster
06:00 Humble joined #gluster
06:05 raven-np joined #gluster
06:06 rastar joined #gluster
06:07 bulde1 joined #gluster
06:16 satheesh joined #gluster
06:21 ngoswami joined #gluster
06:39 raven-np joined #gluster
06:42 edong23 joined #gluster
06:43 guigui1 joined #gluster
06:44 raghu joined #gluster
06:51 Nevan joined #gluster
07:05 17WAA70PK joined #gluster
07:11 kanagaraj joined #gluster
07:14 aravindavk joined #gluster
07:21 jtux joined #gluster
07:21 sas joined #gluster
07:32 rgustafs joined #gluster
07:41 ThatGraemeGuy joined #gluster
07:43 sripathi joined #gluster
07:44 ctria joined #gluster
07:51 mohankumar joined #gluster
07:56 rastar joined #gluster
08:01 jtux joined #gluster
08:14 xavih joined #gluster
08:23 Staples84 joined #gluster
08:28 samu60 joined #gluster
08:34 sripathi joined #gluster
08:35 raven-np joined #gluster
08:37 sripathi joined #gluster
08:49 tryggvil joined #gluster
08:50 sas joined #gluster
08:56 shireesh joined #gluster
08:57 gbrand_ joined #gluster
08:57 Humble joined #gluster
09:04 tjikkun_work joined #gluster
09:06 cw joined #gluster
09:10 raven-np1 joined #gluster
09:11 hflai joined #gluster
09:13 tryggvil_ joined #gluster
09:15 rastar joined #gluster
09:15 sas joined #gluster
09:17 raven-np joined #gluster
09:18 sgowda joined #gluster
09:18 satheesh joined #gluster
09:19 shireesh joined #gluster
09:20 sahina joined #gluster
09:23 tryggvil joined #gluster
09:32 tryggvil joined #gluster
09:35 deepakcs joined #gluster
09:36 mohankumar joined #gluster
09:37 mohankumar joined #gluster
09:39 ramkrsna joined #gluster
09:39 ramkrsna joined #gluster
09:41 cyberbootje joined #gluster
09:46 hflai joined #gluster
09:48 Footur joined #gluster
09:48 Footur hello
09:48 glusterbot Footur: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:49 Footur is there a version of glusterfs with LTS?
09:53 samppah Footur: i think that's what Red Hat Storage is for
09:59 dobber_ joined #gluster
10:00 lkoranda_ joined #gluster
10:03 tryggvil joined #gluster
10:04 bulde joined #gluster
10:13 nocturn left #gluster
10:14 duerF joined #gluster
10:15 rastar joined #gluster
10:18 cyberbootje joined #gluster
10:19 timothy joined #gluster
10:22 sas joined #gluster
10:30 cyberbootje joined #gluster
10:35 17WAA71ZM joined #gluster
10:35 sgowda joined #gluster
10:38 anmol joined #gluster
10:40 cyberbootje1 joined #gluster
10:41 test_ joined #gluster
10:42 raven-np joined #gluster
10:59 anmol joined #gluster
11:03 glusterbot New news from resolvedglusterbugs: [Bug 836101] Reoccuring unhealable split-brain <http://goo.gl/FRmIs>
11:11 rgustafs joined #gluster
12:07 overclk joined #gluster
12:11 VSpike joined #gluster
12:13 tomsve joined #gluster
12:17 hagarth joined #gluster
12:19 spai left #gluster
12:27 hflai joined #gluster
12:27 raven-np joined #gluster
12:30 Staples84 joined #gluster
12:31 msvbhat joined #gluster
12:31 raven-np1 joined #gluster
12:31 edward1 joined #gluster
12:42 theron joined #gluster
12:45 hflai joined #gluster
12:46 andreask joined #gluster
12:53 bulde joined #gluster
12:55 hflai_ joined #gluster
12:59 deepakcs joined #gluster
13:01 joe- joined #gluster
13:03 hagarth joined #gluster
13:08 Staples84 joined #gluster
13:08 tryggvil joined #gluster
13:24 plarsen joined #gluster
13:25 dustint joined #gluster
13:28 sripathi joined #gluster
13:31 joe-_ joined #gluster
13:43 rgustafs joined #gluster
13:44 plarsen joined #gluster
14:01 bulde joined #gluster
14:03 guigui joined #gluster
14:10 joe- joined #gluster
14:10 aliguori joined #gluster
14:11 joehoyle joined #gluster
14:12 balunasj joined #gluster
14:30 wN joined #gluster
14:33 Staples84 joined #gluster
14:35 mohankumar joined #gluster
14:39 joe- joined #gluster
14:41 bennyturns joined #gluster
14:46 rastar left #gluster
14:46 glusterbot New news from newglusterbugs: [Bug 915329] Crash in glusterd <http://goo.gl/abe93>
14:46 Humble joined #gluster
14:49 disarone joined #gluster
14:49 vpshastry joined #gluster
14:54 stopbit joined #gluster
14:54 vpshastry left #gluster
14:55 aliguori joined #gluster
14:55 jruggiero joined #gluster
14:58 jruggiero joined #gluster
15:11 jruggiero joined #gluster
15:14 rodlabs joined #gluster
15:15 lpabon joined #gluster
15:16 guigui joined #gluster
15:21 ramkrsna joined #gluster
15:21 ramkrsna joined #gluster
15:26 tryggvil joined #gluster
15:33 nueces joined #gluster
15:34 bugs_ joined #gluster
15:39 __Bryan__ joined #gluster
15:45 Staples84 joined #gluster
15:50 jdarcy joined #gluster
15:55 lala joined #gluster
15:57 jbrooks joined #gluster
15:58 ThatGraemeGuy @later tell semiosis hey, just a quick update on that weird issue we've been looking at. unfortunately i've had to spend a lot of time on another issue today, but for the moment it seems that the ubuntu version (3.2.5-1ubuntu1) works OK on reboot in 12.04 and 12.04.1, but on 12.04.2 its a coin flip
15:58 glusterbot ThatGraemeGuy: The operation succeeded.
15:59 ThatGraemeGuy @later tell semiosis having said that, i must also say that i only rebooted the 12.04 and 12.04.1 VMs about 10 times. tomorrow i'm going to look at logging the mount status and rebooting 1 minute after bootup completes, then after several hours i'll have some meaningful stats
15:59 glusterbot ThatGraemeGuy: The operation succeeded.
16:00 ThatGraemeGuy @later tell semiosis as for your ppa version (3.3.1-ubuntu1~precise8), I had it fail on boot more often than not on 12.04, 12.04.1 and 12.04.2 :(
16:00 glusterbot ThatGraemeGuy: The operation succeeded.
16:01 satheesh joined #gluster
16:02 jag3773 joined #gluster
16:07 _pol joined #gluster
16:08 tqrst joined #gluster
16:11 satheesh1 joined #gluster
16:12 daMaestro joined #gluster
16:12 zaitcev joined #gluster
16:13 lpabon_ joined #gluster
16:14 semiosis :O
16:18 lpabon joined #gluster
16:19 semiosis johnmark: ping
16:21 tqrst Where is the pid file for the gluster self-heal daemon? glusterd is in /var/run/glusterd.pid, glusterfsd is in /var/lib/glusterd/vols/$volname/run/*.pid, but I can't find one for the self heal daemon. I am asking this because I want to rotate its logs with logrotate, and need to -HUP it.
16:22 tqrst (glustershd grew to ~1.5G over the weekend after I replaced a dead brick)
16:22 tqrst .log, that is
16:22 tqrst I guess the first question should be "does the gluster self-heal daemon accept -HUP to reopen log files?"
16:23 semiosis tqrst: logrotate copytruncate
16:24 tqrst semiosis: ah. I was going off the logrotate script created by the gluster rpm, which include a kill -HUP for glusterd
16:25 tqrst semiosis: so I can just use copytruncate and drop all kills?
16:25 semiosis you can
16:26 tqrst cool, thanks
16:26 semiosis yw
16:26 tqrst that simplifies my logrotate script greatly
16:30 tqrst according to the manpage, some data might be lost with that option, thogh
16:30 tqrst s/thogh/though
16:30 tqrst (there's a gap between the copy and the truncate)
16:32 semiosis tqrst: you can weigh the costs & decide if that is acceptable
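A minimal logrotate stanza along the lines semiosis suggests above, using copytruncate instead of a kill -HUP; the log path and rotation schedule here are assumptions for illustration, not taken from this discussion:

    # rotate the self-heal daemon log without signalling the process
    /var/log/glusterfs/glustershd.log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
        # copy then truncate in place; lines written between the copy
        # and the truncate can be lost, as noted in the man page
        copytruncate
    }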
16:33 lpabon joined #gluster
16:38 an joined #gluster
16:42 Humble joined #gluster
16:43 joehoyle joined #gluster
16:45 Mo___ joined #gluster
17:02 vshankar joined #gluster
17:02 jag3773 joined #gluster
17:06 hagarth joined #gluster
17:14 xian1 joined #gluster
17:19 Guest73841 left #gluster
17:22 _pol joined #gluster
17:23 _pol joined #gluster
17:27 Humble joined #gluster
17:27 _pol joined #gluster
17:28 andreask joined #gluster
17:28 _pol joined #gluster
17:29 cyberbootje joined #gluster
17:36 rotbeard joined #gluster
17:41 flrichar joined #gluster
17:41 cyberbootje joined #gluster
17:48 tziOm joined #gluster
17:48 tziOm How does gluster handle brics of different sizes and file-placement?
17:54 cyberbootje joined #gluster
18:09 Humble joined #gluster
18:21 aliguori joined #gluster
18:30 cw joined #gluster
18:35 glusterbot New news from resolvedglusterbugs: [Bug 831151] Self heal fails on directories with symlinks <http://goo.gl/U7BLt>
19:26 aliguori joined #gluster
19:26 ThatGraemeGuy joined #gluster
19:29 ThatGraemeGuy_ joined #gluster
19:33 joe- joined #gluster
19:38 andreask joined #gluster
19:45 r2 joined #gluster
19:46 r2 hey all, are there any good resources about how self-heal works in 3.3?
19:47 r2 i saw some very high cpu usage on one node this weekend, and i'm trying to figure out if it was self-heal or other.  any help is appreciated
19:58 layer7switch joined #gluster
20:02 joehoyle joined #gluster
20:10 _pol joined #gluster
20:11 _pol joined #gluster
20:24 cw joined #gluster
20:31 joehoyle joined #gluster
20:34 weplsjmas joined #gluster
20:40 JoeJulian r2: Nothing really. Your best bet is to check any client logs if the client showed the high usage, or the glustershd.log(s) if it was on servers.
20:44 r2 JoeJulian: thanks for the information.
20:46 r2 JoeJulian: I'm trying to decode my glustershd.log file from that time.  It's here: http://pastebin.com/CHiFU6i3
20:46 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
20:47 r2 fair enough, it's here: http://dpaste.org/yazdm/
20:47 glusterbot Title: dpaste.de: Snippet #219995 (at dpaste.org)
20:47 r2 would "server has not responded in the last 42 seconds" trigger a self-heal?
20:49 badone joined #gluster
21:00 lh joined #gluster
21:00 lh joined #gluster
21:16 Gilbs joined #gluster
21:18 glusterbot New news from newglusterbugs: [Bug 883785] RFE: Make glusterfs work with FSCache tools <http://goo.gl/FLkUA>
21:30 JoeJulian r2: yes. Looks like one of your servers rebooted, or you had a network problem: "Network is unreachable"
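The "42 seconds" in that message comes from the client-side ping timeout, network.ping-timeout, which defaults to 42 seconds; once it expires the brick is treated as down, and whatever changed while it was unreachable is what gets self-healed when it returns. The timeout is tunable per volume if needed (volume name below is a placeholder):

    gluster volume set myvol network.ping-timeout 42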
21:32 Gilbs Hey gang, when I set a volume option parameter on a gluster server, do I need to configure anything on a client to reflect the change?
21:34 fidevo joined #gluster
21:56 r2 JoeJulian: Interesting, so it looks like network issues caused self-heal.
21:56 r2 It seems surprising that System CPU usage (and not Disk wait/IO) was so high during self-heal.
22:03 cyberbootje joined #gluster
22:07 edward1 joined #gluster
22:12 ndevos joined #gluster
22:13 BSTR Hey guys, im benching a gluster config here in my lab with multi-site cascading geo-replication. Everything looks to be up and running here, but im unsure which host (preferably the first hop slave from the client) im writing to from the client's perspective. Is there an easy way to determine this information
22:13 flakrat joined #gluster
22:13 flakrat joined #gluster
22:14 BSTR running a quick tcpdump, im seeing connections to my two master nodes, and one slave (two slaves are configured in cascading fashion)
22:15 BSTR this is essentially what i have (except with two masters *site-A*):
22:15 BSTR https://access.redhat.com/knowledge/docs/resources/docs/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/images/Geo-Rep04_Cascading.png
22:15 glusterbot <http://goo.gl/jBdHQ> (at access.redhat.com)
22:15 flakrat With GlusterFS 3.3.1, what's the proper procedure to remove a server and it's bricks from a pool without losing the data contained on the bricks? The only volume is a simple distributed vol
22:16 flakrat gluster volume remove-brick doesn't migrate the data best I can tell
22:20 elyograg flakrat: if you add "start" to it, tht will kick off a rebalance that pulls the data off that brick.  then you run it with 'status' until it's done, then do it with 'commit'
22:20 flakrat elyograg, thanks
22:21 elyograg if your brick is around (or over) half full, then you will likely run into bug 862347 ... the workaround is to let the process fail, then start it again, over and over until it finally finishes without failure.
22:21 glusterbot Bug http://goo.gl/QjhdI medium, medium, ---, sgowda, ASSIGNED , Migration with "remove-brick start" fails if bricks are more than half full
22:21 elyograg s/your brick/your volume/
22:21 glusterbot elyograg: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
22:22 flakrat hmm, for this first server i'm in good shape, each brick is only 22% used
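A sketch of the remove-brick sequence elyograg describes, with placeholder volume and brick names:

    # begin migrating data off the brick being removed
    gluster volume remove-brick myvol server1:/export/brick1 start

    # watch progress; repeat until the operation reports completed
    gluster volume remove-brick myvol server1:/export/brick1 status

    # once migration has finished, drop the brick from the volume
    gluster volume remove-brick myvol server1:/export/brick1 commit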
22:27 tryggvil joined #gluster
22:39 Gilbs Where do I enable flush-behind, is this a volume option or a client option?
22:45 soukihei joined #gluster
22:47 BSTR can you just statically assign which nodes volfile you want to use and set this up in a series as opposed to using iptables?
22:49 JoeJulian r2: It was probably mostly due to hash comparisons. Very little data likely transferred.
22:51 JoeJulian BSTR: You'll write to the volume master. Mount the volume through a fuse client or over nfs and write to that.
22:53 BSTR JoeJulian : we are using the fuse client to mount up the volume, but i currently have the following schema: Master -> Slave -> Slave -> Client
22:53 BSTR JoeJulian : i would like to have my client write to the closest slave and replicate backwards towards the master
22:54 JoeJulian Gilbs: set performance.flush-behind on.
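flush-behind is a volume option: it is set once with the gluster CLI on a server and ends up in the volfile clients fetch, so a running fuse client should pick up the change without a remount. A minimal example with a placeholder volume name:

    gluster volume set myvol performance.flush-behind on

    # confirm it shows up under "Options Reconfigured"
    gluster volume info myvol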
22:55 BSTR JoeJulian : i.e. Master (US) -> Slave (EU) -> Slave (London) -> Client Machine
22:55 JoeJulian BSTR: Currently geo-replication is one way.
22:55 JoeJulian So you can read all you want from those slaves, but you have to write to the master (though the volume mount)
22:56 BSTR JoeJulian : The client can write to the volume, but it HAS to be written to the master and propagaed to the slaves
22:56 JoeJulian Correct
22:58 BSTR JoeJulian : This is for read only data
22:59 BSTR JoeJulian : i want to make sure that i am only reading from the closest slave, if that slave is down, i want to be able to failover to the next one and the next one..
22:59 JoeJulian Oh... interesting...
23:00 BSTR JoeJulian : this will allow sustainability in a given datacenter / region if we loose a link to the master somehow
23:00 JoeJulian I can think of good ways to do that when mounting the volume, but for failover during live operation... maybe if you mount with tcp and use a vip a ucarp?
23:01 JoeJulian s/a ucarp/and ucarp/
23:01 glusterbot What JoeJulian meant to say was: I can think of good ways to do that when mounting the volume, but for failover during live operation... maybe if you mount with tcp and use a vip and ucarp?
23:01 JoeJulian Or something like that...
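A rough, untested sketch of the VIP-plus-ucarp idea, reading "mount with tcp" as the gluster NFS server over tcp; interface, addresses, vhid and password are placeholders, and the up/down scripts are assumed to add and remove the VIP on the local interface:

    # on each read-only slave server
    ucarp -i eth0 -s 192.0.2.11 -v 10 -p s3cret -a 192.0.2.100 \
          --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh &

    # on the client, mount through the floating address
    mount -t nfs -o vers=3,proto=tcp 192.0.2.100:/slavevol /mnt/slavevol

The idea being that the floating address follows a live slave, so the client keeps a working endpoint even if the slave it was reading from goes away.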
23:02 twx joined #gluster
23:03 Gilbs JoeJulian: Thanks, that's what I had but it finally took after you said the command.  :)
23:08 BSTR JoeJulian : So if the slave that i statically set the client to read from goes down, the client will then attempt to read from the master? Is there a way i can set a 'cost' on the nodes to read from?
23:15 BSTR JoeJulian : is there a configuration file somewhere that tells the client where to read from? Im assuming this isnt just a round robin scenario..
23:16 Gilbs left #gluster
23:20 JoeJulian When the client mounts the volume, it mounts it from a server. That server sends it the volume information for that server's volume. You can see it by looking on the server it's mounting from in /var/lib/glusterd/vols/$volname/*fuse*.vol
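To see exactly which bricks a client will be handed, you can read that fuse volfile on the server it mounts from: each brick appears as a protocol/client section naming a remote-host and remote-subvolume. Volume name below is a placeholder:

    grep -A 4 'type protocol/client' /var/lib/glusterd/vols/myvol/*fuse*.vol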
23:22 DWSR joined #gluster
23:22 DWSR joined #gluster
23:24 ehg joined #gluster
23:27 BSTR JoeJulian : i only see my master hosts in there, and it appears that my client is only reading from my master
23:28 BSTR JoeJulian : however, this was mounted off the slave
23:29 JoeJulian When you created that cascading geo-rep, you created volumes on each slave. That slave should have a volume definition that only shows it's own bricks, which is what your client would get.
23:29 cyberbootje joined #gluster
23:31 cyberbootje joined #gluster
23:34 cyberbootje joined #gluster
23:36 BSTR JoeJulian : when geo-rep was setup, i established the ssh keys and mounted with the following: gluster volume geo-replication <vol name> {slave host}:{mount point} start
23:37 BSTR JoeJulian : status shows OK, but are you saying i should have created new bricks on these slave hosts?
23:41 cyberbootje joined #gluster
23:43 JoeJulian Typically in that tree'd setup, each intermediate master has it's own volume, so you're syncing from master(0) volume to slave(1) volume, master(1) volume to slave(2) volume, etc. where each slave becomes the master for the next iteration of slaves.
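A sketch of that tree, assuming the 3.3-style host::volume slave form and placeholder names; each intermediate site has its own volume, which is the geo-rep slave of the level above and the master for the level below, and clients at the edge mount their local volume read-only:

    # US master volume -> EU slave volume
    gluster volume geo-replication usvol eu-host::euvol start

    # EU volume, acting as master for the next hop -> London slave volume
    gluster volume geo-replication euvol london-host::londonvol start

    # client in London mounts the nearby slave volume, read-only
    mount -t glusterfs -o ro london-host:/londonvol /mnt/data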
23:44 JoeJulian And now I'm getting so bogged down in language, this is the point where I need a whiteboard and about another space hour in my day. ;)
23:44 JoeJulian s/space/spare/
23:44 glusterbot What JoeJulian meant to say was: And now I'm getting so bogged down in language, this is the point where I need a whiteboard and about another spare hour in my day. ;)
23:45 cyberbootje1 joined #gluster
23:46 JoeJulian Finally, a good reason for a video tutorial. Maybe I'll do that tonight.
23:46 JoeJulian I've been looking for an excuse to use my new HD webcam.
23:51 Humble joined #gluster
23:53 BSTR JoeJulian : i think i understand what your saying -- I'm going to tear this setup down and re-try this from scratch
