
IRC log for #gluster-dev, 2013-09-13


All times are shown in UTC.

Time Nick Message
00:20 awheeler joined #gluster-dev
00:20 hagarth joined #gluster-dev
01:38 awheeler joined #gluster-dev
02:10 asias joined #gluster-dev
02:19 bharata-rao joined #gluster-dev
02:27 awheeler joined #gluster-dev
03:19 kshlm joined #gluster-dev
03:27 awheeler joined #gluster-dev
03:27 kanagaraj joined #gluster-dev
03:32 shubhendu joined #gluster-dev
03:49 itisravi joined #gluster-dev
04:02 ajha joined #gluster-dev
04:27 awheeler joined #gluster-dev
04:27 jbrooks joined #gluster-dev
04:29 badone joined #gluster-dev
04:31 an joined #gluster-dev
04:31 lalatenduM joined #gluster-dev
04:43 ppai joined #gluster-dev
05:27 awheeler joined #gluster-dev
05:30 ndarshan joined #gluster-dev
05:30 ababu joined #gluster-dev
05:34 bala joined #gluster-dev
05:39 badone joined #gluster-dev
05:41 awheeler joined #gluster-dev
05:45 mohankumar joined #gluster-dev
05:46 bala joined #gluster-dev
05:49 awheeler joined #gluster-dev
05:57 aravindavk joined #gluster-dev
06:20 vshankar joined #gluster-dev
06:25 rgustafs joined #gluster-dev
06:40 vshankar joined #gluster-dev
06:54 raghu joined #gluster-dev
07:21 puebele joined #gluster-dev
07:38 lalatenduM joined #gluster-dev
07:42 puebele1 joined #gluster-dev
07:48 vshankar joined #gluster-dev
08:38 asias joined #gluster-dev
08:49 ababu joined #gluster-dev
08:55 odc left #gluster-dev
08:57 an joined #gluster-dev
09:03 ndarshan joined #gluster-dev
09:05 ababu joined #gluster-dev
09:12 sac joined #gluster-dev
09:30 hagarth joined #gluster-dev
09:39 vshankar joined #gluster-dev
09:58 bulde joined #gluster-dev
10:32 edward2 joined #gluster-dev
11:19 hagarth joined #gluster-dev
11:32 bulde joined #gluster-dev
11:38 vshankar joined #gluster-dev
11:38 aravindavk joined #gluster-dev
11:50 kanagaraj joined #gluster-dev
12:11 ndevos johnmark: what do you think of something like "Who wrote GlusterFS-3.4?" http://review.gluster.org/5912
12:12 * ndevos is not volunteering to write that, but getting the stats seems interesting :)
12:13 an joined #gluster-dev
12:16 hagarth ndevos: very nice!
12:17 ndevos hagarth: its just playing around a little, if there is interest for that, file a bug and assign it to me ;-)
12:18 hagarth ndevos: sure, we should have a dashboard for the community that exposes various stats
12:18 ndevos hagarth: that would be nice, but that'll be too much for me to work on :-/
12:19 hagarth yeah, I will get to it in my abundant spare time ;)
12:19 ndevos hagarth: maybe the forge.gluster.org software has a plugin for that already?
12:19 hagarth seriously, we could look at pulling together some statistics .. would be fun to track that.
12:20 hagarth need to check if there's something readily available
12:20 ndevos yes, statistics are fun (sometimes), and johnmark likes to see who and what companies contribute
12:22 hagarth ndevos: on the brick failure monitoring patch, is the number of stat failures configurable?
12:23 ndevos hagarth: no, one failure normally means the filesystem is in an unrecoverable state
12:24 ndevos I wonder how stat() can fail and the fs still come back again
12:25 hagarth ndevos: ENOMEM?
12:25 ndevos on a stat()!? not sure, would the rest of the system not be in a worse state?
12:26 hagarth it is possible that you got an ENOMEM and the kernel was able to reclaim memory later
12:27 ndevos note that there is no timeout on the stat() itself; if reading from the disk hangs forever (SCSI timeout not hitting?), the issue may go undetected for a while
12:27 hagarth but it usually is a degraded mode of operation
12:28 ndevos well, an ENOMEM check can be added, but I was thinking about improving the design overall
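
The check being discussed here could treat ENOMEM as a transient condition instead of declaring the brick dead on the first stat() failure. A minimal sketch of that idea in C follows; the function name and return convention are hypothetical and not taken from the actual brick-failure-monitoring patch:

    #include <errno.h>
    #include <stdio.h>
    #include <sys/stat.h>

    /* Returns 0 if the brick looks healthy, -1 if it should be
       treated as failed. */
    static int
    brick_health_check (const char *brick_path)
    {
            struct stat st;

            if (stat (brick_path, &st) == 0)
                    return 0;       /* the filesystem answered */

            if (errno == ENOMEM)
                    return 0;       /* possibly transient: the kernel may
                                       reclaim memory, retry on next pass */

            /* EIO, ENOTCONN, ...: the backing filesystem is most
               likely in an unrecoverable state */
            fprintf (stderr, "stat(%s) failed with errno=%d\n",
                     brick_path, errno);
            return -1;
    }
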
12:28 hagarth ndevos: should we consider an aio_read with a timeout?
12:30 ndevos hagarth: I was thinking more about a xlator->watchdog() function: one thread that executes those in a delayed loop and updates a timestamp on success, one thread that checks the timestamps
12:31 hagarth ndevos: yeah, that is one possibility too.
12:31 ndevos that way it is possible to add health-checkers for other xlators (bd?) and throw a failure in case any reading just starts to hang
12:32 ndevos like with multipath+queue_if_no_path
12:34 hagarth yeah, we would need checkers for multiple backends (posix/bd/<anything else that we evolve>).
12:35 ndevos yes, and I think it's ugly to have them implement their own threading
12:35 hagarth yeah .. a generic re-usable framework would be elegant.
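
A rough sketch of the xlator->watchdog() idea, under the assumption that each backend (posix, bd, ...) only supplies a check callback and all threading lives in one generic framework; every name below is hypothetical. One worker thread runs the check in a delayed loop and stamps the time on success; a supervisor thread flags a stale stamp, which also catches a check that hangs forever (e.g. a stat() blocked on a dead disk) and therefore never returns an error:

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    typedef struct watchdog {
            const char *name;               /* e.g. "posix", "bd" */
            int       (*check) (void *);    /* backend-specific health check */
            void       *data;
            time_t      last_ok;            /* stamped on every success */
    } watchdog_t;

    #define CHECK_INTERVAL  5   /* seconds between check runs */
    #define STALE_AFTER    30   /* stamp older than this => hung or failing */

    static void *
    watchdog_worker (void *arg)
    {
            watchdog_t *wd = arg;

            for (;;) {
                    /* if check() blocks forever, last_ok simply stops
                       advancing and the supervisor notices */
                    if (wd->check (wd->data) == 0)
                            wd->last_ok = time (NULL);
                    sleep (CHECK_INTERVAL);
            }
            return NULL;
    }

    static void *
    watchdog_supervisor (void *arg)
    {
            watchdog_t *wd = arg;

            for (;;) {
                    if (time (NULL) - wd->last_ok > STALE_AFTER)
                            fprintf (stderr, "%s: health check stale, "
                                     "marking brick as failed\n", wd->name);
                    sleep (CHECK_INTERVAL);
            }
            return NULL;
    }

    /* trivial example check, standing in for a real posix/bd checker */
    static int
    always_ok (void *data)
    {
            (void) data;
            return 0;
    }

    int
    main (void)
    {
            watchdog_t wd = { "posix", always_ok, NULL, time (NULL) };
            pthread_t   worker, supervisor;

            pthread_create (&worker, NULL, watchdog_worker, &wd);
            pthread_create (&supervisor, NULL, watchdog_supervisor, &wd);
            pthread_join (worker, NULL);    /* runs forever */
            return 0;
    }

In a real implementation last_ok would need a lock or an atomic, and the supervisor would walk a list of registered watchdogs rather than a single one. The point of the design is that backends never create threads of their own, which is the generic re-usable framework mentioned above.
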
12:51 ndarshan joined #gluster-dev
12:55 awheeler joined #gluster-dev
12:55 hagarth joined #gluster-dev
13:01 vshankar joined #gluster-dev
13:03 bulde joined #gluster-dev
13:05 hagarth1 joined #gluster-dev
13:10 ababu joined #gluster-dev
13:19 lbalbalba joined #gluster-dev
13:41 lpabon joined #gluster-dev
13:41 ababu joined #gluster-dev
14:02 Technicool joined #gluster-dev
14:02 mohankumar joined #gluster-dev
15:41 lpabon joined #gluster-dev
16:06 an joined #gluster-dev
16:37 bulde joined #gluster-dev
16:47 hagarth joined #gluster-dev
16:55 bulde joined #gluster-dev
17:05 hagarth joined #gluster-dev
17:22 bulde joined #gluster-dev
18:09 an__ joined #gluster-dev
19:25 edward1 joined #gluster-dev
20:29 lkoranda joined #gluster-dev
21:11 sac`away` joined #gluster-dev
21:12 johnmark_ joined #gluster-dev
21:21 jbautista- joined #gluster-dev
