IRC log for #gluster-dev, 2015-03-18

All times shown according to UTC.

Time Nick Message
00:34 badone_ joined #gluster-dev
00:40 badone__ joined #gluster-dev
00:40 bala joined #gluster-dev
00:48 topshare joined #gluster-dev
02:19 topshare joined #gluster-dev
02:26 topshare joined #gluster-dev
02:36 topshare joined #gluster-dev
02:47 topshare joined #gluster-dev
03:26 kdhananjay joined #gluster-dev
03:33 shubhendu joined #gluster-dev
03:45 itisravi joined #gluster-dev
03:54 rafi joined #gluster-dev
04:04 atinmu joined #gluster-dev
04:18 bala joined #gluster-dev
04:20 Apeksha joined #gluster-dev
04:22 ppai joined #gluster-dev
04:26 rjoseph joined #gluster-dev
04:30 kanagaraj joined #gluster-dev
04:38 soumya_ joined #gluster-dev
04:39 anoopcs joined #gluster-dev
04:41 jiffin joined #gluster-dev
04:44 Apeksha joined #gluster-dev
04:45 Apeksha joined #gluster-dev
04:51 punit_ joined #gluster-dev
04:54 hagarth joined #gluster-dev
05:00 ppp joined #gluster-dev
05:15 kshlm joined #gluster-dev
05:18 Manikandan joined #gluster-dev
05:18 Manikandan_ joined #gluster-dev
05:18 Manikandan_ left #gluster-dev
05:18 mikedep333 joined #gluster-dev
05:19 lalatenduM joined #gluster-dev
05:21 nkhare joined #gluster-dev
05:28 deepakcs joined #gluster-dev
05:28 shubhendu joined #gluster-dev
05:34 Manikandan joined #gluster-dev
05:34 Apeksha joined #gluster-dev
05:42 gem joined #gluster-dev
05:52 gem joined #gluster-dev
06:01 raghu joined #gluster-dev
06:01 soumya joined #gluster-dev
06:16 kdhananjay joined #gluster-dev
06:20 soumya joined #gluster-dev
06:24 overclk joined #gluster-dev
06:47 rjoseph joined #gluster-dev
06:52 hagarth rjoseph: ping, have you reviewed http://review.gluster.org/#/c/9750/ ?
06:54 rjoseph hagarth: I reviewed it and gave comments as well. The comments have been addressed. I will give +1
06:54 kdhananjay joined #gluster-dev
06:55 rjoseph hagarth: there are a few enhancements identified, which can be taken up later.
06:55 hagarth rjoseph: ok, I will merge after your +1.
06:55 rjoseph hagarth: sure, thanks
06:59 vimal joined #gluster-dev
06:59 dlambrig joined #gluster-dev
07:04 rjoseph hagarth: done
07:08 hagarth rjoseph: merged, thanks!
07:09 rjoseph hagarth: thanks :)
07:09 rjoseph hagarth++
07:09 glusterbot rjoseph: hagarth's karma is now 43
07:27 rjoseph joined #gluster-dev
07:41 anrao joined #gluster-dev
08:13 topshare joined #gluster-dev
08:15 ashiq joined #gluster-dev
08:21 hgowtham joined #gluster-dev
08:35 topshare joined #gluster-dev
08:38 hagarth joined #gluster-dev
08:44 kaushal_ joined #gluster-dev
08:47 topshare_ joined #gluster-dev
08:49 shubhendu joined #gluster-dev
08:59 lalatenduM joined #gluster-dev
09:04 atinmu joined #gluster-dev
09:04 topshare joined #gluster-dev
09:08 ppai joined #gluster-dev
09:12 topshare joined #gluster-dev
09:16 sankarshan_ joined #gluster-dev
09:21 misc so slave27 was cleaned
09:21 misc and I have no easy access to slave 20
09:25 topshare_ joined #gluster-dev
09:30 atinmu joined #gluster-dev
09:31 ashiq joined #gluster-dev
09:38 kaushal_ joined #gluster-dev
09:39 bala1 joined #gluster-dev
09:46 rjoseph joined #gluster-dev
09:47 lalatenduM_ joined #gluster-dev
09:48 Manikandan joined #gluster-dev
09:50 atinmu joined #gluster-dev
09:51 overclk joined #gluster-dev
09:56 rjoseph hagarth: can you merge the snapshot scheduler patch? http://review.gluster.org/#/c/9788/ The regression is passing and we have the ack
09:56 hagarth rjoseph: will do soon
09:56 rjoseph thanks
09:58 sankarshan joined #gluster-dev
09:59 topshare joined #gluster-dev
10:05 Manikandan joined #gluster-dev
10:05 pranithk joined #gluster-dev
10:12 kaushal_ joined #gluster-dev
10:15 badone joined #gluster-dev
10:17 ppai joined #gluster-dev
10:17 hagarth ndevos: ping, do you happen to know if regression is queued for http://review.gluster.org/#/c/9365/ ?
10:18 hagarth ndevos: would you be able to host the community meeting today? I have a conflict for the first 30 minutes at least.
10:20 hagarth ndevos: never mind about 9365, just queued one more right away
10:22 topshare joined #gluster-dev
10:27 atinmu joined #gluster-dev
11:03 ira joined #gluster-dev
11:03 firemanxbr joined #gluster-dev
11:06 atinmu joined #gluster-dev
11:13 ppai joined #gluster-dev
11:17 ndevos hagarth... I won't be able to host it today, sorry!
11:17 ndevos JustinClift: could you? ^
11:17 overclk joined #gluster-dev
11:20 raghu joined #gluster-dev
11:23 rafi1 joined #gluster-dev
11:24 kkeithley_ ndevos: I hope you get your visa today.
11:24 ndevos kkeithley_: you're not the only one!
11:24 ndevos kkeithley_: if JustinClift cannot host the meeting in 35 minutes, could you?
11:25 kkeithley_ yes
11:26 hagarth joined #gluster-dev
11:28 ndevos kkeithley++ thanks!
11:28 glusterbot ndevos: kkeithley's karma is now 56
11:29 ndevos hagarth: I won't be able to host the meeting, but kkeithley_ or JustinClift can
11:29 hagarth ndevos: great, I see kkeithley_ updating the agenda
11:29 ndevos hagarth: yeah, JustinClift has not responded yet, but kkeithley_ is awake :)
11:30 hagarth ndevos: I am also waiting for JustinClift to clean up various slaves :)
11:30 ndevos kkeithley_: http://review.gluster.org/9924 actually passed regression tests! wohoo |o/
11:30 ndevos \o|
11:30 * ndevos waves
11:31 kkeithley_ bazinga
11:32 * kkeithley_ dances a little happy dance
11:32 atinmu joined #gluster-dev
11:33 kkeithley_ victory is ours
11:34 ndevos hagarth: did you know that you can reboot slaves if they are misbehaving? http://build.gluster.org/job/reboot-vm/build
11:34 ndevos hagarth: you may want to retrigger the aborted/failed jobs after doing that
11:35 rafi joined #gluster-dev
11:37 kkeithley_ Gluster Community Meeting in 25 minutes in #gluster-meeting
11:40 hgowtham joined #gluster-dev
11:46 hagarth ndevos: cool, good to know this!
11:48 overclk joined #gluster-dev
11:54 nishanth joined #gluster-dev
11:58 kkeithley_ Gluster Community Meeting starting now in #gluster-meeting
11:59 anoopcs joined #gluster-dev
12:06 jdarcy joined #gluster-dev
12:10 atinmu joined #gluster-dev
12:11 overclk joined #gluster-dev
12:16 rjoseph joined #gluster-dev
12:17 Manikandan joined #gluster-dev
12:28 Apeksha joined #gluster-dev
12:43 * JustinClift gets online finally
12:43 JustinClift Bleargh
12:52 shubhendu joined #gluster-dev
12:58 hagarth joined #gluster-dev
13:02 nkhare joined #gluster-dev
13:21 _Bryan_ joined #gluster-dev
13:28 soumya joined #gluster-dev
13:32 JustinClift Ugh.  Slaves are in a _bad_ state :(
13:34 kkeithley_ Nuke 'em from orbit. It's the only way to be sure. ;-)
13:38 kshlm joined #gluster-dev
13:41 JustinClift I'm tempted
13:41 JustinClift Having trouble with Rackspace login though.  OCSP https cert error
13:41 topshare joined #gluster-dev
13:41 JustinClift Pinging them via IRC now :/
13:48 Apeksha joined #gluster-dev
14:15 kdhananjay joined #gluster-dev
14:15 nishanth joined #gluster-dev
14:20 bala1 joined #gluster-dev
14:21 wushudoin| joined #gluster-dev
14:27 rafi joined #gluster-dev
14:29 atinmu joined #gluster-dev
14:48 Apeksha joined #gluster-dev
14:56 kkeithley_ ndevos: are you back from getting your visa?
15:00 ndevos kkeithley_: almost, still travelling
15:02 kkeithley_ you got it?
15:03 ndevos kkeithley_: yes, more or less, I can pick it up next week or so
15:04 ndevos I should be back home in ~20 minutes, and then I'll book my flight
15:08 * ndevos gets out of the train, ttyl!
15:12 JustinClift Converted 2 of the downed regression VMs into pure smoke testers and rpm builders
15:12 JustinClift That should clear out the smoke / rpm backlog in an hour or two
15:12 JustinClift Let's see about the other weird-state VMs next
15:19 pcaruana joined #gluster-dev
15:22 kkeithley_ the topic of NetBSD and FreeBSD makes /me wonder when we last tried building on MacOS ;-)
15:24 ndevos JustinClift: could you verify that sqlite(-devel) is available on the FreeBSD systems?
15:28 kkeithley_ and libacl-devel
15:31 overclk joined #gluster-dev
15:31 atinmu joined #gluster-dev
15:31 hagarth kdhananjay: do you intend to refresh the sharding patchset again today?
15:32 kdhananjay hagarth: pranithk told me he is reviewing my patch and has found an illegal memory access bug with dict_set_static_bin()
15:32 hagarth kdhananjay: ok
15:33 kdhananjay hagarth: And he is yet to review writev, which he would complete today
15:33 hagarth kdhananjay: ok, let us get that in with all known limitations today
15:33 kdhananjay hagarth: Besides, the patch needs a rebase after the changes to glusterd-volume-set.c and configure.ac that went in recently
15:33 hagarth kdhananjay: ok
15:33 kdhananjay hagarth: upcall patches i think
15:33 hagarth kdhananjay: right
15:34 overclk hagarth, I've sent the BitRot patches with the BZ id
15:34 hagarth overclk: noted. let us await the regression run and merge if there are no new failures.
15:34 overclk hagarth, for the glusterd patches Atin has provided +1.
15:35 overclk Sure hagarth. Thanks!
15:35 hagarth overclk: ok, cool.
15:40 topshare joined #gluster-dev
15:46 ndevos JustinClift: I think kkeithley_ also has some FreeBSD skillz ;)
15:49 JustinClift Good point :)
15:49 JustinClift ndevos: Have you helped Joe F get into slave48?
15:49 * JustinClift just noticed a bunch of old messages from him, asking me for access
15:49 ndevos JustinClift: yeah, he got in there
15:49 JustinClift Cool
15:50 JustinClift ndevos: We use the same password for all the slaves so it's pretty easy for the devs to log in
15:50 JustinClift ndevos: We could do it with ssh public keys too
15:50 ndevos JustinClift: not sure if the system is still marked offline in Jenkins, but I think he should be done with it
15:50 JustinClift I'll email him to ask
15:51 ndevos JustinClift: I did not expect a single password for all, especially not since the jenkins user has sudo power
15:52 ndevos JustinClift: if you could provision some ssh-keys for access instead, that would allow us some tracking of who did what
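A minimal sketch of the per-developer key provisioning ndevos suggests above, assuming one account per developer on each slave (the user name and key file below are hypothetical, not taken from the log):

    # run as root on the slave
    useradd -m jdoe                                          # hypothetical developer account
    mkdir -p /home/jdoe/.ssh
    cat jdoe_id_rsa.pub >> /home/jdoe/.ssh/authorized_keys   # hypothetical public key file
    chmod 700 /home/jdoe/.ssh
    chmod 600 /home/jdoe/.ssh/authorized_keys
    chown -R jdoe:jdoe /home/jdoe/.ssh

With separate accounts, the sshd and sudo logs record which developer ran what, which is the kind of tracking being asked for here.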
15:55 kanagaraj joined #gluster-dev
15:58 JustinClift ndevos: Well, things are kinda set up hastily... ;)
15:58 JustinClift And I'm working on plans for Next version of stuff
15:58 JustinClift So, going to leave it until then
15:59 JustinClift Which shouldn't be far off
15:59 JustinClift eg 2-3 weeks at most
15:59 ndevos JustinClift: sure, that's fine, and when there are scripts in a gerrithub repo that would allow contributions, we can improve them
15:59 JustinClift Since we need to migrate before the Google OpenID problem bites us
16:00 deepakcs joined #gluster-dev
16:00 ndevos I do not think we use Google OpenID for access to the slaves? that is only for Gerrit, right?
16:01 * JustinClift nods
16:02 JustinClift We need to migrate Gerrit to a new auth solution in the next few weeks anyway, which is what I'm working through
16:02 JustinClift Along with upgrading it + chucking it on a new server
16:03 JustinClift We'll prob use Salt, but I'm not up to that yet.  Just trying to make it work locally first, and then I can chat with Misc about salting that
16:11 Manikandan joined #gluster-dev
16:13 kshlm joined #gluster-dev
16:15 JustinClift Gah... I don't remember how to backtrace a corefile :(
16:18 hagarth JustinClift: gdb <binary> <core>
16:18 hagarth bt at the gdb prompt
16:21 JustinClift hagarth: I'm trying to work out which of the binaries matches
16:21 Hanefr joined #gluster-dev
16:21 hagarth JustinClift: file <core> will give you the binary
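A minimal sketch of the workflow hagarth describes above, assuming a core file at /core.12345 and a matching glusterfs binary installed under /build/install (both paths are illustrative, not taken from the log):

    # 'file' reports which executable produced the core
    file /core.12345

    # load that binary together with the core and print backtraces
    gdb /build/install/sbin/glusterfs /core.12345
    (gdb) bt                      # backtrace of the crashing thread
    (gdb) thread apply all bt     # backtraces of all threads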
16:22 JustinClift Guessing it's likely the current ones on the system, and not the ones in the archived logs area
16:22 Hanefr Anybody active in here?  I tried posting a question on the #gluster channel and there are 236 empty seats apparently.
16:23 Hanefr left #gluster-dev
16:23 JustinClift hagarth: http://fpaste.org/199626/69580514/
16:24 JustinClift I think Joe or Shyam posted instructions a while back on how to set up the environment correctly for getting proper backtraces
16:24 * JustinClift could have sworn we were going to add that to the output of regression failures
16:24 hagarth overclk: ^^ crash seems to be related to bitd
16:25 shyam JustinClift: yes we do add that to the output :)
16:26 JustinClift shyam: Not seeing it here? http://build.gluster.org/job/rackspace-regression-2GB-triggered/5553/console
16:26 * shyam checks
16:26 JustinClift Am I looking at the wrong job
16:26 JustinClift ?
16:27 shyam JustinClift: there were no cores reported in that run
16:27 shyam oh wait
16:27 shyam yup, no cores reported in that run; the message only appears if there were cores
16:27 JustinClift Ahhh
16:27 JustinClift Gotcha
16:27 JustinClift I just needed to keep looking
16:28 JustinClift Found the info here: https://forge.gluster.org/gluster-patch-acceptance-tests/gluster-patch-acceptance-tests/blobs/master/regression.sh
16:28 shyam JustinClift: yup, that works as well ;)
16:31 atinmu joined #gluster-dev
16:35 hchiramm_ joined #gluster-dev
16:36 JustinClift hagarth: Here's another one: http://fpaste.org/199630/69656014/  It doesn't mention bitd
16:36 JustinClift They both do mention "event=RPC_CLNT_DISCONNECT" though
16:36 gem joined #gluster-dev
16:36 JustinClift So maybe they're related to the race condition that shyam mentioned a while back
16:36 JustinClift ?
16:37 hagarth JustinClift: related to bitrot nevertheless
16:37 JustinClift k. :)
16:37 hagarth bitd, scrub etc... keywords for bitrot :)
16:37 shyam JustinClift: Do you have the regression failure numbers, so that I can grab the tarball and check?
16:37 shyam the core tarball i.e
16:40 kanagaraj joined #gluster-dev
17:02 JustinClift shyam: Sorry, was replying to email
17:02 JustinClift I still haven't gotten to my main tasks today... ugh
17:03 JustinClift shyam: Just log into slave 20
17:03 JustinClift It's all there?
17:03 shyam JustinClift: ok, in a bit... is that fine?
17:03 JustinClift shyam: Yep
17:04 JustinClift shyam: The slave is disconnected from jenkins, so no urgency
17:04 shyam ok good... on another setup at the moment...
17:04 JustinClift We have enough VMs online that they'll get through the queue eventually themselves.  1 more won't make a huge difference ;)
17:09 JustinClift hagarth: It seems like it's from this CR: http://review.gluster.org/#/c/9915/
17:09 JustinClift Which seems to fit
17:09 JustinClift It's for the first revision of the CR though, and there have been a few since
17:12 JustinClift Yep, this is the failed (aborted) regression run for it: http://build.gluster.org/job/rackspace-regression-2GB-triggered/5450/consoleFull
17:16 overclk hagarth, sorry was away..
17:17 JustinClift overclk: 9915 seems to have generated a bunch of core files
17:17 JustinClift Big ones, mostly filled the disk for slave20. ;)
17:18 JustinClift overclk: Is that a known thing for CR 9915, first revision?
17:18 overclk JustinClift, 9915 generates the volfiles. I see in the fpaste above it's the daemon that segfaulted.
17:19 JustinClift k
17:19 JustinClift overclk: Both fpastes same thing?
17:20 JustinClift http://fpaste.org/199626/69580514/   and   http://fpaste.org/199630/69656014/
17:20 overclk JustinClift, seems similar "server->list.next == &ctx->cmd_args.volfile_servers" in the bt
17:20 JustinClift So, not really CR 9915's fault? :)
17:20 overclk JustinClift, one for each of the bitrot daemons.
17:21 JustinClift There are a bunch more cores on the box; those are just the two I grabbed bt's from
17:21 * JustinClift isn't sure if there's more he needs to do here
17:22 JustinClift Actually, maybe let's just wait for Shyam to log into the box later, and see if this is the race condition he mentioned a few weeks ago
17:22 overclk JustinClift, OK. If you want I can too have a look.
17:22 JustinClift It's slave20.  Cores in /, and installed binaries in /build/ area
17:32 hagarth overclk: can you withdraw your -2 for http://review.gluster.org/#/c/9683/?
17:33 overclk hagarth, Sure. Shall I still keep it -1?
17:34 overclk hagarth, as I've not reviewed the latest patchset.
17:34 overclk hagarth, if you have, then I can default it to 0.
17:34 hagarth overclk: have reviewed, lgtm
17:35 overclk hagarth, OK. resetting...
17:36 overclk done
17:36 hagarth overclk: thanks! merging that now.
17:38 xavih joined #gluster-dev
17:46 shyam overclk: Are you investigating the cores on slave20?
17:46 overclk shyam, nope. logged in but tied up elsewhere...
17:46 shyam overclk: k
17:59 shyam JustinClift: All cores are from either scrub or bitd volfile-based glusterfs processes and have the same point of failure; mailed this to overclk for him to take a look at
18:03 soumya joined #gluster-dev
18:08 JustinClift shyam: Thanks. :)
18:09 JustinClift shyam: Am I'm ok to clean up the cores and put slave20 back online?
18:09 JustinClift Well, "back in service"
18:30 lalatenduM joined #gluster-dev
18:33 shyam JustinClift: Did the regression run capture the core tarballs? So that overclk does not need them to debug?
18:39 hagarth joined #gluster-dev
18:47 JustinClift shyam: Nope
18:48 shyam JustinClift: Hmmm... maybe we want to leave it for overclk to take a look at?
18:48 JustinClift Sure
18:48 shyam Or create the tarball by hand and leave it aside?
18:49 JustinClift It'd be a large tarball (GBs), so I'm inclined to just leave slave20 there for overclk to look at in his own time ;)
18:50 shyam JustinClift: ok :)
18:54 misc take a snapshot of the VM and start a clone?
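A sketch of the by-hand tarball option shyam mentions above, assuming the cores sit in / on slave20 (the destination path is hypothetical):

    # bundle the cores into one dated archive; tar strips the leading '/' from member names
    tar -czf /archives/slave20-cores-$(date +%Y%m%d).tar.gz /core.*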
19:03 shyam left #gluster-dev
19:36 JustinClift misc: Not that urgent ;)
19:42 lpabon joined #gluster-dev
19:54 shyam joined #gluster-dev
20:10 JustinClift misc: Btw, when you say slave23 disappeared... any idea when?
20:11 ndevos anyone know if Malaysia Airlines is any good?
20:11 JustinClift misc: I suspect it's probably a case of "Justin nuked the VM a while ago and got distracted before making its replacement"
20:11 JustinClift ndevos: Do you like living?
20:11 JustinClift ndevos: Does your gf have a large life insurance payout on you?
20:12 JustinClift ndevos: You know those aircraft in SE Asia that crashed or just plain disappeared...?
20:12 JustinClift Pretty sure they were both Malaysia Airlines :(
20:12 ndevos JustinClift: ah, right, thats why its cheap!
20:12 * ndevos picks Emirates instead
20:13 JustinClift ;)
20:13 ndevos lol, and they are even cheaper... I wonder if there is a catch somewhere
20:17 misc JustinClift: no idea
20:19 JustinClift ndevos: If RH is paying... http://www.airlinequality.com/StarRanking/5star.htm ?
20:20 JustinClift Heh, Malaysia Airlines is on that list as “Under Review” ;)
20:43 ndevos JustinClift: Emirates should be good :)
20:44 primusinterpares joined #gluster-dev
20:52 misc http://munin.gluster.org/munin/ quite interesting, just seeing which slaves are full is a color-matching exercise
20:52 JustinClift ndevos: Ahhh, this one's yours: http://build.gluster.org/job/rackspace-regression-2GB/1043/console
20:53 JustinClift Well, kinda yours
20:53 JustinClift misc: That's useful.  Looks like slave40 is about to croak
20:54 ndevos JustinClift: is it?
20:54 JustinClift ndevos: Which question?
20:54 JustinClift Slave40 is out of disk space, but that's not related to the hung job I posted a few lines up
20:54 ndevos JustinClift: the regression test, and I just sent a fix for it, nuke the test
20:55 JustinClift ndevos: Cool. :)
20:55 kkeithley_ Emirates is inexpensive because they refuel in Dubai, where fuel is cheap.
20:55 ndevos well, I had almost posted a fix for it... seems it needs a rebase *again*
20:55 JustinClift Heh
20:55 ndevos kkeithley_: ah, right, thats the big difference!
20:56 JustinClift I've aborted the test and am putting the VM back in service for other things to use
20:57 ndevos thanks
20:59 JustinClift np
21:02 JustinClift Disk space cleaned up on slave40
21:02 JustinClift It was the bitrot stuff making corefiles again
21:02 JustinClift It's a known problem (as of today), so nuked the files and rebooted it
21:12 JustinClift kkeithley_: Btw, with the regression spurious failures... I feel pretty strongly that we should get it down to 0 spurious failures before we release 3.7
21:12 JustinClift that's on all the branches we care about ;)
21:13 ndevos JustinClift: yes we should, and I think things are improving already :)
21:13 JustinClift It should then mean a better dev experience for post 3.7, a less buggy release for 3.7 itself, and perhaps even less support overhead for downstream
21:13 ndevos kkeithley_: please review http://review.gluster.org/9937
21:13 JustinClift ndevos: Cool
21:16 JustinClift misc: You ok to add the munin box to the Jenkins_Infrastructure page?
21:19 misc JustinClift: yeah, will do
21:23 misc in fact, just did
21:25 ndevos JustinClift: http://build.gluster.org/job/rackspace-regression-2GB-triggered/5531/console doesn't look good, I think?
21:26 * ndevos hopes http://build.gluster.org/job/rackspace-regression-2GB-triggered/5598/console passes the next few tests, then he can sleep well tonight
21:28 ndevos JustinClift: btw, when the cores have been collected in a tarball, why are there still cores in /core.* when a new regression test is started?
21:29 JustinClift ndevos: I have no idea
21:29 JustinClift ndevos: The theory is that it's not supposed to happen
21:29 JustinClift But obviously theory isn't up to the task and needs to be beaten
21:30 ndevos JustinClift: hmm, okay, theory doesn't meet practice here
21:30 JustinClift Yep
21:30 ndevos JustinClift: any reason to not delete them when starting a regression test?
21:30 JustinClift Not that I can think of
21:30 ndevos at the moment, they get counted... which is not really needed IMHO
21:31 JustinClift Yeah
21:31 ndevos if the core is of one of the gluster* processes, we don't know what changeset it was anyway, and we cannot load it in gdb without the right binaries
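A sketch of the cleanup step being discussed, as it might look at the top of the regression script linked earlier (this is an assumption about regression.sh, not its actual contents):

    # stale cores from earlier runs cannot be matched to the current changeset,
    # so remove them before the run starts and before any new cores are counted
    rm -f /core.*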
21:32 ndevos I hope you have a TODO list :)
21:37 JustinClift ndevos: I call it Inbox :/
21:38 JustinClift ndevos: When looking at the cores and trying to figure out where they're from, it's not impossible
21:38 JustinClift They're date stamped as to when they're created
21:38 ndevos I call Inboxes overrated, if you have less than 2000 emails in your inbox you're a control feak
21:38 ndevos *freak even
21:38 misc 2000 unread ?
21:38 JustinClift Well, I have 70 odd folders (1 per mailing list)
21:39 JustinClift My "Inbox" is anything that's not for a mailing list
21:39 ndevos no, only 1405 unread, so that means I marked about 700 as "to follow up"
21:39 JustinClift eg I'm directly part of the conversation (CC or To)
21:39 ndevos and yes, those are non-mailinglist emails
21:39 JustinClift I also have a Done folder, that I move messages into when they're completed
21:40 misc so I have 412 filters for my personal mailbox
21:40 ndevos oh, I have a 00_TODO folder too for the important ones... no idea whats in there :-/
21:40 misc in fact, a bit more, as I used to have filters dedicated to rewriting the subject of some bugzilla mail to be more readable
21:41 JustinClift 412 filters
21:42 JustinClift wow
21:42 JustinClift I thought 70 was overdoing it
21:42 JustinClift ;)
21:42 ndevos well, not 700 emails for "to follow up", some of them are threads that would only need one email
21:42 ndevos I'm not going to count filters, the Zimbra webui is not so comfortable for that
21:42 * JustinClift tends to ignore threads over about 8 emails - they never get read - unless the subject line makes it clear it's something I really care about
21:43 JustinClift ;)
21:43 ndevos My 'Done' is split per year, and I call it Archive.$YEAR; those are the mails I might need to refer to later
21:43 JustinClift Ahh
21:43 JustinClift Yeah
21:44 JustinClift I'm using Apple Mail, and the "across all folders" search functionality in it is very quick and powerful (and easy to use), so I stopped dividing stuff up that way a while ago.
21:44 JustinClift It has other drawbacks, but that's not one of them ;)
21:44 JustinClift Gah, I'm getting distracted
21:44 jobewan joined #gluster-dev
21:44 * JustinClift focuses on stuff
21:45 ndevos I used to run Thunderbird, but that does not cope well with much email, not I'm using mutt and notmuch for searching
21:45 ndevos s/not/now/
21:45 ndevos (the 2nd 'not')
21:46 ndevos wohoo, tests/basic/exports_parsing.t passed, I can now rest peacefully until tomorrow
21:46 ndevos cya!
21:47 JustinClift 'nite :)
21:53 JustinClift Hmmmm, is Harsha on holiday or something?
21:53 * JustinClift just realised I haven't seen him around for a while
21:53 JustinClift Gah, I'd better sign off for the night too
21:54 wushudoin| joined #gluster-dev
22:18 badone joined #gluster-dev
23:39 dlambrig left #gluster-dev
