IRC log for #gluster-dev, 2015-06-03


All times shown according to UTC.

Time Nick Message
00:30 badone_ joined #gluster-dev
02:43 pranithk joined #gluster-dev
02:48 pranithk ndevos: Do you mind merging http://review.gluster.org/#/c/11048/
03:19 overclk joined #gluster-dev
03:28 kdhananjay joined #gluster-dev
03:46 nbalacha joined #gluster-dev
03:53 nbalacha joined #gluster-dev
03:53 shubhendu joined #gluster-dev
04:02 atinmu joined #gluster-dev
04:23 hagarth joined #gluster-dev
04:24 ashishpandey joined #gluster-dev
04:30 kanagaraj joined #gluster-dev
04:32 soumya joined #gluster-dev
04:36 poornimag joined #gluster-dev
04:36 sakshi joined #gluster-dev
04:40 ppai joined #gluster-dev
04:47 hgowtham joined #gluster-dev
04:47 atalur joined #gluster-dev
04:50 schandra joined #gluster-dev
04:54 Manikandan joined #gluster-dev
04:54 gem joined #gluster-dev
04:54 ashiq joined #gluster-dev
05:03 RajeshReddy joined #gluster-dev
05:06 nkhare joined #gluster-dev
05:10 jiffin joined #gluster-dev
05:10 Joe_f joined #gluster-dev
05:11 17WAB0EN6 joined #gluster-dev
05:12 ashiq- joined #gluster-dev
05:23 spandit joined #gluster-dev
05:30 vimal joined #gluster-dev
05:41 anekkunt joined #gluster-dev
05:54 deepakcs joined #gluster-dev
05:59 spandit joined #gluster-dev
06:01 Gaurav__ joined #gluster-dev
06:13 rafi joined #gluster-dev
06:13 hagarth joined #gluster-dev
06:16 raghu joined #gluster-dev
06:27 rgustafs joined #gluster-dev
06:28 atalur joined #gluster-dev
06:29 spalai joined #gluster-dev
06:34 nishanth joined #gluster-dev
06:36 pppp joined #gluster-dev
06:46 kdhananjay joined #gluster-dev
07:02 ashiq joined #gluster-dev
07:03 ashiq- joined #gluster-dev
07:05 Joe_f joined #gluster-dev
07:10 pranithk joined #gluster-dev
07:16 kdhananjay1 joined #gluster-dev
07:18 kdhananjay joined #gluster-dev
07:33 shubhendu_ joined #gluster-dev
07:37 shubhendu__ joined #gluster-dev
07:43 Joe_f joined #gluster-dev
08:01 shubhendu_ joined #gluster-dev
08:07 pranithk joined #gluster-dev
08:26 kdhananjay joined #gluster-dev
08:48 Joe_f joined #gluster-dev
08:49 atinmu joined #gluster-dev
08:57 nishanth joined #gluster-dev
09:02 Manikandan joined #gluster-dev
09:11 nishanth joined #gluster-dev
09:13 Joe_f joined #gluster-dev
09:48 poornimag joined #gluster-dev
10:04 schandra joined #gluster-dev
10:04 anekkunt hagarth, have a look at this patch http://review.gluster.org/#/c/10894/10 and please post your comments if you have any.
10:09 anekkunt atinmu,  could you review this patch http://review.gluster.org/#/c/10850/
10:10 atinmu anekkunt, in few minutes
10:11 anekkunt atinmu,  ok  .. thanks
10:21 nishanth joined #gluster-dev
10:35 ndevos hagarth: I did not see a "could you host today's meeting" request yet, so I assume you're hosting it then?
10:36 hagarth ndevos: I have been too busy to even ask that question :)
10:39 ndevos hagarth: I'm in meetings until ours starts; I hope it finishes early, but I cannot say anything about it yet
10:40 pranithk xavih: Have you run into any hangs when bricks are brought up/down on ec volumes?
10:41 pranithk xavih: Seems like we have some leaks too, I am not sure yet....
10:41 hagarth ndevos: OK, let us see if somebody can be on standby for this. atinmu, raghu, spot - can anybody run this if ndevos doesn't appear on time for the meeting?
10:41 atinmu can someone review http://review.gluster.org/11054 & http://review.gluster.org/#/c/11055/
10:42 atinmu hagarth, I am a bit doubtful
10:43 atinmu raghu, spot : how about you guys?
10:44 ndevos atinmu: even if you can just start it, that would help; I can continue if you cannot finish it
10:44 xavih pranithk: haven't seen hangs in my tests. Can you reproduce them with some test?
10:44 ndevos atinmu: I would only be a few minutes late, if at all
10:46 pranithk xavih: I am trying... The hang only happens if we up/down bricks, as per Bhaskar (QE); otherwise things are smooth
10:47 pranithk xavih: Will let you know if I find something.
10:48 pranithk xavih: I am reviewing one patch; after that I am going to run the following test: run parallel writes to the same file from two mount points, then periodically kill and restart bricks in a loop in the other terminal. Let me see if I find something with this...
10:49 xavih pranithk: ok. Let's see what happens...
10:50 lalatenduM joined #gluster-dev
10:52 lalatenduM joined #gluster-dev
10:57 kkeithley joined #gluster-dev
10:58 kkeithley left #gluster-dev
11:06 ira joined #gluster-dev
11:17 poornimag joined #gluster-dev
11:33 nbalacha joined #gluster-dev
11:35 nbalacha joined #gluster-dev
11:57 asengupt joined #gluster-dev
11:58 rafi1 joined #gluster-dev
11:58 asengupt_ joined #gluster-dev
11:58 ndevos hagarth, atinmu, raghu, spot: did you decide who's going to run the meeting?
12:00 hagarth ndevos: I can start+run the meeting till you come by
12:00 rafi joined #gluster-dev
12:01 ndevos hagarth: okay, just finishing some notes and I'll be there in a few minutes
12:02 lpabon joined #gluster-dev
12:13 atalur joined #gluster-dev
12:14 poornimag joined #gluster-dev
12:17 ppai joined #gluster-dev
12:25 rafi joined #gluster-dev
12:45 kdhananjay joined #gluster-dev
12:54 shyam joined #gluster-dev
13:04 spalai left #gluster-dev
13:12 kkeithley joined #gluster-dev
13:20 hagarth joined #gluster-dev
13:42 firemanxbr joined #gluster-dev
13:45 ndevos quick! kkeithley_bat, lalatenduM, hagarth, *: http://review.gluster.org/10803 passed regression tests
13:48 lalatenduM ndevos: I have seen the patch before, but did not understand much :)
13:52 firemanxbr joined #gluster-dev
13:53 hagarth ndevos: merged
13:58 ndevos lalatenduM: ok, thanks for checking anyway :)
13:58 ndevos hagarth: thanks!
14:01 dlambrig1 joined #gluster-dev
14:09 kkeithley joined #gluster-dev
14:10 hagarth ndevos: do you remember who posted the silicon valley gluster picture here?
14:11 ndevos hagarth: K...
14:11 * ndevos checks his logs
14:12 pppp joined #gluster-dev
14:12 ndevos hagarth: irclogs/2015-06-01.log:15:02 #gluster: < Kins_> Did anyone notice Gluster showing up in the show 'Silicon Valley'? https://i.imgur.com/CMe9TxK.png
14:14 hagarth wonder if we could use that image broadly
14:23 pranithk xavih: I found the reason for hang...
14:25 pranithk xavih: pm
14:33 ndevos hagarth: it is on imgur? how much more broadly can it get?
14:33 ndevos pranithk: oh, yes, don't tell us about that bug! ;-)
14:33 pranithk ndevos: yes! secret!
14:34 hagarth ndevos: tweet it?
14:34 ndevos hagarth: you, me, both, retweet?
14:34 hagarth ndevos: yes
14:34 * ndevos slaps pranithk
14:35 * pranithk cries
14:35 ndevos hagarth: make sure to include @SiliconHBO
14:36 * ndevos hands pranithk an ice cream <3
14:37 pranithk ndevos: (07:55:48 PM) pranithk: 1613    if ((version[0] != 0) || (version[1] != 0) ||
14:37 pranithk (07:55:48 PM) pranithk: 1614        (dirty[0] != 0) || (dirty[1] != 0)) {
14:37 pranithk (07:55:48 PM) pranithk: 1615            if (link->fop->id == GF_FOP_FLUSH)
14:37 pranithk (07:55:48 PM) pranithk: 1616                    GF_ASSERT (link->fop->state != EC_STATE_UNLOCK);
14:37 pranithk (07:55:48 PM) pranithk: 1617        ec_update_size_version(link, version, size, dirty);
14:37 pranithk (07:55:48 PM) pranithk: (gdb)
14:37 pranithk (07:55:59 PM) pranithk: That assert failed in ec_update_info
14:37 pranithk (07:56:12 PM) pranithk: dirty[0] was 2
14:37 pranithk (07:56:22 PM) pranithk: Now I am finding out when that can happen
14:37 pranithk (07:58:47 PM) xavih: this can happen if the lock used by flush is reused by other fops
14:37 pranithk (07:59:03 PM) xavih: I think it's a normal behavior. How does this cause a hang ?
14:37 pranithk (08:03:49 PM) pranithk: no, but here flush tried to unlock... i.e. that was the last one to unref
14:37 pranithk (08:04:53 PM) pranithk: at the time of unlock it shouldn't have any dirty[] differences.. I wonder where they came from
14:37 pranithk (08:05:10 PM) pranithk: flush should have already done update_size_version and should have a clean state...
14:37 pranithk ndevos: do you see the conversation?
14:38 ndevos pranithk: yes, thank you! we now have it in the logs, and hopefully someone else likes that too :)
14:38 pranithk ndevos: cool
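
For context, here is a minimal standalone sketch of the check being discussed. It is not the actual GlusterFS source: only the names ec_update_size_version, ec_update_info, GF_FOP_FLUSH, and EC_STATE_UNLOCK come from the fragment pasted above, and everything else is a simplified assumption. The idea is that pending version/dirty deltas found at unlock time still have to be flushed to the bricks, and the debugging assert models the invariant that a FLUSH fop should have synced its deltas before reaching the unlock state.

#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

enum fop_id    { GF_FOP_WRITE, GF_FOP_FLUSH };
enum fop_state { EC_STATE_UPDATE_SIZE_AND_VERSION, EC_STATE_UNLOCK };

struct fop {
    enum fop_id    id;
    enum fop_state state;
};

/* Stand-in for the real call that syncs pending xattr deltas to the bricks. */
static void ec_update_size_version(struct fop *fop, const uint64_t version[2],
                                   uint64_t size, uint64_t dirty[2])
{
    (void)fop;
    printf("flushing deltas: version={%" PRIu64 ",%" PRIu64 "} "
           "size=%" PRIu64 " dirty={%" PRIu64 ",%" PRIu64 "}\n",
           version[0], version[1], size, dirty[0], dirty[1]);
    memset(dirty, 0, 2 * sizeof(dirty[0]));  /* deltas are now clean */
}

/* Mirrors the pasted fragment: pending deltas found here are flushed, and
 * a FLUSH fop is asserted to have synced its deltas before the unlock. */
static void ec_update_info(struct fop *fop, uint64_t version[2],
                           uint64_t size, uint64_t dirty[2])
{
    if ((version[0] != 0) || (version[1] != 0) ||
        (dirty[0] != 0) || (dirty[1] != 0)) {
        if (fop->id == GF_FOP_FLUSH)
            assert(fop->state != EC_STATE_UNLOCK);
        ec_update_size_version(fop, version, size, dirty);
    }
}

int main(void)
{
    uint64_t version[2] = {0, 0};
    uint64_t dirty[2]   = {2, 2};        /* the dirty[0] == 2 seen in gdb */

    struct fop write_fop = { GF_FOP_WRITE, EC_STATE_UNLOCK };
    ec_update_info(&write_fop, version, 4, dirty);   /* fine: not a FLUSH */

    struct fop flush_fop = { GF_FOP_FLUSH, EC_STATE_UNLOCK };
    dirty[0] = dirty[1] = 2;
    ec_update_info(&flush_fop, version, 4, dirty);   /* trips the assert */
    return 0;
}

Compiled and run as-is, the second call trips the assert, mirroring the dirty[0] == 2 failure pranithk saw in ec_update_info.
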
14:38 hagarth spot: do you want to tweet about this - https://i.imgur.com/CMe9TxK.png ?
14:39 ndevos oh, yes, spot can tweet random pictures
14:39 xavih pranithk: when that happens, have_dirty is true ?
14:39 pranithk xavih: (gdb) p *lock->ctx
14:39 pranithk $11 = {bad = 0, inode_lock = 0x7fec18033d5c, have_info = _gf_true, have_config = _gf_true,
14:39 pranithk have_version = _gf_true, have_size = _gf_true, have_dirty = _gf_false, config = {version = 0,
14:39 pranithk algorithm = 0 '\000', gf_word_size = 8 '\b', bricks = 6 '\006', redundancy = 2 '\002',
14:39 pranithk chunk_size = 512}, pre_version = {1997, 1997}, post_version = {1997, 1997}, pre_size = 4,
14:39 pranithk post_size = 4, pre_dirty = {0, 0}, post_dirty = {2, 2}, heal = {next = 0x7fec20008800,
14:39 pranithk prev = 0x7fec20008800}}
14:40 ndevos pranithk: you could have answered "yes"
14:40 pranithk xavih: no
14:40 ndevos :D
14:40 pranithk ndevos: See now you know why I like pm :-D
14:40 pranithk ndevos: This is the structure xavih is looking for
14:41 ndevos pranithk: yeah, I guess he would be
14:41 pranithk xavih: I think I got it
14:41 pranithk xavih: gah! no
14:50 * kkeithley is pretty sure <3 isn't ice cream
14:52 ndevos kkeithley: <3 is overused!
14:52 ndevos well, maybe not, you cannot have enough <3
14:57 pranithk xavih: some more debug info in pm
14:57 xavih pranithk: I think it doesn't make sense that have_dirty = _gf_false. It should be _gf_true...
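
For context, a small sketch of the invariant xavih is pointing at, with field names taken from the gdb dump above and the types simplified assumptions: nonzero post_dirty deltas should imply have_dirty == _gf_true, so the captured state (have_dirty = _gf_false, post_dirty = {2, 2}) violates it.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Field names follow the gdb dump above; types are simplified assumptions. */
struct lock_ctx {
    bool     have_dirty;
    uint64_t pre_dirty[2];
    uint64_t post_dirty[2];
};

/* Nonzero pending dirty deltas should imply have_dirty is set. */
static bool ctx_dirty_consistent(const struct lock_ctx *ctx)
{
    bool has_deltas = (ctx->post_dirty[0] != 0) || (ctx->post_dirty[1] != 0);
    return !has_deltas || ctx->have_dirty;
}

int main(void)
{
    /* The state captured in gdb: have_dirty = _gf_false, post_dirty = {2, 2}. */
    struct lock_ctx ctx = { .have_dirty = false,
                            .pre_dirty  = {0, 0},
                            .post_dirty = {2, 2} };
    printf("consistent: %s\n", ctx_dirty_consistent(&ctx) ? "yes" : "no");
    return 0;
}

Run against the captured values, the check reports "consistent: no", which is the contradiction xavih describes.
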
14:58 jiffin joined #gluster-dev
14:58 deepakcs joined #gluster-dev
14:59 pranithk xavih: you got my pm with more debug info?
15:00 nbalacha joined #gluster-dev
15:10 kkeithley here is a double scoop for you <∞
15:46 soumya joined #gluster-dev
16:04 rafi joined #gluster-dev
16:06 pousley joined #gluster-dev
16:07 atinmu joined #gluster-dev
16:07 Gaurav__ joined #gluster-dev
16:09 rafi joined #gluster-dev
16:26 wushudoin| joined #gluster-dev
16:28 rafi1 joined #gluster-dev
16:32 wushudoin| joined #gluster-dev
17:23 atinmu joined #gluster-dev
17:27 hagarth joined #gluster-dev
17:37 atinmu joined #gluster-dev
17:48 dlambrig1 left #gluster-dev
17:57 jbautista- joined #gluster-dev
18:08 firemanxbr joined #gluster-dev
18:16 jbautista- joined #gluster-dev
18:18 pousley joined #gluster-dev
18:33 hgowtham joined #gluster-dev
18:33 ashiq joined #gluster-dev
19:00 dlambrig1 joined #gluster-dev
19:14 hgowtham_ joined #gluster-dev
19:14 ashiq- joined #gluster-dev
19:19 shyam joined #gluster-dev
19:20 wushudoin| joined #gluster-dev
19:25 wushudoin| joined #gluster-dev
19:42 rafi joined #gluster-dev
20:03 jbautista- joined #gluster-dev
20:17 shyam joined #gluster-dev
20:20 jbautista- joined #gluster-dev
20:27 hgowtham_ joined #gluster-dev
20:28 ashiq- joined #gluster-dev
20:53 badone_ joined #gluster-dev
20:53 dlambrig1 left #gluster-dev
21:00 kkeithley semiosis: ping. I did a `dput ppa:gluster/glusterfs-3.6 glusterfs_3.6.3-ubuntu1~trusty4_source.changes` from a trusty box. It ran successfully, but I'm not seeing that it built, or how to initiate a build.
21:11 kkeithley JustinClift, ndevos, tigert: see my email to gluster-infra
21:15 ndevos kkeithley: you don't have to check download stats manually anymore! http://projects.bitergia.com/redhat-glusterfs-dashboard/browser/downloads.html
21:16 csim I think the download stats had a problem (as I should fix the script), but maybe it was fixed on the bitergia side too
21:16 csim (the problem being "it put 1 month's worth of data in a file where it should be 1 week")
21:16 ndevos kkeithley: Jenkins always marks the slaves offline, but connects to them when needed
21:17 kkeithley not that email
21:17 * ndevos fetches email again
21:18 ndevos csim: oh, nice that you know how to get the bitergia things addressed :)
21:19 ndevos kkeithley: hmm, thats bad :-/
21:19 ndevos kkeithley: I think csim would want to know about that too
21:20 kkeithley I looked for misc; it didn't occur to me that it's backwards day
21:20 * ndevos had to think about that one a little too
21:21 csim ndevos: I am not sure I did anything; I said I would look but then went on PTO
21:21 csim it is likely not hard, but that's dealing with date, in bash :/
21:22 ndevos csim: oh, I only just noticed that we have download stats; I know kkeithley was checking manually every now and then
21:22 ndevos hmm, 6011746 downloads from one IP??
21:23 ndevos IN A MONTH
21:24 csim yep
21:25 csim someone mirrored the git snapshot rpm
21:25 csim some mirror in hungary
21:26 ndevos oh, cern maybe?
21:26 kkeithley what kind of mirror works that way?
21:27 ndevos ah, no, they are in Bulgaria...
21:27 hgowtham_ joined #gluster-dev
21:27 csim CERN is in Switzerland, no?
21:27 ndevos yes, but their new datacenter is in Bulgaria
21:27 csim kkeithley: potentially a misconfiguration, since it didn't happen afterwards
21:27 kkeithley Cern is on the French/Swiss border outside Geneva
21:27 ndevos their new and bigger is in a cheaper location ;-)
21:28 ndevos ...add DC somewhere in that sentence
21:28 kkeithley I would have sworn they told us their new data center was in the Czech Republic.
21:29 ndevos no, I don't think so
21:29 ndevos you probably labelled it "eastern europe"? ;-)
21:29 kkeithley one web site says it's in Budapest
21:31 csim so Hungary
21:34 ndevos hmm, so I'm wrong - oh well, somewhere in eastern Europe...
21:35 csim then, it could be them
21:36 kkeithley yeah, Budapest is usually in Hungary. ;-)
21:36 kkeithley It could be that other Budapest
21:37 csim I am sure we can find Fox News footage stating otherwise
21:37 kkeithley you misspelled Faux News
21:37 kkeithley I don't usually correct spelling mistakes, but this one I always do. ;-)
21:46 wushudoin| joined #gluster-dev
21:52 wushudoin| joined #gluster-dev
22:03 csim kkeithley1: the file you erased was created when?
22:30 ashiq joined #gluster-dev
22:36 hgowtham__ joined #gluster-dev
