
IRC log for #gluster-dev, 2015-03-11


All times shown according to UTC.

Time Nick Message
00:51 topshare joined #gluster-dev
01:01 soumya joined #gluster-dev
01:16 soumya joined #gluster-dev
01:27 topshare joined #gluster-dev
01:33 topshare joined #gluster-dev
01:38 pranithk joined #gluster-dev
01:42 hagarth joined #gluster-dev
01:52 pranithk left #gluster-dev
02:47 ilbot3 joined #gluster-dev
02:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
03:07 pranithk joined #gluster-dev
03:20 nkhare joined #gluster-dev
03:40 lalatenduM__ joined #gluster-dev
03:47 ppai joined #gluster-dev
03:50 itisravi joined #gluster-dev
04:05 hagarth joined #gluster-dev
04:10 bala joined #gluster-dev
04:10 atinmu joined #gluster-dev
04:11 rafi joined #gluster-dev
04:13 lalatenduM___ joined #gluster-dev
04:13 lalatenduM__ joined #gluster-dev
04:16 ndevos JustinClift: do we need to keep build.gluster.org/job/rackspace-regression-2GB-triggered/5070 running?
04:17 kanagaraj joined #gluster-dev
04:21 lalatenduM___ joined #gluster-dev
04:28 spandit joined #gluster-dev
04:32 gem joined #gluster-dev
04:33 anoopcs joined #gluster-dev
04:33 jiffin joined #gluster-dev
04:34 ndevos hmm, this looks scary, only failures? http://build.gluster.org/job/rackspace-regression-2GB-triggered/buildTimeTrend
04:36 jiffin joined #gluster-dev
04:36 nishanth joined #gluster-dev
04:45 anoopcs joined #gluster-dev
04:56 kshlm joined #gluster-dev
04:57 lalatenduM joined #gluster-dev
05:11 ndarshan joined #gluster-dev
05:18 Apeksha joined #gluster-dev
05:18 deepakcs joined #gluster-dev
05:19 nishanth joined #gluster-dev
05:19 Apeksha_ joined #gluster-dev
05:24 Manikandan joined #gluster-dev
05:24 Manikandan_ joined #gluster-dev
05:24 ashiq joined #gluster-dev
05:35 kdhananjay joined #gluster-dev
05:42 ppp joined #gluster-dev
05:46 Manikandan_ joined #gluster-dev
05:51 itisravi kshlm: KP has sent a patch for the bug-918437-sh-mtime.t failure.
06:01 kshlm itisravi, I didn't know that.
06:02 soumya joined #gluster-dev
06:08 vimal joined #gluster-dev
06:45 bala joined #gluster-dev
06:51 atinmu joined #gluster-dev
07:42 lalatenduM hchiramm, what link would you suggest to learn more about the qcow2 disk format?
07:51 atinmu joined #gluster-dev
08:06 _shaps_ joined #gluster-dev
08:11 soumya joined #gluster-dev
08:14 aravindavk joined #gluster-dev
08:18 topshare joined #gluster-dev
08:23 topshare joined #gluster-dev
08:31 anrao joined #gluster-dev
08:37 raghu joined #gluster-dev
08:38 hgowtham joined #gluster-dev
08:49 anoopcs joined #gluster-dev
08:55 topshare joined #gluster-dev
08:57 pranithk joined #gluster-dev
09:01 topshare joined #gluster-dev
09:05 topshare joined #gluster-dev
09:13 itisravi_ joined #gluster-dev
09:18 deepakcs lalatenduM, you want to learn the qcow2 internals?
09:19 deepakcs lalatenduM, http://www.burtonsys.com/qcow-image-format.html (by Mark, a Red Hatter, IIRC)
09:25 rafi in rhel what
09:26 lalatenduM deepakcs, thanks
09:27 lalatenduM deepakcs++
09:27 glusterbot lalatenduM: deepakcs's karma is now 2
09:27 deepakcs lalatenduM, wc
09:33 itisravi_ joined #gluster-dev
09:39 hchiramm deepakcs, lalatenduM, this might help as well: http://website-humblec.rhcloud.com/understanding-qcow2-image-header-verifying-corruption/
09:40 hchiramm :)
09:40 lalatenduM hchiramm++ cool
09:40 glusterbot lalatenduM: hchiramm's karma is now 20
09:40 hchiramm lalatenduM, !
09:41 lalatenduM hchiramm, for writing the blog :)
09:41 hchiramm lalatenduM, yep.. thanks :)
09:41 hchiramm bit of hexdump stuff
09:42 deepakcs hchiramm, :)
09:42 hchiramm :)
09:43 deepakcs hchiramm, so does the check command only verify the integrity of the qcow2 header, or does it also detect data corruption?
09:45 _shaps_ joined #gluster-dev
09:45 hchiramm deepakcs, AFAIR, the check command checks the clusters as well
09:45 hchiramm the L1, L2 clusters
09:45 deepakcs hchiramm, ok, I guess it just checks the checksum or something to deduce data corruption
09:46 hchiramm well, if the header itself is corrupted, the references to the clusters won't work..
09:46 hchiramm deepakcs, that's true.
09:46 hchiramm http://website-humblec.rhcloud.com/kvm-guest-reports-io-errors-and-filesystem-goes-readonly-how-to-troubleshoot-or-track-it-using-systemtap/
09:46 hchiramm deepakcs, ^^^ :)
09:46 deepakcs hchiramm, header corruption will be caught by matching the header contents with the known header fields, right?
09:47 hchiramm header corruption will be verified by calculating the cluster allocation and its pointers..
09:47 hchiramm so if the references are corrupted, it basically leads to dangling data
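The header check being discussed can be sketched in a few lines: the qcow2 header is big-endian, opens with the "QFI\xfb" magic, and carries the cluster size and the L1 table location that all cluster references hang off. A minimal Python sketch, following the published qcow2 layout (the check_header name and the command-line usage are just for illustration):

    import struct
    import sys

    QCOW2_MAGIC = 0x514649FB  # the "QFI\xfb" signature at offset 0

    def check_header(path):
        """Read the fixed part of the qcow2 header and sanity-check it."""
        with open(path, "rb") as f:
            hdr = f.read(48)
        if len(hdr) < 48:
            raise ValueError("file too short to be a qcow2 image")
        magic, version = struct.unpack(">II", hdr[0:8])
        if magic != QCOW2_MAGIC:
            raise ValueError("not a qcow2 image (bad magic)")
        cluster_bits, = struct.unpack(">I", hdr[20:24])
        virtual_size, = struct.unpack(">Q", hdr[24:32])
        l1_size, = struct.unpack(">I", hdr[36:40])
        l1_offset, = struct.unpack(">Q", hdr[40:48])
        print("version:", version, "cluster size:", 1 << cluster_bits)
        print("virtual size:", virtual_size, "L1 entries:", l1_size,
              "L1 table offset:", hex(l1_offset))

    if __name__ == "__main__":
        check_header(sys.argv[1])

If the magic or the L1 offset is damaged, every cluster reference follows a bad pointer, which is the dangling-data failure mode described above; a full consistency pass (what qemu-img check does) additionally walks the L1/L2 tables and the refcounts.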
10:32 ashiq joined #gluster-dev
11:14 mikedep333 joined #gluster-dev
11:17 dlambrig joined #gluster-dev
11:49 jimjag joined #gluster-dev
11:50 jimjag left #gluster-dev
11:52 atinmu joined #gluster-dev
11:57 itisravi joined #gluster-dev
11:57 JustinClift ndevos: Nope, regression test 5070 doesn't need to be running.  Thought I'd killed it already. :/
11:58 soumya joined #gluster-dev
11:59 JustinClift *** REMINDER: Weekly Gluster Community meeting starts in #gluster-dev in 1 minute ***
12:05 itisravi_ joined #gluster-dev
12:24 jflf joined #gluster-dev
12:25 jflf left #gluster-dev
12:31 _shaps_ joined #gluster-dev
12:32 kshlm joined #gluster-dev
12:33 kshlm How do you override Gluster Build System's Verified-1 on gerrit?
12:34 kshlm I'm trying to merge https://review.gluster.org/9851 , but cannot as Gluster Build System has a -1.
12:34 kshlm hagarth, JustinClift, any ideas?
12:35 kshlm I've seen some reviews getting the flags overruled.
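For reference, a hedged sketch of one way such a vote can be overruled: casting a fresh Verified vote through Gerrit's REST API. This assumes an account that is allowed to vote on the Verified label and an HTTP password generated in Gerrit's settings (digest auth was what Gerrit used in this era); whether a +1 actually unblocks submit depends on how the label function is configured:

    import requests
    from requests.auth import HTTPDigestAuth

    GERRIT = "https://review.gluster.org"
    CHANGE = "9851"  # the change discussed above

    # Post a review on the current revision, voting Verified +1.
    resp = requests.post(
        GERRIT + "/a/changes/" + CHANGE + "/revisions/current/review",
        auth=HTTPDigestAuth("username", "http-password"),
        json={"message": "overriding the build system's vote",
              "labels": {"Verified": 1}},
    )
    resp.raise_for_status()
    print("vote cast:", resp.status_code)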
12:38 firemanxbr joined #gluster-dev
12:43 hchiramm_ joined #gluster-dev
12:43 misc kshlm: there is a meeting going on, FYI :)
12:46 ppp joined #gluster-dev
12:51 ndevos kshlm: I'm just going to some talk at the conference, I can explain that later - or ask JustinClift
12:54 kshlm ndevos, I'll ask you later. I'm leaving for home now.
12:56 misc JustinClift: what do you think of the plan to replace download.gluster.org with 2 servers / VMs and have them using gluster in the backend to share stuff?
12:58 itisravi_ Does someone know why http://build.gluster.org/job/compare-bug-version-and-git-branch/3898/console says "BUG id 1200764 belongs to '' and not 'GlusterFS'" though it does?
13:00 JustinClift itisravi_: I've seen that behaviour before when the bot tries to get non-public info out of BugZilla
13:00 JustinClift itisravi_: There might be other situations it fails in too
13:00 JustinClift itisravi_: I haven't looked at the bug in question... is it public?
13:01 JustinClift misc: Personally, I wouldn't really use a non-quorum setup with GlusterFS for production / mission critical stuff
13:01 itisravi_ JustinClift: yup
13:01 hagarth joined #gluster-dev
13:02 JustinClift itisravi_: Um, are there details on who looks after the compare-bug-version-and-git-branch scripting? Maybe who wrote it?
13:02 JustinClift misc: eg I'd use at least 3 servers
13:02 JustinClift misc: And really, we're introducing failure points when we don't need to.  Unless there's a capacity issue we need to consider, which means we need to use it?
13:03 kanagaraj joined #gluster-dev
13:04 shyam joined #gluster-dev
13:08 shyam joined #gluster-dev
13:11 itisravi_ JustinClift: I don't have much clue :(
13:12 nishanth joined #gluster-dev
13:13 * itisravi_ is calling it a day.
13:15 ndevos JustinClift: that bug is private indeed
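That confirms the failure mode JustinClift describes: the compare-bug-version-and-git-branch job asks Bugzilla for the bug's product, and a private bug comes back without one, so the comparison against 'GlusterFS' sees an empty string. A rough sketch of that lookup, assuming Bugzilla's REST API (the job's real scripting may differ, and bug_product is a hypothetical helper):

    import requests

    BZ = "https://bugzilla.redhat.com/rest/bug"

    def bug_product(bug_id):
        """Return the bug's product, or '' if the bug isn't visible to us."""
        data = requests.get(
            BZ + "/" + str(bug_id),
            params={"include_fields": "product"},
        ).json()
        bugs = data.get("bugs") or []
        # A private bug comes back as an error payload with no bug record,
        # which an anonymous checker ends up treating as product ''.
        return bugs[0].get("product", "") if bugs else ""

    print(bug_product(1200764))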
13:16 pranithk joined #gluster-dev
13:30 anoopcs joined #gluster-dev
13:34 pranithk joined #gluster-dev
13:42 pranithk1 joined #gluster-dev
13:50 pranithk joined #gluster-dev
13:51 kshlm joined #gluster-dev
13:54 pranithk joined #gluster-dev
13:56 misc JustinClift: well, basically, we can decide several things:
13:56 misc 1) 1 server is enough. We do not reboot it that often, so that's OK. Failure would mean restoring from backup, but we can afford downtime
13:57 misc 2) having a mirror would help for resilience, and we can pay the price in resources/complexity/admin time
13:57 misc 2 implies:
13:57 misc 2a) we can do simple mirroring with rsync, etc.
13:57 misc 2b) we decide to showcase gluster, at the price of greater complexity
13:58 misc 2a would be more community friendly if we want to set up a mirror network, for example, and is easier
13:58 misc but 2b is nice from a PR point of view
13:58 misc all choices would be good, so we can also think that's too much effort, that's fine by me :)
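Option 2a is the simpler of the two: the second box just re-syncs from the primary on a timer. A rough sketch of that loop body (the host name and paths here are hypothetical):

    import subprocess

    # Hypothetical primary rsync module and local mirror path.
    PRIMARY = "rsync://download-primary.gluster.org/pub/"
    LOCAL = "/srv/download/pub/"

    def sync_mirror():
        # --archive keeps permissions and timestamps, --delete keeps the
        # mirror exact, --partial survives interrupted transfers.
        subprocess.run(
            ["rsync", "--archive", "--delete", "--partial", PRIMARY, LOCAL],
            check=True,
        )

    if __name__ == "__main__":
        sync_mirror()

Option 2b would instead join both boxes into a gluster volume, which is where JustinClift's quorum concern above comes in: with only two servers there is no majority to arbitrate a split-brain, hence his suggestion of at least three.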
14:00 topshare joined #gluster-dev
14:06 topshare joined #gluster-dev
14:07 topshare joined #gluster-dev
14:25 topshare joined #gluster-dev
14:32 lalatenduM joined #gluster-dev
14:43 _shaps_ joined #gluster-dev
14:48 Apeksha joined #gluster-dev
14:54 anoopcs joined #gluster-dev
14:55 bala joined #gluster-dev
15:03 aravindavk joined #gluster-dev
15:25 _shaps_ joined #gluster-dev
15:25 gem joined #gluster-dev
15:30 wushudoin joined #gluster-dev
15:41 deepakcs joined #gluster-dev
15:45 JustinClift misc: Let's not let PR raise our admin burden ;)
15:53 misc JustinClift: I also see that as a learning opportunity somehow
15:55 jamesc|2 joined #gluster-dev
16:11 hchiramm_ joined #gluster-dev
16:25 gem joined #gluster-dev
16:42 dlambrig joined #gluster-dev
16:43 hagarth joined #gluster-dev
17:38 tg2 semiosis - do you have the matching .debug symbol file for the glusterfs-server that is in the official repo? re: https://github.com/semiosis/glusterfs-debian/issues/9
17:40 tg2 it's oddly missing from the -dbg package
17:47 jiffin joined #gluster-dev
17:49 dlambrig joined #gluster-dev
17:50 shyam joined #gluster-dev
18:01 firemanxbr joined #gluster-dev
18:14 kbyrne joined #gluster-dev
18:44 lalatenduM joined #gluster-dev
19:01 dlambrig joined #gluster-dev
20:06 hchiramm joined #gluster-dev
20:38 hagarth joined #gluster-dev
20:38 jamesc|2 joined #gluster-dev
22:53 dlambrig joined #gluster-dev
23:23 bala joined #gluster-dev
