
IRC log for #gluster-dev, 2014-06-12


All times shown according to UTC.

Time Nick Message
01:26 wushudoin joined #gluster-dev
01:39 bala joined #gluster-dev
02:12 lalatenduM joined #gluster-dev
02:23 suliba joined #gluster-dev
02:37 bharata-rao joined #gluster-dev
02:47 skoduri joined #gluster-dev
03:11 awheeler joined #gluster-dev
03:15 hagarth1 joined #gluster-dev
03:15 hchiramm__ joined #gluster-dev
03:24 hagarth joined #gluster-dev
03:42 itisravi joined #gluster-dev
03:44 kanagaraj joined #gluster-dev
03:50 spandit joined #gluster-dev
03:56 JoeJulian Has anyone considered testing with selinux either enabled or at least permissive and checking logs to ensure that it'll work out of the box (it doesn't on F20, btw).
04:23 ndarshan joined #gluster-dev
04:31 itisravi JoeJulian: selinux is enforcing on my F20 laptop. Installing from the source works fine, with the logs created in /usr/local/var/log/glusterfs
04:37 JoeJulian itisravi: Who installs from source?
04:37 itisravi JoeJulian: devs :)
04:37 JoeJulian Besides, we're the ones packaging for F20 so we should test it.
04:38 itisravi JoeJulian: agreed. out of curiosity, what error does it give?
04:39 JoeJulian avc:  denied  { write } for  pid=7596 comm="glusterd" name="glusterd.socket" dev="tmpfs" ino=10087511 scontext=system_u:system_r:glusterd_t:s0 tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
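
(The AVC above shows glusterd.socket carrying the generic var_run_t label instead of a glusterd-specific type, so the confined glusterd_t domain is refused write access. A rough shell sketch of how one might inspect and locally work around such a denial; the socket path and the "glusterd_local" module name are illustrative, not taken from the log:)

    # show recent AVC denials for glusterd (requires auditd)
    ausearch -m avc -c glusterd -ts recent

    # check and restore the expected SELinux context on the socket
    ls -Z /var/run/glusterd.socket
    restorecon -v /var/run/glusterd.socket

    # stop-gap only: generate and load a local policy module
    ausearch -m avc -c glusterd -ts recent | audit2allow -M glusterd_local
    semodule -i glusterd_local.pp
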
04:39 JustinClift BZ it?  Assign it to Avati
04:40 JustinClift Pretty sure his task for 3.6 is to "make sure SELinux works"
04:40 itisravi ok.
04:40 JoeJulian I filed through the gui tool which I think files against selinux
04:41 JoeJulian Oh, and I can't assign anything. I'm just a user. :D
04:41 JustinClift Did it give you a BZ?
04:41 * JustinClift will attempt to screw with the BZ until it's Avati's
04:41 JoeJulian I was just looking at that... no email... hmm.
04:43 lalatenduM I can also assign the bz
04:45 JoeJulian Meh, no bz under my email. I'll open a new one.
04:45 kdhananjay joined #gluster-dev
04:45 * JustinClift couldn't find it when searching for bits like "ino=10087511"
04:46 JustinClift JoeJulian: Let Lala or myself know the BZ, and we'll give it to Avati
04:46 JustinClift Gah
04:46 * JustinClift just checked the new rackspace nodes
04:46 JustinClift 4 jobs hung
04:46 JustinClift Dammit
04:47 JoeJulian but did they do it well?
04:47 JustinClift Superbly
04:47 JoeJulian So at least they're well hung.
04:47 JustinClift ;)
04:47 lalatenduM :)
04:47 JustinClift After I get some sleep, I think I'll set up some slaves with 2GB ram VM's instead of the 1GB ones
04:48 JustinClift These 1GB ones ran several jobs fine, then started having issues
04:48 JoeJulian I should have probably gone to bed myself so I wouldn't make such terrible jokes.
04:48 JustinClift Cloud be the cleanup script problem, or it might be a resource thing
04:48 JustinClift s/Cloud/Could/
04:48 JustinClift JoeJulian: I make worse ones ;)
04:50 krishnan_p joined #gluster-dev
04:53 nishanth joined #gluster-dev
04:54 JoeJulian bug 1108448
04:54 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1108448 unspecified, unspecified, ---, vbellur, NEW , selinux alerts starting glusterd in f20
04:57 JoeJulian Everything's all set up to test that rebalance client crash tomorrow. I sure hope that's it. I'm hoping to roll it out to production on Monday if all goes well. I have a brick I need to decommission (its replica failed and we're reconfiguring the brick layout) and I have 8 new 56TB bricks to add and rebalance.
04:58 JustinClift k, it's assigned to Avati now
04:58 shubhendu joined #gluster-dev
04:58 JustinClift Interesting times. ;)
04:58 JoeJulian It's very fun.
04:58 JustinClift Hmmm, rebooted slave0.  Came back up fine
04:58 JustinClift Rebooted slave1.  Hasn't come back up
04:59 * JustinClift bets there's a kernel panic on the console
04:59 JustinClift ...time to enable java in the browser and check
04:59 * JustinClift sighs
05:00 JustinClift Lets see if the reboot lets slave0 run things to completion...
05:01 bala joined #gluster-dev
05:01 JoeJulian btw... don't upgrade to chrome 35 if you run linux as your desktop. They no longer accept the same plugin api so there's no java support.
05:01 * JustinClift expunged Chrome from everything a while ago
05:01 JustinClift They started showing incline ads on the fucking new tab page
05:02 * JustinClift has a close to ad-free experience due to plugins
05:02 JustinClift Now using Opera and Firefox
05:02 JoeJulian whoah, never seen ads on it.
05:02 JustinClift I haven't until then
05:02 JustinClift I was running Chrome Canary though
05:03 JustinClift And I never will see ads on Chrome again
05:03 JustinClift s/incline/inline/
05:03 JoeJulian I knew what you meant. :)
05:03 JustinClift :)
05:04 JoeJulian but I still pictured incline ads: http://blogs.sitepointstatic.com/images/tech/742-css3-starwars-screen.png
05:04 JustinClift Heh
05:06 JustinClift Yep.  Kernel panic on the console
05:07 JustinClift Betcha all the other slaves will have the same problem
05:07 JustinClift Tomorrow I'll create some RHEL 6.5 VM's in Rackspace, using the same 1GB of ram, and run some regression tests on them until they break like this again
05:08 * JustinClift should probably look into enabling that kdump thing or similar for kernel panic analysis
05:08 JustinClift With RHEL VM's, we can probably get official RH type engineers to debug it
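
(For reference, enabling kdump on a RHEL/CentOS 6.x VM is roughly the following; the 128M crashkernel reservation is an assumption and may be tight on 1GB guests:)

    # install and enable the kdump service
    yum install -y kexec-tools
    chkconfig kdump on

    # reserve memory for the crash kernel on all installed kernels, then reboot
    grubby --update-kernel=ALL --args="crashkernel=128M"

    # after the reboot, confirm the service came up
    service kdump start
    service kdump status
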
05:11 kshlm joined #gluster-dev
05:25 ppai joined #gluster-dev
05:26 jobewan joined #gluster-dev
05:31 bala joined #gluster-dev
05:32 * JustinClift is reasonably confident that slave0 will now work fine for at least a few regression runs
05:32 JustinClift The reboot seems to have sorted it
05:32 JustinClift So, there's a build up of "something" which is screwing up the regressions after a few runs on Rackspace
05:32 JustinClift Which gets cleared by reboot
05:34 JustinClift Ugh.  Niels on holiday for two weeks now
05:34 JustinClift :/
05:39 skoduri joined #gluster-dev
05:44 raghu joined #gluster-dev
05:46 nishanth joined #gluster-dev
05:48 hagarth joined #gluster-dev
05:49 aravindavk joined #gluster-dev
05:54 JustinClift Sleep time finally.
05:54 JustinClift 'nite all :)
05:55 lalatenduM JustinClift, gud night :)
05:56 JustinClift :)
06:22 shubhendu joined #gluster-dev
06:39 vpshastry joined #gluster-dev
06:40 aravindavk joined #gluster-dev
06:43 hagarth joined #gluster-dev
07:37 vipulnayyar joined #gluster-dev
07:37 vipulnayyar krishnan_p : Hi.
07:38 scuttle_ joined #gluster-dev
08:26 shubhendu joined #gluster-dev
09:18 vpshastry joined #gluster-dev
10:17 deepakcs joined #gluster-dev
10:24 ndarshan joined #gluster-dev
10:26 bala joined #gluster-dev
10:52 edward1 joined #gluster-dev
10:54 vipulnayyar joined #gluster-dev
11:04 itisravi joined #gluster-dev
11:19 ndarshan joined #gluster-dev
11:25 kshlm Looks like rackspace slave0 and slave2 are hung.
11:26 bala joined #gluster-dev
11:34 ppai joined #gluster-dev
12:01 tdasilva joined #gluster-dev
12:15 rnz joined #gluster-dev
12:28 lpabon joined #gluster-dev
12:48 ndk joined #gluster-dev
12:55 awheeler joined #gluster-dev
12:59 shyam joined #gluster-dev
13:08 hagarth joined #gluster-dev
13:19 lalatenduM joined #gluster-dev
13:20 lalatenduM ndevos, regarding bz 1100204
13:21 kdhananjay joined #gluster-dev
13:25 ndevos lalatenduM: what about it?
13:25 vipulnayyar joined #gluster-dev
13:26 lalatenduM ndevos, somehow I have a feeling that replacing the stat call by a write and read is not good enough (I may be wrong here, but that's my honest opinion)
13:26 lalatenduM I mean is it a clean way to know if a fs is corrupt
13:27 lalatenduM by doing write and read
13:27 ndevos lalatenduM: okay, what would you recommend?
13:27 shubhendu joined #gluster-dev
13:27 lalatenduM ndevos, I have no answers as of now, working on it
13:27 ndevos lalatenduM: if there is a filesystem corruption detected, the filesystem either causes a kernel panic, or goes read-only
13:28 ndevos lalatenduM: in case of a panic, we don't need to do anything ;)
13:28 ndevos lalatenduM: and, in case of read-only, write+read can be used to detect it
13:29 lalatenduM ndevos, maybe you are write :), looking at the man page for the stat call,
13:29 lalatenduM s/write/right/
13:29 lalatenduM it says stat should be called on a file
13:30 lalatenduM but for xfs we call it on the brick path
13:30 ndevos lalatenduM: file/dir, it does not matter, the ext4 code does not return an error on stat() when the fs is read-only
13:30 lalatenduM btw I agree with you on the effects of fs corruption
13:31 lalatenduM ndevos, hence the write is necessary
13:33 lalatenduM ndevos, so I should write a temp file, read it and delete it too?
13:33 lalatenduM when fs is good
13:35 ndevos lalatenduM: just a file under the .glusterfs/ directory
13:35 ndevos no need to delete it every time
13:35 lalatenduM ndevos, ok
13:37 kanagaraj joined #gluster-dev
13:39 ndevos lalatenduM: you can probably write a timestamp or something, and use a filename like .glusterfs/health
13:40 lalatenduM ndevos, yup, and I think I can keep on overwriting the timestamp, should be ok right?
13:41 ndevos lalatenduM: yeah, that's fine
13:41 lalatenduM ndevos, cool
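
(A minimal shell sketch of the write-plus-read check being discussed: write a timestamp under .glusterfs/, read it back, and treat a failing write, e.g. on a read-only filesystem, as unhealthy. The brick path and file name are placeholders, not the final implementation:)

    BRICK=/bricks/brick1                 # example brick path
    HEALTH="$BRICK/.glusterfs/health"

    if date +%s > "$HEALTH" 2>/dev/null && cat "$HEALTH" >/dev/null 2>&1; then
        echo "brick filesystem looks healthy"
    else
        echo "brick filesystem is read-only or failing I/O" >&2
        exit 1
    fi
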
13:42 lalatenduM CentOS team is trying to build CentOS 7 and they are facing failures while rebuilding gluster packages for rhel 7. http://buildlogs.centos.org/c7.00.03/glusterfs/20140612105818/3.4.0.59rhs-1.el7.x86_64/
13:42 glusterbot Title: Index of /c7.00.03/glusterfs/20140612105818/3.4.0.59rhs-1.el7.x86_64 - CentOS Mirror (at buildlogs.centos.org)
13:43 lalatenduM hagarth, ndevos hchiramm_ ^^ let me know if you can see the cause of failure
13:51 lalatenduM this looks fishy "rm: cannot remove '/builddir/build/BUILDROOT/glusterfs-3.4.0.59rhs-1.el7.x86_64/etc/init.d/glusterd': No such file or directory"
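
(That rm appears to come from the spec file's cleanup of the SysV init script, which a systemd-only el7 build never installs. A hedged sketch of a more tolerant cleanup; the use of rm -f here is an assumed fix, not taken from the actual spec:)

    # in %install: -f lets the build continue when the SysV script
    # was never generated (systemd-based el7 builds)
    rm -f "${RPM_BUILD_ROOT}/etc/init.d/glusterd"
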
14:19 shyam joined #gluster-dev
14:25 wushudoin joined #gluster-dev
14:29 JustinClift ndevos: As a thought with the "< ndevos> lalatenduM: file/dir, it does not matter, the ext4 code does not return an error on stat() when the fs is read-only"
14:29 JustinClift ndevos: Is that something we could submit a patch to the ext4 code about?
14:30 JustinClift Won't solve the immediate problem, but might help ease problems down the track?
14:30 lalatenduM JustinClift, ndevos has already talked to ext4 developers, they don't think the issue is with ext4
14:36 JustinClift lalatenduM: Gotcha
14:36 JustinClift Was a thought, etc ;)
14:36 ndevos JustinClift: it's a 2-line patch, but would have many difficulties getting upstream :-/
14:36 JustinClift :(
14:37 JustinClift Is there bad blood between us and them (eg political) or something?
14:37 JustinClift Or they just believe it's not needed?
14:37 JustinClift Meh, belay that
14:37 JustinClift I need to concentrate on getting these slaves sorted
14:42 ndevos it's just a behaviour that is undefined, and changing it to something else that is undefined does not make sense
14:42 ndevos we should not make assumptions about undefined behaviour :)
14:43 JustinClift :)
14:55 skoduri joined #gluster-dev
14:55 deepakcs joined #gluster-dev
15:08 JustinClift k, now trying: vm.dirty_background_ratio = 3
15:08 JustinClift vm.dirty_ratio = 10
15:08 JustinClift Lets see if there's a change
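
(A sketch of applying the same writeback tuning at runtime and persisting it across reboots; the sysctl.d path assumes a distro that reads /etc/sysctl.d/:)

    # apply immediately
    sysctl -w vm.dirty_background_ratio=3
    sysctl -w vm.dirty_ratio=10

    # persist across reboots
    printf 'vm.dirty_background_ratio = 3\nvm.dirty_ratio = 10\n' > /etc/sysctl.d/90-writeback.conf
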
15:31 jobewan joined #gluster-dev
15:34 ndk joined #gluster-dev
15:35 hchiramm__ joined #gluster-dev
15:53 hchiramm__ joined #gluster-dev
16:20 bala joined #gluster-dev
16:21 JustinClift Hmmm, I think I'll set up a local RHEL box on the spare desktop next to me, and hook that up to jenkins remotely
16:54 bala joined #gluster-dev
16:58 [o__o] joined #gluster-dev
17:04 shyam joined #gluster-dev
17:36 pranithk joined #gluster-dev
17:37 pranithk JustinClift: ping
18:18 tdasilva left #gluster-dev
20:07 systemonkey joined #gluster-dev
20:50 jobewan joined #gluster-dev
