
IRC log for #gluster-dev, 2014-11-18


All times shown according to UTC.

Time Nick Message
01:19 soumya_ joined #gluster-dev
01:23 topshare joined #gluster-dev
01:36 bala joined #gluster-dev
01:54 topshare joined #gluster-dev
02:07 bala joined #gluster-dev
02:10 flu_ joined #gluster-dev
02:10 _Bryan_ joined #gluster-dev
02:20 joevartuli joined #gluster-dev
02:22 joevartuli left #gluster-dev
02:40 topshare joined #gluster-dev
02:53 hagarth joined #gluster-dev
03:08 atalur joined #gluster-dev
03:41 semiosis @later tell kkeithley better late than never, I was going to publish debian packages but saw you beat me to it by a few hours. thank you x 100000! taking care of ubuntu now.
03:41 glusterbot semiosis: The operation succeeded.
03:41 semiosis kkeithley++
03:41 glusterbot semiosis: kkeithley's karma is now 41
03:43 semiosis @later tell kkeithley ping me when you have a free minute to talk about keeping the debian/ dir in a git repo.
03:43 glusterbot semiosis: The operation succeeded.
03:54 nkhare joined #gluster-dev
03:54 nkhare ping
03:54 nkhare portante, ping
03:54 glusterbot nkhare: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
03:56 itisravi joined #gluster-dev
04:05 sgowda joined #gluster-dev
04:07 nishanth joined #gluster-dev
04:16 Gaurav_ joined #gluster-dev
04:16 spandit joined #gluster-dev
04:17 atalur joined #gluster-dev
04:21 shubhendu joined #gluster-dev
04:25 ndarshan joined #gluster-dev
04:30 kanagaraj joined #gluster-dev
04:31 rafi1 joined #gluster-dev
04:31 Rafi_kc joined #gluster-dev
04:32 atinmu joined #gluster-dev
04:32 anoopcs joined #gluster-dev
04:33 ppai joined #gluster-dev
04:53 lalatenduM joined #gluster-dev
05:08 aravindavk joined #gluster-dev
05:24 aravinda_ joined #gluster-dev
05:31 flu__ joined #gluster-dev
05:45 kshlm joined #gluster-dev
06:01 jiffin joined #gluster-dev
06:06 hagarth joined #gluster-dev
06:19 topshare joined #gluster-dev
06:22 pranithk joined #gluster-dev
06:25 topshare joined #gluster-dev
06:53 shubhendu joined #gluster-dev
07:13 Debloper joined #gluster-dev
07:44 topshare joined #gluster-dev
08:16 xavih can anyone assign these bugs to me so I'm able to manage them? 1164768, 1163760, 1161903, 1162805 and 1165041. Thank you :)
08:23 ndevos xavih: done!
08:23 xavih ndevos++: thanks :)
08:23 glusterbot xavih: ndevos's karma is now 55
08:42 nishanth joined #gluster-dev
08:47 spandit joined #gluster-dev
08:48 ndevos who wants to review some md-cache changes? http://review.gluster.org/9129 http://review.gluster.org/9131
08:50 shubhendu joined #gluster-dev
08:53 ndarshan joined #gluster-dev
08:58 bala joined #gluster-dev
09:07 hchiramm_ hagarth++ thanks
09:07 glusterbot hchiramm_: hagarth's karma is now 25
09:15 Debloper JustinClift: ping!
09:15 flu__ joined #gluster-dev
09:16 flu_ joined #gluster-dev
09:21 topshare joined #gluster-dev
09:30 deepakcs joined #gluster-dev
09:43 ilbot3 joined #gluster-dev
09:44 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
09:44 shubhendu joined #gluster-dev
09:44 Debloper hchiramm_ :D
09:46 lalatenduM Debloper++ good to see you in #gluster-dev  :)
09:46 glusterbot lalatenduM: Debloper's karma is now 2
09:46 spandit joined #gluster-dev
09:47 nishanth joined #gluster-dev
09:47 Debloper lalatenduM: I'm always lurking :P btw, thx for the karma! :D
09:47 lalatenduM :)
09:47 Debloper lalatenduM++ hchiramm_++
09:47 glusterbot Debloper: lalatenduM's karma is now 44
09:47 glusterbot Debloper: hchiramm_'s karma is now 6
09:48 lalatenduM Debloper, :)
09:48 ndarshan joined #gluster-dev
09:50 bala joined #gluster-dev
10:01 Debloper joined #gluster-dev
10:02 ira joined #gluster-dev
10:03 soumya_ joined #gluster-dev
10:12 topshare joined #gluster-dev
10:50 hagarth joined #gluster-dev
10:58 pranithk1 joined #gluster-dev
11:13 flu__ joined #gluster-dev
11:13 pranithk joined #gluster-dev
11:14 flu_ joined #gluster-dev
11:31 shyam1 joined #gluster-dev
11:38 kkeithley1 joined #gluster-dev
11:47 edward1 joined #gluster-dev
11:52 ndevos REMINDER: Gluster Bug Triage meeting starting in 8 minutes on #gluster-meeting
12:05 jdarcy joined #gluster-dev
12:08 ppai joined #gluster-dev
12:09 hagarth1 joined #gluster-dev
12:16 nkhare joined #gluster-dev
12:17 kanagaraj joined #gluster-dev
12:20 itisravi_ joined #gluster-dev
12:54 kkeithley_ In case nobody saw my comment yesterday afternoon (EST), the libgfapi symbol versions change broke on Mac OS X.
12:54 kkeithley_ JustinClift: ^^^
13:10 tdasilva joined #gluster-dev
13:18 ndevos kkeithley_: do any libraries on OSX have symbol versions?
13:18 kkeithley_ lalatenduM: continuing from #gluster-meeting. Mac OS X doesn't use ELF. They use Mach-O, and shared lib versions are things like libcurl.3.dylib, libz.1.1.3.dylib, and so on.
13:19 ndevos ah, so "readelf" would not work for inspecting them...
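For context, the difference being discussed can be seen directly; this is a sketch, and the library paths are illustrative rather than authoritative:

```shell
# Sketch: how versioning shows up on each platform (paths illustrative).
case "$(uname)" in
  Darwin)
    # Mach-O: versioning is per-library (compatibility/current version),
    # not per-symbol
    otool -L /usr/lib/libSystem.B.dylib
    ;;
  *)
    # ELF: per-symbol version tags, e.g. memcpy@GLIBC_2.14
    lib=$(ldd /bin/sh | awk '/libc\.so/ {print $3}')
    readelf --dyn-syms "$lib" | grep '@GLIBC' | head -3
    ;;
esac
```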
13:19 kkeithley_ I was looking at the OS X Developers web site at their assembler reference last night
13:23 ndevos kkeithley_: disable symbol versioning on OSX for now, and fix it later?
13:24 ndevos or, post a patch to disable it, so that users can test 3.6.1 on OSX
13:25 kkeithley_ I'm not sure there is any reasonable fix for later. I've reset the mainline bug, cloned the 3.6 bug, and 3.5 and 3.4 bugs are still open. I'll submit new patches for mainline and 3.6, and respin the patches for 3.5 and 3.4
13:25 kkeithley_ yes
13:25 atalur joined #gluster-dev
13:25 ndevos kkeithley_: please do not re-spin the patches, just add a 2nd one on top
13:26 * ndevos likes to keep a 1:1 relation with mainline patches
13:26 kkeithley_ that's what I mean
13:26 ndevos :)
13:27 kkeithley_ oh, wait
13:28 kkeithley_ you want two separate patches for 3.4 (and for 3.5)
13:28 kkeithley_ ?
13:28 kkeithley_ how do I do that with gerrit?
13:29 kkeithley_ or, let me get mainline and 3.6 done and then we can deal with 3.4 and 3.5
13:37 anoopcs joined #gluster-dev
13:44 topshare joined #gluster-dev
13:47 shubhendu joined #gluster-dev
13:59 shyam joined #gluster-dev
14:05 ndevos xavih++ thanks for reviewing the md-cache changes
14:05 glusterbot ndevos: xavih's karma is now 8
14:14 ndevos JustinClift: I think many Jenkins slaves are offline, or had their IP changed without DNS update?
14:14 ndevos it seems impossible to start them through Jenkins...
14:43 hagarth joined #gluster-dev
14:56 JustinClift ndevos: Ugh.  That's not good.
14:56 JustinClift ndevos: I'll take a look in a sec.
14:56 * JustinClift is waking up still
15:00 nkhare joined #gluster-dev
15:02 nkhare joined #gluster-dev
15:02 nkhare joined #gluster-dev
15:06 topshare joined #gluster-dev
15:11 jobewan joined #gluster-dev
15:12 topshare joined #gluster-dev
15:13 topshare joined #gluster-dev
15:15 wushudoin joined #gluster-dev
15:23 kshlm joined #gluster-dev
15:46 ndevos davemc: do you know who has the AI for sending the 3.4 + 3.5 announcement?
15:46 ndevos I would like to close out those bugs after it :)
15:48 JustinClift ndevos: Any interest in doing that? (eg you send the announcement bits)
15:48 davemc will ship it within 30 minutes. Fighting a head cold that has me dragging. My apologies
15:48 ndevos JustinClift: I don't really have time for that today...
15:49 ndevos davemc: okay, sounds good!
15:49 JustinClift ndevos: No worries, was just asking. :D
15:49 ndevos davemc++ and I hope you get well soon!
15:49 glusterbot ndevos: davemc's karma is now 3
15:51 davemc me too.  I am just worn down by this
16:03 shyam joined #gluster-dev
16:27 davemc The Gluster community is pleased to announce updated releases for the 3.4 and 3.5 families. With the release of 3.6 a few weeks ago, this brings all the current members of GlusterFS into a more stable, production-ready status.
16:27 davemc The GlusterFS 3.4.6 release is focused on bug fixes. The release notes are available at https://github.com/gluster/glusterfs/blob/v3.4.6/doc/release-notes/3.4.6.md. Download the latest GlusterFS 3.4.6 at http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.6/
16:27 davemc GlusterFS 3.5.3 is also a bug-fix oriented release. The associated release notes are at https://github.com/gluster/glusterfs/blob/v3.5.3/doc/release-notes/3.5.3.md. Download the latest GlusterFS 3.5.3 at http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.3/
16:27 davemc Also, the latest GlusterFS, 3.6.1, is available for download at http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.1/
16:27 davemc Obviously, the Gluster community has been hard at work, and we're not stopping there. We invite you to join in for planning the next releases. GlusterFS 3.7 planning and GlusterFS 4.0 planning would love your input.
16:33 _Bryan_ joined #gluster-dev
16:33 ndevos thanks davemc!
16:39 Gaurav_ joined #gluster-dev
16:39 davemc twitter, blog, irc, emails, facebook, linkedin, g+ all up
16:40 JustinClift Excellent :)
16:41 semiosis davemc++
16:41 glusterbot semiosis: davemc's karma is now 4
16:41 JustinClift davemc++
16:41 glusterbot JustinClift: davemc's karma is now 5
16:59 hagarth joined #gluster-dev
17:00 kkeithley_ JustinClift: netbsd7-smoke is borked. Java something or other. E.g. http://build.gluster.org/job/netbsd7-smoke/1202/console
17:01 JustinClift Ugh
17:01 JustinClift Ok, me looks
17:02 kkeithley_ java, gitAPI, maybe it just can't git clone. DNS issue?
17:03 JustinClift Maybe
17:03 JustinClift I'll log in and see if I can do it from a user account or something
17:04 nishanth joined #gluster-dev
17:05 JustinClift ndevos: At first blush, it looks like slave21, 22, and 25 probably need rebooting btw
17:06 ndevos JustinClift: should I just reboot slaves when they behave like that?
17:07 JustinClift ndevos: If they don't pick up a connection after 2 attempts at connecting, then yeah
17:07 JustinClift ndevos: eg, I go to the node screen and click the "Launch slave agent" button
17:07 JustinClift Sometimes, there seems to be some kind of weird initial connection problem
17:08 JustinClift ndevos: And the master won't be able to connect right away
17:08 ndevos JustinClift: yes, I tend to press that button on occasion...
17:08 JustinClift ndevos: So, I wait 2-3 minutes, then try again (not right away)
17:08 JustinClift Yeah
17:08 ndevos okay!
17:08 JustinClift ndevos: If it doesen't connect again, I then try to ssh, etc.  If that doesn't work, reboot time
17:08 JustinClift That normally fixes the problem
17:09 JustinClift I'm actually pretty sure there's a memory leak in the client side of things
17:09 JustinClift Not sure if it's in the Jenkins client, Glusterfs client or what
17:09 ndevos JustinClift: ah, just ssh and see the connection fail (timeout or key-error)
17:09 JustinClift Yeah
17:09 kkeithley_ oh, there's definitely a leak in the glusterfs fuse client
17:10 kkeithley_ http://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/client.out
17:10 JustinClift One time when logging into the Rackspace java console for one of these weird non-loginable slaves, there were errors on the screen about the OOM killing stuff
17:11 JustinClift So, I suspect that these things are losing important processes :> due to the OOM
17:11 JustinClift Maybe if we fix the fuse leak, or maybe change how the OOM works, we could stop having to reboot like this
17:11 JustinClift Maybe get the OOM to trigger a reboot and restart of the job ;)
17:11 JustinClift But, that's kind of a bandaid ;d
17:11 ndevos kkeithley_: yes, ~20% CPU!
17:12 ndevos JustinClift: there is a reboot-on-oom sysctl, I think
17:12 JustinClift k, slave21, 22, and 25 are still not responding
17:13 kkeithley_ well, it's running heavy I/O, so the CPU% doesn't bother me _too_ much
17:13 * JustinClift reboots them
17:13 JustinClift ndevos: Ahhh, that might do the initial trick
17:13 kkeithley_ but VSZ growing from 430240 to 766284 worries me a lot
17:14 ndevos JustinClift: set vm.panic_on_oom=1 and kernel.panic=5
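To make those settings survive a reboot, they would presumably go into /etc/sysctl.conf (the common location on RHEL6-era systems, per the discussion below); a sketch:

```conf
# /etc/sysctl.conf additions (RHEL6-era sketch):
# panic when the OOM killer would otherwise fire, then
# automatically reboot 5 seconds after any panic.
vm.panic_on_oom = 1
kernel.panic = 5
```

Applied with `sysctl -p`; `sysctl kernel.panic` should then report 5.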
17:14 kkeithley_ and RSZ growing from 044756 to 343860
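A minimal way to sample that kind of growth over time (a sketch; here the current shell's PID stands in for the glusterfs client process):

```shell
# Sample a process's virtual (VSZ) and resident (RSS) sizes in KiB;
# a VSZ that climbs steadily across repeated samples suggests a leak.
pid=$$              # substitute the glusterfs client PID here
ps -o pid=,vsz=,rss= -p "$pid"
```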
17:15 ndevos kkeithley_: yes, I understand, I was only joking :D
17:15 kkeithley_ ah, haha
17:15 kkeithley_ yes
17:18 kkeithley_ but I wouldn't think glusterfs client side would be an issue on any of the jenkins slaves. They start and stop, and never run long enough to consume a lot of memory. Or is something else going on?
17:22 JustinClift ndevos: Thanks, found that googling while you typed it for me. :)
17:23 JustinClift kkeithley_: I really don't know.  It could be a bug in the Jenkins client side too for all I know
17:25 ndevos JustinClift: you could also configure kexec/kdump on a VM and see if it captures a vmcore one day
17:32 JustinClift ndevos: Yeah, that could work.
17:32 JustinClift I think 2GB of ram is enough for kdump to function
17:33 ndevos JustinClift: yes, kdump will reserve a few mb from that 2gb, but that should not be a problem
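A rough outline of that kdump setup on a RHEL6-era VM (a sketch; exact paths, sizes, and service names may differ per distribution):

```conf
# RHEL6-era kdump setup sketch:
# 1. reserve crash memory at boot -- append to the kernel line
#    in /boot/grub/grub.conf:
#       crashkernel=128M
# 2. enable and start the capture service:
#       chkconfig kdump on
#       service kdump start
# 3. after the next panic, look for a vmcore under /var/crash/
```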
17:41 lalatenduM joined #gluster-dev
17:48 Guest43643 joined #gluster-dev
17:58 shyam joined #gluster-dev
18:02 davemc joined #gluster-dev
18:22 JustinClift ndevos: So, the vm.panic_on_oom bit works
18:22 JustinClift The krnel.panic =5 --> reboot doesnt seem to be
18:23 ndevos JustinClift: that is because you spell it as "kernel.panic"?
18:23 JustinClift slave25 went offline when doing a simple test to run it out of memory... and hasn't come back several minutes later. ;)
18:23 JustinClift Yeah
18:23 JustinClift Was that wrong?
18:23 ndevos missing an "e"?
18:23 JustinClift < ndevos> JustinClift: set vm.panic_on_oom=1 and kernel.panic=5
18:24 * JustinClift looks
18:24 ndevos JustinClift: also, maybe there is a /etc/sysctl.conf.d/* file?
18:24 JustinClift IRC-only typo :)
18:24 ndevos JustinClift: if the option is set more than once, I do not know which will be used
18:25 ndevos JustinClift: 'sysctl kernel.panic' should return 5, in that case, but you can set it higher too
18:28 JustinClift Just looked on the other slaves, and no /etc/sysctl.conf.d/ directory there
18:28 JustinClift I'll get the console loaded and take a look
18:28 JustinClift Right now my hotel wifi connection is being a pain again.
18:33 ndevos JustinClift: ah, that dir is relatively new, /etc/sysctl.conf is the common way on RHEL6
18:33 JustinClift Yeah. This doesn't look especially wrong does it? http://fpaste.org/151890/41633517/
18:39 ndevos JustinClift: no, unless there is a defauls kernel.panic in that file
18:39 ndevos *default ever
18:39 ndevos **even
18:39 JustinClift ndevos: Oh interesting.  In racspace slave25 is in "Shutoff" state
18:39 JustinClift rackspace even
18:39 JustinClift Meh, I'm going to ignore my typos in IRc today
18:40 JustinClift I wonder why rackspace shuts down instead of reboots the vm's for that setting
18:40 ndevos JustinClift: it could be a kdump/kexec setting, have you set that up already?
18:41 JustinClift Not me, no.  Guess I'd better take a look. ;)
18:46 ndevos JustinClift: 'sysctl kernel.panic' does not return 0, or something?
18:48 JustinClift Nope it's definitely set to 5
18:49 lalatenduM kkeithley, RE: Mac OS X doesn't use ELF. They use Mach-O, and shared lib versions are things like libcurl.3.dylib, libz.1.1.3.dylib, and so on
18:50 lalatenduM kkeithley, sorry I was in a meeting , so could not respond
18:51 lalatenduM kkeithley, yeah, the pain of supporting Mac I guess. I am hoping they inherited this from OpenBSD, right?
18:51 lalatenduM ndevos, are you around?
18:53 JustinClift lalatenduM: Maybe OSX inherited it from FreeBSD? (not sure)
18:54 JustinClift lalatenduM: Pretty sure it was FreeBSD that Apple grabbed for the base OS bit, not OpenBSD. Just saying.
18:55 JustinClift :)
18:58 JustinClift Lunch time.  Need mental break :)
19:07 lalatenduM JustinClift, You are right :)
19:07 kkeithley_ Mac OS X is the Mach kernel; it inherited a lot from NeXT and NeXTSTEP. The userland is a mix (mishmash) of FreeBSD and NetBSD mostly; maybe some OpenBSD, although that's essentially NetBSD anyway.
19:07 lalatenduM kkeithley, yeah that's what I was reading on wikipedia
19:08 kkeithley_ Not sure it matters. Yes, it's just part of the pain of maintaining on Mac OS X.
19:08 lalatenduM kkeithley, though I don't have much knowledge of that part, still wondering why this part is not standardized like POSIX
19:09 kkeithley_ I'm a long time FreeBSD user. I remember when they switched from a.out to ELF. There were factions that didn't want to switch, who thought a.out (with extensions) was "good enough."  Nearly 20 years later I don't hear anyone complaining that they have ELF.
19:10 kkeithley_ In the end it means we don't have symbol versions in libgfapi on Mac. Maybe the port maintainer wants to do versioned dylibs like we started to do for linux?
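The per-symbol versioning being given up on Mac is the ELF kind done with a GNU ld version script; a sketch of the style (file name and symbol grouping here are illustrative, not the actual libgfapi map):

```conf
# gfapi.map -- sketch of a GNU ld version script of the kind used
# for per-symbol versioning in an ELF libgfapi. Mach-O has no direct
# equivalent; it only carries whole-library compatibility/current
# version numbers in the dylib itself.
GFAPI_3.4.0 {
    global:
        glfs_new;
        glfs_init;
    local:
        *;
};
```

Such a script would typically be applied at link time with something like `-Wl,--version-script=gfapi.map`.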
19:12 lalatenduM kkeithley, yup, agree
19:14 kkeithley_ As for it not being "standardized."  Linux is a de facto standard, sort of. Apple has never cared much about standards. But even POSIX doesn't speak to object file formats; only APIs. And IIRC, Mac OS X is certified POSIX compliant, or was at one time.
19:14 JustinClift I'm kind of the "port maintainer" for GlusterFS atm
19:15 JustinClift on OSX, that is
19:15 JustinClift Which reminds me, I need to test it out for 3.6.1 and see if I can get it to actually work this time
19:15 * JustinClift tried quickly with 3.6.0 and failed the other day, but no time to really find out why
19:16 JustinClift If I get 3.6.1 working, I'll submit the port to upstream Homebrew so we have official support
19:18 kkeithley_ JustinClift++
19:18 glusterbot kkeithley_: JustinClift's karma is now 32
19:18 JustinClift If someone wants to do a PR with changed settings for the formula, I have no objection at all
19:19 kkeithley_ PR?
19:28 JustinClift kkeithley_: GitHub Pull Request
19:31 ira joined #gluster-dev
20:28 kkeithley_ JustinClift: hey, did you send back those cables yet?
21:40 shyam joined #gluster-dev
21:51 _Bryan_ joined #gluster-dev
21:53 badone joined #gluster-dev
22:20 badone joined #gluster-dev
23:18 shyam joined #gluster-dev
