
IRC log for #gluster-dev, 2014-08-06


All times shown according to UTC.

Time Nick Message
00:07 lpabon joined #gluster-dev
01:23 awheeler joined #gluster-dev
01:24 Yuan_ joined #gluster-dev
01:25 Yuan__ joined #gluster-dev
01:45 dlambrig joined #gluster-dev
01:47 ilbot3 joined #gluster-dev
01:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
01:49 dlambrig joined #gluster-dev
02:25 bala joined #gluster-dev
03:10 bharata-rao joined #gluster-dev
03:36 kshlm joined #gluster-dev
03:38 shubhendu_ joined #gluster-dev
03:50 Humble joined #gluster-dev
03:55 lalatenduM joined #gluster-dev
04:06 itisravi joined #gluster-dev
04:09 vimal joined #gluster-dev
04:27 kanagaraj joined #gluster-dev
04:28 Humble joined #gluster-dev
04:30 lalatenduM kkeithley, regarding http://review.gluster.org/#/c/8418, it still does not fix the bz as "glusterfs-georep-logrotate" still has the bug in the source
04:31 lalatenduM kkeithley, I was talking abt bz 1126801
04:33 anoopcs joined #gluster-dev
04:36 Rafi_kc joined #gluster-dev
04:46 ndarshan joined #gluster-dev
04:53 ppai joined #gluster-dev
04:57 jiffin joined #gluster-dev
04:58 hagarth joined #gluster-dev
05:07 spandit joined #gluster-dev
05:07 hagarth hmm, mailing lists seem to be non-operational right now
05:15 kdhananjay joined #gluster-dev
05:34 awheeler joined #gluster-dev
05:39 bala joined #gluster-dev
06:00 atalur joined #gluster-dev
06:09 ppai joined #gluster-dev
06:10 raghu joined #gluster-dev
06:12 bmikhael joined #gluster-dev
06:15 sac`away` joined #gluster-dev
06:16 bala1 joined #gluster-dev
06:16 itisravi_ joined #gluster-dev
06:16 spandit_ joined #gluster-dev
06:16 kaushal_ joined #gluster-dev
06:16 darshan joined #gluster-dev
06:17 kdhananjay1 joined #gluster-dev
06:17 shubhendu__ joined #gluster-dev
06:17 spandit__ joined #gluster-dev
06:17 sac`awa`` joined #gluster-dev
06:17 anoopcs joined #gluster-dev
06:17 bala joined #gluster-dev
06:17 itsravi joined #gluster-dev
06:18 ndarshan joined #gluster-dev
06:21 atalur joined #gluster-dev
06:22 pranithk joined #gluster-dev
06:23 ppai joined #gluster-dev
06:35 skoduri1 joined #gluster-dev
06:45 kshlm joined #gluster-dev
06:45 bmikhael joined #gluster-dev
07:05 bala joined #gluster-dev
07:13 JoeJulian I'm sure the answer is no, but I don't suppose there's any way to free leaked client memory without unmounting and remounting... A graph change wouldn't do it, would it?
07:18 itisravi joined #gluster-dev
07:22 pranithk JoeJulian: No it won't
07:22 awheeler joined #gluster-dev
07:22 JoeJulian Didn't think so, but thinking wishfully.
07:22 pranithk JoeJulian: Could you raise a bug with what leaks you have observed?
07:27 JoeJulian There are too many structs that are way too big for me to know where to start. The only thing these have in common are a bunch of readv failed/client_rpc_notify disconnected. I kind-of wonder if this is directly related to the failed fd migrations. I don't know if the machines exhibiting this behavior were remounted.
07:27 JoeJulian Anyway, I was going to try to valgrind a client in staging and see if I could get it to leak.
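
(For anyone following along: one way to get the kind of leak report JoeJulian is after is to run the fuse client in the foreground under valgrind instead of letting mount.glusterfs daemonize it. The server name, volume name, and mount point below are placeholders, and this is only a rough sketch; expect the client to run much slower under valgrind.)

    # Mount via the glusterfs client binary directly, kept in the foreground (-N)
    # so valgrind can track it; unmounting later produces the leak summary.
    valgrind --leak-check=full --show-reachable=yes \
        --log-file=/var/tmp/glusterfs-client-valgrind.log \
        glusterfs -N --volfile-server=server1 --volfile-id=myvol /mnt/staging
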
07:30 pranithk JoeJulian: which version?
07:30 pranithk JoeJulian: will it be possible to attach this statedump file to some bz so that we can take a look at it?
07:30 JoeJulian 3.4.4+ (not quite .5 but the only leak I saw addressed in .5 was related to nfs.)
07:31 JoeJulian Sure
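
(For reference, a client statedump like the one pranithk asks for can usually be produced by sending SIGUSR1 to the mounted glusterfs client process; the dump lands in the statedump directory, typically /var/run/gluster, though the exact path varies by build. A minimal sketch, with the volume name as a placeholder:)

    # Send SIGUSR1 to the fuse client process for the mount to trigger a statedump.
    pid=$(pgrep -f 'glusterfs.*volfile-id=myvol')
    kill -USR1 "$pid"
    ls /var/run/gluster/    # look for glusterdump.<pid>.dump.<timestamp>
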
07:31 JoeJulian I don't suppose you know if the bugzilla cli is available somewhere to install in ubuntu?
07:31 pranithk JoeJulian: ok joe, going for lunch cya
07:31 JoeJulian Eat well.
07:32 pranithk JoeJulian: I use fedora :-(
07:32 JoeJulian me oto
07:32 JoeJulian too
07:32 JoeJulian or centos
07:32 JoeJulian but I'm stuck with this at work for now. :/
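
(On the bugzilla CLI question: the tool in question is the `bugzilla` command from python-bugzilla. I don't recall a stock Ubuntu package for it at the time, but installing from PyPI generally works; the bug number and file name below are placeholders, and the sub-command options are worth double-checking against `bugzilla --help`.)

    sudo pip install python-bugzilla
    bugzilla --bugzilla https://bugzilla.redhat.com/xmlrpc.cgi login
    bugzilla attach --file=glusterdump.12345.dump --desc="fuse client statedump" 1234567
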
07:33 pranithk JoeJulian: I would love it if you become the leader for deb packaging and keep it good actually ;-)
07:34 JoeJulian Oooh.. That's gunna sting semiosis... :P
07:34 pranithk JoeJulian: whats the problem if two people maintain it, ok colleagues are waiting... later :-)
07:37 nishanth joined #gluster-dev
07:56 ppai joined #gluster-dev
07:57 atalur joined #gluster-dev
07:59 itisravi_ joined #gluster-dev
08:53 nishanth joined #gluster-dev
08:53 itisravi joined #gluster-dev
08:54 dachary joined #gluster-dev
08:58 ppai joined #gluster-dev
09:06 vimal joined #gluster-dev
09:19 atalur joined #gluster-dev
09:20 atalur joined #gluster-dev
09:25 deepakcs joined #gluster-dev
09:26 bharata-rao joined #gluster-dev
09:33 hchiramm JustinClift, ping
09:33 glusterbot hchiramm: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
09:33 hchiramm thanks glusterbot
09:46 ndevos glusterbot++ :D
09:50 hagarth JustinClift: ping, mailman on gluster.org seems to be broken :-/
10:11 spandit__ joined #gluster-dev
10:16 nishanth joined #gluster-dev
10:58 ppai joined #gluster-dev
11:05 suliba_ joined #gluster-dev
11:11 atalur joined #gluster-dev
11:30 shyam joined #gluster-dev
11:31 ira joined #gluster-dev
11:56 JustinClift hchiramm hagarth: Thanks guys, just saw your emails.
11:57 JustinClift I forwarded the info to the professional SysAdmin guy in the OSAS team (IRC handle "misc"), who was given access to the servers a few weeks ago.
11:57 JustinClift hchiramm hagarth: It looks like he's at a conference atm though, so doesn't have internet access. :(
11:58 JustinClift hchiramm hagarth: And I'm on a laptop atm without my normal ssh keys.
11:58 JustinClift Still, I might be able to find a way into the box.  Trying now. :)
12:05 JustinClift Ugh.
12:06 JustinClift So, I can get to the console for the box via rackspace webui, but I don't have a username/password that I remember.
12:06 JustinClift Trying backup method...
12:06 JustinClift I just triggered a soft reboot for the www.gluster.org server (hosts the mailing lists), under the assumption that it'll come back up ok with working mailing lists.
12:06 JustinClift ^^^ Hopefully not a bad assumption
12:10 anoopcs joined #gluster-dev
12:11 JustinClift hchiramm: Mailing lists seem to be working again.  At least, I just got a whole bunch of spam and moderation notices from them 2 mins ago. ;)
12:17 hagarth joined #gluster-dev
12:17 lalatenduM kkeithley, did you sign my gpg keys? I didn't receive any mail abt it :) and don't know how to check it
12:19 hchiramm JustinClift, sorry , I was away from kbd
12:20 hchiramm \o/, the list is back .. Thanks a lot JustinClift++ !
12:20 glusterbot hchiramm: JustinClift's karma is now 5
12:23 kkeithley lalatenduM: I did sign it. one sec and I'll figure out how to check
12:24 lalatenduM kkeithley++ awesome
12:24 glusterbot lalatenduM: kkeithley's karma is now 9
12:25 hchiramm kkeithley, list-sigs ?
12:25 hchiramm kkeithley++ thanks !
12:25 glusterbot hchiramm: kkeithley's karma is now 10
12:25 kkeithley hchiramm: probably. I do this so infrequently I can never remember
12:26 JustinClift hchiramm: Glad that worked. :)
12:26 hchiramm yep :)
12:26 kkeithley wait, you need to look at the mit keyserv to see who has signed I think
12:27 hchiramm http://pgp.mit.edu/pks/lookup?op=vindex&search=0xA0DFD0166729A0F1
12:27 hchiramm kkeithley, yes .. thanks :)
12:27 kkeithley yup
12:29 kkeithley lalatenduM: http://pgp.mit.edu/pks/lookup?op=vindex&search=0x8351EDFA1D3D70E6, signed by me
12:29 kkeithley you two should sign each other's keys. Have a BLR key signing too
12:30 kkeithley and neither of you signed my key. :-(
12:31 kkeithley although I don't remember showing you my docs
12:31 kkeithley ;-)
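
(For the "how to check it" question above: gpg can show who has signed a key once the updated copy has been pulled from the keyserver, and the same flow covers signing someone else's key. The key ID below is lalatenduM's, taken from the pgp.mit.edu link in this log; add --keyserver pgp.mit.edu if your gpg defaults to a different server.)

    gpg --recv-keys 0x8351EDFA1D3D70E6     # refresh the key from the keyserver
    gpg --list-sigs 0x8351EDFA1D3D70E6     # list the signatures on it
    gpg --sign-key 0x8351EDFA1D3D70E6      # sign it (after verifying identity)
    gpg --send-keys 0x8351EDFA1D3D70E6     # publish the new signature
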
12:34 hchiramm kkeithley, :) .. I will do it asap ..
12:34 hchiramm sorry for the delay :(
12:39 hchiramm kkeithley++ thanks for replying that thread
12:39 hchiramm :)
12:39 glusterbot hchiramm: kkeithley's karma is now 11
12:42 hchiramm semiosis, ping
12:42 glusterbot hchiramm: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
12:44 kkeithley hoo, all this karma is going to go to my head.
12:44 kkeithley ;-)
12:45 hchiramm later u can convert that to dollar :)
12:45 JustinClift kkeithley: Does that make your head glow? :D
12:45 hchiramm :P
12:45 kkeithley one awsh*t wipes out all the attaboys. It's only a matter of time. ;-)
12:47 hchiramm :)
12:48 JustinClift Heh
12:51 lalatenduM kkeithley, thanks, will sign yours and hchiramm's keys
12:52 hchiramm vice versa :) lalatenduM
12:52 kkeithley thanks, no hurry
12:53 lalatenduM hchiramm++
12:53 glusterbot lalatenduM: hchiramm's karma is now 4
13:01 shyam joined #gluster-dev
13:04 awheeler joined #gluster-dev
13:12 pranithk left #gluster-dev
13:15 hchiramm kkeithley++ u rock!! deserve one more karma
13:15 glusterbot hchiramm: kkeithley's karma is now 12
13:34 hchiramm lalatenduM, kkeithley done with signing :)
13:34 tdasilva joined #gluster-dev
13:37 kkeithley hchiramm++
13:37 glusterbot kkeithley: hchiramm's karma is now 5
13:37 dlambrig_ joined #gluster-dev
13:43 lalatenduM hchiramm, awesome, thanks
13:43 hchiramm np
13:56 itisravi joined #gluster-dev
13:59 deepakcs joined #gluster-dev
14:01 lalatenduM hchiramm, kkeithley signed ur keys :)
14:05 kkeithley lalatenduM++
14:05 glusterbot kkeithley: lalatenduM's karma is now 17
14:07 cristov joined #gluster-dev
14:15 wushudoin joined #gluster-dev
14:23 ndk joined #gluster-dev
14:45 itisravi joined #gluster-dev
14:51 deepakcs joined #gluster-dev
15:04 kkeithley *** Upstream Weekly GlusterFS Community Meeting is on NOW in #gluster-meeting on irc.freenode.net ***
15:06 hagarth joined #gluster-dev
15:16 bala joined #gluster-dev
15:41 lalatenduM kkeithley, there is email about https://bugzilla.redhat.com/show_bug.cgi?id=1113543 in gluster-users sub: Gluster Failed on RPM Update
15:41 glusterbot Bug 1113543: low, unspecified, 3.6.0, kkeithle, MODIFIED , Spec %post server does not wait for the old glusterd to exit
15:42 hchiramm lalatenduM, yep..
15:42 kkeithley indeed
15:42 hchiramm dont we have the fix in 3.5.2 ?
15:43 hchiramm I think we have it .
15:43 kkeithley the Fedora dist-git glusterfs.spec has it for 3.5.2
15:44 hchiramm yep..
15:44 lalatenduM yeah we have the fix
15:44 lalatenduM maybe I need to extract the src rpm too
15:45 kkeithley we didn't exactly love that fix; it was the best we could do though
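
(The fix being discussed for bug 1113543 makes the server package's %post scriptlet wait for the old glusterd to exit before the new one is started. The snippet below is only a sketch of that general approach, not the actual upstream patch; the pid-file path and service invocation are assumptions.)

    # Sketch of a %post that waits (bounded) for the old glusterd to go away.
    if [ -f /var/run/glusterd.pid ]; then
        oldpid=$(cat /var/run/glusterd.pid)
        for i in $(seq 1 30); do
            kill -0 "$oldpid" 2>/dev/null || break
            sleep 1
        done
    fi
    /sbin/service glusterd start >/dev/null 2>&1 || :
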
15:47 lalatenduM extracted the src rpm again to see the fix is present
15:47 lalatenduM in it
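
(Checking whether a given fix made it into a source RPM doesn't require a rebuild; unpacking the SRPM, or reading the scriptlets straight from the binary RPM, is enough. The file names below are placeholders.)

    # Unpack the SRPM and inspect the spec...
    mkdir /tmp/glusterfs-srpm && cd /tmp/glusterfs-srpm
    rpm2cpio /path/to/glusterfs-3.5.2-1.src.rpm | cpio -idmv
    grep -A10 '%post server' glusterfs.spec

    # ...or read the installed scriptlets directly from the server binary RPM.
    rpm -qp --scripts /path/to/glusterfs-server-3.5.2-1.x86_64.rpm | less
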
16:10 hchiramm JustinClift++
16:10 glusterbot hchiramm: JustinClift's karma is now 6
16:26 hchiramm lalatenduM, did u verify that fix ?
16:36 lalatenduM hchiramm, nope
16:36 hchiramm k
16:50 dlambrig_ joined #gluster-dev
18:10 lpabon joined #gluster-dev
18:11 hchiramm joined #gluster-dev
18:19 sac`away joined #gluster-dev
18:19 hchiramm joined #gluster-dev
18:29 _Bryan_ joined #gluster-dev
18:56 kkeithley semiosis: pbuilder question, trying to build 3.5.2 for wheezy.   I have glusterfs_3.5.2.orig.tar.gz (renamed from glusterfs-3.5.2.tar.gz),  I've hacked up the glusterfs_3.5.2-1.dsc file, changed all refs from 3.5.1 to 3.5.2, reused the glusterfs_3.5.1-1.debian.tar.gz by renaming it glusterfs_3.5.2-1.debian.tar.gz and confirmed there's nothing in it with a 3.5.1
18:57 kkeithley but when I try to build, it errors out with dpkg-source: error: can't build with source format '3.0 (quilt)': no upstream tarball found at ../glusterfs_3.5.1.orig.tar.{bz2,gz,lzma,xz}
18:58 kkeithley I can't figure out where it's coming up with glusterfs_3.5.1.orig.tar.gz
18:58 semiosis kkeithley: debian/changelog
18:58 kkeithley hmmm, okay
18:59 semiosis sorry i've been remiss building debs for the last release.  life got in the way but i'm getting back on track
18:59 kkeithley np
18:59 semiosis making ubuntu debs now
18:59 kkeithley wasn't pmatthai doing .debs for debian
18:59 kkeithley ?
18:59 semiosis i have a short list of bugs, one of which stumped me, so i didnt do the release, but i am doing releases now without the fix
19:00 semiosis patrick is a debian developer & the official maintainer of the glusterfs packages in Debian
19:00 kkeithley for Debian too? Or just Ubuntu?
19:00 semiosis the stumper bug was just ubuntu
19:01 * kkeithley didn't think a changelog would affect the build
19:01 semiosis patrick handles debian process for the package
19:01 kkeithley okay
19:01 semiosis and works with upstream when there's an issue, filing bugs, etc
19:02 semiosis you can use dch -i to update the changelog, or edit it manually
19:02 semiosis the first line in the top section determines the version & distro release to build
19:02 JoeJulian semiosis: which bug? Anything I could help with?
19:03 kkeithley aha
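
(To spell out what semiosis is pointing at: dpkg-source derives the expected glusterfs_<version>.orig.tar.gz name from the top entry of debian/changelog, so hand-editing the .dsc gets overwritten the next time the source package is built. A rough sketch of the usual flow, assuming the previous packaging has already been unpacked alongside the new tarball:)

    # Upstream tarball renamed to what dpkg-source expects.
    cp glusterfs-3.5.2.tar.gz glusterfs_3.5.2.orig.tar.gz

    # Bump the version/distro in debian/changelog; this is what drives the
    # orig tarball lookup, the .dsc contents, and the checksums.
    cd glusterfs-3.5.2
    dch -v 3.5.2-1 -D wheezy "New upstream release"

    # Regenerate the source package (fresh .dsc, checksums included)...
    dpkg-buildpackage -S -us -uc

    # ...and build the binaries in a clean chroot.
    sudo pbuilder build ../glusterfs_3.5.2-1.dsc
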
19:04 semiosis JoeJulian: upstart related.  the goal is to have the mounts held back until networking is up, unless glusterfs-server is installed, then hold the mounts until glusterfs-server is running *AND* ready to respond.  glusterfs-server in turn should start once networking is ready
19:04 semiosis several issues here
19:05 semiosis 1. glusterd daemonizes before it's ready to serve. seems like a sleep is needed after it daemonizes no matter what
19:05 JoeJulian kkeithley: The crashing bug discussed at the meeting has to do with failed fd migrations. We're still not sure what actually precipitates the failure (EINVAL). Since I can't find anything in the logs server-side, nor anything in the fd migration that would trigger that error, I'm wondering if it comes from fuse?
19:05 kkeithley is there some magic to updating the SHA1, SHA256, and md5/files section of the .dsc file, or that's just boring
19:05 semiosis 2. upstart didn't seem to be respecting its own config semantics w/r/t post-start returning before emitting the started event
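
(One possible shape for the readiness check semiosis describes in point 1: instead of a fixed sleep after glusterd daemonizes, poll it through the CLI until it answers, with a timeout so boot can't hang forever. This is a hypothetical sketch for an upstart post-start script section, not the shipped packaging.)

    # Wait up to ~30s for glusterd to answer CLI requests before declaring it started.
    for i in $(seq 1 30); do
        gluster volume list >/dev/null 2>&1 && exit 0
        sleep 1
    done
    exit 1
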
19:06 semiosis kkeithley: you shouldn't be editing the dsc by hand
19:06 semiosis kkeithley: i have a vbox vm that i use for debian builds.  i'd like to send it to you, along with my notes, which will make what you're doing very easy
19:07 semiosis kkeithley: i'd suggest just giving up on what you're doing now, you're way off track! :)
19:07 semiosis kkeithley: maybe you could automate my notes, it shouldn't be too difficult for you
19:07 kkeithley well, send me your notes. I have a qemu/kvm box running wheezy. I'll have to see where I can run vbox
19:08 semiosis kkeithley: ok forget about the vbox vm, i'll just tar up the important bits from it
19:09 kkeithley does it include building the apt repo? ;-)
19:09 semiosis yes
19:09 kkeithley semiosis++
19:09 glusterbot kkeithley: semiosis's karma is now 3
19:09 JoeJulian I wanted to help out with that too. Plus, we should put the ppa on download.g.o
19:09 semiosis with source package & everything crypto signed
19:09 kkeithley that's way too low
19:10 JoeJulian semiosis+=1000000
19:10 semiosis pfft
19:11 kkeithley I have a newer wheezy box than what I used 1.5 years ago, and hchiramm will be helping out too
19:12 semiosis JoeJulian: launchpad ppa's are really great. it would be much harder to do the builds without it.
19:12 JoeJulian I understand, but we do lack the download metrics from that.
19:12 semiosis johnmark figured out a way to pull stats from the launchpad API
19:12 semiosis never sent me the script though :(
19:13 JoeJulian Well, ok then...
19:13 JoeJulian Maybe we should move this into a project so we can all be members and contribute?
19:13 semiosis this sounds promising.  i'd feel a lot better knowing there was some redundancy around building debs
19:13 JoeJulian ... cause it looks like I'll be stuck with them for a while...
19:14 semiosis we should move the official ppas into a gluster org account on launchpad
19:14 semiosis i'll work on that
19:14 JoeJulian semiosis++
19:14 glusterbot JoeJulian: semiosis's karma is now 4
19:16 semiosis ok ubuntu packages uploaded to launchpad & building now. the debian stuff is on my laptop, i'll get that out later
19:16 semiosis need to get back to work now
19:16 semiosis can still chat but wont have that debian build stuff until later
19:18 JoeJulian I'm going to be afk for a bit. I have to run down to Seattle to help a customer of a business I sold over 7 years ago... :/
19:23 shyam joined #gluster-dev
19:33 johnmark semiosis: actually, I never had a script
19:34 semiosis how'd you pull those stats then?
19:34 johnmark I just used the published API to get some very rough download stats
19:34 semiosis like, with curl?
19:34 johnmark from the command line
19:34 johnmark right
19:34 semiosis ok cool
19:34 johnmark but I didn't have the right kind of python foo to actually break down the stats into meaningful things
19:34 johnmark like, downloads by day and/or by version
19:34 semiosis oh
19:34 johnmark that would be nice
19:35 johnmark hang on... it's documented somewhere
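
(The Launchpad web-service API johnmark is referring to can be poked at with plain curl; each published binary in a PPA exposes download-count operations, which is also what tools like ppastats build on. The PPA owner and name below are placeholders, and the exact endpoint paths and operation names should be checked against the Launchpad API documentation.)

    # List published binaries in the PPA (JSON).
    curl -s "https://api.launchpad.net/1.0/~OWNER/+archive/ubuntu/PPANAME?ws.op=getPublishedBinaries&status=Published" \
        | python -m json.tool | less

    # Follow an entry's self_link to ask for its counters, e.g.:
    curl -s "<binary_self_link>?ws.op=getDownloadCount"
    curl -s "<binary_self_link>?ws.op=getDailyDownloadTotals"
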
19:52 ndevos semiosis: fwiw, it always bothered me that glusterd daemonizes before it has started its child processes, you could file a bug or rfe for that
20:15 ndk joined #gluster-dev
20:15 johnmark semiosis: this looks promising: http://wpitchoune.net/blog/ppastats/
20:16 JoeJulian ndevos: I'm sure I filed a bug for that once upon a time...
20:26 ndevos JoeJulian: is it listed on your https://bugzilla.redhat.com/frontpage.cgi ?
20:28 ndevos JoeJulian: never mind, found it! Bug 1010068
20:28 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1010068 unspecified, unspecified, ---, kparthas, NEW , enhancement: Add --wait switch to cause glusterd to stay in the foreground until child services are started
20:28 semiosis ndevos: yeah as i was typing it earlier today i was thinking to myself "why didnt i just file a bug?!"
20:28 semiosis oh good
20:29 ndevos semiosis: please add your comments too, and I'll try to follow up later this week
20:29 semiosis commenting now :)
20:29 ndevos thanks!
20:32 semiosis done. yw.
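
(Bug 1010068 only proposes the --wait switch; it doesn't exist at this point. What does exist is glusterd's no-daemon mode, which at least lets a supervising init keep the process in the foreground while the readiness question is sorted out upstream.)

    # Run glusterd in the foreground; -N is the existing --no-daemon flag.
    glusterd -N
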
21:00 johnmark semiosis: w00t! victory. it works
21:00 * johnmark uploading to gluster.org server
21:06 semiosis nice
21:06 semiosis got link?
21:17 johnmark semiosis: see http://download.gluster.org/logs/ppa/
21:18 semiosis requires auth
21:18 johnmark pm
21:36 shyam joined #gluster-dev
22:16 awheeler joined #gluster-dev
22:50 bala joined #gluster-dev
