IRC log for #gluster-dev, 2012-11-15


All times shown in UTC.

Time Nick Message
01:00 inodb_ left #gluster-dev
01:53 maxiz joined #gluster-dev
02:12 sunus joined #gluster-dev
03:07 bharata joined #gluster-dev
03:15 bulde joined #gluster-dev
03:28 quillo joined #gluster-dev
03:46 sripathi joined #gluster-dev
03:52 bulde__ joined #gluster-dev
04:01 quillo joined #gluster-dev
04:03 bulde joined #gluster-dev
04:18 sripathi joined #gluster-dev
04:53 shireesh joined #gluster-dev
05:10 mohankumar joined #gluster-dev
05:31 raghu joined #gluster-dev
05:42 bulde joined #gluster-dev
05:44 sripathi joined #gluster-dev
05:45 hagarth joined #gluster-dev
05:53 bala1 joined #gluster-dev
05:53 bharata joined #gluster-dev
05:55 18VAAE11C joined #gluster-dev
06:02 hagarth joined #gluster-dev
06:12 sunus joined #gluster-dev
06:25 raghu joined #gluster-dev
06:39 deepakcs joined #gluster-dev
06:52 yliu joined #gluster-dev
06:57 bharata joined #gluster-dev
06:59 hagarth joined #gluster-dev
07:35 sunus joined #gluster-dev
07:36 sunus joined #gluster-dev
07:57 xavih joined #gluster-dev
07:58 14WAAOVQ4 joined #gluster-dev
08:00 sunus joined #gluster-dev
08:01 14WAAOVQ4 left #gluster-dev
08:24 lkoranda joined #gluster-dev
08:29 sripathi joined #gluster-dev
08:36 JoeJulian kkeithley: In your next build, might gluster-swift provide python-swiftclient to satisfy the dependency for installing openstack-glance-2012.2?
08:37 hagarth joined #gluster-dev
08:56 bulde joined #gluster-dev
08:56 gbrand_ joined #gluster-dev
09:25 mohankumar kp_: thanks for the review
09:28 kp_ mohankumar, Anytime. Thanks for being patient.
09:31 mohankumar kp_: :)
09:56 deepakcs joined #gluster-dev
09:58 gbrand_ joined #gluster-dev
11:24 sripathi joined #gluster-dev
11:30 quillo joined #gluster-dev
11:45 sripathi joined #gluster-dev
11:48 shireesh joined #gluster-dev
12:05 bulde jdarcy: ping
12:05 bulde jdarcy: replied to your comments on inline fns
12:25 kkeithley1 joined #gluster-dev
12:33 shireesh joined #gluster-dev
12:41 puebele joined #gluster-dev
12:45 edward1 joined #gluster-dev
12:50 quillo joined #gluster-dev
13:27 bfoster joined #gluster-dev
13:32 mohankumar joined #gluster-dev
14:25 maxiz joined #gluster-dev
14:58 maxiz joined #gluster-dev
14:59 hagarth joined #gluster-dev
15:35 sghosh joined #gluster-dev
16:00 raghu joined #gluster-dev
16:03 puebele joined #gluster-dev
16:05 bulde joined #gluster-dev
16:16 wushudoin joined #gluster-dev
17:39 bulde joined #gluster-dev
19:09 the-me johnmark: I need a patch for 3.2.7 about CVE-2012-4417, could you help me?
19:13 johnmark the-me: possibly?
19:13 johnmark what exactly do you need?
19:22 semiosis https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2012-4417
19:22 glusterbot Bug CVE: could not be retrieved: InvalidBugId
19:23 semiosis http://goo.gl/9cGAC
19:23 JoeJulian That's an effing stupid CVE. It's on purpose!
19:24 JoeJulian Yeah, I want a dump but I want it to be randomized so I have no idea which process it's dumping for...
19:24 JoeJulian lame.
19:25 puebele joined #gluster-dev
19:26 semiosis https://rhn.redhat.com/errata/RHSA-2012-1456.html
19:28 semiosis JoeJulian: unpriv user could make a symlink to arbitrary file at the known location in /tmp where glusterfs will write as root, thus enabling unpriv user to overwrite arbitrary files on the filesystem as root
19:29 * JoeJulian sighs... ok... I guess that's more valid than I thought.
19:29 * JoeJulian slinks quietly back into his work.
19:31 the-me johnmark: patch for the 3.2.7 one, so that it could be fixed :)
19:31 semiosis i'm looking into it
19:32 the-me thanks
19:33 semiosis yw
19:33 gbrand_ joined #gluster-dev
19:34 semiosis according to that page i last linked (errata/RHSA...) this has been patched in RHS, but looking at the code on github i still see the getpid() call
19:34 semiosis how/when do code changes from RHS come back into glusterfs?
19:35 the-me would be nice if there is a fix and it becomes public..
19:35 semiosis i agree
19:35 johnmark semiosis: it happens in upstream first
19:35 johnmark so it *should* be fixed
19:36 johnmark avati_: ping
19:36 semiosis https://github.com/gluster/glusterfs/blob/master/libglusterfs/src/statedump.c#L652
19:37 semiosis oh i think i see the fix... https://github.com/gluster/glusterfs/blob/master/libglusterfs/src/statedump.c#L726
19:38 semiosis L726 generates the file name with an additional (random-ish) timestamp on the end?
19:38 semiosis if i am reading that right
19:43 the-me hum I do not find the commit in git log..
19:44 johnmark the-me: if it's fixed, would be nice to find a reference to the bug
19:45 semiosis looks like it's fixed in git master
19:45 semiosis imo, but iana-c programmer
19:46 semiosis looks like release-3.2 and release-3.3 branches are not fixed, again imo
19:46 the-me johnmark: I was just bugged by debian security about this CBE
19:46 the-me s/CBE/CVE/
19:47 johnmark the-me: ok
19:47 johnmark if you look at the diff:
19:47 johnmark https://github.com/gluster/glusterfs/commit/3d10587d9d6400c9141b1f278bb5e2027fa784b8
19:47 johnmark it would sure appear to be a randomizing of the filename in tmp
19:48 johnmark but I'd like to get a confirmation
19:48 johnmark hagarth: ^^
19:48 the-me johnmark: lol also just found it... "code cleanup", hmkay :/
19:48 johnmark exactly :)
19:49 kkeithley_wfh well, that change is associated with BZ 764890, not the CVE
19:50 johnmark kkeithley_wfh: can you investigate and see if the RHS fix was applied?
19:50 kkeithley_wfh sure, hang on
19:50 semiosis and if it was applied, was it applied only to master, or to release branches as well?  (seems like only master to me)
19:53 kkeithley_wfh The RHS source has the same code at HEAD
19:55 the-me it also doesn't look like this patch is easily adaptable to 3.2.7
19:56 nick5 joined #gluster-dev
20:00 the-me no; I could write it myself, but I do not want to damage it :)
20:01 kkeithley_wfh Looks to me like the "fix" from HEAD would work on 3.2.7. (I would have used mkstemp(3), not mkostemp(3); there was a BZ from Emmanuel Dreyfus recently, IIRC, highlighting that mkostemp isn't portable. See also JoeJulian's comment about pure random vs. a mkstemp() call with the pid in the template, which ought to be safe.)
20:04 the-me kkeithley_wfh: yeah, but the code from HEAD is too different from 3.2.7 within the affected functions and I do not want to have negative side effects. so something official would be nice..
20:07 kkeithley_wfh Yes, we should do something official. I think that goes without saying.
20:09 the-me btw what is the redhat way atm with supporting older release branches like 3.2.x (after they took over gluster)?
20:10 kkeithley_wfh Red Hat RHS/RHSS? Or Fedora/EPEL?
20:10 the-me in general
20:10 semiosis i was asking the same thing recently
20:11 semiosis if there's an official position, i have not heard it :/
20:11 kkeithley_wfh Fedora/EPEL would probably fall to me to do something, because the other glusterfs maintainers haven't been doing much.
20:11 the-me semiosis: .. and what is your position within glusterfs? :)
20:11 semiosis the-me: yeah.... i'm just this guy, you know?
20:12 JoeJulian semiosis: is a member of the board...
20:12 the-me semiosis: yeah, and now you are this guy
20:12 semiosis JoeJulian: oh yeah, i am!
20:12 JoeJulian :D
20:12 kkeithley_wfh I don't do release engineering for RHS/RHSS, and don't have much (i.e. any) visibility into that side of things.
20:13 the-me kkeithley_wfh: RHS,Fedora,CentOS are "just" distributions, it would be interesting what way "gluster" goes
20:14 kkeithley_wfh I do kinda have carte blanche with the Fedora/EPEL stuff to DTRT. In my copious spare time.
20:15 * semiosis had to look up Do The Right Thing (DTRT)
20:16 kkeithley_wfh At least you didn't have to look up carte blanche ;-)
20:16 the-me kkeithley_wfh: heh and I have to fiddle through the community and public information to get it ;)
20:16 semiosis kkeithley_wfh: hahaha
20:16 the-me damn life of a DD
20:18 JoeJulian It's one thing to say, as a board, we want X. But really we have to take into consideration how much the people that actually write the code want to support what might be (in their minds) old dead code that was done the wrong way in the first place. As someone that does that for some of Ed Wyse's legacy code, I know how frustrating that can be.
20:19 semiosis it would be unrealistic to ask for that... even if it were possible, it would be at the cost of less progress on master
20:20 semiosis imho better to encourage users to upgrade than devs to maintain old versions
20:20 semiosis (says the guy running 3.1.7 in production :)
20:20 the-me JoeJulian: I am not familiar with the position "board". I can just speak for another famous distribution and most of its users. that is my position :-)
20:20 semiosis http://www.gluster.org/advisors/
20:22 JoeJulian We should get a consensus though and make some formal commitment.
20:23 the-me it would be just nice to know if 3.2.x is still supported with critical/security patches *currently*
20:25 the-me not only with private patches for some paying customers, but for the public
20:26 kkeithley_wfh FWIW, hagarth_ has said that if the situation warrants it, there would be a 3.2.8.
20:26 semiosis the-me: looks like even the "paying customers" who use RedHat Storage are using 3.3
20:26 kkeithley_wfh The Red Hat mantra is "upstream first". If a paying customer gets a private patch, it'll be upstream first.
20:28 kkeithley_wfh even if it's only upstream for a few hours first
20:28 kkeithley_wfh I just bumped the priority and severity of the CVE BZ
20:29 the-me kkeithley_wfh: don't get me wrong, I just also have to plan with the current 3.2.x base for some more years :)
20:31 kkeithley_wfh yes, me too, but so far I don't have anyone nagging me about fixing vulnerabilities that aren't already fixed upstream. ;-)
20:31 kkeithley_wfh And I hope I haven't just jinxed myself
20:33 the-me kkeithley_wfh: debian-security didn't mark this as a release stopper for glusterfs, so glusterfs may be included in wheezy without this patch, but this is not my intention
20:34 the-me since I am maintaining it for the users who want to have safe software, and I also have to know for the future how glusterfs is supported, especially if really hard security issues are discovered
20:45 johnmark JoeJulian: +1
20:46 johnmark JoeJulian: sounds like something the board can help resolve
20:50 JoeJulian johnmark: I was wondering how long it would be before you +1'd that. :D
20:56 johnmark JoeJulian: well, I was away from my desk for a bit :)
22:28 davdunc joined #gluster-dev
23:36 badone joined #gluster-dev
