
IRC log for #gluster-dev, 2014-10-30


All times shown according to UTC.

Time Nick Message
02:47 ilbot3 joined #gluster-dev
02:47 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
03:28 hagarth joined #gluster-dev
03:33 kanagaraj joined #gluster-dev
03:36 hchiramm_ joined #gluster-dev
03:40 bala joined #gluster-dev
03:42 shubhendu joined #gluster-dev
03:48 kshlm joined #gluster-dev
03:51 ira joined #gluster-dev
03:57 ppai joined #gluster-dev
04:04 kshlm joined #gluster-dev
04:07 kshlm joined #gluster-dev
04:20 Rafi_kc joined #gluster-dev
04:21 rafi1 joined #gluster-dev
04:23 jiffin joined #gluster-dev
04:25 anoopcs joined #gluster-dev
04:33 atinmu joined #gluster-dev
04:37 nishanth joined #gluster-dev
04:52 anoopcs joined #gluster-dev
04:59 ndarshan joined #gluster-dev
05:07 shubhendu joined #gluster-dev
05:16 bala joined #gluster-dev
05:18 ira joined #gluster-dev
05:45 hagarth joined #gluster-dev
05:52 atalur joined #gluster-dev
05:54 kdhananjay joined #gluster-dev
06:05 Humble rafi1, can u give me that patch url ?
06:06 rafi1 Humble: https://review.gluster.org/#/c/8762/
06:11 soumya joined #gluster-dev
06:14 anoopcs1 joined #gluster-dev
06:16 anoopcs1 joined #gluster-dev
06:19 anoopcs joined #gluster-dev
06:21 anoopcs joined #gluster-dev
06:22 anoopcs joined #gluster-dev
06:31 ira joined #gluster-dev
06:32 Humble rafi1, contacted manu on this.. check ur inbox
06:33 Humble lets wait for him
06:34 Rafi_kc thanks Humble
06:34 Rafi_kc Humble++
06:34 glusterbot Rafi_kc: Humble's karma is now 8
07:13 rgustafs joined #gluster-dev
07:18 atinmu joined #gluster-dev
07:22 raghu joined #gluster-dev
07:42 atinmu joined #gluster-dev
07:53 Humble hagarth, can u please review/merge this patch http://review.gluster.org/#/c/8379/
07:56 Humble hagarth++
07:56 glusterbot Humble: hagarth's karma is now 18
08:26 lalatenduM joined #gluster-dev
08:41 vikumar joined #gluster-dev
08:56 atinmu joined #gluster-dev
09:25 atinmu joined #gluster-dev
09:33 kshlm joined #gluster-dev
09:34 kshlm joined #gluster-dev
09:40 lalatenduM joined #gluster-dev
09:48 hagarth joined #gluster-dev
09:49 atinmu joined #gluster-dev
09:50 aravindavk joined #gluster-dev
09:51 shubhendu joined #gluster-dev
09:56 ppai joined #gluster-dev
10:05 pranithk joined #gluster-dev
10:15 rgustafs joined #gluster-dev
10:22 ira joined #gluster-dev
10:28 shyam joined #gluster-dev
10:31 Humble rafi1, ping
10:31 Humble can u come here
10:31 rafi1 Humble: pong
10:31 glusterbot Humble: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
10:31 rafi1 Humble: coming
10:43 kshlm joined #gluster-dev
10:44 kkeithley1 joined #gluster-dev
10:51 atinmu joined #gluster-dev
10:51 xavih JustinClift++ :)
10:51 glusterbot xavih: JustinClift's karma is now 27
10:53 ppai joined #gluster-dev
10:57 kkeithley1 pranithk: ping.  Any status update on http://review.gluster.org/8923 ?
10:59 kkeithley1 JustinClift: Don't plan to go in to London this weekend. I need to get the cards out of the systems they're in. I can probably get that done tomorrow or Monday.
11:03 JustinClift kkeithley_: No worries. :)
11:03 JustinClift kkeithley_: Will do it the weekend after then I guess.  Just let me know, etc.
11:09 kkeithley_ I forgot that we put the old cards into other machines after we did the upgrade. Even if we hadn't done that, I still have to go in to the office to get them shipped.
11:10 JustinClift np
11:13 lpabon joined #gluster-dev
11:14 pranithk kkeithley: I think that is just a spurious failure?
11:14 pranithk kkeithley: I triggered one more build. We shall see based on that?
11:15 kkeithley_ nixpanic, ndevos, hchiramm_: I'm of a mind to add a -compat package for 3.6 for the qemu/kvm/vdsm folks. Do we need anything more than a symlink libgfapi.so.0.0.0 -> libgfapi.so.0.0.X? I don't think we do. (And I'd like to avoid versioning symbols in the shlib.)
11:19 shubhendu joined #gluster-dev
11:20 kkeithley_ nixpanic, ndevos, hchirram_: And what are you thoughts about 3.6 in f21 and f22/rawhide and leaving f20 and f19 at 3.5.x. Is that still what we want to do?
11:23 Humble kkeithley++ indeed its a good idea to have a compat package
11:23 glusterbot Humble: kkeithley's karma is now 27
11:23 Humble rather than keep watching and building all these packages
11:24 Humble I think having a symlink from so.0.0.0 -> 0.0.X should do the trick ..
11:24 lalatenduM Humble, ndevos kkeithley, I like idea as well
11:24 Humble however we need to check it througly
11:25 Humble because 'ldd' look for 'so.X' which is a link ..
11:26 * Humble cross-checking ^^^^
11:26 kkeithley_ yes, it's actually libgfapi.so.0.0.0 -> libgfapi.so.X.0.0
11:27 Humble oh.. yes
11:28 kkeithley_ I can quickly make a scratch build of the last 3.6.0 beta with an added compat package and give it to Anders to try out. (Unless someone else wants to do that.)
11:29 Humble sounds like a plan
11:30 Humble kkeithley_, do u see any issue in future because of compat package presence ?
11:31 * Humble : thinking
11:31 kkeithley_ There shouldn't be.  The libgfapi.so -> libgfapi.so.7.0.0 symlink is what applications link with, so libfapi.so.7.0.0 is what applications load at run-time.
11:32 kkeithley_ do an `ldd $foo` and you should see libgfapi.so.7.0.0 listed
11:33 kkeithley_ adding a libgfapi.so.0.0.0 -> libgfapi.so.7.0.0 symlink will let old qemu find the new library. Since the API and ABI of the old symbols haven't changed it should just work. (Famous last words, I know. ;-))
11:34 Humble :) .. \o
11:34 kkeithley_ nothing else will ever use that libgfapi.so.0.0.0 symlink from the -compat package
11:35 Humble thanks.. I think this should work
11:35 an joined #gluster-dev
11:36 Humble kkeithley++
11:36 glusterbot Humble: kkeithley's karma is now 28
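
A minimal sketch of the -compat symlink approach discussed above (the libgfapi.so.7.0.0 name comes from the conversation; the install path and the qemu-img binary are assumptions). Note that the runtime loader resolves the DT_NEEDED soname (libgfapi.so.0), so that is the link old binaries actually follow:

    # Assumed layout under /usr/lib64 with glusterfs-api plus the proposed -compat package:
    #   libgfapi.so.7.0.0                         the real library
    #   libgfapi.so.7      -> libgfapi.so.7.0.0   soname link used by the loader
    #   libgfapi.so.0      -> libgfapi.so.7.0.0   compat link for old qemu/kvm/vdsm builds
    #   libgfapi.so.0.0.0  -> libgfapi.so.7.0.0   compat link, for completeness
    ln -s libgfapi.so.7.0.0 /usr/lib64/libgfapi.so.0
    ln -s libgfapi.so.7.0.0 /usr/lib64/libgfapi.so.0.0.0

    # An old binary linked against libgfapi.so.0 should now resolve it:
    ldd /usr/bin/qemu-img | grep gfapi
    #   libgfapi.so.0 => /usr/lib64/libgfapi.so.0 (0x...)
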
11:37 Humble kkeithley,  as a second thing we need to decide on glusterfs 3.6 -> glusterfs 3.6.1 releases :)
11:37 kkeithley_ You mean the quick 3.6.1 right after 3.6.0?
11:37 Humble yes..
11:38 Humble there are couple of thoughts ..
11:38 Humble either we can skip the 3.6.0 rpms and publish only 3.6.1 packages via download.g.o
11:39 Humble or start with 3.6.0.50 or so rpms on 3.6.0 GA  release
11:40 kkeithley_ just so I"m clear you mean 3.6.0.50, not 3.6.0-50. Because RHS has 3.6.0.xx-1. Correct?
11:40 Humble yep
11:41 Humble in effect , some number which is bit higher than the downstream version
11:42 Humble hagarth is ready to release 3.6.1 soon after 3.6.0 GA
11:42 kkeithley_ I don't have a _good_ reason for not liking 3.6.0.50.  I guess I'd be worried that if we didn't pick a high enough number that RHS could catch up or pass us
11:42 Humble thats true..
11:42 kkeithley_ We could just tag 3.6.1 right after 3.6.0. We don't even need any changes
11:43 kkeithley_ I've got conf call, need to dial in
11:43 Humble thats true.. but it looks like there are afrv2 patches which is not in
11:43 Humble kkeithley, go ahead.. will talk later
11:44 lalatenduM joined #gluster-dev
11:45 kkeithley_ either way, with or without any changes
11:47 Humble the second option  is also something which we can think of, ie skipping 3.6.0 rpms and only publishing 3.6.1 packages in download.g.o as releases are back to back ..
11:47 Humble if some one want 3.6.0 GA release , they have it in tar .
11:47 kkeithley_ sure
11:49 Humble I am more tangent to second option being it will avoid lots of confusion and if some one really want 3.6.0 , get the source tar and build the rpms..
11:49 Humble but I am ready to be flexible here :)
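
For context on the numbering debate: RPM compares the full version strings, so an upstream 3.6.0.xx build would have to stay ahead of whatever 3.6.0.xx downstream ships, while a plain 3.6.1 beats both. A quick check with rpmdev-vercmp (from the rpmdevtools package; the 3.6.0.28 and 3.6.0.60 downstream numbers are made up for illustration):

    rpmdev-vercmp 3.6.0.50-1 3.6.0.28-1   # expected: 3.6.0.50-1 is newer
    rpmdev-vercmp 3.6.0.50-1 3.6.0.60-1   # expected: 3.6.0.60-1 is newer -- the
                                          # "catch up or pass us" worry above
    rpmdev-vercmp 3.6.1-1    3.6.0.50-1   # expected: 3.6.1-1 is newer
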
11:50 lalatenduM Humble, kkeithley from fedora point of view ,  are we planning for compact pkg also or ?
11:51 kkeithley_ If Anders says it works, then yes,
11:51 Humble yep
11:54 Humble regarding
11:54 Humble -------->And what are you thoughts about 3.6 in f21 and f22/rawhide and leaving f20 and f19 at 3.5.x. Is that still what we want to do? -->
11:54 glusterbot Humble: ------'s karma is now -1
11:55 Humble I think we can leave f20 and f19 with GlusterFs 3.5
11:55 Humble we already covered 2 major GlusterFS versions on this
11:55 Humble getting 3.6 in f21 need an exception ... Isnt it ?
11:57 kkeithley_ strictly speaking, probably. I haven't ever had a problem releasing newer versions.
11:57 kkeithley_ It's all still 3.X. ;-)
11:58 soumya joined #gluster-dev
11:58 Humble lol :)
11:58 kkeithley_ I was just wondering, based on the qemu/kvm/vdsm think whether we should revisit our earlier decision
11:58 kkeithley_ s/think/thing/
11:59 lalatenduM kkeithley, Humble , sorry if I am asking a question which is already answered. what do we do with compact package when  the libgfapi.so.x is actually not compatible with the older version?
12:00 kkeithley_ I think that's where we have to start using versioned symbols in the library.
12:02 lalatenduM kkeithley, did not get u,  you mean versioned symbolic links of so.x?
12:02 kkeithley_ Just adding more APIs  (symbols) to the library doesn't break compatibility.
12:02 Humble lalatenduM, normally if new api is there , the major version of so will change
12:02 kkeithley_ Iff we change a function's API or ABI, or change the size of a table, that's what breaks the ABI and compatibility
12:03 Humble but if the functionality is getting changed with new implementation its a minor version change in so .
12:03 kkeithley_ I mean the symbol names inside the library
12:03 kkeithley_ They can be versioned
12:03 lalatenduM kkeithley, is it , the function i.e. api it self will be versioned?
12:03 lalatenduM kkeithley,  cool
12:04 kkeithley_ This is something to consider though
12:04 lalatenduM Humble, thinking abt an extreme case where symbols i.e. api had to changed drastically
12:04 kkeithley_ If a function changed dramatically, but still has the same name and signature, then it may not be compatible any more
12:04 kkeithley_ exactly
12:04 lalatenduM kkeithley, yup
12:04 Humble lalatenduM, in that case we should move from so.7 to so.>7
12:05 lalatenduM Humble, right, question is what will happen to comapct pkg then
12:05 kkeithley_ compat (not compact)
12:05 kkeithley_ sorry, my inner grammar policeman got out
12:05 lalatenduM kkeithley, oops sorry , compat it is
12:06 Humble compat package gives a smooth transition to link latest version of library thats it
12:06 Humble kkeithley, I heard he (inner grammar policeman) is on pto  ? :)
12:06 kkeithley_ I'm pretty sure that the old functions in gfapi haven't changed, other than minor bug fixes
12:07 Humble true,,
12:07 * ndevos doesnt mind a -compat package either
12:07 kkeithley_ he should be on permanent retirement ;-)
12:07 Humble hahahaha
12:07 Humble so all votes goes to compat package !!
12:08 edward1 joined #gluster-dev
12:08 lalatenduM Humble, kkeithley  I am still not clear what will be role of compat pkg when a api changes drastically , will we giving the compat pkg then too ?
12:09 Humble 'drastic' change is not expected to happen in a release .. thats what we want to believe
12:09 kkeithley_ So, we have to be careful that we don't change APIs and ABIs.
12:09 Humble soumya, ^^^^^
12:10 lalatenduM Humble, I am thinking between two major releases
12:10 kkeithley_ But we have a de facto standard. If we start changing gf_init() by adding new parameters or drastically changing what it does, we'd be breaking existing applications, and that would be a Bad Thing®
12:10 lalatenduM the compat package is a one time thing , is it?
12:11 Humble kkeithley, yeah.. it will break all the application which is using the function.. that should not happen at all..
12:11 kkeithley_ One time, meaning for all of 3.6.x.  Maybe for 3.7 we can drop it
12:11 Humble true..
12:12 kkeithley_ Even from release to release, we don't want to change the APIs and ABIs.
12:13 kkeithley_ E.g. if we need extra params in gf_init(), we should really write a new gf_initplus(), and keep the old gf_Init(). And maybe version those symbols in the library.
12:14 Humble in other words gf_initplus should be a wrapper on gf_init
12:14 kkeithley_ I'm not really up on symbol versioning. It's come up on the fedora -devel list recently and someone posted a link to a HOWTO blog.
12:14 kkeithley_ yes, it could be a wrapper
12:15 kkeithley_ Let's hope we don't ever fall into that hole
12:16 Humble yeah..
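
A rough sketch of what "versioning those symbols" could look like using a linker version script (gf_init/gf_initplus are the hypothetical names used above, and the GFAPI_* version node names are invented for illustration):

    # gfapi.map -- hypothetical version script:
    #   GFAPI_0.0 { global: gf_init;     local: *; };
    #   GFAPI_7.0 { global: gf_initplus; } GFAPI_0.0;

    # Build the library with the script and check the exported symbols:
    gcc -shared -fPIC -Wl,-soname,libgfapi.so.7 \
        -Wl,--version-script=gfapi.map -o libgfapi.so.7.0.0 gfapi.c
    readelf --dyn-syms libgfapi.so.7.0.0 | grep gf_init
    #   gf_init@@GFAPI_0.0       old entry point, unchanged for existing applications
    #   gf_initplus@@GFAPI_7.0   new entry point (possibly a wrapper over gf_init)
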
12:17 Humble any objection on skipping 3.6.0 packages and starting with 3.6.1 ? or any thoughts ? we need a decision
12:17 lalatenduM kkeithley, Humble , lets consider we did compat pkg for 3.6.X and in 3.X (X> 6) we want to drop it. Should we ask users to uninstall the compat pkg while installing 3.X
12:18 kkeithley_ +1 for straight to 3.6.1 for packaging; Fedora and d.g.o. Let's make sure semiosis knows so he can do the same for Debian and Ubuntu
12:19 lalatenduM Humble, IMO we should still package 3.6.0 for d.g.o and let use take the decision of using or not using it
12:19 kkeithley_ in 3.X (X>6) we would add an Obsoletes: glusterfs-api-compat
12:19 lalatenduM s/let use/let user/
12:19 kkeithley_ to the RPM .spec fiel
12:19 Humble lalatenduM, what adv. he is getting
12:19 kkeithley_ to teh RPM .spec file
12:20 Humble if both releases are same ?
12:20 * kkeithley_ can't type
12:21 kkeithley_ so, glusterfs-api-compat will be the -compat RPM?
12:21 Humble looks good to me
12:21 lalatenduM Humble, I think you said there will be few more patches in 3.6.1 then 3.6.0 . So we should not be taking decisions for users/administrators , as I see , the job of packagers is to package upstream release , thats it
12:22 lalatenduM kkeithley, glusterfs-api-compat looks fine
12:22 lalatenduM kkeithley++
12:22 glusterbot lalatenduM: kkeithley's karma is now 29
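
A sketch of how the subpackage might look in the spec file (the glusterfs-api-compat name and the Obsoletes: idea come from the discussion; the Summary text, Requires line and file list are assumptions):

    %package api-compat
    Summary:  Compatibility symlinks for applications linked against libgfapi.so.0
    Requires: %{name}-api%{?_isa} = %{version}-%{release}

    %files api-compat
    %{_libdir}/libgfapi.so.0
    %{_libdir}/libgfapi.so.0.0.0

    # Later, when the compat package is dropped in 3.X (X > 6), retire it on upgrade:
    # Obsoletes: glusterfs-api-compat < %{version}
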
12:22 Humble lalatenduM, at the same time it is our responsibility to avoid confusion if its not worth..
12:23 Humble its matter of 1/2 patches, that it meant to be adding value and not destructing their setup :)
12:23 lalatenduM Humble, the confusion is created by us, we dont control versions in RHEL
12:23 lalatenduM s/is/is not/
12:23 lalatenduM :)
12:23 Humble lalatenduM, :) thats true ..
12:24 Humble the users are free to use source tar ball and build rpms if they want or do it from source installation
12:24 Humble so we should not be worrying much about it..
12:25 Humble if we really want we can make 3.6.0 and 3.6.1 with no difference
12:29 lalatenduM Humble,  I think 3.6.1 with more bug fixes will be better
12:31 JustinClift Hmmm, I still can't login to build.gluster.org
12:32 ndevos JustinClift: I can!
12:32 JustinClift kkeithley_: The other day when you added my new key to build.gluster.org, which account did you add it to?
12:32 kkeithley_ If the fixes will land soon enough
12:32 kkeithley_ jclift I think, let me look
12:32 JustinClift kkeithley_: Tx
12:32 Humble yeah.. if we have quick fixes we can include
12:33 Humble lalatenduM, the whole point is avoiding confusion and breakage of upgrade/installation
12:33 Humble which does not look good..
12:33 Humble we will make sure downstream will follow standards from here onwards.
12:34 Humble this step can be a final exception ..
12:34 kkeithley_ that's odd. When I added there were both authorized_keys and authorized_keys2 files, and I added the key to both. Now there's only authorized_keys and the key isn't there
12:35 JustinClift Weird
12:35 kkeithley_ no, they're both there.
12:36 kkeithley_ and both keys are in both files
12:36 JustinClift Hmmm.  I keep getting permission denied (public key)
12:36 JustinClift Wonder wtf is up with it.  Perms on this end are good.  I use the same key to ssh elsewhere.
12:36 kkeithley_ mode on .ssh was bad. Try now
12:37 kkeithley_ dunno how that happened
12:37 JustinClift $ ssh -l jclift build.gluster.org
12:37 JustinClift Permission denied (publickey).
12:37 JustinClift kkeithley_: What're the perms for .ssh/auth* ?
12:38 JustinClift 600
12:38 JustinClift ?
12:38 kkeithley_ yup
12:38 JustinClift Hmmm.  Which port is ssh bound to?
12:38 JustinClift 22 as per standard, or something else?
12:38 kkeithley_ 0700 for .ssh/, 0600 for .ssh/auth*
12:38 kkeithley_ 22
12:38 JustinClift k, so it's not that
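
For reference, the permissions being checked here are the standard sshd StrictModes requirements; the usual fix looks like this (home directory path assumed):

    chmod go-w ~jclift                        # home dir must not be group/world writable
    chmod 700  ~jclift/.ssh                   # the .ssh directory itself
    chmod 600  ~jclift/.ssh/authorized_keys*  # key files
    ssh -v -l jclift build.gluster.org        # -v shows which keys are offered/rejected
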
12:38 JustinClift Can you fpaste the authorized keys files?
12:39 atalur joined #gluster-dev
12:39 JustinClift The public key I'm using is this one: http://fpaste.org/146452/14146724/
12:39 * JustinClift just changed the justin@gluster.org bit on the end
12:39 JustinClift But that shouldn't affect things
12:40 kkeithley_ yup, that's the key you sent me and it's the second key in both authorized_keys files
12:41 JustinClift Bizarre
12:41 JustinClift Hmmm, lets try a different approach
12:41 kkeithley_ http://paste.fedoraproject.org/146454/14146728
12:41 JustinClift Would you be ok to append it to the root .ssh authorized_keys* files?
12:42 JustinClift Just in case there's something wrong with the jclift one.
12:42 JustinClift As long as I can get in, I can damn well fix it. :)
12:42 kkeithley_ yup, hang on a sec
12:43 JustinClift Um...
12:43 JustinClift That authorized_keys files you pasted
12:43 JustinClift The 2nd line key in it is damaged
12:43 kkeithley_ oh
12:43 JustinClift It's not the same
12:43 kkeithley_ hah, that explains a lot. ;-)
12:44 JustinClift Yep. ;)
12:44 JustinClift Bad cut-n-paste ;)
12:44 kkeithley_ I guess
12:44 kkeithley_ sorry about that
12:44 kkeithley_ try now
12:44 JustinClift np
12:44 JustinClift tx
12:44 JustinClift Trying
12:44 JustinClift Yep, I'm in
12:44 JustinClift Thanks. :)
12:44 kkeithley_ yw
12:47 soumya Humble, sorry for jumping in late..
12:48 soumya lalatenduM just explained me this version-ing problem :)
12:48 soumya lalatenduM++
12:48 glusterbot soumya: lalatenduM's karma is now 30
12:48 pranithk left #gluster-dev
12:49 hagarth joined #gluster-dev
12:51 lalatenduM soumya, :)
12:54 Humble soumya, np :)
12:54 soumya :)
12:59 lalatenduM joined #gluster-dev
13:03 an joined #gluster-dev
13:10 shyam joined #gluster-dev
13:20 an joined #gluster-dev
13:28 kshlm joined #gluster-dev
13:55 kkeithley_ Would someone please review  http://review.gluster.org/8923 so that I can merge it and start wrapping up the 3.4.6 release?   Thanks
13:55 ndevos kkeithley_: oh, we'll do a triple release this week?
13:57 kkeithley_ I hope so.  If we want the logrotate fix in 3.4.6 that might bog things down
13:59 kkeithley_ Ah, I see you merged it in release-3.5
13:59 kkeithley_ and it's in release-3.6
14:14 jobewan joined #gluster-dev
14:18 kkeithley_ thanks Niels. ndevos++
14:18 glusterbot kkeithley_: ndevos's karma is now 39
14:22 wushudoin joined #gluster-dev
14:28 _Bryan_ joined #gluster-dev
14:30 kkeithley_ One more review  http://review.gluster.org/9013 !  It's an easy one. ;-) Maybe one of our lurkers wants to take a shot?
14:31 lalatenduM joined #gluster-dev
14:42 ndevos I think lalatenduM should review http://review.gluster.org/9013
14:44 lalatenduM ndevos, checking
14:44 bala joined #gluster-dev
14:44 kkeithley_ I'll take anything I can get. ;-)
14:45 * kkeithley_ was just trying to draw out some of the lurkers
14:47 JustinClift kkeithley_: Would you be ok to copy my public key to www.gluster.org and review.gluster.org as well?
14:47 kkeithley_ yep
14:48 kkeithley_ no interactive shell on review
14:49 JustinClift kkeithley_: Ahhh, that's right.  It needs to be added to the root user there, and I have to connect on a different port
14:49 * JustinClift will adjust his local .ssh/config to do it automatically
14:50 hchiramm_ kkeithley,   http://review.gluster.org/9014
14:51 kkeithley_ hchiramm_++
14:51 glusterbot kkeithley_: hchiramm_'s karma is now 5
14:51 kkeithley_ Humble++
14:51 glusterbot kkeithley_: Humble's karma is now 9
14:52 kkeithley_ JustinClift: you're all set on www.gluster.org
14:52 JustinClift Tx
14:52 JustinClift Yep, that's working
14:53 lalatenduM kkeithley, abt http://review.gluster.org/#/c/9013/1 , we have to change the specfile too isn't? so that it does not use other logrotate files from Sources (fedora gist)
14:55 kkeithley_ Yes, we need to do that in the Fedora dist-git glusterfs.spec
14:55 lalatenduM kkeithley, cool
14:55 kkeithley_ let's see about glusterfs.spec.in
14:57 kkeithley_ I think I have separate reviews out for fixing glusterfs.spec.in. hang on
14:58 kkeithley_ just on master, and already merged
14:59 kkeithley_ There are a couple .spec/.spec.in sync patches that have been up for review since Sep 25 :-(
15:01 kkeithley_ http://review.gluster.org/8853 and http://review.gluster.org/8854
15:02 lalatenduM kkeithley, sorry abt the reviews :(
15:02 lalatenduM kkeithley, ndevos Humble check http://review.gluster.org/#/c/9014/
15:02 kkeithley_ lol, don't lose any sleep over it.
15:02 lalatenduM patch for "First shot at a libgfapi.so.0 compatibilty rpm."
15:03 ndevos lalatenduM: yeah, that is probably the more correct way to do a -compat package - a symlink is rather an ugly hack
15:06 hchiramm_ lalatenduM, hchiramm_> kkeithley,   http://review.gluster.org/9014 :)
15:06 kkeithley_ I'm retching at the dupes of every file
15:06 ndevos real -compat packages are UGLY
15:07 lalatenduM hchiramm_, yup :) we are abt that only
15:07 deepakcs joined #gluster-dev
15:07 lalatenduM s/abt/talking abt/
15:08 kkeithley_ I dunno. If the files are 99.9% the same... If they're 100% the same except for added functions...
15:09 kkeithley_ Then we dont' need all the duplicate files, and I'm not convinced we even need a separate shlib
15:09 kkeithley_ but if that's the way people want to do it...
15:09 hchiramm_ lalatenduM, scroll up and read :)
15:09 ndevos well, that is how most -compat packages do it
15:10 kkeithley_ Is "compat" supposed to mean bug-for-bug compatibility?
15:10 kkeithley_ yeah, I don't doubt it. I think we can be smarter than that is all
15:10 hagarth joined #gluster-dev
15:10 lalatenduM kkeithley, I think compatibility :)
15:11 hchiramm_ yeah, if the functionality does not break , we can stick with thin compat rpm..
15:11 hchiramm_ no need for duplication ..
15:11 hchiramm_ I mean duplication of files.
15:12 ndevos I was hoping we could do some ld scripting or something, and build some redirects in a mostly empty .so.0
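
One way to get the "mostly empty .so.0" ndevos describes is a stub library whose only content is a dependency on the real one: old binaries load the stub by its soname, the stub drags in libgfapi.so.7, and the symbols resolve from there. A sketch, assuming the real library is already installed in /usr/lib64:

    echo '/* intentionally empty */' > gfapi_stub.c
    gcc -shared -fPIC -o libgfapi.so.0.0.0 gfapi_stub.c \
        -Wl,-soname,libgfapi.so.0 -Wl,--no-as-needed -L/usr/lib64 -lgfapi
    ln -s libgfapi.so.0.0.0 libgfapi.so.0
    readelf -d libgfapi.so.0.0.0 | grep NEEDED   # should list libgfapi.so.7
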
15:12 lalatenduM hchiramm_, lol :)
15:13 JustinClift kkeithley_: I'm still getting perm denied when trying to ssh into review.gluster.org.  Any ideas?
15:13 JustinClift Port = 21, user = root
15:14 JustinClift ^ That should work shouldn't it?
15:15 kkeithley_ backing up a bit, wrt logrotate.  The glusterfs.spec.in files in upstream  install the in-tree logrotate files. We can take out the _for_fedora_koji_builds stuff at some point
15:15 kkeithley_ I don't think I've ever signed on to reveiw.gluster.org. I wasn't even aware of the port 21, and don't know the root password
15:16 kkeithley_ We can take out the _for_fedora_koji_builds stuff at some point as part of synchronizing with the dist-git spec
15:16 ndevos kkeithley_: yes please!
15:17 kkeithley_ And that _for_fedora_koji_builds stuff in the dist-git glusterfs.spec becomes less and less over time as we use more from upstream, e.g. the new logrotate files
15:18 ndevos and that is really cool :)
15:18 hchiramm_ yeah, at one point both should be same :)
15:19 kkeithley_ if we can ever agree on glusterfsd.{init,service}, either drop it from dist-git or add it to upstream.
15:20 ndevos I think we have an agreement for that?
15:20 ndevos just need someone to test and get it posted
15:21 JustinClift kkeithley_: No worries.  I'll ping misc.  Pretty sure I gave him access a while ago.
15:21 lalatenduM kkeithley, ndevos hchiramm_ I think you guys are talking abt http://review.gluster.org/#/c/7199/
15:21 lalatenduM I am not sure, how to move ahead with thsi patch
15:21 ndevos well, we might want to improve the spawning of the processes and use some lib-systemd to inform systemd correctly, but thats only a TODO I have to think about
15:21 kkeithley_ indeed we are
15:22 lalatenduM Either i will take help from u guys, or you can take it up
15:22 kkeithley_ IIRC, we don't want glusterfsd.{init,service} to ever start glusterfsds, that's glusterd's job. But we do want it to stop them if the user hasn't already stopped the volume on a system shutdown. Right?
15:23 ndevos kkeithley_: right
15:24 ndevos kkeithley_: and have an option to configure restarting of glusterfsd when doing a yum update
15:25 ndevos defaulting to restart glusterfsd, but it should be possible to disable that
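
A rough sketch of a glusterfsd.service along those lines (the actual proposal is the patch under review, http://review.gluster.org/7199; this only covers the stop-on-shutdown behaviour, not the restart-on-update option):

    # /usr/lib/systemd/system/glusterfsd.service -- sketch only
    [Unit]
    Description=GlusterFS brick processes (stopping only)
    After=network.target glusterd.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # glusterd starts the brick processes; this unit only makes sure they are
    # stopped cleanly if the volumes were not stopped before shutdown.
    ExecStart=/bin/true
    ExecStop=/bin/sh -c 'killall --wait glusterfsd || true'

    [Install]
    WantedBy=multi-user.target
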
15:28 kkeithley_ anyway, pretty minimal use of _for_fedora_koji_builds right now. logrotate (going away), and glusterfsd.{init,service} and related, RHEL5 fuse module install, and some versioning stuff.
15:29 kkeithley_ ganesha call
15:35 soumya joined #gluster-dev
15:36 an joined #gluster-dev
16:09 soumya joined #gluster-dev
16:11 an joined #gluster-dev
16:15 semiosis [08:18] <kkeithley_> +1 for straight to 3.6.1 for packaging; Fedora and d.g.o. Let's make sure semiosis knows so he can do the same for Debian and Ubuntu
16:15 semiosis can someone give me an executive summary of the issue?
16:44 hagarth joined #gluster-dev
17:02 lalatenduM joined #gluster-dev
17:06 hagarth @seen
17:06 glusterbot hagarth: (seen [<channel>] <nick>) -- Returns the last time <nick> was seen and what <nick> was last seen saying. <channel> is only necessary if the message isn't sent on the channel itself. <nick> may contain * as a wildcard.
17:18 kkeithley_ Okay, 3.4.6 is tentatively "a wrap."  Tomorrow AM EDT I'll tag and do the release. Anything desired for 3.4.7, file a BZ and add the BZ to https://bugzilla.redhat.com/show_bug.cgi?id=1154714
17:18 glusterbot Bug 1154714: unspecified, unspecified, future, kkeithle, ASSIGNED , GlusterFS 3.4.7 Tracker
17:21 davemc @seen JustinClift
17:21 glusterbot davemc: JustinClift was last seen in #gluster-dev 2 hours and 33 seconds ago: <JustinClift> kkeithley_: No worries.  I'll ping misc.  Pretty sure I gave him access a while ago.
17:22 JustinClift davemc: ?
17:22 davemc yep?
17:22 JustinClift davemc: Need me for something specific? :D
17:23 davemc did, but solved it myself
17:23 JustinClift Cool. :)
17:23 davemc Gluster slide template
17:23 davemc found
17:23 kkeithley_ no, I've jumped the gun. _beta_ is ready. I'll tag and make the beta release tomorrow AM
17:24 JustinClift :)
17:24 davemc kkeithley, which beta 3.5.3b4?
17:24 JustinClift 3.5.6 brta
17:24 kkeithley_ 3.4.6
17:24 JustinClift Gah
17:24 JustinClift What he said
17:24 davemc ah.
17:24 davemc far to keep up with the myraid of betas
17:24 davemc and we need to promote their availability
17:25 davemc s/far/hard/
17:25 JustinClift They get announced on the mailing list + blog/twitter
17:25 kkeithley_ yeah, having _three_ releases plus main-line devel going on is a lot.
17:26 davemc we also have a facebook page, g+, etc
17:26 JustinClift Now the web site is reputedly able to be updated, people should be able to see the announcements from the front page too :)
17:26 * kkeithley_ hopes that 3.4.7 is the end of 3.4
17:26 davemc and all the RHT storage channels
17:26 JustinClift davemc: Good point
17:26 JustinClift I don't think I've ever even looked at the FB page.  Probably not the G+ page either
17:28 JustinClift kkeithley_: Yeah, that'd be nice.
17:29 * JustinClift hopes 3.6.0 is incredibly good (when compared to say 3.5.0)
17:29 kkeithley_ 3.5 is awesome. 3.6 will be amazing. ;-)
17:31 JustinClift 3.5.x is awesome.  3.5.0 I'd use other words for. ;)
17:31 hagarth 3.7 will be Nirvana ;)
17:31 JustinClift Though to be fair 3.5.0 wasn't busted for everyone.
17:33 hagarth davemc: how about announcing gluster cloud night, paris on announce at gluster.org?
17:34 hagarth JustinClift: maybe something to keep a tab on - https://osuosl.org/services/supercell
17:34 JustinClift Oh crap.  I totally forgot about that.
17:35 JustinClift Paris Gluster Cloud Night that is
17:36 hagarth JustinClift: you are going to be there, right?
17:39 davemc hagarth, I think we now have an agenda that includes a 3.6.0 features overview, Manilla-GlusterFS from deepak, and not sure on cinder+glusterfs
17:39 davemc hagarth, that match your read?
17:40 hagarth davemc: swift + gluster from thiago
17:41 davemc is that an 'AND" or a replace for cinder?
17:41 hagarth davemc: "AND"
17:42 davemc tks
17:42 hagarth davemc: we can also keep cinder part of the agenda, Deepak will fill in if Eric cannot make it
17:43 davemc I'll plot out for 20 minutes for Manilla, Cinder and Swift, and 10 for overview. that leaves time for questions
17:44 hagarth davemc: sounds perfect!
17:44 JustinClift hagarth: Just filled out a request on the Supercell site, asking for them to give us 6+ VM's. (we're getting close to our upper bound with Rackspace)
17:44 hagarth JustinClift: cool!
17:44 davemc next semi-weird Q.  Does announce at gluster.org go to everyone? or should it also go to gluster-users, etc?
17:44 JustinClift If they go for it, we cna hook them into Jenkins and potentially let more Community members setup regression-y things
17:45 hagarth davemc: gluster-users is a member of announce at gluster.org
17:45 hagarth davemc: however gluster-devel is not a member of announce, can copy that separately
17:47 JustinClift hagarth: No, I'm not going to be at the Gluster Cloud Night in Paris
17:47 JustinClift No passport atm
17:48 hagarth JustinClift: ah ok
17:58 davemc email to announce list awaiting moderator pproval
17:58 davemc s/ppro/appro/
17:58 pranithk joined #gluster-dev
17:59 JustinClift davemc: You have the password to go in and approve it
18:00 davemc I do?
18:00 davemc where?
18:00 davemc I suspect i do, but
18:00 JustinClift Check your RH IRC
18:00 hagarth davemc: just approved
18:00 JustinClift Or option b ^
18:28 ndevos davemc: if you pass through Amsterdam, let me know and we might be able to meet for a coffee
18:29 davemc not this time. Might do that for December when heading for Europe again
18:29 ndevos okay :)
18:38 JustinClift Hmm, netbsd0 VM seems offline
18:38 JustinClift Emailed Manu
18:39 JustinClift If I don't hear from him, I'll try rebooting it through Rackspace UI
18:48 JustinClift hagarth: v3.3.x is EOL isn't it?
18:48 JustinClift The front page just says it's rapidly aging and approaching EOL
18:49 * JustinClift is thinking of updating that to say 3.3 is EOL
18:51 hagarth JustinClift: as soon as we release 3.6.0 tomorrow
18:52 davemc hagarth, what time tomorrow. I can schedule the blog to go live then
18:54 hagarth davemc: I am hoping to have it out by mid day IST tomorrow, early morning PST should be a good time for the blog to go live
18:54 JustinClift hagarth: Do we have the release announcement mailing list text ready to go?
18:55 JustinClift hagarth: Ditto for blog announcement text. :)
18:55 JustinClift hagarth: btw, what happened to beta4?  Guess we're not doing one?
18:55 hagarth JustinClift: blog is almost ready
18:55 hagarth JustinClift: release announcement draft .. I will send it you folks in a bit
18:55 davemc hagarth, tks
18:55 JustinClift hagarth: Cool. :)
18:57 JustinClift k, just did a hard reboot of netbsd0
18:57 JustinClift Lets see if it's happy now...
18:58 JustinClift Yep, it's up and running again
19:05 * hagarth crashes now, later folks
19:07 wushudoin joined #gluster-dev
19:08 JustinClift l8r hagarth
19:08 JustinClift On that note, I need to split for a bit now too
19:26 an joined #gluster-dev
21:45 badone joined #gluster-dev
21:50 badone_ joined #gluster-dev
23:29 shyam joined #gluster-dev
