IRC log for #gluster-dev, 2017-01-19


All times shown according to UTC.

Time Nick Message
00:11 wushudoin joined #gluster-dev
00:17 msvbhat joined #gluster-dev
00:32 wushudoin joined #gluster-dev
01:20 wushudoin joined #gluster-dev
01:49 susant left #gluster-dev
02:02 gem joined #gluster-dev
02:37 overclk_ joined #gluster-dev
02:46 sankarshan joined #gluster-dev
02:46 sankarshan joined #gluster-dev
02:48 ilbot3 joined #gluster-dev
02:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:50 wushudoin joined #gluster-dev
02:59 magrawal joined #gluster-dev
03:11 anoopcs joined #gluster-dev
03:24 riyas joined #gluster-dev
03:35 mchangir joined #gluster-dev
03:48 atinm_ joined #gluster-dev
03:51 Shu6h3ndu joined #gluster-dev
03:51 nbalacha joined #gluster-dev
04:22 Shu6h3ndu joined #gluster-dev
04:28 puiterwijk kkeithley: package approved. +1
04:29 msvbhat joined #gluster-dev
04:43 rjoseph joined #gluster-dev
04:45 ashiq joined #gluster-dev
04:46 t1m1 joined #gluster-dev
04:52 skumar joined #gluster-dev
04:55 msvbhat joined #gluster-dev
05:02 t1m1 joined #gluster-dev
05:03 ndarshan joined #gluster-dev
05:11 Humble joined #gluster-dev
05:25 itisravi joined #gluster-dev
05:35 gem joined #gluster-dev
05:42 jiffin joined #gluster-dev
05:43 aravindavk joined #gluster-dev
05:47 t1m1 joined #gluster-dev
05:47 riyas joined #gluster-dev
05:48 vimal joined #gluster-dev
05:50 kdhananjay joined #gluster-dev
05:50 msvbhat joined #gluster-dev
05:59 mchangir joined #gluster-dev
05:59 apandey joined #gluster-dev
06:07 susant joined #gluster-dev
06:16 vimal joined #gluster-dev
06:17 skoduri joined #gluster-dev
06:18 ppai joined #gluster-dev
06:23 sanoj joined #gluster-dev
06:27 msvbhat joined #gluster-dev
06:28 gyadav joined #gluster-dev
06:35 susant joined #gluster-dev
06:41 rafi joined #gluster-dev
06:43 Saravanakmr joined #gluster-dev
06:49 ankit_ joined #gluster-dev
06:52 t1m1 joined #gluster-dev
07:08 vimal joined #gluster-dev
07:34 gyadav joined #gluster-dev
07:38 Humble joined #gluster-dev
08:06 devyani7 joined #gluster-dev
08:34 rraja joined #gluster-dev
08:39 sanoj joined #gluster-dev
08:56 _nixpanic joined #gluster-dev
08:57 _nixpanic joined #gluster-dev
08:59 nishanth joined #gluster-dev
09:12 k4n0 joined #gluster-dev
09:15 mchangir joined #gluster-dev
09:33 rafi1 joined #gluster-dev
09:36 gem joined #gluster-dev
09:46 poornima_ joined #gluster-dev
09:55 msvbhat joined #gluster-dev
09:55 Shu6h3ndu joined #gluster-dev
09:59 mchangir joined #gluster-dev
10:08 shyam joined #gluster-dev
10:58 itisravi joined #gluster-dev
10:59 atinmu joined #gluster-dev
11:00 kdhananjay joined #gluster-dev
11:08 rafi joined #gluster-dev
11:21 kotreshhr joined #gluster-dev
11:23 skoduri joined #gluster-dev
11:26 ashiq joined #gluster-dev
11:34 percevalbot joined #gluster-dev
11:35 ira joined #gluster-dev
11:42 Shu6h3ndu joined #gluster-dev
11:44 sanoj joined #gluster-dev
11:46 apandey joined #gluster-dev
11:54 kotreshhr left #gluster-dev
11:57 rastar joined #gluster-dev
11:57 shyam joined #gluster-dev
12:03 rjoseph joined #gluster-dev
12:12 susant left #gluster-dev
12:12 skoduri joined #gluster-dev
12:29 atinmu joined #gluster-dev
12:32 atinm_ joined #gluster-dev
12:42 kkeithley puiterwijk: sweet. thanks
12:44 kkeithley puiterwijk++
12:44 glusterbot kkeithley: puiterwijk's karma is now 1
12:44 puiterwijk Hah. more bots where I can get cookies \o/
12:45 ndevos puiterwijk++ enjoy!
12:45 glusterbot ndevos: puiterwijk's karma is now 2
12:45 puiterwijk :-)
12:45 ndevos kkeithley: I just tried to build the package for the storage sig (el7), and get "Error: No Package found for python2-setuptools" :-(
12:46 ndevos I guess it's just python-setuptools there?
12:49 ndevos and python-devel
12:51 kkeithley lol
12:52 kkeithley ndevos: meh, looks like more %if ( %{rhel} ) cruft
12:52 kdhananjay joined #gluster-dev
12:53 kkeithley I've just requested the distgit repo. I'll update it when I check it in
12:53 ndevos kkeithley: well, I dont care how, just replacing the BuildRequires in the SIGs dist-git is fine
12:53 kkeithley that works too.
12:53 ndevos I see it as a distribution 'fix', so it can be in the .spec for CentOS
12:54 kkeithley although from a truth-and-beauty standpoint I'd like the spec files to be the same, as much as possible.
12:54 kkeithley now I just need storhaug reviewd
12:54 kkeithley reviewed
13:00 kkeithley although from a truth-and-beauty standpoint I'd like the spec files to be the same, as much as possible.  Someone should be able to take, e.g., the f25 src.rpm and build it on RHEL or CentOS.
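The usual way to get kkeithley's "same spec everywhere" goal while still satisfying ndevos' EPEL/CentOS build is a distro conditional around the BuildRequires that differ. A minimal sketch (package names taken from the exchange above; the exact conditional spelling is an assumption, not copied from the real python-glusterfs-api spec):

    %if 0%{?rhel}
    BuildRequires:  python-devel
    BuildRequires:  python-setuptools
    %else
    BuildRequires:  python2-devel
    BuildRequires:  python2-setuptools
    %endif

With something like that in place the same src.rpm should rebuild on Fedora and on RHEL/CentOS, which is the point kkeithley restates at 13:00.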
13:00 nishanth joined #gluster-dev
13:01 ndevos maybe puiterwijk wants to earn another lunch or drink? I dont know if he uses pacemaker though (bz 1411875)
13:03 puiterwijk kkeithley: I fully agree with it being a nice thing to keep the spec the same everywhere. Also it makes it easier for you to make sure stuff is synced :)
13:03 kkeithley yup
13:04 kkeithley I wonder when RHEL/CentOS will catch up with the python2/3 stuff.
13:04 puiterwijk ndevos: I don't need to use pacemaker to do a package review :). I proved that yesterday, since I don't use python-gfapi!  (only the "real" gfapi). So feel free to assign to me, and I'll take a look in a few minutes
13:04 ndevos puiterwijk: yuck, no %{__python2} for el6? any idea?
13:04 puiterwijk kkeithley: probably around the next RHEL release
13:04 puiterwijk ndevos: yes. That's documented, one second
13:04 * ndevos gets a failure for CentOS-6 http://cbs.centos.org/kojifiles/work/tasks/3505/153505/build.log
13:06 ndevos kkeithley: I'll put this in centos-gluster38 and centos-gluster39 (for CentOS 7) if you dont object https://cbs.centos.org/koji/taskinfo?taskID=153498
13:06 puiterwijk ndevos: https://fedoraproject.org/wiki/EPEL:Packaging#Python
13:06 kkeithley puiterwijk++: okay, assigned to you. Many thanks. I'm good for the beer in Brno or Brussels in two weeks
13:06 glusterbot kkeithley: puiterwijk's karma is now 3
13:06 puiterwijk ndevos: so, copy the "Line to fix" to the top of your spec file. That'll make rpm define them if they're not in the real -devel
13:07 ndevos kkeithley: it's tagged for testing, and will be pushed to the buildlogs repo server probably in a few hours
13:07 puiterwijk kkeithley: heh, no worries. I'll look in a few, after I got some other bugs sorted out :)
13:07 kkeithley ndevos: sounds good
13:08 ndevos puiterwijk++ will give it a try, thanks!
13:08 glusterbot ndevos: puiterwijk's karma is now 4
13:10 kkeithley lol. I never set the fedora-review flag on either of those BZs. Perhaps that's why they didn't get picked up sooner?
13:14 msvbhat joined #gluster-dev
13:18 ashiq joined #gluster-dev
13:18 shyam joined #gluster-dev
13:22 Saravanakmr joined #gluster-dev
13:25 ndevos kkeithley: I kept the release as 1.1-1 release for the storage sig packages, let me know when it is in Fedora and I'll take the updated .spec and rebuild it again
13:33 kkeithley ndevos: I was also planning to keep it at 1.1-1
13:33 ndevos kkeithley: please dont, you updated the spec after ppai did, it should have a %changelog entry for the modifications
13:34 kkeithley er, okay
13:34 ndevos thanks! :)
13:34 kkeithley and there will certainy be a changelog for the import in any event
13:34 kkeithley gah, certainly
13:35 ndevos will there? I dont think so
13:35 kkeithley I think the import does that
13:35 ndevos hmm, I never noticed
13:36 kkeithley sorry, git commit log, not %change
13:36 ndevos if you feel inclined to make changes in the specs I used, go and have a hack at https://github.com/CentOS-Storage-SIG/python-glusterfs-api (different branches for el6/7)
13:37 kkeithley I'm supposed to import the srpm that was reviewed
13:37 kkeithley per https://fedoraproject.org/wiki/New_package_process_for_existing_contributors
13:37 nbalacha joined #gluster-dev
13:39 ndevos sure , but does that note you should update the release tag?
13:40 kkeithley no
13:40 ndevos anyway, if you make it 1.1-2 in Fedora, I'll update the spec for the SIG too
13:41 kkeithley yeah, after it's imported I can do anything before I do builds. ;-)
13:45 kkeithley okay, got the dist-git repo.
13:46 prasanth joined #gluster-dev
13:48 kkeithley Could not execute import_srpm: Request is unauthorized.
13:48 kkeithley ???
13:48 Saravanakmr joined #gluster-dev
13:49 ndevos atinm_, samikshan: trans->peerinfo.volname does not seem to be set for a mgmt-callback? http://review.gluster.org/#/c/9228/10/xlators/mgmt/glusterd/src/glusterd.c@271
13:49 kkeithley FOne9143b
13:49 kkeithley lol
13:49 ndevos thats an OTP, right?!
13:51 kkeithley it's something ;-)
13:51 Saravanakmr :)
13:52 * Saravanakmr kkeithley needs to reset something then :)
13:52 atinm_ ndevos, checking
13:55 kkeithley srsly? fedpkg import, then `git commit ...`  not `fedpkg commit ...` ?
13:56 nbalacha joined #gluster-dev
13:56 samikshan ndevos: I think there might be a problem there. glusterd seems to not have the connected client details. I faced this problem while trying to implement get the client op-versions through the volume get command
13:57 samikshan s/to implement get/to get
13:58 samikshan atinm_: ^
13:58 ndevos samikshan: hmm, but that is the connection that the client uses to request the volfile? how does glusterd otherwise know how to update the volfile after a change?
13:58 ndevos ... glusterd know how to inform the client to update ...
14:00 * samikshan checks the code
14:00 samikshan as far as I remember the client seems to have all the information, which is sent to glusterd via a dict
14:01 atinm_ ndevos, when client asks for a volfile from glusterd at the time the xprt is populated however the same is deleted when client successfully establishes connection with the brick process
14:01 samikshan I'll need to reverify
14:01 atinm_ that's what I recollect
14:01 samikshan Yes that is correct..
14:01 atinm_ for daemons like shd, quotad glusterd still maintains the details in its xprt list, but for mount processes it doesn't
14:02 ndevos oh :-/ Does that mean glusterd sends an updated volfile to all connected clients, even if they use a different volume?
14:02 atinm_ why would it do that?
14:02 samikshan Nope.. It doesnt
14:04 ndevos atinm_: I'm just wondering how glusterd knows which clients use which volumes, so that I can filter all connections for the selected volume only
14:08 ndevos samikshan: indeed, the same struct holds max_op_version and min_op_version, both are always 0
14:09 ashiq joined #gluster-dev
14:09 mchangir joined #gluster-dev
14:10 samikshan ndevos: yes
14:10 atinm_ ndevos, refer __server_getspec () , that should give you some pointers on the question
14:12 samikshan A client makes a call to glusterd via the function client3_getspec in client-handshake.c
14:12 msvbhat joined #gluster-dev
14:13 ndevos atinm_: is it allowed for a client to request the volfile for multiple volumes over the same connection?
14:13 samikshan The handler on glusterd side corresponding to the GF_HNDSK_GETSPEC enum is server_getspec () that atinm_ pointed out
14:13 ndevos atinm_: if that is not the case, trans->peerinfo.volname could be set in __server_getspec ()?
14:14 atinm_ ndevos, I don't think we support volfiles for multiple volumes
14:14 ndevos samikshan: right, got it
14:15 nbalacha joined #gluster-dev
14:16 ndevos atinm_: actually that code tries to set peerinfo.volname already...
14:16 ndevos 850                 strncpy (peerinfo->volname, volume, strlen(volume));
14:18 samikshan ndevos: But that is not persistent :(
14:18 samikshan Since..
14:18 samikshan its not trans->peerinfo.volname that is set.
14:18 ndevos samikshan: yes, its definitely gone when I try to use it :-/
14:21 samikshan The only reason that peerinfo variable exists in that file is to make sure if the client is supported.. refer L#860.. _client_supports_volume (peerinfo, &op_errno)
14:21 samikshan ndevos: ^
14:21 samikshan :-|
14:21 samikshan s/file/function
14:26 ndevos samikshan: ok, but that does not explain why peerinfo->volname gets cleared...
14:27 vbellur joined #gluster-dev
14:31 kkeithley ndevos: python-glusterfs-api all checked in.  f26 and f25 builds are done, f24 building now
14:33 msvbhat joined #gluster-dev
14:33 ppai kkeithley++
14:33 glusterbot ppai: kkeithley's karma is now 158
14:33 ndevos kkeithley++ great, thanks!
14:33 glusterbot ndevos: kkeithley's karma is now 159
14:36 ndevos kkeithley: does that include a change like https://github.com/CentOS-Storage-SIG/python-glusterfs-api/commit/ddebdf6fc48197419a58003fd2ccc1b33280f84b
14:36 nbalacha joined #gluster-dev
14:36 kkeithley it does not. I didn't see that in the diff between the centos distgit bits
14:37 kkeithley when I diffed about, oh, an hour ago.
14:37 kkeithley did you just add that?
14:38 ndevos I made that change before telling you about the repo, but it is in the sig-storage6-gluster38 branch, not in the el7 one
14:39 kkeithley oh, I only diffed the el7 branch
14:40 ndevos yeah, I guess it helps to have the .spec the same for all distributions ;-)
14:40 kkeithley yup
14:42 ndevos samikshan: do you know why peerinfo->volname and op-version are reset? is that really needed, or can we prevent that?
14:42 kkeithley I just added it. (master branch only atm, will do other branches later)
14:43 kkeithley I'm not going to do another fedora build at this time though.
14:43 samikshan ndevos: They are not reset, the peerinfo variable is local and its life ends after __server_getspec ()
14:45 samikshan No attempt seems to be ever made to populate the xprt_list member in glusterd_conf_t with the mounted clients
14:46 samikshan Once the client receives the volfile from glusterd, it just talks to the brick process
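A simplified C sketch of the situation samikshan describes, using hypothetical type and function names rather than the real glusterd structures: the volume name only ever lands in a peerinfo that lives for the duration of __server_getspec(), so nothing is left behind on the transport.

    #include <string.h>

    /* hypothetical, simplified types -- not the actual glusterd structs */
    typedef struct {
            char volname[64];
            int  min_op_version;
            int  max_op_version;
    } peer_info_t;

    typedef struct {
            peer_info_t peerinfo;   /* what glusterd could keep per connection */
    } transport_t;

    static int client_supports_volume (peer_info_t *p) { (void) p; return 1; }

    static int
    server_getspec_sketch (transport_t *trans, const char *volume)
    {
            peer_info_t peerinfo = { .volname = "" };

            /* the name is copied into the short-lived local copy, only to
             * feed the compatibility check... */
            strncpy (peerinfo.volname, volume, sizeof (peerinfo.volname) - 1);
            if (!client_supports_volume (&peerinfo))
                    return -1;

            /* ...and trans->peerinfo is never written, which is why the
             * volname and op-versions read back from the transport later
             * are empty/0, as ndevos observes. */
            (void) trans;
            return 0;
    }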
14:47 ndevos samikshan: but I thought glusterd keeps track of the clients, so that it can inform them when they need to load a new volfile?
14:48 samikshan atinm_: Anything on this? ^
14:49 samikshan ndevos: I'm not very sure about this particular workflow
14:50 ppai samikshan, yes, glusterd notifies clients about volfile change
14:51 atinm_ well, the logic is like this, once client establishes the connection with the bricks there is no active connection between glusterd and mount process, so glusterd doesn't need to have that information
14:52 ppai samikshan, see glusterd_fetchspec_notify
14:52 atinm_ and I would only call it as mount process not client as glusterd still manages to hold the other client details which are *active*
14:53 ppai samikshan, on an RPCSVC_EVENT_ACCEPT event (client connected), the xprt_list is updated
14:54 samikshan ppai: Ah yes..
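A rough sketch of the mechanism ppai points at, with made-up names rather than glusterd's actual code: the connection list is maintained from the RPC event callback, and a volfile change simply walks that list.

    /* hypothetical event values and types for illustration */
    typedef enum { EVENT_ACCEPT, EVENT_DISCONNECT } rpc_event_t;

    typedef struct xprt {
            struct xprt *next;
            /* ... connection details ... */
    } xprt_t;

    static xprt_t *xprt_list;               /* stand-in for conf->xprt_list */

    /* called by the RPC layer for every connection event */
    static void
    rpcsvc_notify_sketch (rpc_event_t event, xprt_t *xprt)
    {
            xprt_t **pp;

            switch (event) {
            case EVENT_ACCEPT:               /* client connected: remember it */
                    xprt->next = xprt_list;
                    xprt_list  = xprt;
                    break;
            case EVENT_DISCONNECT:           /* client gone: forget it */
                    for (pp = &xprt_list; *pp; pp = &(*pp)->next) {
                            if (*pp == xprt) {
                                    *pp = xprt->next;
                                    break;
                            }
                    }
                    break;
            }
    }

    /* on a volfile change, tell every remembered connection to re-fetch its
     * volfile -- the role attributed to glusterd_fetchspec_notify above */
    static void
    fetchspec_notify_sketch (void (*send_fetch_cbk) (xprt_t *))
    {
            for (xprt_t *x = xprt_list; x; x = x->next)
                    send_fetch_cbk (x);
    }

Whether mount processes actually stay on that list is exactly what samikshan questions a few lines further down.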
14:57 nbalacha joined #gluster-dev
15:02 kkeithley ppai: do you have a fedora (FAS) account? Are you a fedora packager?
15:02 ppai kkeithley: No
15:03 kkeithley you should do that so I can give you committer access on python-glusterfs-api in Fedora
15:03 ppai oh okay, I'll do that.
15:04 * kkeithley goes off to find the page for packager sponsorship
15:05 kkeithley let me know once you have the FAS account
15:07 ppai sure
15:10 ppai kkeithley: it turns out that I did have an account! The name's ppai
15:14 samikshan ppai: I don't think that RPCSVC_EVENT_ACCEPT event comes from the mounted clients however though
15:15 samikshan This event only comes from daemons like shd, quotad..
15:24 kkeithley ppai: okay, I'll get you a packager bit then
15:25 kkeithley as a co-maintainer
15:25 nbalacha joined #gluster-dev
15:26 kkeithley fedora documentation is a twisty maze
15:27 annettec joined #gluster-dev
15:33 ndevos samikshan: if the current approach for triggering a statedump through glusterd is not correct, should we change it to have the bricks inform the client to do a statedump?
15:36 ndevos samikshan: now I'm also wondering how the clients get the notification about the updated volfile when bricks were added/removed...
15:37 lpabon joined #gluster-dev
15:40 susant joined #gluster-dev
15:43 kkeithley puiterwijk: this used to be easier to find/do. How do we get ppai added as a comaintainer for python-glusterfs-api? Instructions at https://fedoraproject.org/wiki/How_to_sponsor_a_new_contributor#Sponsoring_Someone_for_Fedora_Package_Collection
15:44 kkeithley don't seem to work for me. I can't add him to the packagers group, don't see how to request adding him.
15:44 puiterwijk kkeithley: what is your FAS username?
15:45 kkeithley mine is kkeithle
15:45 puiterwijk kkeithley: you're not a package sponsor.
15:45 kkeithley right
15:45 puiterwijk So you cannot do a packager's initial review, nor add someone to the packager group
15:45 kkeithley I don't have that bit
15:46 ndevos kkeithley: you're looking for https://fedoraproject.org/wiki/How_to_get_sponsored_into_the_packager_group#Become_a_co-maintainer
15:46 puiterwijk So yeah, you can't add people to that group. They'd need to add FE-NEEDSPONSOR as flag.
15:46 kkeithley yup
15:46 puiterwijk Ahh, it's for a co-maintainer?
15:46 kkeithley yup
15:46 puiterwijk Okay. Give me one second, as I might be able to help you here
15:47 puiterwijk kkeithley: okay, yeah. If you trust them, I can sponsor them on your behalf
15:47 kkeithley I used to have a friendly packager who would do that for us. I'd have to dig through old email to find it.
15:47 vbellur joined #gluster-dev
15:47 puiterwijk kkeithley: I can also do it for you
15:47 puiterwijk (I'm also a packager sponsor)
15:48 puiterwijk I will just hold ndevos responsible if anyone asks :D
15:48 kkeithley just for python-glusterfs-api. It is originally his package, part owner
15:48 ndevos hey!
15:48 puiterwijk Sure. What's his username?
15:48 kkeithley ppai is his FAS account name
15:48 puiterwijk Ahhhh. Okay
15:48 puiterwijk kkeithley: so, you're willing to be responsible if they do anything terrible as a packager?
15:48 kkeithley sure
15:49 puiterwijk Okay
15:49 kkeithley he's a Red Hatter working on GlusterFS/RHGS
15:51 kotreshhr joined #gluster-dev
15:51 puiterwijk kkeithley: okay, they're added. It can take about 30 minutes to sync out
15:52 kkeithley you da bomb
15:52 ndevos kkeithley: auch, el6 builds still fail, more .spec updates needed - http://cbs.centos.org/kojifiles/work/tasks/3528/153528/build.log
15:52 kkeithley I'm learning to hate RHEL6
15:53 kkeithley I didn't use to hate it.
15:55 kkeithley I don't remember who sponsored me. I wonder how much heat he's taken for the terrible packaging things I've done. ;-)
15:55 ndevos kkeithley: el6 doesnt have %licence and probably also no python2_sitelib
15:56 kkeithley %{!?_licensedir:%global license %%doc}
15:56 atinm_ joined #gluster-dev
15:57 kkeithley # From https://fedoraproject.org/wiki/Packaging:Python#Macros
15:57 kkeithley %{!?python_sitelib: %global python_sitelib %(python -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")}
15:58 kkeithley I wonder if that's still correct wrt python2/3
15:58 kkeithley see, e.g., how those are used in the glusterfs.spec
16:00 kkeithley looks like it should now be %{python2_sitelib}
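Putting those pieces together, the top of an el6-compatible spec would carry fallback definitions along these lines (adapted from the macros quoted just above; the __python2 fallback is an assumption based on the EPEL page puiterwijk linked, so double-check it there):

    # compat macros for RHEL/CentOS 6, which predates these definitions
    %{!?_licensedir:%global license %%doc}
    %{!?__python2: %global __python2 %{__python}}
    %{!?python2_sitelib: %global python2_sitelib %(%{__python2} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")}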
16:03 misc mhhh, would it be difficult to keep a calendar with releases dates, etc ?
16:03 nbalacha joined #gluster-dev
16:03 misc the idea is to know when community cage people (ie, me and others) can schedule disruptive changes
16:03 misc and so know when we can't
16:06 kkeithley keeping the calendar would be easy. Getting the actual releases out when they're scheduled is hard.
16:07 ndevos misc: we have https://www.gluster.org/community/release-schedule/ with approx. dates
16:07 wushudoin joined #gluster-dev
16:07 misc ndevos: yeah, but not "we can't reboot server because we are branching", or this kind of stuff
16:08 wushudoin joined #gluster-dev
16:08 misc and approximate dates are not gonna work fine :/
16:08 kkeithley I believe you're safe for a while.  Until Jan 30 at least
16:08 misc "of course we plan long term to as long as 10 days in advance" :)
16:09 misc but yeah, there is delay for the release, but so that mean that we have a buffer of 5 days before and after the release
16:13 kkeithley ndevos: I've pushed fixes to the python-glusterfs-api distgit
16:14 ndevos kkeithley: you're awesome, you earned yourself a hug
16:14 kkeithley I'd rather have a beer
16:14 kkeithley ;-)
16:14 * ndevos feels offended
16:15 ndevos I can get you a beer *as*well*
16:15 kkeithley but if you feel strongly about it I guess I can handle it
16:17 kkeithley ppai has commit bits for python-glusterfs-api now
16:17 kkeithley along with Humble
16:17 kkeithley welcome to the exciting world of Fedora packaging
16:24 kkeithley oh, ppai has gone for the day.
16:33 kkeithley misc: well, just tell us if you need to do something. We should be able to be flexible around that
16:34 misc kkeithley: for urgent stuff, sure
16:34 misc but I understand also that release have their own challenges, so I know people can't do miracle too often
16:34 atinm_ ndevos, it looks like what we told you is wrong, GlusterD does maintain all the connected client's details
16:34 kkeithley although telling seems to be hit or miss. I told people the release-3.9 branch was frozen and there were still commits made on it.
16:34 misc I wonder if we can try to get 2 gerrit in HA later
16:35 ndevos atinm_: yes, I thought so, and even wireshark confirms it :)
16:35 atinm_ ndevos, apologies for the confusion as we had a different observation while working on the client op-version details in volume status output
16:35 ndevos atinm_: but, in the GETSPEC request, I can see that there is a weird volume name in some requests, like patchy.127.1.1.3.d-backends-3-patchy, instead of just patchy in others
16:36 kkeithley well, even for things that aren't urgent. If you tell us your needs in advance, we ought to be able to work with that.
16:37 ndevos kkeithley, shyam: btw, did you add patches to 3.10 to remove the 4.0 features and set a correct op-version?
16:37 kkeithley If we have to slip a release--  $deity knows we don't rigidly hold to release schedules.
16:37 glusterbot kkeithley: release's karma is now -1
16:37 misc yeah, we want to do that for planned upgrade
16:37 atinm_ ndevos, I think its in the form of volname.ip/hostname.brickpath
16:38 misc but we also have stuff like code upgrade that do affect others and not us
16:38 atinm_ ndevos, but that's for bricks
16:38 atinm_ ndevos, clients should have only the volume name
16:38 atinm_ ndevos, you can take a statedump of glusterd and have a look
16:38 ndevos atinm_: oh, right, the bricks have that longer name, the request to glusterd is the volume name
16:39 misc kkeithley: however, I would also try to see if we can improve releases, ie, if we are late, do a post mortem and improve from here
16:39 atinm_ ndevos, I think you are close to it now :)
16:41 ndevos atinm_: I'm getting a good understanding of it, but it is still unclear why the peerinfo->volname (+op-version) is not stored persistently
16:46 Shu6h3ndu joined #gluster-dev
16:52 Humble kkeithley++ thanks a lot for making it :)
16:52 glusterbot Humble: kkeithley's karma is now 160
16:53 Humble I failed to finish it in my first attempt ..
16:53 Humble :)
16:53 Humble Thanks a lot :)
17:01 kkeithley yw
17:17 shyam ndevos: not yet, it is there in the to-do list for 3.10 (removing the experimental stuff)
17:17 ndevos shyam: ah, ok
17:18 kkeithley a lot of the same things that were removed from 3.9, I expect.
17:19 nishanth joined #gluster-dev
17:19 shyam kkeithley: yup
17:19 kkeithley fix up the gfapi symbols
17:19 shyam Only thing is, do we feature flag it or remove it from the code base for that branch?
17:20 kkeithley feature flag it?
17:20 kkeithley if people want to try a 3.11/4.0 feature, they can build the master branch.
17:21 kkeithley lunch, biab
17:25 rastar joined #gluster-dev
17:35 sanoj joined #gluster-dev
17:35 skoduri joined #gluster-dev
17:45 jiffin joined #gluster-dev
17:46 jiffin vbellur: http://review.gluster.org/#/c/12256/ passed all regressions and got +1 from jdarcy
17:47 jiffin can u please have a look
17:50 vbellur jiffin: will do
17:50 jiffin vbellur: thanks
18:11 ndevos shyam: care to merge the ones that have +2 review? http://review.gluster.org/#/q/topic:bug-1169302
18:12 * ndevos will be back later _o/
18:12 shyam ndevos: Is there a dependency on the other 2 patches?
18:21 vbellur joined #gluster-dev
18:23 rastar joined #gluster-dev
18:38 ndevos shyam: http://review.gluster.org/16415 is the 1st in the series
18:38 ndevos the others depend on that one
18:38 shyam 16415 says cannot merge?
18:38 ndevos 2nd in line is http://review.gluster.org/9228, it is needed for the last one
18:39 ndevos why not? I see a [submit] button but no error
18:40 ndevos just to be sure, http://review.gluster.org/16414 >
18:41 ndevos shyam: sorry, 16415 is the last one, 16414 is the 1st
18:41 msvbhat joined #gluster-dev
18:41 shyam ah ok
18:41 ndevos thanks!
18:41 * ndevos will be afk for a while, might be back later today
18:47 shyam ndevos: ok 16414 merged... 16415 still says "cannot merge" on the gerrit UI (did not try the submit button)
18:48 shyam ndevos: It is failing on the test added for that commit, so I guess a genuine failure then?
18:53 Acinonyx joined #gluster-dev
19:00 vbellur joined #gluster-dev
19:06 Acinonyx joined #gluster-dev
19:21 kkeithley shyam, jdarcy: wrt removing things that aren't supposed to be in 3.10——  The change to glfs_ipc that's nominally for 4.0.  Should we pull that up to 3.10? Or leave it to bake for 4.0 proper?
19:24 shyam kkeithley: I would say pull into 3.10, it has been out there for some time now, but, I think ndevos had some reservations on the call semantics itself
19:25 kkeithley oh, right.
19:31 kkeithley the only things I'm seeing in release-3.10 branch that should be removed are .../xlators/experimental and .../xlators/features/ganesha.
19:32 k4n0 joined #gluster-dev
19:33 kkeithley I'm not sure why .../xlators/features/ganesha wasn't taken out of master at the same time it was removed from release-3.9
19:36 kkeithley wrt glfs_ipc, params were changed to void**, so I think ndevos' reservations were addressed
19:37 kkeithley ndevos' and my reservations
19:37 shyam Hmmm... then let's take it, but check with ndevos once?
19:37 kkeithley yup
19:37 shyam Ah, ok you were involved :) then if it is fine, we should mark it 3.10 and have one less thing to worry about in the future
19:37 kkeithley just needs correct symbol versions
19:39 kkeithley pollute is not the right word, but we didn't want to add, expose gluster types in an interface that is otherwise clean of them
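For the "correct symbol versions" part, a generic GNU/ELF illustration with made-up names — not the actual gfapi declarations or map file — of how an old and a new glfs_ipc-style entry point can coexist under one public name; building it into a shared library additionally needs a version script that defines the two version nodes (gcc -shared -Wl,--version-script=...).

    #include <stddef.h>

    /* old entry point, kept for programs linked against the 3.x library */
    int sample_ipc_old (int fd, int opcode)
    {
            (void) fd; (void) opcode;
            return 0;
    }

    /* new entry point: the void ** out-parameter keeps library-internal
     * types out of the public signature, as discussed above */
    int sample_ipc_new (int fd, int opcode, void *in, void **out)
    {
            (void) fd; (void) opcode; (void) in;
            if (out)
                    *out = NULL;
            return 0;
    }

    /* bind both implementations to the same public name at different
     * version nodes; "@@" marks the default that new programs link against */
    __asm__ (".symver sample_ipc_old, sample_ipc@SAMPLE_3.7.0");
    __asm__ (".symver sample_ipc_new, sample_ipc@@SAMPLE_4.0.0");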
19:39 kkeithley let's see if jdarcy has an opinion about it too. too bad he's not here atm
19:41 annettec joined #gluster-dev
19:42 annettec joined #gluster-dev
19:43 kkeithley @later tell jdarcy any reservations about the glfs_ipc change nominally slated for 4.0 being brought into 3.10 — with corresponding changes to its symbol version
19:43 glusterbot kkeithley: The operation succeeded.
19:44 kkeithley @later tell ndevos  any reservations about the glfs_ipc change nominally slated for 4.0 being brought into 3.10 — with corresponding changes to its symbol version
19:44 glusterbot kkeithley: The operation succeeded.
19:51 k4n0 joined #gluster-dev
20:12 msvbhat joined #gluster-dev
20:36 Humble joined #gluster-dev
21:09 timotheus1 joined #gluster-dev
21:33 foster joined #gluster-dev
23:52 vbellur joined #gluster-dev
