
IRC log for #gluster-dev, 2014-11-20


All times shown according to UTC.

Time Nick Message
00:29 bala joined #gluster-dev
01:02 bala joined #gluster-dev
03:12 hagarth joined #gluster-dev
03:19 bharata-rao joined #gluster-dev
03:57 itisravi joined #gluster-dev
04:06 shubhendu joined #gluster-dev
04:16 aravindavk joined #gluster-dev
04:30 anoopcs joined #gluster-dev
04:36 nishanth joined #gluster-dev
04:39 ndarshan joined #gluster-dev
04:40 kanagaraj joined #gluster-dev
04:43 rafi1 joined #gluster-dev
04:49 soumya_ joined #gluster-dev
04:49 nkhare joined #gluster-dev
04:52 spandit joined #gluster-dev
04:52 jiffin joined #gluster-dev
04:52 kshlm joined #gluster-dev
05:06 bala joined #gluster-dev
05:08 krishnan_p joined #gluster-dev
05:11 atalur joined #gluster-dev
05:26 nkhare joined #gluster-dev
05:27 deepakcs joined #gluster-dev
05:30 shubhendu joined #gluster-dev
05:39 lalatenduM joined #gluster-dev
05:42 nkhare joined #gluster-dev
05:43 hagarth joined #gluster-dev
05:48 ppai joined #gluster-dev
05:53 vimal joined #gluster-dev
06:08 bala1 joined #gluster-dev
06:15 shubhendu joined #gluster-dev
06:18 ndarshan joined #gluster-dev
06:23 nishanth joined #gluster-dev
06:39 kshlm joined #gluster-dev
06:48 pranithk joined #gluster-dev
07:09 Anuradha joined #gluster-dev
07:15 ndarshan joined #gluster-dev
07:17 shubhendu joined #gluster-dev
07:20 nishanth joined #gluster-dev
07:50 aravindavk joined #gluster-dev
08:01 soumya_ joined #gluster-dev
08:10 bala joined #gluster-dev
08:17 nkhare joined #gluster-dev
08:26 anoopcs joined #gluster-dev
08:37 lalatenduM joined #gluster-dev
08:50 nishanth joined #gluster-dev
08:52 bala joined #gluster-dev
08:52 lalatenduM joined #gluster-dev
09:25 nishanth joined #gluster-dev
09:30 bala joined #gluster-dev
09:35 Gaurav_ joined #gluster-dev
09:49 krishnan_p joined #gluster-dev
09:54 ppai joined #gluster-dev
10:04 anoopcs joined #gluster-dev
10:19 krishnan_p joined #gluster-dev
10:40 itisravi_ joined #gluster-dev
11:01 ppai joined #gluster-dev
11:02 shubhendu joined #gluster-dev
11:18 pranithk xavih: I went through the writev fop today at a high level. I am not able to find how it decides which bricks hold good fragments.
11:18 pranithk xavih: I mean in ec of course.
11:19 xavih pranithk: it does so by checking the 'trusted.ec.version' of the file on each brick before the write
11:20 xavih pranithk: this is obtained in the lookup made just after locking the inode
11:20 pranithk xavih: I only saw a lookup getting the version. I didn't see any setxattr updating it, where is it done?
11:21 xavih pranithk: updates are done after the write, just before unlocking. Let me find the exact function name...
11:21 xavih pranithk: it's done in ec_update_size_version(), using an xattrop call
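A minimal C model of the check xavih describes, for readers following along: bricks whose trusted.ec.version matches the highest value seen hold good fragments. The brick count and version values below are made up; this is a sketch of the idea, not the actual ec xlator code.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* hypothetical versions returned by lookup on a 2+1 volume */
        uint64_t version[3] = {5, 5, 4};
        uint64_t max = 0;

        for (int i = 0; i < 3; i++)
            if (version[i] > max)
                max = version[i];
        for (int i = 0; i < 3; i++)
            printf("brick %d: version %llu -> %s\n", i,
                   (unsigned long long)version[i],
                   version[i] == max ? "good" : "needs heal");
        return 0;
    }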
11:21 aravindavk joined #gluster-dev
11:22 pranithk xavih: what will happen if the mount crashes before the xattrop call is sent? i.e. the write succeeded on two of the three nodes but then either the mount crashed or one of the bricks on which the write succeeded goes down?
11:24 pranithk xavih: I am yet to check the self-heal functions, maybe it is handled, but just wanted to check if you considered them...
11:25 xavih pranithk: this situation will be problematic :(
11:27 xavih pranithk: I'm considering replacing the initial lookup with an xattrop. This would allow obtaining the version and size xattrs, but at the same time I could write a dirty xattr, like afr does
11:30 xavih pranithk: but this does not fully solve the problem of identifying which bricks are updated or not if all fail at the same time or the last xattrop is not sent
11:30 pranithk xavih: that is what I was thinking this morning. I find that there is a fundamental difference between afr's transaction and ec's transaction. i.e. if afr winds a write on two of its subvols and the mount crashes, then irrespective of the fop's success/failure, afr can choose either one of the files as source and heal it. Because if the write succeeded on the source then we behave as though the write succeeded on both the bricks. If the write didn't succeed on the source, we behave as though it failed on both.
11:31 pranithk xavih: yes, I was just composing my thoughts on the same.
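The dirty-xattr scheme xavih mentions follows afr's pre-op/fop/post-op ordering. A compact C sketch of that ordering only; the names are invented for illustration and none of this is real xlator code:

    #include <stdio.h>

    /* pre-op sets a dirty flag (xattrop), post-op clears it and bumps
     * the version. A crash in between leaves dirty=1, which tells
     * self-heal that this brick may hold a stale fragment. */
    struct brick { int dirty; unsigned version; };

    static void pre_op(struct brick *b)   { b->dirty = 1; }
    static void do_write(struct brick *b) { (void)b; /* writev */ }
    static void post_op(struct brick *b)  { b->dirty = 0; b->version++; }

    int main(void)
    {
        struct brick b = {0, 4};

        pre_op(&b);
        do_write(&b);
        /* a crash here leaves b.dirty == 1 -> heal candidate */
        post_op(&b);
        printf("dirty=%d version=%u\n", b.dirty, b.version);
        return 0;
    }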
11:33 xavih pranithk: using reed-solomon there's a way to detect incorrect writes, even if we don't have any clue which ones succeeded and which didn't (obviously assuming that there aren't too many errors)
11:33 pranithk xavih: I am thinking we need to know the 'data' part of previous 'version' before we upgrade the fragments to next 'version'?
11:33 pranithk xavih: oh nice,
11:33 pranithk xavih: how?
11:33 xavih pranithk: however this approach will require a slight modification in the encoding algorithm and an important modification in the decoding
11:34 xavih pranithk: but this will also affect read performance because reads will need to read from all bricks instead of a subset
11:34 pranithk xavih: I know nothing about the erasure codes. I just started reading about finite fields and reed-solomon codes
11:34 pranithk xavih: gah!
11:35 pranithk xavih: why did we go with non-systematic codes instead of systematic codes?
11:35 xavih pranithk: maybe a combined approach is the best solution: using dirty flag to determine if we can do a 'fast' read or not
11:35 pranithk xavih: the on-disk data need not change?
11:36 pranithk xavih: if we change the encoding algo, will there be backward compatibility?
11:36 xavih pranithk: currently this is only an implementation detail because the decoding algorithm can process a few GB/s, so there's not much difference between the systematic and non-systematic implementations (at least for now)
11:36 pranithk xavih: that is good to hear
11:36 xavih pranithk: anyway, I have it on my todo list
11:36 pranithk xavih: cool.
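For readers new to the systematic/non-systematic distinction above: in a systematic code the data fragments are stored verbatim plus parity, so a read that finds all data bricks healthy needs no decoding. The simplest example is XOR parity, the R=1 case (self-contained C; this is not ec's actual Reed-Solomon arithmetic):

    #include <stdio.h>

    int main(void)
    {
        unsigned char d[4] = {0x11, 0x22, 0x33, 0x44};  /* K = 4 data fragments */
        unsigned char p = d[0] ^ d[1] ^ d[2] ^ d[3];    /* R = 1 parity fragment */

        /* lose d[2]: rebuild it from the parity and the survivors */
        unsigned char rebuilt = p ^ d[0] ^ d[1] ^ d[3];
        printf("parity=0x%02x rebuilt=0x%02x (expect 0x33)\n", p, rebuilt);
        return 0;
    }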
11:37 pranithk xavih: "I am thinking we need to know the 'data' part of the previous 'version' before we upgrade the fragments to the next version" - do you think this is unnecessary?
11:37 xavih pranithk: I created an xattr (trusted.ec.config) for each file to be able to change algorithms and configurations without breaking backward compatibility
11:37 pranithk xavih: brilliant decision :-)
11:38 kkeithley1 joined #gluster-dev
11:38 xavih pranithk: I don't understand what you are proposing...
11:39 pranithk xavih: I don't have any solution, but the heart of any atomic transaction is being able to recover if we find a failure before the commit point. In the case of ec, with my limited understanding, it seems the commit point is increasing the version number?
11:40 xavih pranithk: yes, currently version number is what determines if all bricks are in sync
11:41 pranithk xavih: cool, so that is my point. If the mount knows that the write didn't succeed on at least N-R bricks (unfortunately this may happen at write time), my understanding is that there is no way to recover the data at the moment. Is that true?
11:42 pranithk xavih: the modifications to reed-solomon algo, will that prevent this from happening?
11:43 xavih pranithk: are you talking about the case when the version is not correctly updated because of a crash, or only if a write fails ?
11:45 pranithk xavih: Both cases. If the write succeeds on only 2 of the 5 bricks in the ec subvolume, can we recover the data?
11:46 pranithk xavih: I mean 5 i.e. (4+1)
11:47 xavih pranithk: this combination is a problem. If your configuration is 4+1, and 2 bricks fail, there's no way to recover data
11:48 xavih pranithk: this is a limitation of erasure codes. They can only recover if there are at most R erasures or R/2 errors in unknown locations
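A worked version of the (4+1) example, using the limits xavih states (decoding needs K matching fragments; at most R known failures, or R/2 corruptions at unknown locations, are tolerable):

    #include <stdio.h>

    int main(void)
    {
        int K = 4, R = 1, N = K + R;   /* the (4+1) volume from the chat */
        int failed = 2;                /* bricks where the write was lost */
        int new_frags = N - failed;    /* bricks holding the new version  */
        int old_frags = failed;        /* bricks still on the old version */

        printf("need %d matching fragments to decode\n", K);
        printf("new version: %d fragments -> %s\n", new_frags,
               new_frags >= K ? "recoverable" : "lost");
        printf("old version: %d fragments -> %s\n", old_frags,
               old_frags >= K ? "recoverable" : "lost");
        return 0;
    }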
11:50 pranithk xavih: can't we write out the previous version's data to some file and provide a way to roll back to at least one version before?
11:50 xavih pranithk: to solve this, some sort of transaction tracking on the bricks would be necessary to undo the changes on bricks that succeeded
11:50 pranithk xavih: we are thinking alike :-)
11:50 soumya_ joined #gluster-dev
11:50 xavih pranithk: but this would require a read before every write. If managed from the client, the performance impact will be high
11:51 xavih pranithk: I think this would need to be implemented in the server side
11:51 pranithk xavih: No no, as part of lookup before the write we can ask it to keep a backup of the fragment we are going to modify?
11:51 pranithk xavih: again, thinking alike :-)
11:52 xavih pranithk: that could be a valid approach to handle these cases
11:52 pranithk xavih: Or we need to log the details about what we are going to write before writing, so that we always move ahead. Something like Write Ahead logging in NSR
11:53 pranithk xavih: As part of pre-op we need to say we are writing the following data. If this part succeeds on at least N-R bricks we can always recover the data?
11:54 xavih pranithk: but this needs a way to guarantee that the rollback will not fail, which I think is not possible (the same way we cannot guarantee that the write itself will succeed)
11:54 xavih pranithk: but if the write fails, how can we redo the write ? it will probably fail again
11:54 pranithk xavih: No, we disallow reads as long as we can't recover the file to a sane state
11:55 xavih pranithk: and how do you recover if you don't have the old data and you cannot write the new one ?
11:57 xavih pranithk: another approach would be to have a way to guarantee that a write won't fail
11:57 pranithk xavih: Either the write succeeded or it didn't. If the write succeeded, then the data in the log and the file match. If it doesn't match, then the data in the file is of the previous version. So check the N bricks: if we have N-R old versions then we can abort the transaction we did before. If we don't, then we have no option but to move forward. Here the important thing is that we know exactly what the data is supposed to be in the fragment we are about to write.
11:58 xavih pranithk: yes, but what happens if we cannot write this data to the file ?
11:59 pranithk xavih: we mark the file as bad. We don't serve reads until self-heal or some other operation can make sure the contents are good in at least N-R bricks.
11:59 xavih pranithk: ok
12:00 xavih pranithk: but this can require human intervention if the cause is disk full or something similar
12:00 pranithk xavih: yes. But still, the data is recoverable. It is not lost.
12:00 xavih pranithk: yes
12:01 xavih pranithk: however this is a bit complex to implement on the client side, isn't it?
12:01 pranithk xavih: no no, it will be implemented on the server side. As part of the pre-op we do this logging on the brick.
12:02 xavih pranithk: you would need to send the data in a special operation (maybe a lookup), but then you would also need to send the data in the write call
12:02 xavih pranithk: but how do you log the data that will be written? You need it
12:03 xavih pranithk: or you have to modify the write fop in a radical way, or not use it at all
12:03 pranithk xavih: I didn't understand your previous question :-(
12:03 pranithk xavih: I am thinking of the second approach. The actual write fop will become something like 'replay what we wrote in the log'
12:03 xavih pranithk: if you log the data that will be written before writing it, you need to receive that data, right ?
12:04 pranithk xavih: ^^
12:04 xavih pranithk: so you will have a write fop that only works as a commit message (without sending data) ?
12:04 pranithk xavih: something like that, yes
12:04 xavih pranithk: and data will be sent using lookup or something else ?
12:05 pranithk xavih: No, we use the write fop with a special flag in xdata, which will write to the log
12:06 xavih pranithk: and which fop do you plan to use as the commit message ?
12:06 pranithk xavih: I am making up the answer as I go. I need to give some thought to how best to implement it.
12:06 pranithk xavih: we can use virtual setxattr
12:07 pranithk xavih: again, all these are thoughts, not concrete
12:07 xavih pranithk: maybe in xattrop, so that it can also be used to update some xattrs?
12:07 pranithk xavih: that's even better :-)
12:07 xavih pranithk: of course, I'm also thinking on it as we talk... :)
12:07 pranithk xavih: cool :-)
12:08 pranithk xavih: Question is, how will you implement the log?
12:08 pranithk xavih: that is the beast
12:09 xavih pranithk: not sure if this could interfere with snapshots, but it could easily be a circular file written sequentially and stored on SSD for maximum performance (otherwise inside .glusterfs)
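A hedged sketch of what one record in such a log might carry, and the ordering that makes replay possible; the layout and names are invented for illustration, not an existing gluster on-disk format:

    #include <stdio.h>

    struct wal_record {
        unsigned long long offset;   /* where the fragment write lands   */
        unsigned long long length;   /* how many bytes                   */
        unsigned long long version;  /* the version this write creates   */
        char data[64];               /* fragment payload (shortened)     */
    };

    int main(void)
    {
        struct wal_record rec = {4096, 11, 6, "hello world"};

        /* 1. pre-op: append rec to the (circular) log and fsync it      */
        /* 2. apply:  write rec.data at rec.offset in the fragment file  */
        /* 3. commit: bump trusted.ec.version to rec.version via xattrop */
        /* Recovery: replay any log record newer than the file's version;
         * the log says exactly what the fragment must contain, which is
         * the point pranithk makes above. */
        printf("logged %llu bytes at %llu for version %llu\n",
               rec.length, rec.offset, rec.version);
        return 0;
    }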
12:10 xavih pranithk: if there's something similar in NSR maybe we could look at it
12:10 xavih pranithk: maybe it's useful for this
12:11 pranithk xavih: yes.
12:13 pranithk xavih: Let me learn a bit more about erasure codes as well. I feel we need this write-ahead logging to minimize the number of irrecoverable instances. Did you ever develop a WAL protocol before?
12:14 pranithk xavih: one more question
12:14 pranithk xavih: about versions
12:14 xavih pranithk: no, I haven't
12:15 xavih pranithk: tell me
12:16 pranithk xavih: Can parallel writes happen on two different regions of the file in ec? i.e. we have two non-overlapping writes w1 and w2, both sent in parallel; w1 succeeded on some bricks and w2 succeeded on others. Can two fragments end up with the same version even though they actually represent different sets of fragments?
12:17 pranithk xavih: let me give example:
12:17 pranithk xavih: we have (2+1) bricks
12:17 xavih pranithk: this cannot happen in the current version because all writes are serialized. If one write fails on one brick, this brick won't receive further writes (for the same file)
12:18 pranithk xavih: how is serialization achieved?
12:19 pranithk xavih: full file INODELK?
12:19 xavih pranithk: yes. Currently all locks lock the full file, and this lock is reused if multiple writes are received
12:20 pranithk xavih: got it. wait
12:20 pranithk xavih: what if the following happens?
12:20 pranithk xavih: we have (2+1) bricks
12:21 pranithk xavih: ah! we don't send writes if the versions don't match
12:21 xavih pranithk: right
12:21 pranithk xavih: that makes things simple
12:22 xavih pranithk: and we don't send a write before having received the answer to the previous one (to make it possible to mark the file on that brick as bad and not send more changes)
12:22 pranithk xavih: yes, cool.
12:23 xavih pranithk: I was planning to allow non-overlapping writes to go in parallel, but this is more complex, as you have seen, and the problems are not trivial to solve
12:23 pranithk xavih: exactly :-)
12:24 pranithk xavih: not a lot of applications do it anyway, so I guess we are good
12:25 xavih pranithk: with something like dfc I could allow full parallel writes, even if they overlap, and this problem could be solved easily... :)
12:25 pranithk xavih: hey wait, I saw that reads take locks as well. If there is a logfile on an ec volume and writes keep happening to the file and someone else does tailf, what will happen?
12:25 pranithk xavih: :-)
12:26 pranithk xavih: As you said, as long as writes happen it doesn't release the lock; I was wondering if tailf hangs until writes stop on the brick
12:27 xavih pranithk: if it's made from the same gluster client, it will work as expected (reads and writes share the same lock)
12:27 xavih pranithk: if made from multiple clients, currently there's an issue (I have a bug about that)
12:27 pranithk xavih: cool
12:28 xavih pranithk: I plan to solve this by limiting the number of requests that a client can do with the lock held
12:28 pranithk xavih: okay, I think the only thing I see now is that logging part and self-heal-daemon kind of thing in ec
12:28 pranithk xavih: makes sense
12:28 xavih pranithk: this way it gives other clients the opportunity to access the file
12:28 pranithk xavih: exactly
12:29 xavih pranithk: this can also be combined with open-fd-count (or something like that) that posix can return
12:29 pranithk xavih: could you give me some easy bugs so that I can fix a few?
12:29 pranithk xavih: yes
12:29 edward1 joined #gluster-dev
12:30 xavih pranithk: maybe the easiest bug I have right now is this one...
12:30 xavih pranithk: BTW, have you seen the emails about the timer issue ?
12:30 pranithk xavih: not really. I am not aware of that implementation much. I can take a look.
12:32 xavih pranithk: I've a bug caused by a structure being referenced in a timer callback when that timer has already been cancelled...
12:32 pranithk xavih: hey, I need to leave home now. Is it okay if I take this open-fd bug so that I will get exposure to the transaction?
12:32 pranithk xavih: oh
12:32 xavih pranithk: the current timer api does not let you know if the timer has really been cancelled
12:32 pranithk xavih: I will check that mail and see what the problem is once I get home
12:32 xavih pranithk: and this makes it difficult to release resources
12:32 pranithk xavih: ah!
12:32 pranithk xavih: got it
12:33 pranithk xavih: I will need to check the code. I will take a look and respond
12:33 xavih pranithk: there's a mail in gluster-devel talking about that and explaining an alternative
12:33 pranithk xavih: yeah, I remember shyam responded to that one. Will need to read about it a bit more.
12:34 xavih pranithk: ok. Thanks :)
12:34 pranithk xavih: Could you assign this 'inodelk give-up after some time' bug to me?
12:34 xavih pranithk: no problem. You can look at this bug
12:34 pranithk xavih: assign it or give me the bug id. I will take it in my name
12:34 xavih pranithk: I'm not sure if I can assign it to you. I'll try
12:35 pranithk xavih: okay, give me the bug id
12:35 pranithk xavih: I can take a look
12:35 xavih bugs: 1161903 (3.6) and 1165041 (master)
12:35 xavih pranithk: thanks :)
12:35 pranithk xavih: cool, thanks
12:36 xavih pranithk: if you need any help, please ask ;)
12:36 pranithk xavih: I will be actively working on ec
12:36 pranithk xavih: Oh totally :-)
12:36 xavih pranithk: great :)
12:36 pranithk xavih: I was supposed to start on them 3 weeks ago but customer issues got in the way.
12:36 xavih pranithk: no problem :)
12:37 pranithk xavih: My immediate plan is to understand ec and come up with design to integrate with self-heal-daemon
12:37 xavih pranithk: that would be very good
12:38 pranithk xavih: okay. I'll cya later. Keep me posted if you come up with something for the WAL protocol we discussed. I think it is necessary in ec, not so much in afr.
12:38 xavih pranithk: sure
12:38 xavih pranithk: see you :)
12:38 pranithk xavih: cya, will be logging off in a bit.
12:42 lalatenduM joined #gluster-dev
12:46 shubhendu joined #gluster-dev
12:52 tdasilva joined #gluster-dev
13:03 rtalur_ joined #gluster-dev
13:04 lpabon joined #gluster-dev
13:10 shyam joined #gluster-dev
13:13 anoopcs joined #gluster-dev
13:57 ppai joined #gluster-dev
13:58 ppai lalatenduM, https://kojipkgs.fedoraproject.org//work/tasks/7114/8167114/build.log
13:58 lalatenduM ppai, thanks
13:58 lalatenduM kkeithley, ndevos hchiramm_ take a look https://kojipkgs.fedoraproject.org//work/tasks/7114/8167114/build.log
13:58 lalatenduM Samba build is failing as it is looking for glusterfs-api >= 4
13:58 hagarth joined #gluster-dev
14:07 ndevos lalatenduM: hmm, I wonder how they check that version, maybe through a pkg-config .pc file?
14:08 ndevos lalatenduM: we should send a fix to samba for that
14:08 ndevos and, maybe to qemu and others too...
14:09 ndevos or, we create a .pc file that contains some higher version... but with the symbol versioning I would not know what version to pick
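For context: a dependency like "glusterfs-api >= 4" is resolved against the Version field of the installed .pc file. An illustrative glusterfs-api.pc (field values are examples, not the shipped file) and the check it feeds:

    # glusterfs-api.pc (illustrative)
    prefix=/usr
    libdir=${prefix}/lib64
    includedir=${prefix}/include

    Name: glusterfs-api
    Description: GlusterFS API
    Version: 7
    Libs: -L${libdir} -lgfapi
    Cflags: -I${includedir}

    # what "glusterfs-api >= 4" boils down to:
    pkg-config --atleast-version=4 glusterfs-api && echo ok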
14:09 wushudoin joined #gluster-dev
14:11 lalatenduM ndevos, yes a pkg-config .pc
14:11 lalatenduM file
14:11 lalatenduM checking qemu now
14:15 ndevos lalatenduM: pkg-config can handle SO_NAME versioning, not sure if it can do symbol versioning
14:16 ndevos kkeithley_: are you a pkg-config .pc expert?
14:16 Anuradha joined #gluster-dev
14:34 ndevos lalatenduM: I tend to think the .pc file should contain the version of the glusterfs sources, so 3.6.1, 3.4.6 or 3.5.3 and the like
14:35 lalatenduM ndevos, not sure what is right, but that falls into the same category of things for which we did symbol versioning
14:36 ndevos lalatenduM: yes, gluster-api.pc would have contained version=7 before the symbol versioning; it is now back to 0 (I think, did not check)
14:36 _Bryan_ joined #gluster-dev
14:37 ndevos or, we could set the version in the .pc file to 4.<version>, like 4.3.6.2
14:38 ndevos that would prevent the need to update samba, qemu and possibly others
14:40 kshlm joined #gluster-dev
14:40 soumya joined #gluster-dev
14:41 lalatenduM ndevos, yeah, we need to check other pkgs and then decide
14:46 ndevos lalatenduM: is there a bug for this already? we should note the options we have there
14:47 lalatenduM ndevos, I don't think so, else you would have seen it by now :)
14:47 lalatenduM ndevos, interestingly, I don't see any mail about this on fedora devel either
14:48 ndevos lalatenduM: I guess that there have not been any rebuilds of packages yet
14:48 lalatenduM The issue came up while Poornima was trying to compile Samba with gluster
14:49 ndevos lalatenduM: I think the .pc file with a 4.<version> makes everyone happy
14:49 lalatenduM ndevos, here is the rebuild http://koji.fedoraproject.org/koji/buildinfo?buildID=593897
14:49 ndevos (except for the ones that prefer useful versions, but they lost already when version=4 got used)
14:50 lalatenduM ndevos, do you know the history behind using 4.<version> in the .pc file
14:50 lalatenduM not sure why we are doing that
14:51 lalatenduM when the .so file version was at so.0
14:51 tdasilva joined #gluster-dev
14:52 ndevos lalatenduM: that version in the .pc file got increased when new functions were added to libgfapi.so; new functions do not require an SO_NAME change - only incompatible changes would need that
14:53 lalatenduM ndevos, yeah makes sense now
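The symbol versioning being discussed is done with a GNU ld version script; a sketch of the mechanism (node names and symbol lists are illustrative, not copied from the real gfapi.map):

    /* each release that adds functions gets a new version node;
     * existing nodes never change, so the SO_NAME stays at .so.0 */
    GFAPI_3.4.0 {
        global:
            glfs_new;
            glfs_init;
        local:
            *;
    };

    GFAPI_3.5.0 {
        global:
            glfs_get_volfile;
    } GFAPI_3.4.0;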
14:54 lalatenduM ndevos, that means samba, qemu, nfs-ganesha builds made before 3.6.1 will not work with 3.6.1 now
14:54 ndevos lalatenduM: with symbol versioning the version=4 in the .pc does not make much sense anymore; the versioned symbols are tied to the glusterfs version
14:55 ndevos lalatenduM: it depends, only if they use the .pc file and check for the version
14:55 lalatenduM ndevos, hmm
14:55 ndevos lalatenduM: you can also use configure.ac to test for functions in a header or library; there is no need to use pkg-config
14:56 ndevos pkg-config is just one of the options to check for certain libs/functions
14:56 ndevos (and configure.ac can also use pkg-config to do those checks)
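The configure.ac alternative ndevos describes could look like this (a generic autoconf pattern; glfs_init is a real gfapi entry point, while the variable names around it are made up):

    AC_CHECK_LIB([gfapi], [glfs_init],
                 [have_gfapi=yes], [have_gfapi=no])
    AS_IF([test "x$have_gfapi" = "xyes"],
          [AC_DEFINE([HAVE_GFAPI], [1], [libgfapi is available])])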
14:56 rafi1 joined #gluster-dev
14:57 pranithk joined #gluster-dev
14:57 lalatenduM ndevos, ok
14:58 nkhare joined #gluster-dev
15:06 hagarth pranithk: ping, around?
15:07 ndevos lalatenduM: I propose this as a solution: http://paste.fedoraproject.org/152464/16495999
15:07 pranithk hagarth: yes
15:07 lalatenduM ndevos, checking
15:07 pranithk hagarth: I am trying to get into the conference but the number is not connecting :-(
15:08 ndevos lalatenduM: we can include that in the Fedora package, re-build for Rawhide and ask the samba guys to rebuild tomorrow (or the day after)
15:08 dlambrig_ joined #gluster-dev
15:12 lalatenduM ndevos, in fedora dist-git we use the configure.ac from the source tarball
15:12 lalatenduM ahh, we can create a patch with it
15:13 ndevos lalatenduM: yes, sure, but we can add a patch in fedora dist-git (or just "sed") to fix up the gluster-api.pc file
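Such a fixup could be a one-liner in the spec, along these lines (illustrative; not the exact change that was committed):

    # advertise 4.<glusterfs version> in the generated .pc file
    sed -i 's/^Version:.*/Version: 4.3.6.1/' glusterfs-api.pc.in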
15:15 ndevos lalatenduM: can you file a bug for this? I'm happy to update the .spec in Fedora and post the patch for review
15:16 lalatenduM ndevos, on it
15:17 ndevos lalatenduM: okay, I'll let asn know about it, can you pass it on to Poornima?
15:17 lalatenduM ndevos, yes
15:18 lalatenduM I don't see 3.6.1 in bugzilla, hchiramm_ ^^
15:23 ndevos lalatenduM: just file it against mainline :)
15:23 lalatenduM yeah , doing that
15:24 ndevos lalatenduM: I told asn that I'll fix it in Rawhide today, he seems happy with that
15:24 lalatenduM ndevos, cool
15:28 dlambrig_ left #gluster-dev
15:35 lalatenduM ndevos, here is the bug
15:35 lalatenduM https://bugzilla.redhat.co​m/show_bug.cgi?id=1166232
15:35 glusterbot Bug 1166232: urgent, urgent, ---, bugs, NEW , Libgfapi symbolic version breaking Samba Fedora rawhide (22)  koji builds
15:35 ndevos lalatenduM++ thanks!
15:35 glusterbot ndevos: lalatenduM's karma is now 47
15:40 bfoster joined #gluster-dev
15:45 ndevos kkeithley_, lalatenduM: please review http://review.gluster.org/9154 :)
15:46 ndevos lalatenduM: I want to commit http://paste.fedoraproject.org/152482/64983791 to fedora dist-git, look ok?
* ndevos builds a test-package now, just to make sure :)
15:49 lalatenduM ndevos, scratch build? yes plz
15:50 ndevos lalatenduM: I tend to build locally with mock, but have started a scratch one for you now: http://koji.fedoraproject.org/koji/taskinfo?taskID=8194378
15:54 lalatenduM ndevos++ :)
15:54 glusterbot lalatenduM: ndevos's karma is now 59
15:55 Guest96353 joined #gluster-dev
15:57 kkeithley_ I thought lalatenduM already bumped Release: to -2 (for removing regression-tests)
15:58 lalatenduM kkeithley, yes, that's right
15:58 ndevos oh, maybe I do not have that commit yet?
15:58 ndevos and that sed is broken, it misses a /
16:00 kkeithley_ fix those and then it's okay, IMO
16:06 kkeithley_ if someone wanted to review the updated libgfapi symbol versions and the Ubuntu code audit patches, that'd be a Good Thing®
16:06 lalatenduM kkeithley, ndevos what about GFAPI_LT_VERSION="0:0:0" in http://review.gluster.org/#/c/9154/1/configure.ac
16:07 ndevos lalatenduM: http://koji.fedoraproject.org/koji/taskinfo?taskID=8194492 2nd try
16:07 lalatenduM what does LT stand for?
16:07 kkeithley_ I actually don't know what that is. I don't think I changed it as part of the symbol versions
16:07 lalatenduM kkeithley, yes you have changed it :)
16:07 ndevos lalatenduM: I think that is the .so.0.0.0 that gets made when passing that option to the linker
16:07 lalatenduM git show 7e497871d11a3a527e2ce192e4274322631f27d0 configure.ac
16:08 kkeithley_ oh, okay.
16:08 lalatenduM ndevos, hmm
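For reference, GFAPI_LT_VERSION feeds libtool's -version-info current:revision:age triple, and 0:0:0 is what yields the libgfapi.so.0.0.0 ndevos mentions. Roughly, in a Makefile.am (fragment shown for illustration):

    # libtool derives the .so suffix from current:revision:age
    libgfapi_la_LDFLAGS = -version-info $(GFAPI_LT_VERSION)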
16:10 hchiramm joined #gluster-dev
16:13 lalatenduM ndevos, kkeithley ${PACKAGE_VERSION}, where does it get its value?
16:14 ndevos lalatenduM: it is a default in configure.ac
16:15 ndevos lalatenduM: in the end (or beginning) it gets it from ./build-aux/pkg-version
16:16 lalatenduM ndevos, example values?
16:17 lalatenduM ndevos, I hope it would be 4.0.0.0 or something
16:17 ndevos lalatenduM: ./configure executes './build-aux/pkg-version --version' and uses the output as PACKAGE_VERSION
16:17 ndevos lalatenduM: the output would be 3.6.1, and we just put a static "4." in front of that in the .pc
16:18 kkeithley_ is there actually a reason why it should ever change from 4 at this point?
16:18 kkeithley_ well, sorry, assuming that the library api never changes again
16:19 ndevos kkeithley_: maybe other projects want to use pkg-config to check for the availability of new symbols?
16:19 kkeithley_ IOW do we really want to always change it with every release?
16:19 lalatenduM :), I am assuming we will stay at 4 for near future
16:19 ndevos I assume it will always stay "4." and then the actual glusterfs version
16:20 lalatenduM ndevos, why do we need the actual glusterfs version there?
16:20 kkeithley_ I understand that part, i.e. 4
16:20 ndevos unless we encounter a project that uses a pkg-config check for >= 5
16:21 ndevos lalatenduM: if samba starts to use a new function in libgfapi, they need to check for the libgfapi version; if that is always "4", they cannot check for the new function
16:21 kkeithley_ E.g. in 3.5.4, if we don't add any new APIs, which by definition means no new symbol versions, why would we change the version in glusterfs.pc?
16:22 ndevos kkeithley_: in that case it can stay on "4", but in case new symbols get added, we need to increase it
16:23 kkeithley_ fair enough. I'm okay with that. But your sed in the glusterfs.spec is going to change it with every Version bump, isn't it?
16:24 ndevos kkeithley_: it will start to fail when glusterfs-3.6.2 is released and includes a fix
16:25 ndevos but yes, it will always reflect the "4.<version>" with the change
16:25 kkeithley_ going to meet my wife for coffee/lunch. biab
16:25 ndevos enjoy!
16:25 ndevos kkeithley_: you can call that a date ;)
16:25 kkeithley_ so, I just don't want to see it change from 4.3.6.2 to 4.3.6.3 if there are no API and symbol version changes
16:26 ndevos kkeithley_: why not?
16:26 kkeithley_ if there are no new APIs in libgfapi?
16:26 ndevos who would care about the version?
16:26 ndevos or do some projects check "= 4" for pkg-config?
16:27 kkeithley_ samba, ganesha?
16:27 ndevos no, they check >= 4
16:27 kkeithley_ Do we know what anyone does in a configure (or configure.ac) script?
16:27 kkeithley_ okay. I guess I don't care
16:27 kkeithley_ I guess you've convinced me that I don't care
16:27 kkeithley_ ;-)
16:27 ndevos :P
16:28 kkeithley_ ganesha call.
16:28 kkeithley_ lol
16:28 ndevos I guess we'll hear about it when other projects start to break
16:28 ndevos yes, would you cancel/postpone your date for a call?
16:28 * ndevos fetches a drink
16:29 lalatenduM ndevos, you too have my +1
16:29 lalatenduM kkeithley, enjoy the date :)
16:37 davemc good day
16:37 davemc when do we want the bitrot talk?
16:48 hagarth davemc: how about next Tuesday?
16:48 davemc time of day?
16:50 hagarth davemc: that's the million dollar question :)
16:50 davemc yep, but we really don't have time for a poll to determine it.
16:51 hagarth we would need to do either early morning or early evening Pacific time to get optimal overlap
16:51 hagarth early morning Pacific would be better since we get more TZs covered that way
16:52 davemc we could do 4 AM my time I guess
16:52 davemc and we want it to be a video event right?
16:54 hagarth davemc: even 5 AM your time would work I think
16:54 hagarth davemc: yes something like a google hangout
16:54 davemc that's slightly easier on me
16:55 davemc K, I'll start pulling this together and hit the announce circuit. I'll use the BitRot feature page as a source for material
16:56 hagarth davemc: ok, cool. If you want to make it more convenient, 6 AM could work for overclk too. We would need to check with him though.
16:56 davemc we'll go with 5 AM. I need to get this out today
16:56 hagarth davemc: ok
17:11 bernardo joined #gluster-dev
19:18 davemc We will be discussing approaches to providing BitRot detection in future GlusterFS releases. Please join us to discuss your ideas and learn more about GlusterFS futures. http://bit.ly/1uXNIIL for details
19:19 davemc For more background, please visit http://www.gluster.org/community/documentation/index.php/Features/BitRot
19:37 rafi1 joined #gluster-dev
19:57 jobewan joined #gluster-dev
20:40 badone joined #gluster-dev
20:57 badone joined #gluster-dev
