
IRC log for #gluster-dev, 2014-11-10


All times shown according to UTC.

Time Nick Message
00:46 JustinClift joined #gluster-dev
01:02 badone joined #gluster-dev
01:15 topshare joined #gluster-dev
02:46 ilbot3 joined #gluster-dev
02:46 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
03:27 kshlm joined #gluster-dev
03:34 badone joined #gluster-dev
03:44 kanagaraj joined #gluster-dev
04:21 shubhendu joined #gluster-dev
04:24 kdhananjay joined #gluster-dev
04:27 nkhare joined #gluster-dev
04:29 atinmu joined #gluster-dev
04:38 ppai joined #gluster-dev
04:44 rafi1 joined #gluster-dev
04:44 Rafi_kc joined #gluster-dev
04:45 anoopcs joined #gluster-dev
04:48 soumya__ joined #gluster-dev
04:49 ndarshan joined #gluster-dev
05:14 atalur joined #gluster-dev
05:20 hagarth joined #gluster-dev
05:20 anoopcs joined #gluster-dev
05:22 spandit joined #gluster-dev
05:22 jiffin joined #gluster-dev
05:25 topshare joined #gluster-dev
05:31 nishanth joined #gluster-dev
05:31 aravindavk joined #gluster-dev
05:38 lalatenduM joined #gluster-dev
05:48 ndarshan joined #gluster-dev
05:48 shubhendu joined #gluster-dev
05:48 nishanth joined #gluster-dev
05:51 bala joined #gluster-dev
06:20 shubhendu joined #gluster-dev
06:21 nishanth joined #gluster-dev
06:21 ndarshan joined #gluster-dev
06:25 bala joined #gluster-dev
06:31 krishnan_p joined #gluster-dev
06:44 soumya joined #gluster-dev
06:44 pranithk joined #gluster-dev
06:57 vimal joined #gluster-dev
07:04 soumya joined #gluster-dev
07:05 bala joined #gluster-dev
07:15 aravindavk joined #gluster-dev
07:16 ppai joined #gluster-dev
07:26 Humble https://bugzilla.redhat.com/show_bug.cgi?id=1161893 lalatenduM ndevos
07:26 glusterbot Bug 1161893: urgent, urgent, ---, bugs, NEW , volume no longer available after update to 3.6.1
07:27 lalatenduM Humble, yeah looks scary
07:27 Humble yep .. :(
07:27 Humble even with fresh installation of 3.6.1 he is facing this issue..
07:28 Humble https://bugzilla.redhat.com/show_bug.cgi?id=1161893#c1 ^^
07:28 glusterbot Bug 1161893: urgent, urgent, ---, bugs, NEW , volume no longer available after update to 3.6.1
07:28 Humble till step 6 it looks good..
07:28 Humble step 7 says :  set nfs.disable on for the volume since I have several nfs exported filesystems and they would conflict
07:33 topshare joined #gluster-dev
07:38 _Bryan_ joined #gluster-dev
07:55 ppai joined #gluster-dev
08:11 ndevos lalatenduM, Humble: has that entry in /etc/fstab been marked with _netdev?
08:30 ppai joined #gluster-dev
08:39 lalatenduM ndevos, good point
08:39 lalatenduM ndevos, however all works fine with 3.5.2 it seems
08:40 hagarth Humble: might be worthwhile to get the logs too
08:40 Humble ndevos, _netdev affects only mounts done during the boot process, right?
08:40 Humble he faced the issue right after the upgrade.
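For reference, a minimal sketch of the kind of /etc/fstab entry ndevos is asking about; the server, volume name and mount point are illustrative, not taken from the bug report:

    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0

Without _netdev the mount can be attempted during boot before the network and glusterd are up, which typically explains a missing mount after a reboot, though not by itself a volume that "gluster volume status" reports as not online.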
08:43 Humble hagarth, do you see any possibility of a client/server version compatibility issue here?
08:43 Humble hagarth, it looks like he is having a replica setup
08:45 vimal joined #gluster-dev
08:46 hagarth Humble: "volume status shows volume as not online"
08:46 hagarth seems suspicious to me
08:46 Humble oh.. missed that..
08:46 hagarth I think it would be better to get glusterfs logs from the setup
08:46 hagarth and see why 3.6.1 failed to startup
08:47 Humble yep .. its better to request logs..
08:50 lalatenduM hagarth, Humble is the volume going to a "not started" state after the update, or something like that?
08:50 hagarth lalatenduM: I believe that to be the case
08:50 Humble lalatenduM, could be..
08:51 topshare joined #gluster-dev
08:52 Humble ndevos, does "nfs.disable" have any role here in the volume not starting after the upgrade ?
08:58 ndevos Humble: it should not!
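For context, a hedged sketch of the commands behind the step being discussed; the volume name is illustrative:

    gluster volume set myvol nfs.disable on   # stop the built-in Gluster NFS server from exporting this volume
    gluster volume status myvol               # this is what the reporter says shows "not online"
    gluster volume start myvol                # only relevant if the volume is actually in the Stopped state

nfs.disable only turns off the Gluster NFS export for that volume; as ndevos says, it should not prevent the volume from starting.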
09:10 ZhangHuan joined #gluster-dev
09:26 lalatenduM hagarth, regarding the CentOS Dojo,  do we know who is going to give session on Gluster on CentOS
09:28 hagarth lalatenduM: krishnan_p might be able to do that .. haven't checked as yet
09:29 krishnan_p hagarth, lalatenduM when is this session? Does it matter that I haven't used CentOS distribution before?
09:29 lalatenduM hagarth, cool, we have just one slot empty for the day
09:30 lalatenduM krishnan_p,  it should be related to CentOS,  check http://wiki.centos.org/Events/Dojo/Bangalore2014
09:31 krishnan_p lalatenduM, Oh. I am not a CentOS user :(
09:32 lalatenduM krishnan_p, you can give a demo on CentOS , it should be ok
09:34 lalatenduM krishnan_p, CentOS is a binary clone of RHEL :)
09:49 krishnan_p lalatenduM, I will let you know if I can make it to the CentOS dojo.
09:49 lalatenduM krishnan_p, I am pretty sure you will, hagarth and I are counting on you :)
09:55 vimal joined #gluster-dev
09:59 pranithk joined #gluster-dev
10:08 soumya joined #gluster-dev
10:13 suliba joined #gluster-dev
10:17 soumya joined #gluster-dev
10:18 ppai joined #gluster-dev
10:52 kshlm joined #gluster-dev
11:43 soumya__ joined #gluster-dev
11:43 ndevos hagarth: btw, is there a reason you are holding back a 3.6.1 announcement?
11:44 shyam joined #gluster-dev
11:44 ndevos I'm wondering if I should close the bugs that are fixed in 3.6.1 (and 3.6.0), but I'd like to point to an email when doing that :)
11:44 hagarth ndevos: no! attribute it to my copious amount of time I'm getting to do things right now :)
11:45 hagarth ndevos: let me send one right away
11:45 ndevos hagarth: ah, okay, cool!
11:49 krishnan_p xavih, there?
11:52 ws2k3 is there a gluster 3.6 debian repo already ?
11:54 xavih krishnan_p: Hi
11:55 krishnan_p xavih, I wanted to bounce off an idea that I have regarding the timer_call_cancel problem you posted on the devel mailing list.
11:57 krishnan_p xavih, I am not sure how to control the execution of the call back, since the scheduling of the timer event is abstracted from the caller.
11:57 xavih krishnan_p: I'm just writing a possible solution
11:57 krishnan_p xavih, I was wondering if we could extend the callback to have another argument, just like synctask_cbk_fn, that reports whether the callback was executed or cancelled
11:59 krishnan_p e.g. timer_cbk_fn (void *data, int op_ret) - here op_ret being zero would be interpreted as the callback having been executed; otherwise we can infer that the callback was cancelled
11:59 krishnan_p Does that make sense?
12:00 xavih krishnan_p: well that could be a solution. I was trying to give that information to the caller of the cancel
12:01 krishnan_p xavih, OK. It would be interesting to see your solution.
12:01 xavih krishnan_p: maybe using your idea simplifies the logic of the caller of cancel. And use the additional parameter to simply convert the callback into a noop
12:02 xavih krishnan_p: basically I was returning 1 if the callback has been cancelled, and 0 if it has been executed (or it's being executed)
12:02 hagarth ndevos: done
12:03 ndevos hagarth++ thanks!
12:03 glusterbot ndevos: hagarth's karma is now 23
12:03 krishnan_p xavih, OK. I get your idea.
12:03 xavih krishnan_p: this approach seems to be compatible with current code because there's no place where the return value of gf_timer_call_cancel() is used
12:03 xavih krishnan_p: however I like your idea...
12:04 krishnan_p xavih, the way I think of the timer API is like a mechanism to defer execution. This allows the caller to break down the execution into "top-half" and "bottom-half"
12:05 krishnan_p xavih, it would be convenient for the caller to have the bottom half safely stashed in one place, the timer_cbk. With op_ret, the consumer can decide whether or not to perform the bottom-half.
12:05 krishnan_p xavih, but your idea is compatible with existing code.
12:05 krishnan_p xavih, I was thinking we could get a little bold and change the API, once we are confident we got it right, and make all callers move to the new (and improved) one :)
12:06 krishnan_p xavih, Let's see what others have to say about this on the list
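A rough C sketch of the callback shape krishnan_p is proposing. The type and function names below are illustrative, not the actual GlusterFS timer API; the point is only that the callback always runs and op_ret tells it whether the timer was cancelled:

    /* proposed two-argument timer callback; all names here are hypothetical */
    typedef void (*gf_timer_cbk_ex_t) (void *data, int op_ret);

    struct my_state {
            int lock_held;          /* example of state the bottom-half acts on */
    };

    static void
    my_timer_cbk (void *data, int op_ret)
    {
            struct my_state *state = data;

            if (op_ret == 0) {
                    /* timer fired normally: run the deferred "bottom-half" */
                    state->lock_held = 0;
            } else {
                    /* timer was cancelled: skip the bottom-half; the caller
                     * of the cancel keeps ownership of the resource */
            }
    }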
12:06 xavih krishnan_p: I'm thinking about your solution. What happens if the callback is being executed (no one has cancelled it yet), but just when it's starting to execute, another thread calls gf_timer_call_cancel() ?
12:08 edward1 joined #gluster-dev
12:09 krishnan_p xavih, let me try answering that. Bear with me :)
12:09 xavih krishnan_p: the caller of gf_timer_call_cancel() will think it has cancelled the callback
12:09 krishnan_p xavih, the way I am thinking, the callback will be executed always. Hold your breath :)
12:10 krishnan_p xavih, if cancel was called just in time, then the call back is called with a non-zero op_ret and zero otherwise
12:10 krishnan_p xavih, this way, the callback knows whether the cancel succeeded or not, but the caller of timer_cancel itself does not.
12:11 xavih krishnan_p: then all cleanup code will need to be placed in the callback, even if someone cancels the timer ?
12:11 krishnan_p xavih, yes.
12:11 xavih krishnan_p: couldn't this defer the release of some resources for much longer than needed ?
12:12 Humble lalatenduM++
12:12 glusterbot Humble: lalatenduM's karma is now 42
12:12 xavih krishnan_p: and what would happen if the caller of gf_timer_call_cancel() wants to cancel the callback to continue using some resource ?
12:13 xavih krishnan_p: for example I'm using a timer to delay the release of a lock (similar to what afr does), but if a compatible request arrives in time, I cancel the timer to reuse the lock.
12:16 krishnan_p xavih, sorry was afk a bit
12:16 krishnan_p xavih, let me read your questions ...
12:16 xavih krishnan_p: np :)
12:17 soumya__ joined #gluster-dev
12:17 krishnan_p xavih, i am going to try and save my approach :)
12:18 krishnan_p xavih, so, you could call the callback in the timer_cancel code, (even) if you managed to 'dequeue' the callback. i.e before it was executed.
12:19 krishnan_p xavih, does that cover your use case? This way we don't make resource reclamation lazy and therefore non-deterministic
12:19 xavih krishnan_p: that sounds much better :)
12:19 xavih krishnan_p: if I understand correctly, when you cancel a timer 3 things could happen:
12:20 xavih krishnan_p: 1. The callback has already been executed: Nothing to do here
12:20 xavih krishnan_p: 2: The callback has not been executed: It's called passing op_ret != 0
12:22 xavih krishnan_p: 3. The callback is being executed. Will we need to wait here for completion before returning to the gf_timer_call_cancel() caller ?
12:23 krishnan_p xavih, In case 3, the cancellation will fail the same way as case 1.
12:23 krishnan_p xavih, I am assuming, being executed is the same as, not being able to find the event in the list of events that are waiting to be scheduled
12:24 xavih krishnan_p: but in this case, when gf_timer_call_cancel() returns, the caller does not have a way to be sure that the callback is not being executed concurrently
12:24 krishnan_p xavih, sorry I am not assuming, in fact, being executed is true if and only if the event has been dequeued from the list of events to be scheduled
12:24 krishnan_p xavih, hmm. yes.
12:24 xavih krishnan_p: couldn't this be problematic ?
12:26 xavih krishnan_p: I don't have a specific case, but what if the caller wants to reuse something that the callback can also use if not cancelled ? if the callback is still executing, there could be races between both threads
12:27 krishnan_p xavih, I have no problem with the cancel timer returning whether it managed to cancel the timer event
12:27 krishnan_p xavih, my aim is to make it easier for consumers of the timer API to keep the resource cleanup code in one place, i.e. the bottom-half.
12:28 xavih krishnan_p: yes, I agree, but I see some potential problems with case 3, especially when there is some resource management involved
12:29 krishnan_p xavih, wouldn't it be enough if we returned the success of the cancel?
12:30 krishnan_p xavih, in addition to other parts of the approach I explained to you.
12:30 foster joined #gluster-dev
12:31 krishnan_p xavih, by this we can address both styles of resource management. 1) the timer callback handles resource cleanup in both "executed" and "cancelled" mode. (cleanup code in one function)
12:31 xavih krishnan_p: yes, I think this would be the bare minimum. At least the caller must know if the callback has been executed normally or with "cancel" semantics
12:31 krishnan_p xavih, 2) call cancel returns success, which can be used to perform the resource cleanup, assuming that the call back works only in the "executed" mode
12:33 krishnan_p xavih, Without call cancel returning success/failure, we would only make method 1) possible, which may be restricting (I guess)
12:34 xavih krishnan_p: even if the callback takes care of the resource cleanup, it's necessary to know if a cancel succeeded or not, not to release the resources, but to know if a resource can be used (not released by the callback) or not (the callback was executed before cancelling it)
12:35 krishnan_p xavih, Yes. That was very well put.
12:35 ws2k3 is there a gluster 3.6 debian repo already ?
12:35 ws2k3 for 3.5.2 there is a repo but for 3.6 i am unable to find it
12:36 xavih krishnan_p: I think the use of an op_ret argument on the callbacks will allow writing some of the code in a single place, which is good
12:36 rafi1 joined #gluster-dev
12:37 krishnan_p xavih, thanks. Do you know places in your code where you would find it useful?
12:38 xavih krishnan_p: this, combined with an inline call to the callback if a cancel happens before it executes, and letting the caller of the cancel know whether it was cancelled - I think that covers all possibilities
12:39 xavih krishnan_p: I have a race that I think is related to not correctly handling the timer callbacks (in my case basically because the caller of the cancel does not know if the callback has been executed)
12:39 krishnan_p xavih, the only concern I have with my idea is that consumers who don't care for resource cleanup in a single place are forced to check op_ret :(
12:40 krishnan_p xavih, oh ok. Given the restriction my idea puts on the consumer, I would defer suggesting it until I discover a use for structuring code that way.
12:40 xavih krishnan_p: maybe we could add an argument to gf_timer_call_cancel() to specify if the caller wants the callback function to be called. This would allow both approaches
12:40 shubhendu joined #gluster-dev
12:41 krishnan_p xavih, the more I think of it, my approach is not really solving anything, but trying to impose (subjective) code structure aesthetics :)
12:42 xavih krishnan_p: I would need to think more about that to see if there are pros/cons to both approaches
12:42 krishnan_p xavih, thanks for humouring this discussion :) Your approach (alone) does the trick for existing issues caused by not knowing whether the callback was executed.
12:43 krishnan_p xavih, let me know if it makes sense to you later.
12:43 Rafi_kc joined #gluster-dev
12:44 xavih krishnan_p: I'll think more about it. Let's see if someone else answers the mail to see other points of view :)
12:46 krishnan_p xavih, thanks.
12:46 xavih krishnan_p: thanks to you for sharing your ideas :)
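Pulling the discussion together, a hedged sketch (not the real gf_timer implementation) of the cancel semantics converged on above: cancel reports whether it managed to dequeue the pending event and, if asked to, runs the callback inline with "cancelled" semantics so cleanup can stay in one place. A plain flag stands in for dequeueing from the timer list; real code would hold the timer registry's lock, and all names are hypothetical:

    #include <stddef.h>

    typedef void (*timer_cbk_ex_t) (void *data, int op_ret);

    struct timer_event {
            timer_cbk_ex_t cbk;
            void          *data;
            int            fired;   /* set once the event leaves the pending list */
    };

    /* Returns 0 if the event was still pending and has now been cancelled
     * (case 2); optionally invokes the callback inline with op_ret != 0.
     * Returns -1 if the callback already ran or is running (cases 1 and 3),
     * so the caller must not assume exclusive ownership of shared state. */
    static int
    timer_call_cancel_ex (struct timer_event *event, int run_cbk)
    {
            if (event == NULL || event->fired)
                    return -1;

            event->fired = 1;

            if (run_cbk)
                    event->cbk (event->data, 1);

            return 0;
    }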
12:53 ws2k3 for 3.5.2 there is a repo but for 3.6 i am unable to find it
12:53 ws2k3 is there a gluster 3.6 debian repo already ?
12:59 anoopcs joined #gluster-dev
13:12 Rafi_kc joined #gluster-dev
13:37 JustinClift ndevos: Just noticed the disconnection notice in Jenkins for slave21 about the permission denied thing
13:37 JustinClift Want me to fix it?
13:41 hagarth joined #gluster-dev
13:42 kanagaraj joined #gluster-dev
13:49 ndevos JustinClift: yes please, and I was wondering if there is a real fix for that in Gerrit somewhere?
13:54 rafi1 joined #gluster-dev
13:56 aravindavk joined #gluster-dev
14:02 JustinClift ndevos: Thinking about it, we can probably add a sudo chown line to the Jenkins script to permanently keep the permissions in such a way that the jenkins user can overwrite it
14:02 JustinClift ndevos: But it does seem a bit weird this is suddenly happening
14:03 * JustinClift hasn't investigated it properly, to figure out what changed
14:03 ndevos JustinClift: yeah, and that does not really sound like a proper fix
14:03 JustinClift Dodgy but effective workaround vs "proper"
14:03 JustinClift ;)
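A hedged sketch of the workaround being described; the path and user are assumptions about the slave layout, not taken from the actual job configuration:

    # near the top of the Jenkins regression job script (illustrative path/user)
    sudo chown -R jenkins:jenkins /build/install || exit 1

This keeps the tree writable by the jenkins user across runs, but as ndevos points out it papers over whatever changed the ownership in the first place rather than fixing it.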
14:05 JustinClift ndevos: Trying to look up the maintainers for glusterd in the MAINTAINERS file.
14:05 JustinClift ndevos: Is that the "Management daemon" ?
14:06 JustinClift eg KP and Kaushal M?
14:06 kshlm JustinClift, yep. That is us.
14:06 JustinClift Cool
14:07 JustinClift The "xlators/mgmt/" entry was confusing for me. Kind of expected some mention of "glusterd". ;)
14:07 JustinClift kshlm: Do you have a minute to respond back to Atin about the mgmt_v3-locks.t ?
14:08 JustinClift This spurious failure has been hanging around too long.  Lets get it fixed. :)
14:09 nishanth joined #gluster-dev
14:09 kshlm If you go further, its actually 'xlators/mgmt/glusterd'
14:10 kshlm I'll catch up with Atin on the failure.
14:10 JustinClift kshlm: http://fpaste.org/149336/41562861/ ?
14:10 JustinClift kshlm: Thx
14:11 JustinClift Ahhh gotcha.  In the filesystem it's xlators/mgmt/glusterd/
14:11 JustinClift Not in the MAINTAINERS file tho (would be useful). ;)
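For illustration, a kernel-style MAINTAINERS entry of the kind JustinClift is suggesting; the format is assumed and the entry is hypothetical, not the current file contents:

    GlusterD (management daemon)
    M: Krishnan Parthasarathi <...>
    M: Kaushal M <...>
    F: xlators/mgmt/glusterd/

Spelling out "glusterd" next to the path would make the component easier to find when grepping the file.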
14:12 krishnan_p joined #gluster-dev
14:12 * JustinClift may get around to sending a CR for that
14:38 shyam joined #gluster-dev
14:48 topshare joined #gluster-dev
14:48 topshare joined #gluster-dev
14:49 topshare joined #gluster-dev
14:54 kshlm joined #gluster-dev
14:59 topshare joined #gluster-dev
15:05 topshare joined #gluster-dev
15:08 wushudoin joined #gluster-dev
15:31 topshare joined #gluster-dev
15:33 _Bryan_ joined #gluster-dev
15:34 topshare joined #gluster-dev
15:38 pranithk joined #gluster-dev
15:42 tdasilva joined #gluster-dev
16:02 lpabon joined #gluster-dev
16:23 soumya__ joined #gluster-dev
16:24 lpabon joined #gluster-dev
16:27 davemc hey, where are we on 3.5.3? beta available yet?
16:27 davemc <lost in the San Francisco fog today>
16:29 davemc never mind, found it, http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.3beta2 on 5-Nov
16:45 bala joined #gluster-dev
16:46 aravindavk joined #gluster-dev
17:02 kanagaraj joined #gluster-dev
17:05 _Bryan_ joined #gluster-dev
17:13 xavih it would be interesting to run a smoke test on each patch for each volume type. Currently it's only checked for a replicated volume
17:16 hagarth xavih: +1, can you post this request on gluster-infra ML ?
17:16 xavih hagarth: ok, will do
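A hedged sketch of what per-volume-type smoke coverage could look like; hostnames, brick paths and counts are illustrative, and "force" is included because bricks on the root filesystem need it:

    H=$(hostname)
    # replicated - roughly what the current smoke test exercises
    gluster volume create smoke-rep  replica 2 $H:/bricks/r1 $H:/bricks/r2 force
    # plain distribute
    gluster volume create smoke-dist $H:/bricks/d1 $H:/bricks/d2 force
    # dispersed (new in 3.6)
    gluster volume create smoke-disp disperse 3 redundancy 1 $H:/bricks/e1 $H:/bricks/e2 $H:/bricks/e3 force

Each volume would then be started, mounted and run through the same checks the replicated smoke test does today.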
17:30 jobewan joined #gluster-dev
17:50 lalatenduM joined #gluster-dev
17:55 JustinClift hagarth: Ahhh, so kube1 & kube2 are yours?
17:55 hagarth JustinClift: no
17:55 hagarth JustinClift: it was just an educated guess from me :)
17:55 JustinClift The naming makes sense though
17:55 JustinClift Yeah
17:56 JustinClift I really wish people would email the list when they create VM's, so we know what's what
17:56 JustinClift It's not like that's hard to do. ;)
17:57 JustinClift Hmmm.  Corvid tech email about "inode needed for version checking"
17:59 JustinClift That warning is showing up for an older Coverity run on GlusterFS too
17:59 hagarth JustinClift: responded ;)
17:59 JustinClift hagarth: Tx :)
17:59 JustinClift kkeithley: Would it impact us badly to keep the Coverity runs around for longer?
18:00 JustinClift eg 2 months instead of um, the 1 month or so it seems to be
18:00 JustinClift kkeithley: Asking only because I'm getting a 404 on a Google search result, that's pointing back to Coverity static analysis results in 3.6 release from September
18:00 JustinClift Unsure if keeping them longer would be helpful tho ;)
18:23 hagarth JustinClift: I think we just need to integrate Sonar with gerrit
18:24 hagarth Humble and misc were discussing a gerrit upgrade on gluster-infra. If we can move gerrit to 2.8+, we can certainly integrate with sonar and nip static analysis violations before commits happen to the repository.
18:35 lalatenduM hagarth, +1
19:02 _Bryan_ joined #gluster-dev
19:27 lalatenduM joined #gluster-dev
20:58 Mexolotl joined #gluster-dev
20:58 Mexolotl hi all :)
20:59 Mexolotl I got a question: I want to build a distributed ownCloud running on Banana Pis with MySQL Cluster and GlusterFS. At the moment I have just one Banana Pi, so my question is: can I add the geo-replication later and install/run my ownCloud until I get the second Pi?
21:47 _Bryan_ joined #gluster-dev
21:57 _Bryan_ joined #gluster-dev
21:59 _BryanHM_ joined #gluster-dev
22:00 Mexolotl left #gluster-dev
22:40 badone joined #gluster-dev
23:46 _Bryan_ joined #gluster-dev
