IRC log for #gluster-dev, 2017-07-31

All times shown according to UTC.

Time Nick Message
00:19 baojg joined #gluster-dev
01:20 baojg joined #gluster-dev
01:29 deep-book-gk joined #gluster-dev
01:30 deep-book-gk left #gluster-dev
01:51 ilbot3 joined #gluster-dev
01:51 Topic for #gluster-dev is now Gluster Development Channel - https://www.gluster.org | For general chat go to #gluster | Patches - https://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:02 [o__o] joined #gluster-dev
02:04 prasanth joined #gluster-dev
02:08 [o__o] joined #gluster-dev
02:24 baojg joined #gluster-dev
03:25 baojg joined #gluster-dev
03:38 aravindavk joined #gluster-dev
03:48 ppai joined #gluster-dev
03:48 riyas joined #gluster-dev
03:55 aravindavk joined #gluster-dev
04:06 itisravi joined #gluster-dev
04:25 apandey joined #gluster-dev
04:27 atinm joined #gluster-dev
04:28 baojg joined #gluster-dev
04:31 rejy joined #gluster-dev
04:34 nbalacha joined #gluster-dev
04:42 poornima joined #gluster-dev
04:54 Shu6h3ndu joined #gluster-dev
04:57 skumar joined #gluster-dev
05:07 susant joined #gluster-dev
05:16 hgowtham joined #gluster-dev
05:17 gobindadas joined #gluster-dev
05:23 rraja joined #gluster-dev
05:25 sanoj joined #gluster-dev
05:25 ankitr joined #gluster-dev
05:29 baojg joined #gluster-dev
05:29 Saravanakmr joined #gluster-dev
05:32 ndarshan joined #gluster-dev
05:32 smohan[m] joined #gluster-dev
05:34 sanoj joined #gluster-dev
05:42 skoduri joined #gluster-dev
05:42 apandey joined #gluster-dev
05:43 atinm joined #gluster-dev
05:48 sahina joined #gluster-dev
05:52 GK1wmSU joined #gluster-dev
05:54 GK1wmSU left #gluster-dev
05:55 karthik_us joined #gluster-dev
06:00 pkalever joined #gluster-dev
06:04 apandey_ joined #gluster-dev
06:05 msvbhat joined #gluster-dev
06:05 _GK1wmSU joined #gluster-dev
06:06 _GK1wmSU left #gluster-dev
06:06 apandey_ joined #gluster-dev
06:07 jiffin joined #gluster-dev
06:13 pranithk1 joined #gluster-dev
06:15 rafi joined #gluster-dev
06:17 atinm joined #gluster-dev
06:20 kdhananjay joined #gluster-dev
06:23 apandey__ joined #gluster-dev
06:30 kotreshhr joined #gluster-dev
06:39 aravindavk joined #gluster-dev
06:43 pkalever joined #gluster-dev
06:53 amarts joined #gluster-dev
07:08 nbalacha joined #gluster-dev
07:20 aravindavk joined #gluster-dev
07:35 baojg joined #gluster-dev
07:45 Venkata joined #gluster-dev
07:46 Venkata ndevos, I would like to work on a libgfapi bug, can you assign me one to start with?
07:47 ndevos Venkata: https://bugzilla.redhat.com/show_bug.cgi?id=1463191 looks like something reasonable to start with
07:47 glusterbot Bug 1463191: unspecified, unspecified, ---, bugs, NEW , gfapi: discard glfs object when volume is deleted
07:47 Venkata ndevos, ok. I will work on that
07:48 skoduri joined #gluster-dev
07:48 ndevos Venkata: the problem is basically 1. connect to a gluster volume, 2. have something delete the volume, 3. have something create a new volume, 4. the connection from (1) still thinks it is connected to the original volume
07:49 Venkata ndevos, ok. Thanks for the description.
07:49 ndevos Venkata: ideally that should return a fatal error, maybe something like ENODEV, there is no way to recover
07:50 ndevos Venkata: I started with some work for it, no idea where I left that off, and surely did not test it, but it may give you an idea where to start
07:50 * ndevos looks for the patch
07:50 Venkata ndevos, I guess this will be reproducible with the python bindings too.
07:51 ndevos Venkata: yes, any bindings would have that
07:51 Venkata ndevos, creating a new file using the old glfs object
07:51 Venkata ndevos, Ok. I will work on that and will update the bz on findings
07:52 ndevos Venkata: or even reading from an opened file that is on the volume that was deleted
07:55 ndevos Venkata: this is what I had so far, it has a few TODO comments - there are basically two approaches in it: 1. through notify() / graph switch, 2. through mgmt_get_volinfo_cbk()
07:55 ndevos Venkata: oh, and the patches of course - https://paste.fedoraproject.org/paste/9x~elhVAUpqAvr~Ito5iwg/raw
07:56 ndevos Venkata: do you have permissions to assign that bug to yourself and change the status to ASSIGNED?
07:57 Venkata ndevos, can you assign the bug to me? It seems I don't have permissions
07:58 ndevos Venkata: ok, will do, and I'll also request the permissions for you
08:03 Venkata ndevos, do I need to apply these patches? https://paste.fedoraproject.org/paste/9x~elhVAUpqAvr~Ito5iwg/raw
08:04 ndevos Venkata: those are my work-in-progress patches with what I started, I was planning to work on that bug myself but got distracted with other things
08:04 Venkata ndevos, ok . Got it
08:04 ndevos Venkata: you can use them as a base, or to get an idea where the changes most likely will need to be made
08:05 Venkata ndevos, ok. Sure
08:29 ndevos jiffin: could you +1/+2 this change? I think I answered your question. https://review.gluster.org/17898
08:37 baojg joined #gluster-dev
08:39 sanoj joined #gluster-dev
08:41 msvbhat joined #gluster-dev
08:49 pkalever joined #gluster-dev
09:02 sona joined #gluster-dev
09:25 msvbhat joined #gluster-dev
09:38 baojg joined #gluster-dev
09:40 pranithk1 joined #gluster-dev
09:42 ndevos csaba: https://review.gluster.org/17728 has been merged already, netbsd tests are now run periodically on the HEAD of some branches
09:44 csaba ndevos: yes, that's the change I found as relevant. I didn't know of the periodic testing policy though
09:45 ndevos csaba: there have been quite a few changes in the mem-pool lately, it was not possible to initialize/cleanup correctly, maybe the problem has been fixed already?
09:46 ndevos csaba: some of the more recent netbsd tests seem to have failed without segfaults, so it might have improved
09:46 csaba ndevos: my hypothesis is that https://review.gluster.org/17728 is the change that fixes the crash in libgfapi-fini-hang.t
09:46 ndevos csaba: I don't know if "recheck netbsd" still does anything, it may be ignored... nigelb should be able to tell us
09:47 csaba ndevos: I added the recheck netbsd to test that hypothesis. Anyway, I just tried :) I already contacted nigelb.
09:48 ndevos csaba: well, it definitely fixes bits in the gfapi init/fini case, but the patch was not correct and additional changes were made
09:56 msvbhat joined #gluster-dev
09:59 csaba ndevos: I don't know about the other fixes, but the particular issue of crashing in libgfapi-fini-hang.t was fixed by that patch AFAICS.
10:01 ndevos csaba: that is possible, the test does some uncommon thing that was definitely not tested before
10:01 ndevos (init and fini, without actually using it, I think)
10:03 csaba ndevos: yeah, libgfapi-fini-hang.c calls fini but not init; up till 17728 the code just asserted that this couldn't happen.
10:04 nishanth joined #gluster-dev
10:09 skoduri joined #gluster-dev
10:12 hgowtham joined #gluster-dev
10:35 kotreshhr left #gluster-dev
10:36 ndevos jiffin: did you have any wishes to see https://review.gluster.org/17898 updated? I'll change it a little in a bit, because of skoduri's comment
10:39 baojg joined #gluster-dev
10:39 hgowtham joined #gluster-dev
10:51 sona joined #gluster-dev
10:57 kotreshhr joined #gluster-dev
11:03 aravindavk joined #gluster-dev
11:03 rafi joined #gluster-dev
11:05 skumar_ joined #gluster-dev
11:30 jiffin ndevos: I gave it a first look, which was okay
11:32 ndevos jiffin: ok, thanks
11:34 sona joined #gluster-dev
11:38 rafi1 joined #gluster-dev
11:41 baojg joined #gluster-dev
11:58 sac` joined #gluster-dev
12:02 skumar joined #gluster-dev
12:02 skumar_ joined #gluster-dev
12:11 msvbhat joined #gluster-dev
12:14 nbalacha joined #gluster-dev
12:16 msvbhat joined #gluster-dev
12:19 sona joined #gluster-dev
12:41 baojg joined #gluster-dev
12:48 rraja joined #gluster-dev
12:51 baojg joined #gluster-dev
12:54 baojg joined #gluster-dev
12:56 kkeithley hmm, that's interesting.
12:56 kkeithley /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh: line 5: [: cluster.localtime-logging: integer expression expected
12:57 kkeithley man test says INTEGER1 -eq INTEGER2.
12:57 kkeithley Off hand I'd say "$key" (-eq) "enabled-shared-storage" are not integers
12:58 kkeithley that bug has been there for quite some time...
12:58 ndevos wow, yes, looks like it
13:01 jstrunk joined #gluster-dev
13:07 amarts joined #gluster-dev
13:09 kkeithley and even if it had been correct: key != enable-shared-storage or key = cluster.enable-shared-storage, then exit? Don't we want to do this only if it is cluster.enable-shared-storage?
13:09 kkeithley bah
13:10 sahina joined #gluster-dev
13:15 kkeithley and everyone who ever touched that is gone. :-/
13:16 kkeithley So I can fix the blame. ;-)
13:16 baojg joined #gluster-dev
13:22 msvbhat joined #gluster-dev
13:26 kotreshhr left #gluster-dev
13:46 amarts joined #gluster-dev
13:48 shyam joined #gluster-dev
13:51 sthakkar joined #gluster-dev
13:53 msvbhat joined #gluster-dev
13:57 jiffin joined #gluster-dev
13:58 amarts joined #gluster-dev
14:09 jiffin joined #gluster-dev
14:20 nbalacha joined #gluster-dev
14:37 vbellur joined #gluster-dev
14:38 smohan[m] joined #gluster-dev
14:39 vbellur joined #gluster-dev
14:57 skoduri joined #gluster-dev
14:57 vbellur joined #gluster-dev
14:58 vbellur joined #gluster-dev
15:05 wushudoin joined #gluster-dev
15:10 jiffin joined #gluster-dev
15:14 kkeithley /var/lib/glusterd/options is erased every time the glusterfs-server package is removed (including updates? is an update remove+install?)
15:15 kkeithley this is where things like cluster.enable-shared-storage and perhaps soon cluster.localtime-logging are persisted.
15:15 kkeithley seems like this should be a %config(noreplace) file in the glusterfs.spec(.in)
15:16 kkeithley ndevos, mchangir: ^^^
15:20 kkeithley atinm: ^^^
15:21 susant joined #gluster-dev
15:21 atinm kkeithley, yes, this shouldn't be removed IMO
15:52 gobindadas joined #gluster-dev
15:53 jiffin joined #gluster-dev
15:59 vbellur joined #gluster-dev
16:32 pranithk1 joined #gluster-dev
16:49 jiffin joined #gluster-dev
16:58 pranithk1 joined #gluster-dev
17:01 jiffin joined #gluster-dev
17:12 kkeithley Is 3.10 going to EOL with 4.0 now that 4.0 is going to be an STM release?
17:15 msvbhat joined #gluster-dev
18:06 kkeithley just asking wrt accuracy of https://www.gluster.org/community/release-schedule/#
18:07 kkeithley also noting that 3.10.5 did not get released on the 30th
18:37 wushudoin joined #gluster-dev
19:27 msvbhat joined #gluster-dev
20:54 JoeJulian @Humble: I saw that you started working on a helm chart for gluster (or gluster and heketi?). What ever happened to that?
22:09 vbellur joined #gluster-dev
22:18 obnox JoeJulian: see this:
22:18 obnox https://github.com/gluster/gluster-kubernetes/issues/39
22:18 obnox https://github.com/heketi/heketi/issues/631
22:21 obnox JoeJulian: I know there was something that Humble did as well, but I currently can't find it
23:05 obnox gosh, gerrit is again driving me crazy.
23:06 obnox how can I really save a comment in the diff that I'm adding (or rather replying to)? When I click on "save" it shows "Draft" and I can't find a button to save it for good ... :-/
23:08 obnox wow. go up to main view, click on reply and then on post. that's a whacky flow ;-)
23:12 shyam joined #gluster-dev
23:45 msvbhat joined #gluster-dev
23:51 https_GK1wmSU joined #gluster-dev
23:52 https_GK1wmSU left #gluster-dev
