
IRC log for #gluster-dev, 2016-06-17


All times shown according to UTC.

Time Nick Message
00:53 rafi joined #gluster-dev
00:59 Alghost_ joined #gluster-dev
01:35 kotreshhr joined #gluster-dev
01:35 kotreshhr left #gluster-dev
01:48 ilbot3 joined #gluster-dev
01:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:35 magrawal joined #gluster-dev
03:22 raghug joined #gluster-dev
03:27 pranithk1 joined #gluster-dev
03:40 nishanth joined #gluster-dev
03:49 gem joined #gluster-dev
04:05 itisravi joined #gluster-dev
04:07 shubhendu joined #gluster-dev
04:12 atinm joined #gluster-dev
04:18 poornimag joined #gluster-dev
04:29 aspandey joined #gluster-dev
04:29 kotreshhr joined #gluster-dev
04:36 kotreshhr joined #gluster-dev
04:41 msvbhat joined #gluster-dev
04:42 nbalacha joined #gluster-dev
04:55 ramky joined #gluster-dev
04:58 aravindavk joined #gluster-dev
04:59 prasanth joined #gluster-dev
05:00 kshlm joined #gluster-dev
05:02 kotreshhr joined #gluster-dev
05:02 ndarshan joined #gluster-dev
05:08 nbalacha joined #gluster-dev
05:11 Saravanakmr joined #gluster-dev
05:11 sakshi joined #gluster-dev
05:17 karthik___ joined #gluster-dev
05:17 mchangir joined #gluster-dev
05:24 ppai joined #gluster-dev
05:27 itisravi joined #gluster-dev
05:28 kramdoss_ joined #gluster-dev
05:28 jiffin joined #gluster-dev
05:32 Apeksha joined #gluster-dev
05:34 Manikandan joined #gluster-dev
05:43 skoduri joined #gluster-dev
05:44 Humble joined #gluster-dev
05:55 asengupt joined #gluster-dev
06:03 ashiq joined #gluster-dev
06:03 pur_ joined #gluster-dev
06:04 aspandey joined #gluster-dev
06:07 hgowtham joined #gluster-dev
06:17 pkalever joined #gluster-dev
06:22 spalai joined #gluster-dev
06:23 atalur joined #gluster-dev
06:26 pranithk1 joined #gluster-dev
06:40 Manikandan_ joined #gluster-dev
06:45 rafi joined #gluster-dev
06:46 anoopcs pranithk1, https://review.gluster.org/#/c/14188/ has passed regressions.
06:47 rafi joined #gluster-dev
06:53 pranithk1 anoopcs++: Thanks!
06:53 glusterbot pranithk1: anoopcs's karma is now 34
06:55 ramky joined #gluster-dev
06:57 Manikandan joined #gluster-dev
06:57 ashiq joined #gluster-dev
07:11 gem joined #gluster-dev
07:21 spalai left #gluster-dev
07:21 spalai joined #gluster-dev
07:32 gem_ joined #gluster-dev
08:17 Manikandan joined #gluster-dev
08:23 nixpanic joined #gluster-dev
08:23 nixpanic joined #gluster-dev
08:23 v12aml joined #gluster-dev
08:23 ndk_ joined #gluster-dev
08:24 zoldar joined #gluster-dev
08:36 kotreshhr1 joined #gluster-dev
08:36 magrawal_ joined #gluster-dev
08:40 rafi1 joined #gluster-dev
08:41 atalur_ joined #gluster-dev
08:59 itisravi joined #gluster-dev
09:25 sankarshan joined #gluster-dev
09:49 karthik___ joined #gluster-dev
09:58 ndevos Apeksha: do you know who "karthick" on http://review.gluster.org/14283 is? If so, could you ask him to configure an email on http://review.gluster.org/#/settings/contact ?
10:11 jiffin ndevos: karthick is kramdoss_
10:13 lpabon joined #gluster-dev
10:14 kramdoss_ ndevos, done
10:15 kramdoss_ thanks jiffin
10:16 Manikandan joined #gluster-dev
10:19 jiffin kramdoss_: np
10:33 ndevos kramdoss_, jiffin: thanks!
10:35 ira joined #gluster-dev
10:47 atinm joined #gluster-dev
10:51 micw joined #gluster-dev
10:51 micw hi
10:51 micw i have strange behaviour with quotas which might be a bug
10:51 micw am i right here or should i discuss it first on #gluster?
10:59 karthik___ joined #gluster-dev
11:08 jiffin Manikandan: ^^
11:10 micw i run glusterfs with 12 bricks (replica 3). each brick is on its own zfs volume
11:10 micw when I enable quota, i get an error:
11:10 micw quota command failed : Commit failed on localhost. Please check the log file for more details.
11:11 micw http://paste.ubuntu.com/17428935/
11:12 micw from now on i can neither disable quota (gluster says it's not enabled) nor enable it (gluster says it's already enabled)
11:12 micw nor can i do any other quota-related operations
11:19 ira joined #gluster-dev
11:20 jiffin micw: basically on one of your nodes (localhost, I guess), setting the quota option failed
11:20 jiffin but in other node it worked
11:20 micw strange. all nodes are equal
11:21 jiffin can you check gluster vol info <volname> on every node?
11:25 micw looks identical. nothing about quota
11:25 micw only on the 1st node is a quotad log
11:25 micw ah sorry
11:26 micw i re-created the whole volume
11:26 micw and did not enable quota again
11:26 micw now i did so
11:26 micw on the 1st node it shows that quota is on
11:26 micw on the 2 others not
11:28 atinm joined #gluster-dev
11:31 jiffin check whether there are any quota-related failures on the other 2 in /var/log/glusterfs/etc-...log
11:33 micw i cleared the whole cluster (removed volumes, deleted the bricks' contents) and started again
11:34 micw created the volume: no errors (except that the brick directory already exists)
11:34 micw started
11:34 jiffin what does peer status show?
11:34 jiffin gluster v peer status
11:34 jiffin sorry
11:34 Manikandan jiffin, sorry, I was away, checking now
11:35 jiffin Manikandan: np
11:35 jiffin gluster peer status
11:35 micw Number of Peers: 2
11:35 micw both connected
11:35 micw while starting the gluster, i got some errors
11:36 jiffin micw: good
11:36 micw http://pastebin.ubuntu.com/17429294/
11:37 Manikandan micw, which version are you using
11:37 micw latest from debian repo
11:38 micw that's 3.8.0
11:38 Manikandan Can you precisely say what's happening?
11:38 micw Manikandan, thought I did it ;-)
11:38 micw but i start again:
11:38 Manikandan micw, My connection got lost in between, I could not look at older messages, sorry
11:39 micw i try to create a glusterfs. i have 3 machines, all in different locations
11:39 micw low latency connections, 1 gbit
11:39 micw (average latency ~0.3ms)
11:39 micw machines are not firewalled yet
11:40 micw each machine has 4 disks, 4tb each
11:40 jiffin Manikandan: if I understand it correctly, when micw enables quota, it fails on one of the nodes in the cluster
11:40 micw i created zfs on each disk
11:40 jiffin on the others it worked
11:40 micw jiffin, the quota failure might be only a symptom, so i'll explain the whole thing
11:41 Manikandan micw, jiffin that should not ideally happen
11:41 micw now i create a volume "backups", replicas 3, machine1:zfs1 machine2:zfs1 machine3:zfs1 machine1:zfs2 ...
11:41 jiffin micw: carry on
11:41 Manikandan micw, can you paste the gluster v status output from all the nodes, let's check if the daemon is down
11:41 micw so i have 3 mirrors over 3 machines and 4x distributed
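
(For context, a layout like the one micw describes — a 4 x 3 distributed-replicate volume named "backups" across node1..node3 — would typically be created with commands along these lines. This is a sketch based on the discussion; the exact hostnames and brick paths are assumptions, not micw's literal commands:)

    # replica sets are formed from consecutive bricks, so list one brick
    # per node before moving on to the next set of disks
    gluster volume create backups replica 3 \
        node1:/vol/vol1/brick1 node2:/vol/vol1/brick1 node3:/vol/vol1/brick1 \
        node1:/vol/vol2/brick1 node2:/vol/vol2/brick1 node3:/vol/vol2/brick1 \
        node1:/vol/vol3/brick1 node2:/vol/vol3/brick1 node3:/vol/vol3/brick1 \
        node1:/vol/vol4/brick1 node2:/vol/vol4/brick1 node3:/vol/vol4/brick1
    gluster volume start backups
    # the step that fails for micw once management TLS is in play
    gluster volume quota backups enable
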
11:42 micw oh, i have enabled tls on each node for management
11:43 micw here's the status:
11:43 micw http://pastebin.ubuntu.com/17429399/
11:44 micw maybe i should mention that i have all hosts with their ip in /etc/hosts
11:44 micw e.g. "1.1.1.1 node1", "1.1.1.2 node2", "1.1.1.3 node3"
11:44 micw there's no "127.0.0.1 node1" on node1
11:44 micw maybe this matters
11:45 Manikandan micw, no that doesn't matter
11:45 micw please have a look on the log when i started the volume: http://pastebin.ubuntu.com/17429294/
11:45 micw i have many "glustershd has disconnected from glusterd"
11:45 micw and:  0-mgmt: failed to fetch volume file (key:gluster/glustershd)
11:46 micw with quota i have a similar error
11:46 micw (failed to fetch volume file)
11:46 micw oh and i have the brick on the mountpoint, not a directory, if that matters
11:47 Manikandan Is quota even enabled on your volume "backups"?
11:49 micw not yet
11:49 micw shall i enable it now
11:49 micw (i did a fresh install a few minutes ago)
11:49 Manikandan micw, yep
11:50 micw quota command failed : Commit failed on localhost. Please check the log file for more details.
11:50 micw http://pastebin.ubuntu.com/17429490/
11:51 micw and absolute nothing in the logs of the other nodes
11:52 micw in the 1st node i have the status line: Quota Daemon on localhost                   N/A       N/A        N       N/A
11:52 micw same on the other
11:53 micw when i enable it on each host separately, at least i can disable it again ;-)
11:53 micw when i enable it on only one, i cannot even disable it anymore
11:54 Manikandan micw, please send all your logs
11:54 Manikandan micw, we will have a look and get back to you soon
11:54 kkeithley misc, nigelb: what are all the /tmp/ansible-tmp* dirs on download.g.o?  Is something not cleaning up correctly?
11:56 micw Manikandan, that's http://pastebin.ubuntu.com/17429294/ and http://pastebin.ubuntu.com/17429490/
11:57 micw I had a tail -f on *.log
11:57 micw the 1st is from cluster creation, the 2nd from quota enable
11:58 kkeithley skoduri,jiffin,ndevos: do we have anything we need to "sign off" on for 3.7.12 that hagarth is waiting for?
11:59 Manikandan micw, It is not only quotad; even the self-heal daemon fails to fetch the volume file
11:59 kkeithley is hagarth waiting for us to sign off on anything for 3.7.12?
11:59 micw strange that self healing works
11:59 micw i tested this before
12:00 Manikandan micw, okay we will try on our setup
12:06 misc kkeithley: likely, that's harmless, but I was investigating why some jobs never finish
12:06 misc so it could be a side effect
12:07 Manikandan jiffin++, thanks a lot :-)
12:07 glusterbot Manikandan: jiffin's karma is now 43
12:07 jiffin kkeithley: there are two patches not yet merged in the 3.7 branch which passed all the regressions
12:07 jiffin http://review.gluster.org/14610
12:07 jiffin http://review.gluster.org/14659
12:08 jiffin kkeithley, ndevos, skoduri: do we need to merge those changes?
12:08 kkeithley jiffin: 3.7 is frozen, so we can't merge those until after the release.  I'm only asking about "sign off" on the rc
12:08 jiffin K'
12:09 jiffin apart from that I have no concerns
12:09 kkeithley okay
12:10 skoduri kkeithley, sorry.. I hadn't run any specific tests to comment on any of the components..
12:10 jiffin micw: can you try the same without enabling ssl?
12:10 micw sure
12:11 kkeithley skoduri: fair enough ;-)  What is your comfort level with 3.7.12rc1 as is, without testing?
12:11 kkeithley jiffin, ndevos: ^^^
12:12 skoduri kkeithley, I know of one issue with nfs-ganesha which is broken with the latest 3.7.12 build, which jiffin is working on..
12:13 skoduri kkeithley, http://review.gluster.org/14701
12:13 skoduri kkeithley, apart from that I do not see/recall any issues
12:14 kkeithley ganesha doesn't work or crashes? Or it just leaks memory?
12:14 micw lot more output
12:15 micw e.g. Server and Client lk-version numbers are not same, reopening the fds
12:15 micw and stuff like "Connected to backups-client-9, attached to remote volume '/vol/vol4/brick1'"
12:15 micw there was nothing about this with ssl enabled
12:16 micw enable quota takes long time now...
12:16 micw glustershd is running successfully
12:16 micw quota succeeded
12:16 micw so seems to be an issue with ssl
12:19 skoduri kkeithley, it crashes when it receives an upcall I guess.. jiffin, can you please provide the details
12:19 skoduri kkeithley, it crashes while trying to free an upcall callback object created by gfapi.. it's a regression caused by one of the patches that recently got merged
12:20 kkeithley ah, okay
12:21 jiffin Removing a file will result in a crash
12:21 kkeithley sounds like a blocker to me then.
12:21 kkeithley and we should not sign off on the RC
12:21 jiffin It requires changes in both ganesha and gluster
12:25 jiffin kkeithley: there is a hacky fix, which can solve the issue
12:25 jiffin right now I can send it only for 3.7
12:27 kkeithley :-/ if we're going to put something in, shouldn't it be the real fix?
12:27 skoduri kkeithley, whatever jiffin mentioned as hacky fix was the way it was originally implemented
12:28 skoduri kkeithley, we are just reverting to the old behavior.. and then we can improve things
12:28 kkeithley you mean revert the thing that broke it?
12:28 skoduri kkeithley, kind of.. we are reverting part of that fix
12:28 kkeithley okay, I can live with that I think
12:29 skoduri kkeithley, okay..then jiffin ..will you be sending that patch?
12:29 kkeithley then do we ask to unfreeze to put this change in?  Or let 3.7.12 ship broken and fix it in 3.7.13?
12:29 jiffin skoduri, kkeithley: then I will send it right away
12:30 kkeithley I'm tempted to say we don't have many ganesha users, but that may not be true any more
12:30 skoduri right..
12:31 post-factum > let 3.7.12 ship broken
12:31 post-factum too many broken releases already encountered in 3.7 branch
12:31 post-factum since 3.7.6
12:31 post-factum please don't :)
12:36 skoduri kkeithley, since we have shorter releases planned, maybe we can fix it in 3.7.13.. but I suggest we revert that code first whenever possible and then get the proper fix in
12:40 kkeithley nope, we won't sign off on it as is.
12:40 kkeithley we'll fix it
12:41 skoduri okay sure
12:42 mchangir joined #gluster-dev
12:46 jiffin kkeithley, skoduri: patch in master
12:46 jiffin http://review.gluster.org/14759
12:46 skoduri jiffin++ thanks
12:46 glusterbot skoduri: jiffin's karma is now 44
12:47 spalai1 joined #gluster-dev
12:48 nishanth joined #gluster-dev
12:49 kkeithley master?
12:50 kkeithley I suppose this means it's broken in 3.8 too then?
12:50 skoduri kkeithley, yup
12:51 kkeithley bleah
12:52 kkeithley I figured as much. ;-)
12:52 ndevos jiffin: isn't http://review.gluster.org/14701 the proper approach?
12:53 kkeithley yes I think, but it requires a change in ganesha too, doesn't it?
12:53 kkeithley we need a fix that works with the existing ganesha
12:53 skoduri yupp right
12:54 ndevos well, I'm not sure... what happens when ganesha is linked with jemalloc and its free() is diverted to that library...
12:56 ndevos from what I see, we need a matching malloc() and free(), and we can only guarantee that when libgfapi takes care of doing both
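
(To illustrate the point ndevos is making: memory must be released by the same allocator that handed it out. The sketch below is illustrative C, not gluster or ganesha source, and the "lib_" names are hypothetical. If the application's free() resolves to jemalloc while the library allocated with glibc's malloc(), the heap bookkeeping does not match and the process can crash:)

    #include <stdlib.h>

    /* library side (think libgfapi): allocates an object with whatever
     * malloc() the library itself resolves to */
    void *lib_make_object(size_t len)
    {
        return malloc(len);
    }

    /* safe pattern: the library also owns the release, so allocation and
     * free are guaranteed to come from the same allocator */
    void lib_free_object(void *obj)
    {
        free(obj);
    }

    /* unsafe pattern (the crash being discussed): the application calls
     * free(obj) itself, and in a binary linked against jemalloc that
     * free() may not match the malloc() used inside the library */
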
12:58 ndevos for ganesha, the 'workaround' is to disable upcalls, most community users seem to run with a single ganesha process in any case?
13:00 ndevos kkeithley: how about a hack in ganesha, calling __gf_free()? The symbol should be part of libglusterfs...
13:00 ndevos YUCK!
13:00 kkeithley Or we put the correct fix in....   And we can rebuild nfs-ganesha pkgs with a patch
13:01 kkeithley yeah, yuck.
13:01 ndevos yes, that definitely has my preference
13:01 skoduri rebuild all the versions? V2.1 to V2.4?
13:02 ndevos ah, that reminds me, I did not tag ganesha for the CentOS Storage SIG gluster-38 release yet
13:02 skoduri or just the ones being maintained ? 2.3 and 2.4?
13:05 ndevos skoduri: 'maintained' where? Fedora/EPEL can carry the patches for older versions if those are still important
13:05 skoduri ndevos, I meant the versions maintained by the community
13:06 kkeithley 2.3.2 in Fedora 23.  2.4dev21 in Fedora24.  I need to do new builds for Ubuntu and Debian anyway. Ubuntu only has 2.3.1 in Launchpad PPA. Debian has 2.3.0 on d.g.o
13:07 kkeithley There aren't any V2.1 or V2.2 packages anywhere. At least not that I/we care about.
13:07 ndevos skoduri: hah! the Ganesha community, or the communities for the different distributions ? ;-)
13:07 ndevos kkeithley: the centos-6 storage sig packages seem to have v2.2
13:08 kkeithley oh?
13:08 ndevos well, according to the github repo
13:08 ndevos but it seems there are no packages in the yum repositories
13:09 skoduri ndevos, :) I didn't know if anyone other than the Ganesha community would be back-porting the patches
13:09 ndevos (no ganesha packages, that is)
13:09 shubhendu joined #gluster-dev
13:10 ndevos skoduri: that is also difficult to know about, I also only know about Fedora/EPEL and CentOS, kkeithley seems to do Debian+Ubuntu for Ganesha too, others may do even more
13:10 kkeithley I have no idea if anyone actually uses the Debian and Ubuntu packages that I build, but they're out there
13:11 skoduri ndevos, so the best way is to communicate through community mailing lists to rebuild the packages once the ganesha fix is in the next branch (at least)?
13:11 micw since the bugs i had (e.g. quota) happen only with tls, what can i do to debug/fix it?
13:11 micw what i've done to set up tls:
13:11 micw used xca to create a CA (SHA256, Key is RSA/4096)
13:12 micw used xca to create a server certificate for each node (SHA256, Key is RSA/4096), signed by CA
13:12 micw used openssl to create a 4096 bit dhparam file
13:12 micw placed CA cert as /etc/ssl/glusterfs.ca on each server
13:12 micw placed cert cert as /etc/ssl/glusterfs.pem on each server
13:12 micw placed key as /etc/ssl/glusterfs.key on each server
13:13 micw enabled secure-access
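
(For reference, micw's steps map onto the usual GlusterFS management-TLS setup. The paths below are the ones gluster looks for and micw lists; the openssl commands are an assumption for illustration — micw generated the certificates with xca instead:)

    # on each node: private key and certificate at the expected paths
    openssl genrsa -out /etc/ssl/glusterfs.key 4096
    openssl req -new -x509 -key /etc/ssl/glusterfs.key \
        -subj "/CN=node1" -days 365 -out /etc/ssl/glusterfs.pem
    # the CA file must let each node verify the others' certificates:
    # either the signing CA's cert or a concatenation of all node certs
    cp ca.pem /etc/ssl/glusterfs.ca
    # enabling "secure-access" means creating this file, which turns on
    # TLS for the glusterd management connections
    touch /var/lib/glusterd/secure-access
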
13:13 jiffin skoduri, kkeithley, ndevos: for 3.7.12 are we planning to leave the workaround as "disable upcall" for ganesha and wait for the correct fix in the next release?
13:16 ndevos jiffin: yes, I think that is the best approach
13:17 jiffin micw: can you please send out a mail to gluster-devel with the details mentioned here and the issues you are facing, which will get more eyes on it
13:18 kkeithley jiffin: we either revert (disable upcalls), or we get the real fix (in master, 3.8, 3.7) plus the matching fix in ganesha (at least in the next branch) and rebuild community ganesha packages with a patch.
13:18 kkeithley a) revert (disable upcalls)
13:18 kkeithley b) real fix and rebuild ganesha with a patch
13:19 kkeithley both require unfreezing 3.7.12 to fix
13:20 skoduri kkeithley, upcalls can be disabled with the CLI option "gluster v set features.cache-invalidation off"
13:21 kkeithley so we can have c) change nothing in 3.7.12 and get the real fix in 3.7.13 and 3.8.1
13:21 ndevos c!
13:21 skoduri I think C
13:22 jiffin ndevos, kkeithley, skoduri: thanks
13:22 kkeithley jiffin: is that a vote for C ?
13:23 jiffin kkeithley: we already have the majority for C, right
13:23 kkeithley I haven't voted yet. ;-)
13:23 jiffin do I need  to cast my vote?
13:23 jiffin oh
13:23 ndevos jiffin: we can outvote him!
13:24 jiffin then c
13:24 * ndevos |o/
13:24 kkeithley Then it's unanimous.
13:24 ndevos wait, can I still change my mind?
13:24 kkeithley you can change your mind, but your ballot has already been counted
13:24 kkeithley ;-)
13:25 ndevos oh, ok, I'll stick with my choice then :D
13:25 jiffin i agree with kkeithley
13:25 shyam joined #gluster-dev
13:27 shyam left #gluster-dev
13:31 skoduri I am signing out.. Happy weekend :)
13:31 ramky joined #gluster-dev
13:31 kkeithley have a good weekend. ttyl
13:31 ndevos bye skoduri!
13:32 shyam joined #gluster-dev
13:36 kkeithley where's the email thread for maintainers to give feedback for 3.7.12rc?
13:36 kkeithley kshlm: ^^^
13:36 ndevos kkeithley: on the maintainers list
13:36 kkeithley hah, naturally
13:36 ndevos subject: Maintainers acks needed for 3.7.12
13:39 kkeithley I need to create a folder and a filter for that.
13:42 nigelb So, who's good with the Jenkins stuff? I've got somewhat of a working XML with Jenkins job builder. It's subtly different from our existing job.
13:42 nigelb My idea is to create a new job and let it run along side the old job
13:42 nigelb while we figure out how different they are.
13:42 nigelb Does anyone have suggestions on a better way?
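
(One low-risk way to see how different the two jobs really are, without touching the live Jenkins, is jenkins-jobs' test mode, which renders the YAML to XML locally. A sketch; the job name and paths are placeholders:)

    # render the JJB definitions to XML without deploying anything
    jenkins-jobs test jobs/ -o /tmp/jjb-out
    # fetch the existing job's config and compare the two
    curl -s https://build.gluster.org/job/EXISTING-JOB/config.xml \
        > /tmp/live-config.xml
    diff -u /tmp/live-config.xml /tmp/jjb-out/EXISTING-JOB
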
13:44 nbalacha joined #gluster-dev
13:53 micw could anyone have a look at whether these ssl certs may cause my errors? http://pastebin.ubuntu.com/17431864/
13:53 micw maybe an unsupported key length or such?
13:55 mchangir nigelb, fyi, I have some updates at the google doc
14:01 nishanth joined #gluster-dev
14:02 shubhendu joined #gluster-dev
14:22 post-factum so, releasing broken 3.7.12?
14:30 anoopcs hagarth, Are you around?
14:32 hagarth anoopcs: yes
14:34 anoopcs hagarth, We have 3 dependent patches which are fixes to smoke failures on review for release-3.7: https://review.gluster.org/#/q/topic:bug-1336137
14:34 Humble joined #gluster-dev
14:34 anoopcs hagarth, For which the base patch is https://review.gluster.org/#/c/14338/
14:35 ndevos post-factum: yes, see http://thread.gmane.org/gmane.comp.file-systems.gluster.maintainers/912/focus=924
14:36 ndevos kkeithley: does that ^ email mean you tested Gluster/NFS too? or is that still on my todo list?
14:37 post-factum ndevos: any links to pending but proper fixes?
14:37 anoopcs hagarth, Here is the thread regarding the issue: http://www.gluster.org/pipermail/gluster-infra/2016-June/002274.html
14:37 kkeithley I did not test gnfs. That's why I asked for your and everyone else's inputs
14:39 kkeithley post-factum: we have a work-around for ganesha. Nobody else would hit that bug AFAIK.
14:39 hagarth anoopcs: why do some of them have a -1 from you?
14:39 post-factum ndevos: kkeithley: http://review.gluster.org/#/c/14759/ ?
14:40 anoopcs hagarth, That was by mistake. I rebased two of them, resolving the conflicts manually, without knowing that there was a base patch.
14:42 ndevos post-factum: http://review.gluster.org/14701
14:42 anoopcs hagarth, After the base patch gets merged, I will revert those patches to their respective previous patch sets (as mentioned in the comments on the patches)
14:42 hagarth anoopcs: can you revoke your -1s
14:42 kkeithley post-factum: ndevos correctly points out that calloc/free in gluster is not jemalloc's calloc/free in ganesha. It's not the correct fix.
14:43 pur_ joined #gluster-dev
14:43 post-factum ndevos: kkeithley: got that, thanks
14:43 ndevos kkeithley: only if jemalloc is used, I think that is only done in EPEL
14:43 kkeithley and it is in Fedora
14:44 kkeithley too
14:44 kkeithley now if only we hadn't built it with jemalloc. :-/
14:44 ndevos ah, I wasn't aware of the Fedora config
14:45 anoopcs hagarth, I can remove those -1s.
14:45 anoopcs hagarth, and base patch has passed all regressions.
14:46 hagarth anoopcs: but are these really needed for 3.7.12?
14:46 ppai left #gluster-dev
14:46 ppai joined #gluster-dev
14:47 anoopcs hagarth, Not urgent I guess.
14:48 anoopcs hagarth, We can take it up after 3.7.12. No worries.
14:48 kkeithley built nfs-ganesha with jemalloc on Ubuntu Launchpad too
14:50 hagarth anoopcs: right.. will keep these patches on my radar
14:50 anoopcs hagarth, Thanks.
14:51 oskar joined #gluster-dev
15:02 mchangir joined #gluster-dev
15:07 spalai1 left #gluster-dev
15:18 kramdoss_ joined #gluster-dev
15:43 ndevos rafi_mtg: 3.5 is EOL, can you abandon your patches? http://review.gluster.org/#/q/project:glusterfs+status:open+branch:release-3.5
15:43 rafi_mtg ndevos: will do it
15:43 rafi_mtg ndevos: thanks for reminding me
15:43 ndevos rafi++ thanks!
15:43 glusterbot ndevos: rafi's karma is now 50
15:44 rafi_mtg ndevos: :)
15:48 rafi_mtg ndevos: it is clean now
15:49 ndevos rafi: great, thanks!
15:49 rafi ndevos: np
15:54 ndevos kkeithley: could you make sure that people can not report new bugs against 3.5.x?
15:55 kotreshhr joined #gluster-dev
15:58 ndevos congrats to us, we have a +119400% improvement in our ticket closure rate, according to http://projects.bitergia.com/redhat-glusterfs-dashboard/browser/its.html
16:01 mchangir joined #gluster-dev
16:11 poornimag joined #gluster-dev
16:36 kkeithley ndevos: done
16:39 pkalever joined #gluster-dev
16:47 Apeksha joined #gluster-dev
16:48 rraja joined #gluster-dev
16:57 shaunm joined #gluster-dev
17:01 jiffin joined #gluster-dev
17:06 spalai joined #gluster-dev
17:19 ndevos kkeithley++ thanks!
17:19 glusterbot ndevos: kkeithley's karma is now 128
17:23 spalai left #gluster-dev
17:24 kkeithley ndevos: opened the ticket anyway. It'll probably get done Monday
17:24 jiffin joined #gluster-dev
17:40 pkalever left #gluster-dev
17:59 pkalever joined #gluster-dev
19:21 Acinonyx joined #gluster-dev
20:41 ashiq joined #gluster-dev
21:34 shaunm joined #gluster-dev
22:21 penguinRaider joined #gluster-dev
