
IRC log for #gluster-dev, 2016-09-28


All times shown according to UTC.

Time Nick Message
00:01 wushudoin joined #gluster-dev
00:43 suliba joined #gluster-dev
01:04 lpabon joined #gluster-dev
01:23 pranithk1 joined #gluster-dev
01:35 EinstCrazy joined #gluster-dev
01:48 ilbot3 joined #gluster-dev
01:48 Topic for #gluster-dev is now Gluster Development Channel - http://gluster.org | For general chat go to #gluster | Patches - http://review.gluster.org/ | Channel Logs - https://botbot.me/freenode/gluster-dev/ & http://irclog.perlgeek.de/gluster-dev/
02:07 misc joined #gluster-dev
02:37 spalai joined #gluster-dev
03:06 gem_ joined #gluster-dev
03:16 magrawal joined #gluster-dev
04:01 atinm joined #gluster-dev
04:06 ppai joined #gluster-dev
04:08 itisravi joined #gluster-dev
04:17 kdhananjay joined #gluster-dev
04:33 Guest85806 joined #gluster-dev
04:44 nbalacha joined #gluster-dev
04:53 rafi joined #gluster-dev
05:03 rafi joined #gluster-dev
05:13 apandey joined #gluster-dev
05:15 prasanth joined #gluster-dev
05:17 ndarshan joined #gluster-dev
05:17 itisravi joined #gluster-dev
05:24 nbalacha joined #gluster-dev
05:29 ramky joined #gluster-dev
05:30 ppai joined #gluster-dev
05:31 devyani7_ joined #gluster-dev
05:31 devyani7_ joined #gluster-dev
05:33 aravindavk joined #gluster-dev
05:34 ankitraj joined #gluster-dev
05:39 Muthu joined #gluster-dev
05:43 nishanth joined #gluster-dev
05:43 atinm ndevos_, there? can you please take a look at the 3.8 backport patch http://review.gluster.org/#/c/15567 ?
05:47 asengupt joined #gluster-dev
05:49 nbalacha joined #gluster-dev
05:50 hgowtham joined #gluster-dev
05:53 spalai joined #gluster-dev
05:57 nbalacha joined #gluster-dev
05:58 hchiramm joined #gluster-dev
06:00 skoduri joined #gluster-dev
06:06 Bhaskarakiran joined #gluster-dev
06:07 kotreshhr joined #gluster-dev
06:11 pranithk1 joined #gluster-dev
06:12 hchiramm joined #gluster-dev
06:18 spalai joined #gluster-dev
06:19 jiffin joined #gluster-dev
06:21 ndevos_ atinm: there is no commit message, we definitely need one for backports, otherwise I can not judge it is a bugfix or something else
06:22 atinm magrawal, ^^
06:22 atinm magrawal, this is related to http://review.gluster.org/#/c/15567
06:23 atinm ndevos_, AFAIK (looking at the patch) its not a feature and definitely a bug fix
06:23 magrawal atinm, sure will update it.
06:24 spalai ndevos: Do we run fcntl lock test cases as part of netbsd regression?
06:24 ndevos atinm: that would be my guess too, but whatever, we should almost never accept changes without an explanation
06:24 rraja joined #gluster-dev
06:24 atinm ndevos, agreed
06:25 ndevos spalai: I dont know, maybe you can see it in the output of the tests? or check the glusterfs-patch-acceptance repository for the tests themselves?
06:26 spalai ndevos: thanks.
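The fcntl locks asked about above are plain POSIX byte-range locks; a minimal sketch of the kind of check such a regression test performs, written against Python's fcntl wrapper (the temp-file path is a stand-in; a real test would target a file on a GlusterFS mount):

```python
import fcntl
import tempfile

def exercise_fcntl_lock(path):
    """Take and release an exclusive POSIX (fcntl) byte-range lock.

    lockf() raises OSError when the lock cannot be acquired, so
    running to completion is the pass criterion.
    """
    with open(path, "wb") as f:
        fcntl.lockf(f, fcntl.LOCK_EX)   # exclusive lock on the whole file
        f.write(b"locked")
        fcntl.lockf(f, fcntl.LOCK_UN)   # release
    return True

# /tmp stands in here; point the path at a GlusterFS mount to test the volume
with tempfile.NamedTemporaryFile() as tmp:
    print(exercise_fcntl_lock(tmp.name))  # → True
```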
06:42 hchiramm joined #gluster-dev
06:44 nbalacha joined #gluster-dev
06:48 rafi joined #gluster-dev
06:50 Byreddy joined #gluster-dev
06:57 k4n0 joined #gluster-dev
07:21 ndevos skoduri, jiffin: care to review http://review.gluster.org/14701 ?
07:21 ndevos kshlm: I'd really like http://review.gluster.org/14701 to get included in 3.7.16 (still needs review+backport)
07:22 skoduri ndevos, reviewing
07:22 ndevos aravindavk: ^ that counts for 3.9 too, otherwise we'll release gfapi in some in-between state
07:22 ndevos skoduri: thanks!
07:24 rastar joined #gluster-dev
07:26 msvbhat joined #gluster-dev
07:37 spalai nigelb: logs missing from https://build.gluster.org/job/netbsd7-regression/885/consoleFull
07:38 spalai nigelb: run as part of http://review.gluster.org/#/c/14492/
07:38 owlbot joined #gluster-dev
07:40 Muthu joined #gluster-dev
07:47 nigelb spalai: http://nbslave7h.cloud.gluster.org/logs/glusterfs-logs-20160928051152.tgz
07:47 nigelb the output is incorrect.
07:47 nigelb but logs are around.
07:59 atinm ndevos, http://review.gluster.org/15567 has passed the smoke :)
08:03 devyani7_ joined #gluster-dev
08:14 _nixpanic joined #gluster-dev
08:14 _nixpanic joined #gluster-dev
08:17 k4n0 joined #gluster-dev
08:19 hchiramm joined #gluster-dev
08:22 jiffin joined #gluster-dev
08:22 itisravi joined #gluster-dev
08:24 atinm Muthu, http://review.gluster.org/#/c/15352/ has failed the regression, could you check if its a genuine failure?
08:27 Muthu atinm, ya i will look into it.
08:31 karthik joined #gluster-dev
08:33 spalai nigelb: Thanks
08:47 Bhaskarakiran joined #gluster-dev
08:51 Muthu joined #gluster-dev
08:52 ashiq joined #gluster-dev
08:54 ppai joined #gluster-dev
08:54 rastar joined #gluster-dev
08:55 kotreshhr joined #gluster-dev
09:02 Bhaskarakiran joined #gluster-dev
09:08 skoduri ndevos, wrt http://review.gluster.org/14701 I just have one minor comment..I will give '+1' post that
09:08 k4n0 joined #gluster-dev
09:09 skoduri ndevos++ and thanks for the changes...it looks much cleaner now :)
09:09 glusterbot skoduri: ndevos's karma is now 316
09:23 itisravi joined #gluster-dev
09:27 riyas joined #gluster-dev
09:32 mchangir joined #gluster-dev
09:38 skoduri_ joined #gluster-dev
09:45 nishanth joined #gluster-dev
09:56 aravindavk joined #gluster-dev
10:01 misc mhh, something happened to supercolony ?
10:04 misc ok seems to be working, but we have likely network issue :/
10:08 EinstCrazy joined #gluster-dev
10:24 ppai joined #gluster-dev
10:31 atinm joined #gluster-dev
10:31 nishanth joined #gluster-dev
10:31 ankit-raj joined #gluster-dev
10:31 rraja joined #gluster-dev
10:32 devyani7_ joined #gluster-dev
10:37 ankitraj joined #gluster-dev
10:46 kotreshhr joined #gluster-dev
10:46 ndevos skoduri_: left a note in http://review.gluster.org/14701 for you
10:54 msvbhat joined #gluster-dev
11:08 kotreshhr joined #gluster-dev
11:16 msvbhat joined #gluster-dev
11:18 mchangir joined #gluster-dev
11:27 aravindavk joined #gluster-dev
11:31 pfactum nbalacha, ping bz 1369364
11:31 nbalacha post-factum, hi
11:31 post-factum nbalacha, i wonder how you ran valgrind on that? simple memcheck?
11:31 nbalacha post-factum, yes
11:31 nbalacha without massif
11:32 samikshan REMINDER: Gluster community meeting to take place in ~30 minutes on #gluster-meeting
11:32 post-factum nbalacha, but you have to run massif to see the memleak
11:32 post-factum nbalacha, memcheck did show nothing for me as well
11:33 nbalacha post-factum, I did look at the massif logs as well but there was nothing that jumped out there either
11:33 nbalacha I can see the mem usage rise with the tests
11:33 post-factum nbalacha, but you saw my massif logs?
11:33 nbalacha post-factum, yes
11:33 nbalacha one of them
11:34 nbalacha was there a particular entry you saw?
11:35 post-factum nbalacha, i cannot say for sure. have you parsed it with the massif-visualizer tool? it presents the output in cozy graphs
11:35 post-factum nbalacha, i saw multiple big consumers there
11:35 nbalacha no, I have not tried that. I went through some of the code paths listed - they eventually free the memory
11:36 nbalacha post-factum, your mempools will be allocated up front and stick around
11:36 nbalacha so those will be some fairly large allocs
11:37 nbalacha the other thing I realised while testing is that we do a lot of callocs for a single touch file
11:37 nbalacha and a lot of small allocs
11:37 nbalacha and for the readdirp, dht duplicates a good chunk of the entries
11:37 post-factum nbalacha, may i do that for you? just to show the graphs
11:37 nbalacha post-factum, yes, that would be great
11:37 post-factum nbalacha, give me 5 mins
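The massif output under discussion is plain text, so the peak snapshot can be located without a visualizer. A small sketch of pulling it out (the `snapshot=`/`mem_heap_B=` fields follow the massif file format; the sample excerpt and its numbers are made up for illustration):

```python
def peak_heap_snapshot(massif_text):
    """Return (snapshot_number, heap_bytes) of the largest heap
    snapshot recorded in a massif.out file."""
    snap, peak = None, (None, -1)
    for line in massif_text.splitlines():
        if line.startswith("snapshot="):
            snap = int(line.split("=", 1)[1])
        elif line.startswith("mem_heap_B="):
            heap = int(line.split("=", 1)[1])
            if heap > peak[1]:
                peak = (snap, heap)
    return peak

# hypothetical excerpt of a massif.out file
sample = """\
snapshot=0
time=0
mem_heap_B=1024
snapshot=54
time=99
mem_heap_B=1468006400
"""
print(peak_heap_snapshot(sample))  # → (54, 1468006400)
```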
11:50 Muthu joined #gluster-dev
11:51 post-factum nbalacha, please, see bz, i've attached jpeg
11:51 nbalacha post-factum, will do
11:52 kkeithley if you configure with debugging, then mempools are turned off and you may find leaks that could be masked by the use of mempools.
11:53 post-factum btw, fedora's visualiser cannot do such a visualisation for some reason. i guess one needs some extra kde component for that...
11:54 post-factum anyway, nbalacha, you'll see that big memory allocations come from different sources
11:59 nbalacha post-factum, was this taken after a readdir or during?
11:59 post-factum nbalacha, this is the visualisation of last snapshot, after readdir
12:00 post-factum nbalacha, #54, where mem consumption is the highest
12:01 samikshan REMINDER: Gluster community meeting starting now on #gluster-meeting
12:01 nbalacha post-factum, any idea how many files ?
12:01 post-factum nbalacha, the volume holds ~15M files, i guess i've scanned half of them or so
12:01 nbalacha because there will be memory used for the inodes, and for the gf_entries depending on when the snapshot was taken
12:01 post-factum nbalacha, iow, many :)
12:01 nbalacha :)
12:02 nbalacha so from my tests, I could not find a mem leak but I did see high memusage
12:02 nbalacha one reason being because multiple translators calloc multiple bits of info
12:02 nbalacha and almost every entry is duplicated because of the way the dht_readdirp works (I used a single brick volume)
12:03 nbalacha however the memory should get freed post the call
12:03 nbalacha that is what I was trying to check and codewise those paths seem fine
12:03 post-factum nbalacha, i guess you'll get better results with volume layout similar to mine
12:03 nbalacha oh I can see the mem rise
12:03 post-factum replica 2, 5 bricks in each replica
12:03 nbalacha it goes from roughly 30MB in the beginning to over 100MB
12:04 post-factum nbalacha, in my case, to 1.4G+
12:04 nbalacha right - mine is a smaller set
12:04 nbalacha roughly 100K zero byte files
12:04 nbalacha the question now becomes why doesnt it come down after I delete the files
12:05 nbalacha statedumps dont show inode leaks which was my first guess
12:05 skoduri_ joined #gluster-dev
12:05 nbalacha so then I started wondering if there were pages that couldnt be freed because of severe mem fragmentations
12:05 nbalacha if I rerun the tests, I dont see such a drastic rise after the first run
12:06 nbalacha we do a _lot_ of small memory allocations
12:06 nbalacha each entry in readdirp for instance
12:06 nbalacha plus a dictionary and inode for each
12:07 nbalacha in the client layer
12:07 nbalacha and then again in the dht layer
12:07 post-factum that doesn't sound as if it should scale well
12:07 nbalacha no it doesnt
12:08 nbalacha I'm looking into that as well
12:08 nbalacha you see the issue mainly in readdirp with a lot of files, correct?
12:08 hchiramm joined #gluster-dev
12:11 post-factum nbalacha, from the user perspective i see the issue in high memory consumption, from dev's perspective i'm not that aware of gluster mem internals, so that is why i ask you to dig into that ;)
12:11 post-factum nbalacha, but yes, the issue appears *only* while dealing with lots of files
12:11 post-factum nbalacha, if the volume holds small number of files, the issue does not arise
12:12 post-factum nbalacha, and yes, i've narrowed it to simple readdirp+stat
12:13 nbalacha loads of readdirp entries perhaps
12:13 nbalacha post-factum, understood. Just want you to know that I have not dropped it
12:13 nbalacha and I will keep looking but it is a slow process
12:13 post-factum nbalacha, thanks
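The allocation pattern described above (several small callocs per readdirp entry, then a dht-layer duplicate) multiplies out quickly at this file count. A rough back-of-the-envelope sketch, with illustrative counts rather than measured gluster numbers, and a small Python dict standing in for one small C allocation:

```python
import sys

ENTRIES = 15_000_000 // 2   # roughly half of the ~15M files were scanned
ALLOCS_PER_ENTRY = 4        # e.g. entry + dict + inode, plus a dht-layer copy

# proxy for one small allocation's payload plus allocator overhead
per_alloc = sys.getsizeof({})

total_gib = ENTRIES * ALLOCS_PER_ENTRY * per_alloc / 2**30
print(f"~{total_gib:.1f} GiB across {ENTRIES * ALLOCS_PER_ENTRY:,} small allocations")
```

Even with these made-up numbers the total lands in the same gigabyte range reported for the client, which is why many tiny allocations (and the fragmentation they leave behind) are a plausible culprit even without a classic leak.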
12:22 magrawal joined #gluster-dev
12:29 rastar joined #gluster-dev
12:30 ira joined #gluster-dev
12:31 itisravi joined #gluster-dev
12:46 ppai joined #gluster-dev
12:47 atinm ndevos, can you merge this patch?
12:47 atinm http://review.gluster.org/#/c/15567/
12:47 atinm ndevos, ^^
12:48 ndevos atinm: possibly after the meeting?
12:49 atinm ndevos, sure
12:49 atinm ndevos, even I am in a different meeting
12:51 kotreshhr joined #gluster-dev
12:55 kkeithley nigelb, misc: you're up in #gluster-meeting.  Anything you want to share?
13:06 ndevos kkeithley: what kind of bugs would be filed under a (nfs-)ganesha sub-component for glusterfs in Bugzilla?
13:06 ndevos kkeithley: and should those bugs not be filed in the upstream NFS-Ganesha project in Bugzilla instead?
13:07 post-factum samikshan++
13:07 glusterbot post-factum: samikshan's karma is now 5
13:08 kkeithley <shrug> FSAL_GLUSTER bugs that don't get filed in Red Hat bugzilla under NFS-Ganesha, or in the github issue tracker.
13:08 kkeithley samikshan++
13:08 glusterbot kkeithley: samikshan's karma is now 6
13:08 ndevos kkeithley: those really should be moved to the NFS-Ganesha Product, with Component=FSAL_GLUSTER
13:08 kkeithley yes
13:08 ndevos so, no need to confuse users and have a nfs-ganesha (sub)component for glusterfs?
13:09 kkeithley except hardly anyone looks at those BZs
13:09 kkeithley e.g. during bug triage
13:09 ndevos well, that is an upstream NFS-Ganesha issue
13:09 ndevos we're also not triaging kernel/FUSE or SELinux bugs that relate to gluster
13:10 ndevos or upstream Samba bugs for that matter
13:10 kkeithley it's the same people for both.
13:10 kkeithley I'm not going to insist on having an nfs-ganesha subcomponent under Gluster
13:10 ndevos then the same people need to learn to track two upstream projects, just like Samba folks do
13:11 ndevos ok, we'll scratch the (nfs-)ganesha sub-component for the glusterfs component then :)
13:11 kkeithley yes
13:11 ndevos Muthu: ^
13:11 kkeithley okay
13:11 Muthu ndevos, ya got it i will do it
13:11 ndevos Muthu: thanks!
13:12 Muthu ndevos, no problem :)
13:12 ndevos Muthu: can you reply to each of the emails that gave feedback as well? tomorrow is fine for that
13:12 kkeithley and I don't know why we wouldn't triage at least nfs-ganesha/FSAL_GLUSTER and samba/VFS_GLUSTER bugs in our bug triage meetings
13:12 kkeithley seems like that would be a useful thing to do
13:13 Muthu ndevos, ya i will do that also
13:13 ndevos kkeithley: oh, we can, but I doubt many of us have a login for the Samba bugzilla instance
13:13 ndevos Muthu++ thank you
13:13 glusterbot ndevos: Muthu's karma is now 6
13:14 ndevos kkeithley: hmm, we probably should add GitHub/Issues+PR to the bug triage list, sometimes people use that
13:15 kkeithley yes
13:16 Muthu ndevos, thank you ;)
13:16 kkeithley Colleen is a Red Hat mark-comm person.
13:20 gem_ joined #gluster-dev
13:31 anrao joined #gluster-dev
13:35 ndevos jiffin: are you happy my comment in http://review.gluster.org/14701 ? anything else that prevents you from +1'ing it?
13:38 jiffin1 joined #gluster-dev
13:38 mchangir joined #gluster-dev
13:38 spalai left #gluster-dev
13:49 ndevos 15:35 < ndevos> jiffin: are you happy my comment in http://review.gluster.org/14701 ? anything else that prevents you from +1'ing it?
13:54 skoduri_ joined #gluster-dev
14:09 k4n0 joined #gluster-dev
14:14 ndevos who likes Erlang?
14:14 shyam joined #gluster-dev
14:15 ndevos shyam: are you a happy Erlang developer?
14:17 ndevos oh, kkeithley also likes obscure things, maybe Erlang is something he fancies?
14:17 kkeithley pffft.
14:18 kkeithley If I was drinking coffee you'd owe me a new keyboard.
14:18 misc I think when he say about obscure things, he was speaking of his coffee
14:19 ndevos hey, I'd get you a coffee if you write some Erlang bindings for gfapi
14:19 atinm joined #gluster-dev
14:19 ndevos ... and you probably need to maintain them too, I've just looked at Erlang and its like Haskell/Clean etc :-(
14:20 msvbhat joined #gluster-dev
14:21 kotreshhr joined #gluster-dev
14:25 kkeithley Maybe you want some Ada bindings too? ;-)
14:29 ndevos Prolog?
14:29 post-factum Fortran!
14:30 * misc would suggest that as a topic for BoF in Berlin
14:32 mchangir joined #gluster-dev
14:32 nbalacha joined #gluster-dev
14:39 * kkeithley wonders what the status of semiosis' Java bindings is
14:40 kkeithley1 joined #gluster-dev
14:41 kkeithley1 left #gluster-dev
14:42 kkeithley hmm.  posted to gluster-dev mailing list 10+ minutes ago and haven't seen it come back yet.
14:43 ndevos kkeithley: topic?
14:43 kkeithley community packaging matrix
14:43 kkeithley updated
14:44 ndevos I dont seem to have it yet either
14:48 ndevos kkeithley: here you are! http://www.gluster.org/pipermail/gluster-devel/2016-September/051054.html
14:49 ndevos those tables hurt my eyes...
14:50 shyam ndevos: Errr... lang?
14:50 ndevos shyam: yeah, something like that
14:50 kkeithley pipermail really mangled them
14:51 nigelb kkeithley: do you mind me putting that onto a github wiki page?
14:51 nigelb perhaps on the packaging github repo?
14:51 ndevos nigelb: I guess it should be in the glusterdocs repo
14:51 nigelb Or that.
14:52 ndevos with clicky links so that users can easily find the installation procedures
14:52 ndevos kkeithley: for the Storage SIG, I plan to keep 3.8 as default, with opt-in for 3.9
14:52 shyam ndevos: So someone needs erlang bindings for gfapi? (why? can they do it? etc..)
14:52 ndevos kkeithley: basically what I described here https://lists.centos.org/pipermail/centos-devel/2016-September/015197.html
14:53 ndevos shyam: I asked on the users list what S3-compatible server is commonly used, maybe we can extend it... Riak CS was a reply and it is in Erlang
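Whichever language the bindings end up in, they wrap the same C entry points in libgfapi. A minimal ctypes sketch in Python of the first two calls any binding must expose, `glfs_new()` and `glfs_init()` (the loader returns None when libgfapi is not installed, so this shows the shape of a binding rather than a working client):

```python
import ctypes
import ctypes.util

def load_gfapi():
    """Load libgfapi and declare prototypes for the handshake calls,
    or return None when the library is not installed."""
    path = ctypes.util.find_library("gfapi")
    if path is None:
        return None
    lib = ctypes.CDLL(path)
    # glfs_t *glfs_new(const char *volname)
    lib.glfs_new.restype = ctypes.c_void_p
    lib.glfs_new.argtypes = [ctypes.c_char_p]
    # int glfs_init(glfs_t *fs)
    lib.glfs_init.restype = ctypes.c_int
    lib.glfs_init.argtypes = [ctypes.c_void_p]
    return lib

lib = load_gfapi()
print("libgfapi loaded" if lib is not None else "libgfapi not installed")
```

A real binding would go on to wrap `glfs_set_volfile_server()` and the I/O calls, but the load-and-prototype step above is the common starting point.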
14:54 kkeithley with 27 8x10 color glossy pictures with the circles and arrows and the paragraph on the back of each one?
14:54 kkeithley ndevos: okay, but there will be packages.  In a repo.
14:55 ndevos kkeithley: yes, "yum install centos-release-gluster39" instead of "yum install centos-release-gluster"
14:56 kkeithley so the table is basically correct?
15:01 kkeithley well, my reply (replies) to Muthu's bugzilla subcomponents email that I sent at ~08:00 EDT are only just come back now (11:00 EDT)
15:01 kkeithley s/are/have/
15:03 ndevos I'm having difficulties reading the table in the archive, I do not have it in my email client yet...
15:04 kkeithley where am I supposed to post this? Wiki? docs?
15:04 kkeithley I forwarded a copy to you
15:05 ndevos I'm sure I'll get it, eventually :)
15:06 ndevos it would be best to have it in the repo that backs gluster.readthedocs.io
15:06 kkeithley in three hours
15:06 ndevos or, maybe on the download page on gluster.org
15:06 ndevos or a reference from the download page to the (newly created) docs
15:07 ndevos oh, wow, *that* is a table!
15:07 kkeithley :-/  it changes frequently.  It's going to be out of date in a couple of months
15:07 kkeithley you like that?
15:07 kkeithley ;-)
15:08 misc kkeithley: there was a issue on supercolony, syslog was slowing everything down, so it could have been the cause
15:09 ndevos kkeithley: the Storage SIG does not have any el5 packages, none of the SIGs do
15:09 kkeithley oh, right
15:09 ndevos other than that, CentOS/Fedora looks good to me, I dont know much about the others
15:10 kkeithley well, that's tentative.  Nobody asked for anything the last time I sent that around
15:10 ndevos and keeping the page up to date (by others?) would be better than not having the details at all (or hidden in some email archive)
15:11 kkeithley so, where's the repo that backs gluster.readthedocs.io?
15:11 ndevos click the link in the upper-right (?) corner
15:11 kkeithley too easy
15:11 misc https://github.com/gluster/glusterdocs
15:11 * kkeithley wonders if I have a commit bit
15:12 ndevos I think you're an admin in the GitHub organization, that means you can break any repo we have there
15:12 kkeithley woohoo
15:12 ndevos please send a pull-request instead of committing directly, and have one of the doc maintainers merge it
15:13 * ndevos drops off for now, will be back later
15:13 kkeithley and where in all that should I put it?
15:14 ndevos somewhere under the install guide?
15:15 kkeithley Oh wow.  For Fedora: yum install ...   That's a bit dated
15:15 kkeithley :q
15:16 nigelb still works though :)
15:16 misc for now
15:17 wushudoin joined #gluster-dev
15:21 gem_ joined #gluster-dev
15:21 kkeithley @later tell ndevos the table I sent doesn't say anything about EL5 packages in Storage SIG.
15:21 glusterbot kkeithley: The operation succeeded.
15:22 mchangir joined #gluster-dev
15:47 kotreshhr left #gluster-dev
15:57 xavih joined #gluster-dev
15:57 hchiramm joined #gluster-dev
16:09 nbalacha joined #gluster-dev
16:11 tdasilva joined #gluster-dev
16:24 msvbhat joined #gluster-dev
16:55 kkeithley grrr, pull request
17:10 semiosis kkeithley: https://github.com/semiosis/glusterfs-java-filesystem
17:11 semiosis haven't touched it in a while but it works.  even heard from someone using it for internal stuff at a movie studio!
17:23 semiosis at least, it worked with glusterfs 3.4.
17:24 ndevos semiosis: do you have any interest to move that repository to the gluster organization in github?
17:25 ndevos that way, it would be the main repository, and users might find it quicker
17:25 ndevos you should then fork it to you account, so that there is still a reference and existing users can still reach it
17:26 ndevos kkeithley: ah right, no X for the el5 versions, only "d.g.o."
17:36 hchiramm joined #gluster-dev
17:40 semiosis ndevos: i guess so.  i figured people looking for it would just google 'gluster java' and it's already no. 1 there :)
17:42 riyas joined #gluster-dev
17:42 semiosis ndevos: think anyone else might be interested in working on it?  it could use an update to current glusterfs (probably easy) and a refactor of the integration tests/examples (moderate)
17:44 semiosis it would be a great learning experience for a student project.  i had a couple seniors from the local university CS dept work on it for course credit
17:45 semiosis i'd be open to mentoring
17:45 jiffin joined #gluster-dev
17:47 ndevos semiosis: I think we should promote it more indeed
17:47 ndevos some student or intern project should be possible
17:54 semiosis ok great.  i appreciate the interest.  i'll dust it off a bit, make sure it still works like it used to, and get back to you in a couple days.
17:55 jiffin ndevos: +1 for http://review.gluster.org/14701
17:55 ndevos jiffin++ thanks!
17:55 glusterbot ndevos: jiffin's karma is now 51
17:56 jiffin ndevos: :)
17:56 ndevos semiosis: as gfapi miantainer I'm happy to help, although my Java is *very* rusty
17:56 ndevos *maintainer even
17:57 ndevos semiosis: if you like, we can move the repo to the gluster org, and you can have full permissions to commit there
17:57 semiosis thanks!  i'm sure i'll have questions.  gluster has come a long way since i've been on the sidelines
17:58 semiosis yeah, i'd need full access to both repos, glusterfs-java-filesystem & libgfapi-jni
17:58 ndevos thats no problem, if you can transfer them to me (nixpanic on GitHub), I can move them to the right place and make you repo admin
18:00 ndevos just ping me when did the transfer, github sends an email with a link to accept it, so I can check it when its pending on me
18:00 semiosis ok
18:00 semiosis bbiab, lunch
18:01 ndevos sure, I'll be offline and back tomorrow (you know, European timezones)
18:05 gem joined #gluster-dev
18:09 raghu joined #gluster-dev
18:15 msvbhat joined #gluster-dev
18:16 tdasilva joined #gluster-dev
18:23 jiffin joined #gluster-dev
18:27 jiffin1 joined #gluster-dev
18:54 msvbhat joined #gluster-dev
19:04 msvbhat joined #gluster-dev
19:05 k4n0 joined #gluster-dev
19:22 msvbhat joined #gluster-dev
19:58 a2 joined #gluster-dev
20:11 msvbhat joined #gluster-dev
21:21 misc joined #gluster-dev
