IRC log for #gluster, 2013-09-11

All times shown according to UTC.

Time Nick Message
00:06 al joined #gluster
00:09 ninkotech joined #gluster
00:46 dmojoryder A few days back I asked why fopen-keep-cache fuse mount option wasn't documented or enabled by default since I saw significant improvements in synthetic single user tests (cached on fuse client rather than hitting gluster). However in my real world tests under high concurrent load it seems to have some blocking issues as requests would queue up every min or so (and loadavg would spike but cpu usage would remain low) when it was enabled. Removing fopen-
00:47 JoeJulian Line length overflow at "Removing fopen-k"...
00:49 dmojoryder JoeJulian: Removing fopen-keep-cache resolved the queuing seen (at apache), the high loadavg, etc
00:49 dmojoryder JoeJulian: It appears fopen-keep-cache has issue under high load
00:49 JoeJulian Not sure why it's not documented. It may be because it's new and the documentation is always frustratingly lagging.
00:50 awheeler joined #gluster
00:59 JoeJulian Based on your results, I would guess that it wasn't on by default because it hadn't yet been tested in real-world applications.
00:59 JoeJulian I would compile your test results and file a bug report.
00:59 glusterbot http://goo.gl/UUuCq
01:00 asias joined #gluster
01:12 dmojoryder JoeJulian: Will do. It's caused me a few days of grief. Every time I brought apache online with even moderate load (> 20 concurrent reqs) it would inevitably queue up requests. There must be some global lock on the fuse client related to its client side caching
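For context, a minimal sketch of how the fopen-keep-cache FUSE option discussed above is toggled on a client mount (the server, volume, and mount-point names here are hypothetical examples):

```shell
# Mount with the FUSE page-cache option that triggered the queuing under load
mount -t glusterfs -o fopen-keep-cache server1:/myvol /mnt/gluster

# Remount without the option to restore the previous behaviour
umount /mnt/gluster
mount -t glusterfs server1:/myvol /mnt/gluster
```

The same option can also be placed in the options column of an fstab entry for the mount.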
01:13 jporterfield joined #gluster
01:14 bala joined #gluster
01:17 awheeler joined #gluster
01:17 awheeler joined #gluster
01:23 jporterfield joined #gluster
01:29 awheeler joined #gluster
01:29 awheeler joined #gluster
01:32 kevein joined #gluster
01:32 bennyturns joined #gluster
01:49 phansold joined #gluster
01:50 harish__ joined #gluster
01:51 phansold hello, I'm new to glusterFS...trying to install the gluster native client to connect to 2 nodes hosting the server volumes...but I cannot find the glusterfs-core rpm on the download page.... I'm using RHEL 6.4
01:58 purpleidea phansold: http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/epel-6.4/x86_64/
01:58 glusterbot <http://goo.gl/hNgecy> (at download.gluster.org)
02:00 lpabon purpleidea: i just installed epel on Centos 6.4 and when I did a yum install of glusterfs-server it installed v3.2.. is that right?
02:01 purpleidea lpabon: no
02:01 phansold purpleidea: I'm using 32-bit RHEL 6.4 ..so I checked http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/RHEL/epel-6.4/i386/ ...but I do not find a glusterfs-core*rpm for native client... is glusterfs-3.4.0-8.el6.i686.rpm the core rpm?
02:01 glusterbot <http://goo.gl/HoSzEl> (at download.gluster.org)
02:01 purpleidea phansold: you want glusterfs-server
02:01 purpleidea phansold: if you want details on specific packages, have a look at my puppet module for reference: https://github.com/purpleidea/puppet-gluster
02:01 glusterbot Title: purpleidea/puppet-gluster · GitHub (at github.com)
02:02 lpabon purpleidea: check the glusterfs files here:  http://dl.fedoraproject.org/pub/epel/6Server/x86_64/
02:02 glusterbot <http://goo.gl/LYVnWR> (at dl.fedoraproject.org)
02:03 purpleidea lpabon: don't use those
02:03 lpabon lol, that is that yum pull down from epel
02:03 purpleidea lpabon: ?
02:04 lpabon This is what i did.. Went here:  https://fedoraproject.org/wiki/EPEL
02:04 glusterbot Title: EPEL - FedoraProject (at fedoraproject.org)
02:04 purpleidea lpabon: you want to install gluster, right?
02:04 lpabon Then clicked on "The newest version of 'epel-release' for EL6"
02:04 lpabon yeah
02:04 \_pol joined #gluster
02:05 purpleidea lpabon: so *don't* use the epel gluster. you need to also add the OTHER repo: http://download.gluster.org/pub/gluster/glusterfs/LATEST/
02:05 glusterbot <http://goo.gl/zO0Fa> (at download.gluster.org)
02:05 phansold purpleidea: thank you
02:05 lpabon Ah!
02:05 purpleidea lpabon: ;)
02:06 purpleidea phansold: lpabon: you're both doing the same thing :P good luck!
02:06 lpabon lol thanks
02:06 lpabon how can the epel files be upgraded?
02:06 purpleidea lpabon: be more specific with your question
02:06 lpabon maybe I'll mention this to kaleb
02:07 lpabon i mean, how is it that epel-release (which seems to have v3.2 of glusterfs) can be upgraded to the latest (3.4)
02:07 purpleidea lpabon: oh you mean how come epel isn't the latest version? epel is slower than gluster repo :P
02:07 lpabon yeah, that's what i mean.. but not even 3.3.1.. geeesh
02:08 purpleidea lpabon: erase all the gluster* packages you installed from epel. then install from the repo i gave you! :P
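A rough sketch of the steps purpleidea is describing, for a yum-based system. The repo file contents below are an assumption inferred from the download URL given earlier in the channel; check download.gluster.org for the exact path and GPG details for your distro:

```shell
# Remove the outdated 3.2 packages that were pulled in from EPEL
yum remove 'glusterfs*'

# Add the upstream Gluster repo (hypothetical .repo contents)
cat > /etc/yum.repos.d/glusterfs.repo <<'EOF'
[glusterfs]
name=GlusterFS (upstream)
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/epel-$releasever/$basearch/
enabled=1
gpgcheck=0
EOF

# Install the current release from the new repo
yum install glusterfs-server
```

Because the upstream repo carries a higher version than EPEL, yum will prefer it for future updates as well.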
02:09 lpabon from the words of Darth Vader:  Epel has "failed me for the last time"
02:10 asias joined #gluster
02:14 purpleidea joined #gluster
02:21 hchiramm_ joined #gluster
02:22 awheeler joined #gluster
02:22 RameshN joined #gluster
02:28 sprachgenerator joined #gluster
02:35 vshankar joined #gluster
02:35 davinder joined #gluster
02:43 jag3773 joined #gluster
02:46 vshankar joined #gluster
02:50 saurabh joined #gluster
02:57 bharata-rao joined #gluster
03:08 kshlm joined #gluster
03:16 Felix102 joined #gluster
03:16 phansold how do I do an rpm install of glusterFS? i.e. using rpm -ivh? I get failed dependencies ...
03:18 johnbot11 joined #gluster
03:27 jporterfield joined #gluster
03:31 fkautz phansold: you'll probably want to use yum, which can auto resolve the dependencies
03:32 fkautz http://www.gluster.org/community/documentation/index.php/Getting_started_install
03:32 glusterbot <http://goo.gl/chDN9> (at www.gluster.org)
03:32 fkautz near the bottom are instructions
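The difference fkautz is pointing at, sketched with the package file name from earlier in the channel: plain `rpm -ivh` installs only the files you name, while yum resolves and fetches dependencies:

```shell
# Fails with "failed dependencies" if any required package is missing:
rpm -ivh glusterfs-3.4.0-8.el6.i686.rpm

# Resolves dependencies for a local rpm automatically:
yum localinstall glusterfs-3.4.0-8.el6.i686.rpm

# Or, straight from a configured repo:
yum install glusterfs glusterfs-fuse
```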
03:36 shubhendu joined #gluster
03:38 phansold fkautz: my bad..missed an rpm ..but now when I have glusterd running on both nodes, "gluster peer probe" throws a "connection failed error"...the cluster daemon is running on the other node and I can telnet port 24007 (both nodes in same subnet)
03:45 phansold fixed..localhost => 127.0.0.1
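phansold's fix in sketch form: glusterd peers must address each other by a resolvable, non-loopback name, so each node's hostname has to map to its real IP before probing. Hostnames and addresses below are examples:

```shell
# On both nodes: map hostnames to real addresses, not 127.0.0.1
cat >> /etc/hosts <<'EOF'
192.168.1.10 gluster1
192.168.1.11 gluster2
EOF

# From gluster1, probe the other node and verify the peer joined
gluster peer probe gluster2
gluster peer status
```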
03:46 meghanam joined #gluster
03:46 meghanam_ joined #gluster
03:51 fkautz cool, glad it works :)
03:52 johnbot11 joined #gluster
03:55 shruti joined #gluster
03:55 vpshastry joined #gluster
04:00 itisravi joined #gluster
04:02 bulde joined #gluster
04:03 sgowda joined #gluster
04:13 jporterfield joined #gluster
04:14 bala joined #gluster
04:14 ppai joined #gluster
04:24 mohankumar joined #gluster
04:25 mohankumar joined #gluster
04:27 davinder2 joined #gluster
04:28 jporterfield joined #gluster
04:33 shylesh joined #gluster
04:33 ababu joined #gluster
04:34 ndarshan joined #gluster
04:44 meghanam joined #gluster
04:44 meghanam_ joined #gluster
04:55 jporterfield joined #gluster
05:00 dusmant joined #gluster
05:05 raghu joined #gluster
05:17 kanagaraj joined #gluster
05:18 jporterfield joined #gluster
05:26 ajha joined #gluster
05:27 rjoseph joined #gluster
05:28 jporterfield joined #gluster
05:35 rjoseph joined #gluster
05:40 anands joined #gluster
05:45 lalatenduM joined #gluster
05:57 aravindavk joined #gluster
05:58 nshaikh joined #gluster
06:03 ndarshan joined #gluster
06:05 hagarth joined #gluster
06:15 kPb_in_ joined #gluster
06:18 jtux joined #gluster
06:20 mooperd_ joined #gluster
06:22 bulde joined #gluster
06:24 meghanam_ joined #gluster
06:24 meghanam joined #gluster
06:31 ngoswami joined #gluster
06:34 CheRi joined #gluster
06:35 sgowda joined #gluster
06:40 psharma joined #gluster
06:40 satheesh joined #gluster
06:45 ricky-ticky joined #gluster
06:47 ngoswami joined #gluster
06:52 davinder joined #gluster
06:59 ricky-ticky joined #gluster
06:59 glusterbot New news from newglusterbugs: [Bug 1006698] glfs_preadv_async fails for large read with io-cache xlator on <http://goo.gl/QrTCJq>
07:01 sgowda joined #gluster
07:04 puebele joined #gluster
07:07 itisravi joined #gluster
07:13 spandit joined #gluster
07:19 hagarth joined #gluster
07:23 eseyman joined #gluster
07:26 spresser joined #gluster
07:27 meghanam_ joined #gluster
07:29 glusterbot New news from newglusterbugs: [Bug 1005164] Add code for syncenv_destroy() and clean up syncenv_new() <http://goo.gl/HkJjya>
07:33 wgao joined #gluster
07:35 tryggvil joined #gluster
07:41 ProT-0-TypE joined #gluster
07:47 jporterfield joined #gluster
07:47 andreask joined #gluster
07:47 harish__ joined #gluster
07:49 davinder joined #gluster
07:58 chirino_m joined #gluster
08:01 meghanam joined #gluster
08:01 meghanam_ joined #gluster
08:04 abyss^ I have something like this in the gluster log: 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:1017). But I am checking and everything works... Is this normal?
08:05 hagarth joined #gluster
08:09 eseyman joined #gluster
08:12 kanagaraj joined #gluster
08:12 vimal joined #gluster
08:15 StarBeast joined #gluster
08:21 mohankumar joined #gluster
08:23 StarBeast joined #gluster
08:47 jporterfield joined #gluster
08:48 dusmant joined #gluster
08:49 shubhendu joined #gluster
08:49 glusterbot New news from resolvedglusterbugs: [Bug 966659] hostname based nfs mounts fail when setting 'volume set nfs.addr-namelookup on' <http://goo.gl/cVLHx>
08:52 nshaikh left #gluster
08:59 jre1234 joined #gluster
09:02 mgebbe_ joined #gluster
09:13 jporterfield joined #gluster
09:21 manik joined #gluster
09:27 edward1 joined #gluster
09:36 shubhendu joined #gluster
09:40 tziOm joined #gluster
09:45 satheesh1 joined #gluster
09:57 jporterfield joined #gluster
09:58 dusmant joined #gluster
10:00 glusterbot New news from newglusterbugs: [Bug 1006776] Need to describe how to recover from split-brain. <http://goo.gl/DD1NnE>
10:05 vpshastry1 joined #gluster
10:10 VeggieMeat joined #gluster
10:10 tryggvil joined #gluster
10:10 kaushal_ joined #gluster
10:12 nshaikh joined #gluster
10:17 manik joined #gluster
10:24 dneary joined #gluster
10:26 jporterfield joined #gluster
10:28 meghanam joined #gluster
10:28 meghanam_ joined #gluster
10:28 nshaikh left #gluster
10:36 rgustafs joined #gluster
10:49 jporterfield joined #gluster
10:49 satheesh joined #gluster
10:56 jtux joined #gluster
11:00 glusterbot New news from newglusterbugs: [Bug 1006813] xml output of gluster volume 'rebalance status' and 'remove-brick status' have missing status information in section <http://goo.gl/FJkwG7>
11:02 ninkotech joined #gluster
11:06 failshell joined #gluster
11:09 nshaikh joined #gluster
11:09 andreask joined #gluster
11:10 failshell joined #gluster
11:15 sprachgenerator joined #gluster
11:19 CheRi joined #gluster
11:22 ppai joined #gluster
11:28 kaushal_ joined #gluster
11:31 the-me joined #gluster
11:32 mohankumar joined #gluster
11:34 asias joined #gluster
11:38 samppah_ @planning
11:38 glusterbot samppah_: GlusterFS 3.4.1 backport candidates http://goo.gl/vYhSkT & GlusterFS 3.5 planning http://goo.gl/8cr8iW
11:41 kkeithley joined #gluster
11:41 bulde joined #gluster
11:42 bennyturns joined #gluster
11:46 sgowda joined #gluster
11:46 spandit joined #gluster
11:47 shruti joined #gluster
11:47 an joined #gluster
11:47 Alpinist joined #gluster
11:51 shubhendu joined #gluster
11:52 rjoseph joined #gluster
12:00 diegows_ joined #gluster
12:02 glusterbot New news from newglusterbugs: [Bug 986775] file snapshotting support <http://goo.gl/ozgmO>
12:04 samppah kkeithley: anyplans to build epel compatible rpm's of 3.4.1 qa1?
12:05 vimal joined #gluster
12:10 kkeithley I didn't do builds for the qa releases for 3.4.0, just the alpha and beta releases. But.....
12:10 kkeithley hagarth, hagarth_: How short/quick is this release cycle going to be? Are we going to do alpha and beta releases?
12:11 hagarth kkeithley: no alpha or beta. this is going to be a very quick release cycle
12:13 hagarth I plan to release it by Sunday if no issues are found in testing
12:13 kkeithley Okay, my concern was that I didn't want to commit to 27 sets of qa release RPM builds.
12:14 kkeithley I'll build a set of RPMs from qa1
12:14 samppah kkeithley: nice, thank you very much :)
12:18 dusmant joined #gluster
12:18 kkeithley I wish we had the new glusterfs-openstack-swift packaging ready :-(
12:26 CheRi joined #gluster
12:45 spandit joined #gluster
12:46 rcheleguini joined #gluster
12:54 hagarth joined #gluster
12:56 awheeler joined #gluster
12:57 shubhendu joined #gluster
12:57 awheeler joined #gluster
13:04 vpshastry1 left #gluster
13:05 rwheeler joined #gluster
13:07 aib_007 joined #gluster
13:11 andreask1 joined #gluster
13:31 itisravi joined #gluster
13:31 vpshastry joined #gluster
13:31 vpshastry left #gluster
13:31 asias joined #gluster
13:36 bfoster joined #gluster
13:40 plarsen joined #gluster
13:44 kaptk2 joined #gluster
13:46 harish joined #gluster
14:04 bugs_ joined #gluster
14:10 rcheleguini joined #gluster
14:13 rcheleguini joined #gluster
14:23 B21956 joined #gluster
14:25 ababu joined #gluster
14:34 bcdonadio joined #gluster
14:35 bcdonadio Is it safe to use Gluster with EXT4 and Linux 3.2.0-53 of Ubuntu 12.04?
14:37 bcdonadio (aka: could this http://lwn.net/Articles/544298/ bug have been backported to this kernel?)
14:37 glusterbot Title: A kernel change breaks GlusterFS [LWN.net] (at lwn.net)
14:38 kkeithley It's safe to use GlusterFS-3.4.0 with any kernel version.
14:42 kkeithley YUM repos with 3.4.1qa1 for el5, el6, f18, f19, and pidora-18 are now available at http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.1qa1/
14:42 glusterbot <http://goo.gl/EZ4XPR> (at download.gluster.org)
14:45 kkeithley samppah: ^^^
14:47 bcdonadio kkeithley, I have gluster 3.2.5 on ubuntu 12.04 repos, should I stick with this version or upgrade?
14:47 kkeithley @ppa
14:47 glusterbot kkeithley: The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy
14:48 kkeithley you should upgrade
14:50 samppah kkeithley: great, thanks!
14:51 kkeithley samppah: let us know how it goes
14:53 tryggvil joined #gluster
14:55 Sonicos joined #gluster
14:56 bcdonadio kkeithley, ok, I will do so, thanks ^^
14:58 awheele__ joined #gluster
14:58 mohankumar joined #gluster
15:03 zerick joined #gluster
15:13 awheeler joined #gluster
15:17 vpshastry joined #gluster
15:20 socinoS joined #gluster
15:20 daMaestro joined #gluster
15:21 GLHMarmot joined #gluster
15:24 rwheeler joined #gluster
15:34 puebele joined #gluster
15:34 jclift joined #gluster
15:34 NuxRo from which version on has upstream samba included the glusterfs patches?
15:48 zaitcev joined #gluster
15:48 kkeithley 4.1.0rc3 has .../source3/modules/vfs_glusterfs.c,  4.0.9 does not
15:51 NuxRo ah, rc3
15:51 NuxRo okay
15:51 bala joined #gluster
15:51 johnbot11 joined #gluster
15:51 NuxRo kkeithley: do you know of any RPMs of this for EL6?
15:55 kkeithley not off hand. Samba isn't in EPEL because it's in RHEL (EPEL doesn't ship anything that would conflict with RHEL). Maybe in CentOS?
15:55 kkeithley EPEL doesn't ship packages that would conflict with genuine RHEL packages.
15:55 NuxRo samba is in centos, i was referring to the glusterfs samba module
15:57 kkeithley The glusterfs vfs module isn't available unbundled from samba AFAIK, chertel or jrivera would have a better idea, they're MIA here. :-(
15:58 vpshastry left #gluster
15:58 kkeithley s/chertel/crh/
15:58 glusterbot What kkeithley meant to say was: The glusterfs vfs module isn't available unbundled from samba AFAIK, crh or jrivera would have a better idea, they're MIA here. :-(
15:59 kkeithley s/jrivera/jarrpa/
15:59 glusterbot What kkeithley meant to say was: The glusterfs vfs module isn't available unbundled from samba AFAIK, chertel or jarrpa would have a better idea, they're MIA here. :-(
16:00 NuxRo right
16:00 NuxRo README says:
16:00 NuxRo 1.) Download and unpack a Samba source tree.
16:00 NuxRo I take it yum install samba-devel is not enough
16:03 davinder joined #gluster
16:06 dkorzhevin joined #gluster
16:09 johnbot1_ joined #gluster
16:11 bdeb4 joined #gluster
16:11 bcdonadio I'm getting a "/export/brick1 or a prefix of it is already part of a volume" error, but when I do "gluster volume info" there's no volumes. Why so?
16:11 glusterbot bcdonadio: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
16:11 bcdonadio Hmm, I liked this bot :P
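The fix behind glusterbot's link boils down to removing the markers Gluster leaves on a directory that was once (or partially) configured as a brick; roughly, on each affected brick path (shown here for bcdonadio's /export/brick1 — only do this if you are sure the brick no longer belongs to a live volume):

```shell
# Remove the extended attributes that mark the directory as a brick
setfattr -x trusted.glusterfs.volume-id /export/brick1
setfattr -x trusted.gfid /export/brick1

# Remove the internal metadata directory left behind
rm -rf /export/brick1/.glusterfs
```

After this, `gluster volume create` should accept the path again.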
16:12 johnbot__ joined #gluster
16:12 bdeb4 I rebooted a gluster server that's part of a replica, and when I try to start it back up, I am getting Unkown key: brick-0 and other errors (http://pastebin.com/72fyHH8N). Any advice?
16:12 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
16:18 NuxRo kkeithley: Cannot find api/glfs.h. Please specify --with-glusterfs=dir if necessary <- any idea what it wants from me? I have installed glusterfs-server, glusterfs-devel, glusterfs-fuse, glusterfs-api-devel, glusterfs-api
16:18 NuxRo sorry, the error is actually "Cannot link to gfapi (glfs_init)."
16:28 Mo_ joined #gluster
16:42 kkeithley NuxRo: does nm -D /usr/lib64/libgfapi.so | egrep glfs_init   show 0000000000006690 T glfs_init
16:42 kkeithley or something similar
16:44 dusmant joined #gluster
16:45 sjoeboo joined #gluster
16:46 NuxRo kkeithley: I had to abandon the task for the moment, will resume next week and bug you then, thanks :)
16:46 kkeithley yw
16:47 bdeb4 Actually my error appears to be glusterd: resolve brick failed in restore
16:51 \_pol joined #gluster
16:53 \_pol_ joined #gluster
16:54 aliguori joined #gluster
16:56 lpabon joined #gluster
16:58 kkeithley NuxRo: so, when you come back to this——    when I run (samba's) configure, it tells me it can't find the gfapi shlib. This is on f19, fwiw. Right off the bat that makes me suspicious.
16:59 NuxRo kkeithley: thanks I'll probably raise the issue on the ml
16:59 ndk joined #gluster
17:02 \_pol joined #gluster
17:02 mooperd_ joined #gluster
17:10 NuxRo do you guys know how long do i have to wait after issuing a "volume top clear" to get new data?
17:15 [o__o] left #gluster
17:17 [o__o] joined #gluster
17:19 [o__o] left #gluster
17:20 [o__o] joined #gluster
17:22 [o__o] left #gluster
17:24 [o__o] joined #gluster
17:28 kPb_in_ joined #gluster
17:30 \_pol joined #gluster
17:37 \_pol_ joined #gluster
17:38 \_pol joined #gluster
17:44 \_pol joined #gluster
17:45 piotrektt joined #gluster
17:45 piotrektt joined #gluster
17:56 [o__o] left #gluster
17:58 [o__o] joined #gluster
18:00 rwheeler joined #gluster
18:15 lalatenduM joined #gluster
18:18 kkeithley NuxRo: ignore my previous. pilot error, didn't have glusterfs-devel installed. configure only reported the umbrella test failure, not the actual cause.
18:19 kkeithley glusterfs-api-devel should require glusterfs-devel
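The lesson from kkeithley's pilot error, as a sketch: make sure both -devel packages are present before running samba's configure, and verify the symbol configure probes for. The `--with-glusterfs` flag name is taken from the error message NuxRo pasted earlier; the library path assumes a 64-bit EL/Fedora layout:

```shell
# Headers and libraries needed to link the glusterfs vfs module
yum install glusterfs-devel glusterfs-api-devel

# Sanity-check that the gfapi entry point is exported
nm -D /usr/lib64/libgfapi.so | grep glfs_init

# Re-run samba's configure, pointing at the gluster install prefix
./configure --with-glusterfs=/usr
```

When configure still fails, config.log usually shows the real cause rather than the umbrella test failure.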
18:43 tryggvil joined #gluster
18:45 neofob joined #gluster
18:47 vpshastry joined #gluster
18:48 vpshastry left #gluster
18:52 ndevos @channelstats
18:52 glusterbot ndevos: On #gluster there have been 180725 messages, containing 7529318 characters, 1257419 words, 4995 smileys, and 666 frowns; 1087 of those messages were ACTIONs. There have been 71132 joins, 2186 parts, 68938 quits, 23 kicks, 168 mode changes, and 7 topic changes. There are currently 210 users and the channel has peaked at 239 users.
18:52 puebele1 joined #gluster
18:52 JoeJulian bdeb4: yes, that's the error that's causing the failure. Try "glusterd --debug" to see if that's any clearer.
18:53 bdeb4 JoeJulian: thanks. actually i solved it by deleting the folder in vol/ and it just rebuilt it from the master server
18:54 jporterfield joined #gluster
18:54 JoeJulian bdeb4: Cool. Sorry nobody else gave you that answer, too. That's probably where we would have ended up.
18:55 bdeb4 no problem :)
19:17 awheeler_ joined #gluster
19:26 manik joined #gluster
19:29 RedShift joined #gluster
19:58 jporterfield joined #gluster
20:02 \_pol joined #gluster
20:05 tg2 JoeJulian, is there an easy way to upgrade from 3.3 to 3.4 in production?
20:05 tg2 do servers first then clients, vice versa etc
20:06 rwheeler joined #gluster
20:07 tg2 and 3.4.0 has the ext4 bugfix in it 100%? ie: i can run it with a current kernel no problem?
20:23 jporterfield joined #gluster
20:26 manik joined #gluster
20:28 JoeJulian @3.4
20:28 glusterbot JoeJulian: (#1) 3.4 sources and packages are available at http://goo.gl/zO0Fa Also see @3.4 release notes and @3.4 upgrade notes, or (#2) To replace a brick with a blank one, see http://goo.gl/bhbwd2
20:28 JoeJulian 3.4 upgrade notes | tg2
20:28 JoeJulian ~3.4 upgrade notes | tg2
20:28 glusterbot tg2: http://goo.gl/SXX7P
20:28 jporterfield joined #gluster
20:29 JoeJulian And yes. The workaround for the ext4 issue is in 3.4.0.
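A rough sketch of the rolling-upgrade pattern the linked upgrade notes describe: servers one at a time, then clients. Package and service names assume a yum-based install; read the linked notes for your exact setup before trying this in production:

```shell
# On each server, one at a time:
service glusterd stop
yum update glusterfs glusterfs-server glusterfs-fuse
service glusterd start
gluster peer status   # confirm the node rejoined before moving to the next

# Then on each client (server/volume names are examples):
umount /mnt/gluster
yum update glusterfs glusterfs-fuse
mount -t glusterfs server1:/myvol /mnt/gluster
```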
20:35 rwheeler joined #gluster
20:35 \_pol joined #gluster
20:53 jporterfield joined #gluster
20:53 jclift left #gluster
20:56 rotbeard joined #gluster
21:00 rfortier joined #gluster
21:23 andreask joined #gluster
21:23 andreask joined #gluster
21:40 manik joined #gluster
21:46 awheeler_ joined #gluster
21:47 awheeler joined #gluster
21:49 awheeler_ joined #gluster
22:15 jporterfield joined #gluster
22:16 ingard joined #gluster
22:23 tjstansell curious if anyone who helps manage the content on the www.gluster.org or forge.gluster.org sites has any comments about how stuff like the existing 3.4.1qa1 release should be published.
22:24 tjstansell i'm on the mailing list and saw it was out, but wanted to see if i could find any info on what's changed, etc... i've gone through the website and can't find anything even mentioning it.
22:24 tjstansell shouldn't there be some way for folks to find this kind of info?
22:27 tjstansell it would be great to have something simple like zfsonlinux.org's site that lists the current releases, links to the changelogs, etc.
22:32 tjstansell i can't even search bugzilla.redhat.com for what's changed since there's nothing in the versions or target milestones that mentions 3.4.1 ...
22:37 tjstansell i want something like: https://github.com/gluster/glusterfs/compare/release-3.4...3.4.1qa1
22:37 glusterbot <http://goo.gl/RVhaVd> (at github.com)
22:37 tjstansell but github's mirror doesn't seem to know about the tags.
22:38 tjstansell and i can't find anything similar in forge.gluster.org
22:39 tjstansell the closest is https://forge.gluster.org/glusterfs-core/glusterfs/graph/v3.4.1qa1 and then just manually going down the graph until you hit the v3.4.0 tag
22:39 glusterbot <http://goo.gl/OaWlLi> (at forge.gluster.org)
22:46 JoeJulian http://www.gluster.org/community/documentation/index.php/Backport_Wishlist
22:46 glusterbot <http://goo.gl/6LCcg> (at www.gluster.org)
22:46 JoeJulian tjstansell: ^
22:48 tjstansell i don't believe that's everything, though. maintaining a list on a wiki is far from exact.
22:48 JoeJulian True
22:49 tjstansell not to mention my main point is that without already knowing about that page, i'd never find it off the main site.
22:50 JoeJulian http://review.gluster.org/#/q/status:merged+branch:release-3.4,n,z is another inexact reference that might be useful.
22:50 glusterbot <http://goo.gl/lkFhFd> (at review.gluster.org)
22:52 tjstansell inexact seems right ... almost all of those reference bug 1000131
22:52 glusterbot Bug http://goo.gl/JOatTA high, unspecified, ---, amarts, NEW , Users Belonging To Many Groups Cannot Access Mounted Volume
22:52 tjstansell but interesting, nonetheless.
22:53 JoeJulian https://github.com/gluster/glusterfs/compare/v3.4.0...v3.4.1qa1
22:53 glusterbot <http://goo.gl/3vyMD8> (at github.com)
22:56 tjstansell JoeJulian: thanks. yeah, that makes much more sense than my github comparison ... which explains why it actually works.
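For the record, the same changelog can be pulled from a clone of the source tree instead of the web compare view, using the tag names from the links above:

```shell
git clone https://github.com/gluster/glusterfs.git
cd glusterfs

# One-line summary of every commit between the two tags
git log --oneline v3.4.0..v3.4.1qa1
```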
22:56 JoeJulian lol
23:00 andreask1 joined #gluster
23:07 asias joined #gluster
23:27 rcheleguini joined #gluster
