
IRC log for #gluster, 2014-06-12


All times shown according to UTC.

Time Nick Message
00:02 Ark joined #gluster
00:05 Warwickshaw joined #gluster
00:05 tjikkun joined #gluster
00:12 gildub joined #gluster
00:16 a2_ sh_t, your write block sizes are too small and therefore fuse context switches are dominating the latency.. in which case write-behind cannot do much
00:50 gmcwhist_ joined #gluster
01:08 sjm joined #gluster
01:16 ultrabizweb joined #gluster
01:29 jmarley joined #gluster
01:29 jmarley joined #gluster
01:37 shapemaker joined #gluster
01:39 bala joined #gluster
01:47 sjm left #gluster
01:49 calum_ joined #gluster
02:03 harish_ joined #gluster
02:12 lalatenduM joined #gluster
02:20 Ark joined #gluster
02:23 suliba joined #gluster
02:37 bharata-rao joined #gluster
02:49 saurabh joined #gluster
02:49 raghug joined #gluster
03:04 sjm joined #gluster
03:04 rjoseph joined #gluster
03:15 hagarth1 joined #gluster
03:15 hchiramm__ joined #gluster
03:24 hagarth joined #gluster
03:36 jag3773 joined #gluster
03:38 nbalachandran joined #gluster
03:42 itisravi joined #gluster
03:44 kanagaraj joined #gluster
03:45 RameshN joined #gluster
03:50 spandit joined #gluster
03:53 rastar joined #gluster
04:05 davinder8 joined #gluster
04:12 dusmant joined #gluster
04:16 JoeJulian @later tell frankenspanker Nope, my idea didn't pan out. I'll need more info.
04:16 glusterbot JoeJulian: The operation succeeded.
04:18 haomaiwang joined #gluster
04:20 haomai___ joined #gluster
04:21 haom_____ joined #gluster
04:23 ndarshan joined #gluster
04:24 haomaiwa_ joined #gluster
04:29 jag3773 joined #gluster
04:45 kdhananjay joined #gluster
04:53 nishanth joined #gluster
04:53 jason_ joined #gluster
04:57 psharma joined #gluster
04:58 shubhendu joined #gluster
05:01 bala joined #gluster
05:01 ramteid joined #gluster
05:03 haomaiwang joined #gluster
05:03 glusterbot New news from newglusterbugs: [Bug 1108448] selinux alerts starting glusterd in f20 <https://bugzilla.redhat.com/show_bug.cgi?id=1108448>
05:08 rejy joined #gluster
05:11 kshlm joined #gluster
05:14 prasanthp joined #gluster
05:25 ppai joined #gluster
05:26 jobewan joined #gluster
05:31 bala joined #gluster
05:31 chirino_m joined #gluster
05:38 vimal joined #gluster
05:40 ricky-ti1 joined #gluster
05:44 raghu joined #gluster
05:46 nishanth joined #gluster
05:47 shylesh__ joined #gluster
05:48 hagarth joined #gluster
05:49 aravindavk joined #gluster
05:50 n0de joined #gluster
05:51 Philambdo joined #gluster
05:51 dusmant joined #gluster
05:56 rjoseph joined #gluster
06:02 mbukatov joined #gluster
06:06 LebedevRI joined #gluster
06:22 shubhendu joined #gluster
06:34 jason_ joined #gluster
06:36 ekuric joined #gluster
06:39 vpshastry joined #gluster
06:40 aravindavk joined #gluster
06:40 meghanam joined #gluster
06:40 meghanam_ joined #gluster
06:43 hagarth joined #gluster
06:44 vimal joined #gluster
06:45 ctria joined #gluster
06:46 jason__ joined #gluster
06:47 dusmant joined #gluster
06:50 rjoseph joined #gluster
07:03 chirino joined #gluster
07:03 eseyman joined #gluster
07:11 jason__ joined #gluster
07:13 haomaiwang joined #gluster
07:21 64MAAZSEV joined #gluster
07:23 glusterbot New news from resolvedglusterbugs: [Bug 764890] Keep code more readable and clean <https://bugzilla.redhat.com/show_bug.cgi?id=764890>
07:24 ktosiek joined #gluster
07:26 ProT-0-TypE joined #gluster
07:30 Thilam joined #gluster
07:32 fsimonce joined #gluster
07:33 glusterbot New news from newglusterbugs: [Bug 1108502] [RFE] Internalize master/slave verification (gverify) <https://bugzilla.redhat.com/show_bug.cgi?id=1108502>
07:54 liquidat joined #gluster
07:55 brad[] joined #gluster
08:07 liquidat joined #gluster
08:08 MacWinner joined #gluster
08:23 haomaiwa_ joined #gluster
08:23 jiku joined #gluster
08:26 shubhendu joined #gluster
08:32 bchilds joined #gluster
08:34 chirino_m joined #gluster
08:34 rossi_ joined #gluster
08:35 MacWinner in a production environment, is it common to need to restart gluster nodes in replica sets?  I just ran into an issue on one of my 4-node-replica2 clusters where one of the nodes was having trouble accessing the fuse mount point.. a bunch of processes got stuck waiting for I/O.. I was not even able to kill -9 the processes as root..
08:56 n0de joined #gluster
09:03 glusterbot New news from newglusterbugs: [Bug 1040355] NT ACL : User is able to change the ownership of folder <https://bugzilla.redhat.com/show_bug.cgi?id=1040355>
09:18 vpshastry joined #gluster
09:20 dusmant joined #gluster
09:21 meghanam joined #gluster
09:21 meghanam_ joined #gluster
09:24 Thilam Hi, is there a way to easily view the brick usage on a volume?
09:32 Thilam found
09:32 Thilam gluster volume status <volume> detail
09:32 Thilam for those who might be interested
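
A minimal sketch of the check Thilam lands on, assuming a hypothetical volume name myvol and hypothetical brick paths; gluster volume status ... detail reports per-brick capacity and free space, and plain df on the brick mounts is a quick cross-check:

    # Per-brick disk usage, free space, inodes, etc. as reported by glusterd
    gluster volume status myvol detail

    # Cross-check directly on each server against the brick mount points
    df -h /data/brick1 /data/brick2
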
09:33 jmarley joined #gluster
09:33 jmarley joined #gluster
09:38 jag3773 joined #gluster
09:58 TvL2386 joined #gluster
10:17 deepakcs joined #gluster
10:18 prasanthp joined #gluster
10:23 RameshN joined #gluster
10:24 ndarshan joined #gluster
10:26 haomaiwa_ joined #gluster
10:26 bala joined #gluster
10:37 gildub joined #gluster
10:52 edward1 joined #gluster
10:59 harish_ joined #gluster
11:03 haomaiwa_ joined #gluster
11:04 itisravi joined #gluster
11:19 ndarshan joined #gluster
11:26 RameshN joined #gluster
11:26 bala joined #gluster
11:27 dusmant joined #gluster
11:34 ppai joined #gluster
11:36 Slashman joined #gluster
11:37 rjoseph joined #gluster
11:40 prasanthp joined #gluster
11:50 dusmant joined #gluster
12:01 tdasilva joined #gluster
12:02 ekuric joined #gluster
12:05 ricky-ticky1 joined #gluster
12:11 prasanthp joined #gluster
12:15 cfeller joined #gluster
12:15 rnz joined #gluster
12:16 dusmant joined #gluster
12:28 lpabon joined #gluster
12:34 glusterbot New news from newglusterbugs: [Bug 1070685] glusterfs ipv6 functionality not working <https://bugzilla.redhat.com/show_bug.cgi?id=1070685> || [Bug 1108669] RHEL 5.8 mount fails "cannot open /dev/fuse" <https://bugzilla.redhat.com/show_bug.cgi?id=1108669>
12:37 sroy joined #gluster
12:48 ndk joined #gluster
12:56 firemanxbr joined #gluster
12:58 haomaiwang joined #gluster
12:59 haomai___ joined #gluster
12:59 shyam joined #gluster
13:01 julim joined #gluster
13:02 DV joined #gluster
13:03 DV__ joined #gluster
13:03 ctria joined #gluster
13:03 Ark joined #gluster
13:06 jskinner joined #gluster
13:08 japuzzo joined #gluster
13:08 hagarth joined #gluster
13:19 lalatenduM joined #gluster
13:21 kdhananjay joined #gluster
13:27 shubhendu joined #gluster
13:32 davinder8 joined #gluster
13:34 glusterbot New news from newglusterbugs: [Bug 1103756] inode lru limit reconfigure option does not actually change the lru list of the inode tabke <https://bugzilla.redhat.com/show_bug.cgi?id=1103756>
13:37 kanagaraj joined #gluster
13:39 jmarley joined #gluster
13:40 brad_mssw joined #gluster
13:55 RameshN joined #gluster
13:55 haomaiwang joined #gluster
14:05 calum_ joined #gluster
14:19 shyam joined #gluster
14:23 haomaiwang joined #gluster
14:23 ctria joined #gluster
14:25 wushudoin joined #gluster
14:29 B21956 joined #gluster
14:34 haomaiwa_ joined #gluster
14:39 jcsp joined #gluster
14:49 NCommander joined #gluster
14:55 deepakcs joined #gluster
15:02 sroy_ joined #gluster
15:03 ctria joined #gluster
15:06 Ark joined #gluster
15:07 davinder8 joined #gluster
15:09 _dist joined #gluster
15:13 Alex joined #gluster
15:14 Alex Hello all. I have an odd issue where under high-ish load, my Gluster brick process seems to have lots of file handles (to the same file) left open. The setup I have is two nodes, each acting as both server and client, and 10 bricks on each. In the event of high load for a file, I migrate it away from gluster automagically with nginx slowfs cache - so I would hence expect the file handles from the brick proc to the file to go away too, but they don't.
15:15 Alex I can also see that there's nothing using the file on the gluster mount according to fuser, and only the gluster brick process on the underlying brick file
15:17 Alex and by 'many file handles', I mean ~40k and ~20k to the top two offenders :)
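
A rough sketch of how a descriptor pile-up like the one Alex describes can be measured, assuming a hypothetical brick daemon PID; this only counts what is open per target file, it says nothing about why the handles are not released:

    # Find the brick export daemon (glusterfsd) serving the brick in question
    ps -C glusterfsd -o pid,args

    # Count that PID's open descriptors, grouped by target path
    ls -l /proc/<brick_pid>/fd | awk '{print $NF}' | sort | uniq -c | sort -rn | head
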
15:21 theron joined #gluster
15:21 sputnik13 joined #gluster
15:22 theron joined #gluster
15:22 davinder8 joined #gluster
15:26 davinder9 joined #gluster
15:30 davinder10 joined #gluster
15:31 jobewan joined #gluster
15:34 ndk joined #gluster
15:35 hchiramm__ joined #gluster
15:36 bet_ joined #gluster
15:46 semiosis JoeJulian: how'd it go with the patched debs?
15:50 bene2 joined #gluster
15:53 hchiramm__ joined #gluster
15:56 Chewi joined #gluster
15:56 mortuar joined #gluster
15:57 Chewi overclk: gave pre-syncing a shot today. on first attempt, realised I hadn't rsync'd xattrs. d'oh. now I have but I'm not sure if it's working out. still in hybrid crawl mode at the moment and getting errors on the slave like this.
15:57 Chewi [2014-06-12 15:47:12.455538] W [client-rpc-fops.c:256:client3_3_mknod_cbk] 0-gv0-client-0: remote operation failed: File exists. Path: <gfid:231f8607-47dc-495b-a5cc-58d9b1a3ec9c>/com_3415.htm
15:57 Chewi [2014-06-12 15:47:12.455589] W [fuse-bridge.c:1206:fuse_err_cbk] 0-glusterfs-fuse: 26047: SETXATTR() /.gfid/231f8607-47dc-495b-a5cc-58d9b1a3ec9c => -1 (Invalid argument)
15:57 Chewi looks like it's gradually doing that for every file
15:58 ctria joined #gluster
15:59 dusmant joined #gluster
16:06 dusmantkp_ joined #gluster
16:07 primechuck joined #gluster
16:07 rotbeard joined #gluster
16:09 jag3773 joined #gluster
16:09 dusmant joined #gluster
16:10 Matthaeus joined #gluster
16:13 dusmant joined #gluster
16:18 sjm joined #gluster
16:19 nikk_ joined #gluster
16:20 MacWinner joined #gluster
16:20 bala joined #gluster
16:21 MacWinner Hi guys, I ran into this issue last night:  https://access.redhat.com/site/solutions/254343  where my glusterfs client was hanging and caused my apache server and some other processes to hang.. i wasn't even able to kill the processes with kill -9 as root
16:21 glusterbot Title: GlusterFS Native client mounts hang a RHEL-6 server and print stacktraces in the logs - Red Hat Customer Portal (at access.redhat.com)
16:21 MacWinner the link seems to indicate there is a solution, but I don't see it.. do I need to pay redhat for it?
16:22 zerick joined #gluster
16:22 semiosis @thp
16:22 glusterbot semiosis: There's an issue with khugepaged and it's interaction with userspace filesystems. Try echo never> /sys/kernel/mm/redhat_transparent_hugepage/enabled . See https://bugzilla.redhat.com/show_bug.cgi?id=GLUSTER-3232 for more information.
16:22 semiosis old bug 3232
16:22 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=3232 high, medium, ---, gafton, CLOSED CURRENTRELEASE, ide_info is missing from kernel-pcmcia-cs
16:23 semiosis hmm that doesnt look like the right link
16:23 semiosis Bug 764964
16:23 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=764964 low, medium, ---, aavati, CLOSED CURRENTRELEASE, deadlock related to transparent hugepage migration in kernels >= 2.6.32
16:23 semiosis there's the new BZ ID
16:23 MacWinner thank you so much!
16:23 semiosis @forget thp
16:23 glusterbot semiosis: The operation succeeded.
16:23 semiosis @learn thp as There's an issue with khugepaged and it's interaction with userspace filesystems. Try echo never> /sys/kernel/mm/redhat_transparent_hugepage/enabled . See https://bugzilla.redhat.com:443/show_bug.cgi?id=764964 for more information.
16:23 glusterbot semiosis: The operation succeeded.
16:23 glusterbot Bug 764964: low, medium, ---, aavati, CLOSED CURRENTRELEASE, deadlock related to transparent hugepage migration in kernels >= 2.6.32
16:23 semiosis MacWinner: yw
16:23 semiosis @thp
16:23 glusterbot semiosis: There's an issue with khugepaged and it's interaction with userspace filesystems. Try echo never> /sys/kernel/mm/redhat_transparent_hugepage/enabled . See https://bugzilla.redhat.com:443/show_bug.cgi?id=764964 for more information.
16:24 semiosis MacWinner: what version are you using?  the bug says this was fixed in 3.4.0.
16:25 Mo_ joined #gluster
16:25 MacWinner i should be on the latest and greatest.. let me check
16:25 semiosis hmm
16:25 semiosis i just tossed out the thp link because that page you gave said something about khugepaged
16:26 semiosis i'm not sure it's related, but if you try that fix and it works, then maybe this issue came back
16:26 MacWinner 3.5.0-2.el6
16:26 MacWinner i'm not sure how to reproduce the issue.. it appeared late night
16:27 MacWinner the error in my log looks almost like the one on the redhat link I posted.. not sure if your bug link is the same thing?
16:30 semiosis no idea
16:30 semiosis never seen that red hat solutions web site before, and can't see the solution either
16:31 MacWinner haha.. it's starting to feel like oracle
16:33 ramteid joined #gluster
16:35 frankenspanker joined #gluster
16:36 MacWinner trying to see the impact of doing "echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled"
16:37 MacWinner any experience with this specific setting?
16:38 semiosis https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/s-memory-transhuge.html
16:38 glusterbot Title: 5.2. Huge Pages and Transparent Huge Pages (at access.redhat.com)
16:38 semiosis sounds to me like the worst thing would be degraded performance
16:39 JoeJulian oldbug 3232
16:39 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=764964 low, medium, ---, aavati, CLOSED CURRENTRELEASE, deadlock related to transparent hugepage migration in kernels >= 2.6.32
16:39 semiosis a ha
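
A minimal sketch of applying the workaround in glusterbot's thp factoid on a RHEL 6 style kernel; the redhat_transparent_hugepage path is specific to those kernels (upstream kernels use /sys/kernel/mm/transparent_hugepage/enabled), and the rc.local persistence step is an assumption here, not something from the discussion above:

    # Disable THP on the running kernel (takes effect immediately, lost on reboot)
    echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled

    # One way to make it persistent: re-apply the setting at boot
    echo 'echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled' >> /etc/rc.local
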
16:40 semiosis JoeJulian: how'd it go with the patched debs?
16:40 jag3773 joined #gluster
16:41 GabrieleV joined #gluster
16:42 JoeJulian Testing in staging today.
16:42 semiosis niec
16:42 semiosis s/ec/ce/
16:42 glusterbot What semiosis meant to say was: nice
16:42 JoeJulian which part? The debs, or the fact that I actually have a staging environment? :D
16:43 semiosis all of the above
16:44 chirino joined #gluster
16:45 dusmant joined #gluster
16:46 ninkotech joined #gluster
16:46 ninkotech_ joined #gluster
16:47 MacWinner hi JoeJulian , I noticed that you were the original reporter of the bug.. I just made a pastebin of the kernel log in /var/log/messages here: http://pastebin.com/aVMT0ULp
16:47 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
16:47 MacWinner JoeJulian, does this bug look the same, or am I looking in the wrong direction?
16:48 JoeJulian Nope. Totally different, sorry.
16:49 JoeJulian MacWinner: Did you file a bug report with that already?
16:49 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
16:49 JoeJulian ... and do you have a client log?
16:49 Chewi overclk: good news. early days but looks like it's worked. eventually switched to changelog mode. only gripe was I have to make the volume id on the slave the same as the master. hopefully that's not a problem.
16:50 JoeJulian fyi, when there is a deadlock in fuse, any application trying to access the fuse mount will zombie. You can, however, umount -f (twice) and get them dead.
16:50 MacWinner JoeJulian, haven't yet.. happened at 3am last night, so was tired.. but can file a bug now.  I have all the log files
16:50 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
16:51 MacWinner JoeJulian, ahh.. so umount -f  twice will actually work even if kill -9 doesnt?
16:52 JoeJulian Would be good. People like Avati live and breathe race conditions and can see patterns where all I see is gibberish.
16:52 davinder10 joined #gluster
16:52 JoeJulian yes
16:53 MacWinner cool.. filing bug now.. thanks for the tips too
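
A rough sketch of the recovery JoeJulian describes for a wedged FUSE mount, assuming a hypothetical mount point /mnt/gluster; the lazy unmount at the end is a common fallback, not part of his advice:

    # Processes stuck on the dead mount ignore kill -9; detach the mount instead
    umount -f /mnt/gluster
    umount -f /mnt/gluster   # the second forced umount is what finally clears it

    # Fallback: lazily detach the mountpoint even while references remain
    umount -l /mnt/gluster
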
16:53 mjsmith2 joined #gluster
16:53 churnd what's the expected behavior if you shut down glusterd when someone's cwd is on the volume?
16:54 bala joined #gluster
16:57 LebedevRI joined #gluster
16:58 semiosis churnd: stopping/restarting glusterd should not affect existing mounts
16:58 semiosis ,,(processes)
16:58 glusterbot The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal).
16:58 semiosis the brick export daemons should continue running
16:58 churnd well also fsd
16:58 JoeJulian If you stop the volume or kill glusterfsd that mount will return ENOTCONN ("Transport endpoint is not connected").
16:58 churnd was trying to get the volume unmounted
16:58 [o__o] joined #gluster
16:58 churnd JoeJulian yeah I saw that
16:59 Matthaeus joined #gluster
16:59 churnd i did "service glusterfsd stop" then "service glusterd stop"
16:59 JoeJulian To get the volume unmounted, use umount.
16:59 churnd while it was mounted
17:00 JoeJulian If you had a replicated volume across different servers, that would have worked just fine.
17:00 churnd yeah it is replicated
17:00 churnd stopped it on both
17:00 JoeJulian then yes, that's expected behavior.
17:01 JoeJulian Is that answering the actual question you're trying to solve?
17:01 mortuar joined #gluster
17:01 churnd on the first, it hung up hard after a few mins
17:01 churnd i still don't know why
17:01 churnd on the second, i noticed after stopping those init processes, some "glusterfs" processes were still running
17:02 JoeJulian Yes, nfs and glustershd (self heal daemon) will continue to run.
17:02 churnd i'm assuming the first one hung up hard because of some open processes on the filesystem
17:02 kumar joined #gluster
17:03 churnd that i had unmounted
17:04 shyam joined #gluster
17:05 glusterbot New news from newglusterbugs: [Bug 1108850] GlusterFS hangs which causes zombie processes <https://bugzilla.redhat.com/show_bug.cgi?id=1108850>
17:09 churnd got a bunch of these in the kernel logs:  Jun 12 09:38:21 serv1 kernel: Out of memory: Kill process 8471 (bash) score 4 or sacrifice child
17:11 churnd https://gist.github.com/churnd/c1d2c8093fded2ed55b7
17:11 glusterbot Title: gluster_crash (at gist.github.com)
17:12 zaitcev joined #gluster
17:15 churnd mostly bash related errors... so i'm assuming it didn't like that user's $CWD being on the filesystem when it unmounted it
17:15 JoeJulian That wouldn't cause oom errors.
17:15 ctria joined #gluster
17:15 JoeJulian Running out of memory causes those.
17:17 LebedevRI joined #gluster
17:18 churnd memory usage was fine until i tried to stop gluster
17:21 JoeJulian Hmm, that's interesting.
17:22 churnd but then again i'm not very familiar with what happens when you force unmount a volume that has open file handles
17:41 ProT-0-TypE joined #gluster
17:47 Matthaeus joined #gluster
17:48 andreask joined #gluster
18:02 MacWinner any high level pointers to performance tuning parameters I should be looking at if my use case is as follows:  XFS bricks, 4-nodes-replica2.  Lots of image files that are about 50k-100k each, with files up to 5-10M also common
18:03 MacWinner basically a bunch of PDFs, and their associated rendered pages as PNGs
18:03 MacWinner more read-heavy.. the writing is sporadic..
18:05 Ark joined #gluster
18:07 dusmant joined #gluster
18:14 semiosis MacWinner: self heal alg full will probably be better for your use case
18:15 plarsen joined #gluster
18:16 mortuar joined #gluster
18:18 tdasilva left #gluster
18:21 Matthaeus joined #gluster
18:27 JoeJulian MacWinner: and deadline or noop scheduler.
18:29 qdk_ joined #gluster
18:30 MacWinner semiosis, JoeJulian , thanks... I'll look those up
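
A sketch of both suggestions, assuming a hypothetical volume name myvol and brick device sdb; cluster.data-self-heal-algorithm is the volume option behind "self heal alg full", and the scheduler change is per block device and does not survive a reboot:

    # Self-heal by copying whole files rather than computing diffs
    gluster volume set myvol cluster.data-self-heal-algorithm full

    # Switch the brick device's I/O scheduler (use noop instead of deadline if preferred)
    echo deadline > /sys/block/sdb/queue/scheduler
    cat /sys/block/sdb/queue/scheduler   # the active scheduler is shown in brackets
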
18:35 glusterbot New news from newglusterbugs: [Bug 1108887] [USS]: snapview-server should get the name of the entry by itself from path instead of depending on resolver <https://bugzilla.redhat.com/show_bug.cgi?id=1108887>
18:44 jskinner_ joined #gluster
18:45 chirino_m joined #gluster
18:48 MacWinner i am combing through the logs to get some hints on why my hang issue occurred.. even though all my components are upgraded to 3.5, I see this in my /var/log/glusterfs/glustershd.log:  [client-handshake.c:1659:select_server_supported_programs] 0-storage-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
18:48 MacWinner not sure why it's reporting GlusterFS 3.3?
18:48 MacWinner is it just a log bug, or some version that is completely unrelated to the main package versions?
18:58 JoeJulian MacWinner: Unrelated
18:59 MacWinner in my brick log, i get this error reported twice every 10 minutes:  [2014-06-12 18:51:23.753566] E [index.c:267:check_delete_stale_index_file] 0-storage-index: Base index is not createdunder index/base_indices_holder
19:00 MacWinner does this look very abnormal?
19:00 theron joined #gluster
19:01 jrcresawn joined #gluster
19:05 rotbeard joined #gluster
19:05 uebera|| joined #gluster
19:06 capri joined #gluster
19:08 Matthaeus joined #gluster
19:12 jag3773 joined #gluster
19:18 MacWinner is it wise to put my gluster nodes on a periodic reboot cycle?  ie, are there known memory/filehandle leaks (however minor) in any of the components?
19:21 JoeJulian MacWinner: There's one in 3.5.0. Will be fixed in 3.5.1beta3.
19:22 MacWinner thanks.. so would you recommend the restart cycle then?  or is it very minor?
19:23 JoeJulian I'm still running 3.4.
19:23 JoeJulian Apparently, though, the leak must be small enough that I haven't seen anyone complaining yet.
19:23 MacWinner got it, thanks.. i need to be less upgrade happy :)
19:28 rossi_ joined #gluster
19:28 semiosis i generally avoid .0 releases for production
19:29 semiosis usually happy to run latest dev build on my workstation though :)
19:32 bgpepi joined #gluster
19:33 Matthaeus joined #gluster
19:40 zerick joined #gluster
19:53 andreask joined #gluster
19:53 mortuar_ joined #gluster
19:58 andreask joined #gluster
19:59 cfeller a few questions:
19:59 cfeller 1) I noticed that http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/ still points at 3.4.3
19:59 glusterbot Title: Index of /pub/gluster/glusterfs/3.4/LATEST (at download.gluster.org)
19:59 cfeller 2) there have been no Fedora or EL 3.4.4 rpms built yet (but SuSE got rpms built 3 days ago).
19:59 mortuar joined #gluster
19:59 cfeller 3) Was there an issue with 3.4.4, hence the delay, or are there other things going on?  Should I wait or just modify the .spec file for 3.4.3 to build 3.4.4?
20:01 cfeller Also, http://www.gluster.org/download/, still mentions 3.4.3 as being the latest 3.4, yet 3.4.4 was announced days ago.
20:04 zerick joined #gluster
20:07 systemonkey joined #gluster
20:07 siel joined #gluster
20:23 MacWinner semiosis, I generally do too. though recently i've turned neurotic about running on the latest stuff for no really good reason.  Gonna need to chill on that now that we are going more into production.
20:24 MacWinner semiosis, any rough guesses on when the 3.5.1 release is due?
20:24 semiosis i imagine if you asked the devs the answer would be something like "Real Soon Now"
20:26 MacWinner nice.. most dev's would say f-off ;)
20:35 andreask joined #gluster
20:50 jobewan joined #gluster
20:51 eryc joined #gluster
20:51 eryc joined #gluster
21:14 cfeller answering one of my questions from earlier, it turns out there is a spec file in the src tarball, so for the impatient (such as myself), I can just "rpmbuild -ta glusterfs-3.4.4.tar.gz".
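
A minimal sketch of what cfeller describes, assuming an EL6 build host with rpm-build and the GlusterFS build dependencies already available; exact dependency package names vary by release, so they are not spelled out here:

    # Build binary RPMs straight from the release tarball's bundled spec file
    rpmbuild -ta glusterfs-3.4.4.tar.gz

    # The resulting packages land under the default rpmbuild tree
    ls ~/rpmbuild/RPMS/x86_64/
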
21:18 chirino joined #gluster
21:25 cfeller Ah... so is this: http://www.gluster.org/pipermail/gluster-users/2014-June/040574.html the reason why official 3.4.4 RPMs haven't been built?  Because a 3.4.4-2 bugfix is imminent?
21:25 glusterbot Title: [Gluster-users] Reminder: Weekly Gluster Community meeting is in 27 mins (at www.gluster.org)
21:34 Matthaeus joined #gluster
21:41 sputnik13 joined #gluster
21:48 chirino_m joined #gluster
21:54 ctria joined #gluster
22:06 sputnik13 joined #gluster
22:06 sputnik13 joined #gluster
22:10 elico joined #gluster
22:11 jbd1 joined #gluster
22:24 bgpepi joined #gluster
22:37 a2 joined #gluster
22:58 fidevo joined #gluster
22:59 calum_ joined #gluster
23:12 gildub joined #gluster
23:38 Ark joined #gluster
23:56 sputnik13 joined #gluster
