IRC log for #gluster, 2013-02-07


All times shown according to UTC.

Time Nick Message
00:11 Humble joined #gluster
00:48 luckybambu joined #gluster
01:19 Humble joined #gluster
01:25 semiosis @qa releases
01:26 glusterbot semiosis: The QA releases are available at http://bits.gluster.com/pub/gluster/glusterfs/ -- RPMs in the version folders and source archives for all versions under src/
01:28 _br_ joined #gluster
01:30 aliguori joined #gluster
01:34 _br_ joined #gluster
01:49 Humble joined #gluster
01:54 nueces joined #gluster
01:56 kevein joined #gluster
02:15 Humble joined #gluster
02:17 jvyas joined #gluster
02:18 jayunit100 hi gluster
02:18 jayunit100 trying to detach old peers, getting this error: " One of the peers is probably down. Check with 'peer status'. "
02:21 raven-np joined #gluster
02:21 JoeJulian That suggests to me that one of the peers is not running glusterd.
02:22 JoeJulian Did you check with "gluster peer status"?
02:24 semiosis doing the ubuntu 3.4.0 packages
02:25 kevein joined #gluster
02:25 JoeJulian cool
02:25 JoeJulian I've been thinking of trying to put together some test systems to see what I can break.
02:30 jag3773 joined #gluster
02:35 jayunit100 yeah gluster peer status says "detached"
02:35 jayunit100 is that ok ? I just want to completely eliminate them.
02:35 jayunit100 i'd like to remove all peers and start over. is that possible?  some of the peers are "bad" (i.e. detached, disconnected, etc.)
02:39 JoeJulian jayunit100: Try adding the word "force" to the end of the command. If that doesn't work, and you're ready to wipe your entire configuration, remove /var/lib/glusterd/* from all your servers.
02:42 jayunit100 wow.
02:44 JoeJulian Well you did say you wanted to start over...
02:45 jayunit100 so - what do i lose when i wipe /var/lib/glusterd ?  Will gluster still "work" ?
02:45 JoeJulian Yep, that's just all the configuration state.
02:45 jayunit100 i dont have any data or anything, but im using RHS, so - i dont want to kill some RHS stuff
02:45 JoeJulian Still just configuration "state".
02:45 jayunit100 oh ok.  so nothing RHS specific :)
02:45 JoeJulian (to the best of my knowledge)
02:46 JoeJulian There's people that get paid to answer those kinds of questions though. :)
02:46 * JoeJulian is a ,,(volunteer)
02:46 glusterbot A person who voluntarily undertakes or expresses a willingness to undertake a service: as one who renders a service or takes part in a transaction while having no legal concern or interest or receiving valuable consideration.
02:46 jayunit100 JoeJulian, ha im one of them
02:46 JoeJulian hehe
02:47 jayunit100 or at least... i thought i was :)
02:47 JoeJulian Oh, I know your name...
02:47 * jayunit100 works for redhat but is not a gluster expert
02:47 JoeJulian why do I know your name...
02:48 jayunit100 hmmm... well.. i got a hadoop blog jayunit100.blogspot.com that some bigdata people know about.  i also work at redhat.
02:48 jayunit100 I might know you from redhat? westford?
02:48 JoeJulian Ah, actually it's probably the mailing list.
02:48 jayunit100 yeah i do harass quite a bit on there
02:49 JoeJulian Speaking of which... I'm surprised my last email to gluster-users/Stephan didn't get more of a reaction...
02:49 JoeJulian Happily surprised, but still surprised.
02:50 jayunit100 ah i got it !
02:50 jayunit100 /etc/init.d/glusterd star
02:51 JoeJulian You do know the ,,(processes), I assume.
02:51 glusterbot the GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/hJBvL for more information.
02:51 jayunit100 ah i got it ! rm -rf /var/lib/glusterd/peers/
02:51 jayunit100 so you, first, rm -rf the peers/
02:51 jayunit100 then, after, you restart the gluster service.
02:51 JoeJulian Yep
02:52 JoeJulian I would probably stop the service on all the servers before I ever mess with anything in /var/lib/glusterd
02:52 * JoeJulian lies... he likes to live on the edge...
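
Pulling the exchange above together: the reset jayunit100 arrives at is to stop glusterd, remove the peer records, and start the service again. A minimal sketch, assuming the default /var/lib/glusterd location and a SysV-style init script as in jayunit100's paste; run it on every server, and remember it throws peer configuration away (rm -rf /var/lib/glusterd/* would wipe volume definitions as well):

    # stop the management daemon before touching its state directory
    /etc/init.d/glusterd stop
    # drop only the peer records, leaving volume definitions in place
    rm -rf /var/lib/glusterd/peers/
    # bring glusterd back up with an empty peer list
    /etc/init.d/glusterd start
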
02:52 overclk joined #gluster
02:55 semiosis building glusterfs 3.4.0 for ubuntu was easy, getting the package to include the upstart fix for mounting from localhost at boot is very difficult
02:55 semiosis :(
02:55 semiosis no errors, it just doesnt do it
02:56 JoeJulian :/
02:57 JoeJulian "bioinformatics phd" and you work in tech support?
02:57 __Bryan__ joined #gluster
02:58 glusterbot New news from resolvedglusterbugs: [Bug 908277] Poor performance with gluster volume set command execution <http://goo.gl/lC62E>
03:00 jayunit100 hahaha
03:00 semiosis turns out my 3.3 ppa packages dont install the upstart job either :(
03:01 semiosis meh
03:01 jayunit100 JoeJulian, nope i dont work in tech support.  :) more r&d type stuff i guess.
03:01 JoeJulian Oh, then it wasn't just some fluke based on the build I was using.
03:01 semiosis the glusterfs-server upstart job gets installed by default, but the additional one to block mounting until glusterd is running isn't installed
03:01 JoeJulian I would have reported that, but I thought it was on purpose.
03:02 semiosis i'm surprised no one (else) reported it either
03:02 JoeJulian Come to think of it.. I might have tried to ask you about it...
03:03 semiosis well it worked at some point
03:04 semiosis going back to check the 3.2.5 package in universe, that one *has* to work!
03:04 semiosis and it does
03:05 semiosis now where did things go wrong
03:05 JoeJulian http://irclog.perlgeek.de/gluster/2013-01-07#i_6304711
03:05 glusterbot <http://goo.gl/fzxn5> (at irclog.perlgeek.de)
03:06 semiosis guess i missed that memo
03:06 JoeJulian guess
03:06 semiosis :O
03:08 Humble joined #gluster
03:11 semiosis so thats when you wrote that gist... that was new to me the other day, missed it the first time around
03:15 semiosis ok fixed the 3.3 package, uploading precise & quantal packages to launchpad ppa now
03:18 bharata joined #gluster
03:26 bulde joined #gluster
03:30 Humble joined #gluster
03:39 semiosis progress :)
03:41 ultrabizweb joined #gluster
03:53 sripathi joined #gluster
04:01 cmedeiros joined #gluster
04:01 cmedeiros badone, ping
04:01 badone cmedeiros: hey buddy
04:01 badone cmedeiros: how you doing?
04:02 cmedeiros badone, very well, thanks
04:02 badone cool
04:02 cmedeiros badone, you?
04:02 badone all good here man
04:02 cmedeiros badone, cool cool
04:03 badone cmedeiros: big gluster fan? Or just lurking? :)
04:03 cmedeiros badone, just lurking... polenta told me you would be here
04:04 badone cmedeiros: ahhh
04:04 polenta cmedeiros, badone , hehehe
04:04 polenta I didn't
04:04 polenta hahaha
04:05 badone hehe
04:05 sgowda joined #gluster
04:06 * badone waves to sgowda
04:07 badone carneassada: you'll always find me lurking somewhere on freenode
04:09 carneassada badone, sure
04:12 pai joined #gluster
04:20 squeak joined #gluster
04:21 cmedeiros joined #gluster
04:22 vigia joined #gluster
04:23 johnmorr joined #gluster
04:27 bala1 joined #gluster
04:33 sripathi1 joined #gluster
04:36 sripathi joined #gluster
04:38 deepakcs joined #gluster
04:43 daddmac2 joined #gluster
04:53 daddmac2 left #gluster
05:04 rastar joined #gluster
05:05 vpshastry joined #gluster
05:06 shylesh joined #gluster
05:07 ramkrsna joined #gluster
05:09 ramkrsna joined #gluster
05:09 ramkrsna joined #gluster
05:12 lala joined #gluster
05:26 mohankumar joined #gluster
05:33 lala joined #gluster
05:49 ramkrsna joined #gluster
05:50 ultrabizweb joined #gluster
06:05 raghu joined #gluster
06:07 sripathi1 joined #gluster
06:18 samppah deepakcs: ping
06:22 deepakcs samppah, hi
06:22 samppah how's going?
06:23 deepakcs samppah, good
06:25 samppah deepakcs: that's good to hear :) do you have any more information available about block device translator?
06:26 samppah i was thinking about trying it out too but i guess i need more information than gluster bd help provides
06:28 JoeJulian Looks like it's in master. I suppose that probably means it's in the current qa release...
06:28 JoeJulian ./storage/bd_map
06:28 deepakcs samppah, mohankumar is the right person
06:28 JoeJulian xlators/storage/bd_map
06:28 samppah deepakcs: okay thanks.. i'll bug him if i need to :)
06:28 deepakcs samppah, wc
06:28 samppah JoeJulian: yeah, it seems to be in qa
06:28 mohankumar samppah: feel free to disturb me :-)
06:28 samppah great :)
06:29 * JoeJulian was already disturbed...
06:29 samppah is is currently possible to create replicated volume with it? gluster bd help shows only "bd create <volname>:<bd> <size>"
06:31 mohankumar samppah: no, current limitation is only one brick for BD volume :-/
06:32 samppah mohankumar: ahh, do you know if that's going to make into 3.4?
06:32 mohankumar samppah: multi brick support for BD xlator will not be available in 3.4
06:33 samppah mohankumar: okay, thanks :)
06:33 sripathi joined #gluster
06:34 mohankumar np
06:39 bharata_ joined #gluster
06:47 vimal joined #gluster
06:56 kanagaraj joined #gluster
07:00 sripathi joined #gluster
07:05 sripathi joined #gluster
07:07 ramkrsna joined #gluster
07:07 tjikkun_ joined #gluster
07:18 jtux joined #gluster
07:20 joeto joined #gluster
07:32 ctria joined #gluster
07:33 raven-np1 joined #gluster
07:40 puebele joined #gluster
07:45 raven-np joined #gluster
07:46 Nevan joined #gluster
07:47 ekuric joined #gluster
07:57 jtux joined #gluster
07:58 puebele joined #gluster
08:00 guigui1 joined #gluster
08:09 tjikkun_work joined #gluster
08:09 rgustafs joined #gluster
08:34 deepakcs joined #gluster
08:34 Staples84 joined #gluster
08:36 Humble joined #gluster
08:40 dobber joined #gluster
09:05 sgowda joined #gluster
09:07 jiffe1 joined #gluster
09:30 sripathi1 joined #gluster
09:38 Ryan_Lane joined #gluster
09:49 manik joined #gluster
09:50 bauruine joined #gluster
09:50 sripathi joined #gluster
09:52 _benoit_ joined #gluster
10:04 raskas_ joined #gluster
10:04 raskas_ left #gluster
10:05 raskas_ joined #gluster
10:05 raskas_ Hi all!
10:05 raskas_ I'm testing my gluster setup
10:08 _br_ joined #gluster
10:11 raskas_ I noticed that during a replication heal my file is blocked
10:11 rgustafs joined #gluster
10:11 raskas_ so I can not read from or write to the file that is currently healed.
10:11 raskas_ is that normal behaviour?
10:17 _br_ joined #gluster
10:20 ndevos raskas_: I think that should not happen anymore with version 3.4...
10:20 raskas_ but it is normal in version 3.3.1?
10:21 raskas_ ndevos, when will 3.4 be released?
10:21 raskas_ and what else will be fixed in 3.4?
10:21 ndevos raskas_: alpha was released yesreterday
10:21 ndevos *yesterday
10:22 raven-np joined #gluster
10:22 _br_ joined #gluster
10:23 ndevos http://www.gluster.org/community/documentation/index.php/Planning34 contains a list of new features
10:23 glusterbot <http://goo.gl/4yWrh> (at www.gluster.org)
10:26 raskas_ ok thx!
10:28 disarone joined #gluster
10:30 sashko joined #gluster
10:31 raskas joined #gluster
10:34 venkat joined #gluster
10:34 venkat session is about to start.
10:52 bala1 joined #gluster
10:59 xymox joined #gluster
11:02 rastar joined #gluster
11:20 venkat joined #gluster
11:25 duerF joined #gluster
11:29 bauruine joined #gluster
11:29 hagarth joined #gluster
11:32 glusterbot New news from newglusterbugs: [Bug 908708] DeprecationWarning: BaseException.message has been deprecated as of Python 2.6 (glusterfs/python/syncdaemon/syncdutils.py:175) <http://goo.gl/U3Og9>
11:33 xymox joined #gluster
11:35 rastar1 joined #gluster
11:36 venkat joined #gluster
11:37 xymox joined #gluster
11:44 xymox joined #gluster
11:53 bala1 joined #gluster
11:56 xymox joined #gluster
12:01 kkeithley1 joined #gluster
12:01 kkeithley1 left #gluster
12:01 kkeithley1 joined #gluster
12:09 ramkrsna joined #gluster
12:19 venkat joined #gluster
12:42 rgustafs joined #gluster
12:42 Staples84 joined #gluster
12:52 edward1 joined #gluster
12:53 Ryan_Lane joined #gluster
12:59 raskas when changing an option of my volume, my mountpoint hangs for some moments, is this normal?
13:23 __Bryan__ left #gluster
13:24 raven-np joined #gluster
13:31 x4rlos ?? who are you glusterbot?
13:31 raven-np1 joined #gluster
13:37 rwheeler joined #gluster
13:38 x4rlos Anyone know what kinda bot glusterbot is?
13:39 dustint joined #gluster
13:44 raven-np joined #gluster
13:56 flowouffff joined #gluster
14:03 raven-np1 joined #gluster
14:06 hybrid5121 joined #gluster
14:08 ramkrsna joined #gluster
14:09 lh joined #gluster
14:14 ndevos x4rlos: glusterbot is based on supybot iirc
14:23 dustint joined #gluster
14:25 deepakcs joined #gluster
14:26 balunasj joined #gluster
14:30 mohankumar joined #gluster
14:30 theron joined #gluster
14:34 bharata joined #gluster
14:42 x4rlos thanks. Its a good one :-)
14:45 pdurbin joined #gluster
14:53 georgeh joined #gluster
14:56 stopbit joined #gluster
15:03 ahunt joined #gluster
15:06 raven-np joined #gluster
15:10 ahunt I am running gluster 3.3.0 on a cluster of 40 machines, for storage and for Condor HA shared spool. I messed up when reinstalling the os on two brick machines and now I can't read from the spool directory. I just want to remove the directory entirely, but it hangs forever if I try through the client mount.
15:10 ahunt Any advice would be much appreciated.
15:12 johnmorr joined #gluster
15:20 raven-np joined #gluster
15:21 vpshastry joined #gluster
15:23 plarsen joined #gluster
15:24 bugs_ joined #gluster
15:26 Humble joined #gluster
15:28 wushudoin joined #gluster
15:28 vpshastry1 joined #gluster
15:31 * pdurbin wonders if johnmark is around
15:31 semiosis ahunt: possibly the ,,(ext4) bug?
15:31 glusterbot ahunt: Read about the ext4 problem at http://goo.gl/PEBQU
15:33 glusterbot New news from newglusterbugs: [Bug 905933] GlusterFS 3.3.1: NFS Too many levels of symbolic links/duplicate cookie <http://goo.gl/YA2vM>
15:36 luckybambu joined #gluster
15:37 luckybambu joined #gluster
15:37 fghaas joined #gluster
15:37 fghaas left #gluster
15:38 aliguori joined #gluster
15:40 sjoeboo joined #gluster
15:46 ahunt I am using ext4, but my kernel is 2.6.18-308.24.1.el5
15:47 ahunt thank you very much for the reply
15:48 ahunt I don't think that is the problem.
15:52 nueces joined #gluster
15:54 raven-np joined #gluster
15:56 balunasj joined #gluster
16:01 16WAAD817 joined #gluster
16:07 joshcarter joined #gluster
16:13 joshcarter anyone here familiar with how protocol/server lookup works on gluster 3.3?
16:14 joshcarter I've got a custom xlator that, on porting to 3.3, crashes the server because of changes in lookup's behavior.
16:15 joshcarter I've debugged the issue (it has to do with looking up a loc which already has the GFID filled in), I just don't quite understand the *why* of protocol/server's behavior.
16:25 hagarth joined #gluster
16:30 portante joined #gluster
16:31 rastar joined #gluster
16:33 vpshastry1 left #gluster
16:40 amccloud joined #gluster
16:48 semiosis joshcarter: you may have better luck with that kind of inquiry in #gluster-dev
16:48 joshcarter semiosis: oh, I didn't realize there was a #gluster-dev -- thanks!
16:48 semiosis ahunt: check the client log file
16:48 semiosis joshcarter: relatively new :)
16:49 semiosis ahunt: iirc there were some bugs in 3.3.0 related to clients reconnecting after brick outages, have you tried unmounting/remounting clients?
16:56 ahunt [2013-02-07 11:56:06.890780] I [afr-self-heal-entry.c:1904:afr_sh_entry_common_lookup_done] 0-sptchc-replicate-17: /lost+found: Skipping entry self-heal because of gfid absence
16:56 ahunt [2013-02-07 11:56:06.891773] I [afr-self-heal-entry.c:1904:afr_sh_entry_common_lookup_done] 0-sptchc-replicate-11: /lost+found: Skipping entry self-heal because of gfid absence
16:56 ahunt [2013-02-07 11:56:06.891960] I [afr-self-heal-entry.c:1904:afr_sh_entry_common_lookup_done] 0-sptchc-replicate-21: /lost+found: Skipping entry self-heal because of gfid absence
16:56 ahunt [2013-02-07 11:56:06.892246] I [afr-self-heal-entry.c:1904:afr_sh_entry_common_lookup_done] 0-sptchc-replicate-6: /lost+found: Skipping entry self-heal because of gfid absence
16:56 ahunt [2013-02-07 11:56:06.892283] I [afr-self-heal-entry.c:1904:afr_sh_entry_common_lookup_done] 0-sptchc-replicate-9: /lost+found: Skipping entry self-heal because of gfid absence
16:56 ahunt [2013-02-07 11:56:06.892325] I [afr-self-heal-entry.c:1904:afr_sh_entry_common_lookup_done] 0-sptchc-replicate-8: /lost+found: Skipping entry self-heal because of gfid absence
16:56 ahunt [2013-02-07 11:56:06.893059] I [afr-self-heal-entry.c:1904:afr_sh_entry_common_lookup_done] 0-sptchc-replicate-10: /lost+found: Skipping entry self-heal because of gfid absence
16:56 ahunt [2013-02-07 11:56:06.893487] I [afr-self-heal-entry.c:1904:afr_sh_entry_common_lookup_done] 0-sptchc-replicate-19: /lost+found: Skipping entry self-heal because of gfid absence
16:56 ahunt [2013-02-07 11:56:06.893707] I [afr-self-heal-entry.c:1904:afr_sh_entry_common_lookup_done] 0-sptchc-replicate-16: /lost+found: Skipping entry self-heal because of gfid absence
16:56 ahunt [2013-02-07 11:56:06.893929] I [afr-self-heal-entry.c:1904:afr_sh_entry_common_lookup_done] 0-sptchc-replicate-13: /lost+found: Skipping entry self-heal because of gfid absence
16:56 ahunt was kicked by glusterbot: message flood detected
16:57 * pdurbin swoons over glusterbot
16:57 semiosis glusterbot: awesome
16:57 glusterbot semiosis: ohhh yeeaah
16:57 pdurbin :)
16:58 samppah :O
16:58 ahunt joined #gluster
16:58 ahunt sorry for the flood
16:58 ndevos @channelstats
16:58 glusterbot ndevos: On #gluster there have been 83634 messages, containing 3725428 characters, 625562 words, 2539 smileys, and 313 frowns; 615 of those messages were ACTIONs. There have been 28421 joins, 1002 parts, 27462 quits, 11 kicks, 103 mode changes, and 5 topic changes. There are currently 187 users and the channel has peaked at 203 users.
16:58 semiosis ahunt: please use pastie.org or similar for multiline pastes, thx
16:58 ahunt I have tried remounting, restarting the glusterd on all servers
17:00 ahunt http://pastie.org/6088440
17:00 glusterbot Title: #6088440 - Pastie (at pastie.org)
17:00 fghaas joined #gluster
17:01 fghaas left #gluster
17:02 sashko joined #gluster
17:16 manik left #gluster
17:27 shylesh joined #gluster
17:28 lala joined #gluster
17:28 johnmark pdurbin: hello?
17:29 johnmark pdurbin: I've been around, but checking sporadically
17:30 johnmark pdurbin: and my IRC client doesn't highlight the line unless my nick appears at the very beginning
17:30 johnmark *sigh*
17:41 hagarth joined #gluster
17:41 manik joined #gluster
17:43 pdurbin johnmark: any advice for me on how to run a meeting over IRC? http://shibboleth.net/pipermail/users/2013-February/008029.html
17:43 glusterbot <http://goo.gl/TLntR> (at shibboleth.net)
17:44 bluefoxxx joined #gluster
17:44 bluefoxxx Stripe vs Distributed
17:45 bluefoxxx is it sane to assume that i.e. the backing of a Web server (many small files) or git repos should go on Distributed, while backing i.e. a major database farm or VMware farm would be better served by Striped?
17:58 Zal joined #gluster
17:59 m0zes @stripe
17:59 glusterbot m0zes: Please see http://goo.gl/5ohqd about stripe volumes.
17:59 m0zes bluefoxxx: ^^
17:59 Zal Hi all, trying to debug an application issue here, running on top of GFS. When using a replicated GFS, is replication synchronous with regard to the write operation? Or can a write operation return after one block has been written, but before the data is replicated to another block?
18:00 bluefoxxx m0zes, so essentially yse
18:00 bluefoxxx yes
18:00 semiosis bluefoxxx: stripe is not usually recommended
18:00 Zal er, GlusterFS that is (not Global FS)
18:00 johnmark pdurbin: oh, ok
18:01 bluefoxxx semiosis, the use case considered here is with large files with random read/write and potentially high concurrent access to the cluster
18:01 pdurbin johnmark: #dvn is quieter if you want to talk there
18:02 bluefoxxx it's not relevant to me yet anyway
18:02 semiosis bluefoxxx: well i guess it would be worth testing to see if stripe really delivers real world benefits.
18:02 bluefoxxx semiosis, sounds like this isn't well-explored
18:02 Zal so: Client1 writes to the filesystem, and we wait for the write operation to return successful. Then Client2 examines the filesystem. Is there a potential race condition where Client2 might not see the data that Client1 wrote?
18:02 semiosis bluefoxxx: stripe was designed for HPC workloads with lots of client machines working on (different parts of) few very large files
18:03 semiosis bluefoxxx: that doesnt exactly map to VM disk hosting
18:03 bluefoxxx semiosis, ah, that's not quite a VMware farm hosting 300 VMs.
18:03 _br_ joined #gluster
18:03 bluefoxxx semiosis, one consideration I'm making is what happens when 70% of your VM image files wind up going toward one brick
18:03 semiosis Zal: replication is synchronous
18:04 Zal semiosis, great, thank you
18:04 bluefoxxx does it start putting them elsewhere because they don't fit
18:04 semiosis bluefoxxx: distribute more
18:04 bluefoxxx heh
18:04 bluefoxxx i.e. break down the disk into 2 ext4 file systems instead of 1 big one, publish 2 bricks from one server, distribute across them
18:04 semiosis bluefoxxx: files are allocated to a distribute subvolume (could be a brick or replica set) when created, based on a hash alg.  files are distributed evenly over all the distribute subvolumes
18:05 semiosis until a time when distrib subvols are added and a rebalance is done
18:05 bluefoxxx semiosis, they're distributed based on a hash algorithm which you hope doesn't accidentally bucket many large files to few servers
18:05 semiosis but that's a very expensive operation
18:05 bluefoxxx hash algorithms are great at evenly distributing large data sets like 10 million files (should be roughly 5 and 5 with very, very little deviation)
18:06 bluefoxxx 10 files might be 6 and 4 or 3 and 7 based on bad luck
18:06 semiosis that is correct
18:06 bluefoxxx but by the same logic if you have a huge VM farm, you'll probably evenly distribute your bulky VM images anyway
18:06 _br_ joined #gluster
18:07 bluefoxxx so the argument quickly becomes self-defeating
18:07 ctria joined #gluster
18:08 bluefoxxx early on the question is "why don't you have enough storage?" and later it's more of "wow, that's extremely bad luck, God hates you"
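
A toy illustration of the point being made here (this is not gluster's actual DHT hash, just a stand-in using cksum): with a large number of files any reasonable hash spreads them almost evenly, but ten VM images can easily land 6/4 or 7/3 on two bricks purely by luck:

    # bucket ten hypothetical VM image names onto two bricks by hashing the filename
    for f in vm01.img vm02.img vm03.img vm04.img vm05.img \
             vm06.img vm07.img vm08.img vm09.img vm10.img; do
        h=$(printf '%s' "$f" | cksum | cut -d' ' -f1)
        echo "brick$(( h % 2 ))"
    done | sort | uniq -c    # counts per brick; with many more names the split evens out
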
18:10 semiosis for any given workload/use case there are better ways & worse ways to use glusterfs, there are no one size fits all solutions afaik
18:11 semiosis gotta look at your situation, come up with some options, try them out & see what works best and how long it will work
18:12 semiosis there's no substitute for a full scale test of your production worload
18:12 semiosis s/orlo/orklo/
18:12 glusterbot semiosis: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
18:12 semiosis s/orlo/orklo/
18:12 glusterbot What semiosis meant to say was: there's no substitute for a full scale test of your production workload
18:13 semiosis glusterbot: reconnect
18:13 glusterbot semiosis: Error: You don't have the owner capability. If you think that you should have this capability, be sure that you are identified before trying again. The 'whoami' command can tell you if you're identified.
18:13 semiosis glusterbot: reconnect
18:13 glusterbot semiosis: Error: You don't have the owner capability. If you think that you should have this capability, be sure that you are identified before trying again. The 'whoami' command can tell you if you're identified.
18:13 semiosis meh
18:13 semiosis i mean ,,(meh)
18:13 glusterbot I'm not happy about it either
18:13 pdurbin left #gluster
18:16 bdperkin joined #gluster
18:27 bauruine joined #gluster
18:33 glusterbot New news from newglusterbugs: [Bug 908128] READDIRPLUS support in glusterfs-fuse <http://goo.gl/DqTlR>
18:36 georgeh joined #gluster
18:44 JoeJulian /msg glusterbot reconnect
18:44 JoeJulian /msg glusterbot reconnect
18:45 glusterbot joined #gluster
18:47 sjoeboo joined #gluster
18:50 piotrektt_ joined #gluster
18:51 piotrektt_ hi. i have a question about windows client for gluster, is it already being developed?
18:52 johnmark piotrektt_: er, sort of?
18:52 pdurbin joined #gluster
18:52 johnmark we've been developing libgfapi, and we'll do some SAMBA VFS integration with the API
18:52 johnmark but that will be in 3.5, which might be out by the end of this year
18:53 pdurbin johnmark: can i just say... you're awesome: http://irclog.iq.harvard.edu/dvn/2013-02-07#i_569
18:53 glusterbot <http://goo.gl/1tNSk> (at irclog.iq.harvard.edu)
18:53 piotrektt_ in faq on gluster site, there is info about native windows client
18:53 johnmark pdurbin: yes, you can say that whenever you like! :)
18:53 pdurbin heh
18:53 johnmark piotrektt_: there have been thoughts on doing one
18:53 johnmark but never actually implemented
18:53 johnmark but once the samba integration is complete, that will be the windows client
18:53 johnmark because any client that speaks SMB will be able to connect
18:54 mkultras joined #gluster
18:54 piotrektt_ oh, so gluster will have samba in it
18:56 pdurbin left #gluster
18:57 mkultras hey there, I went through the steps to make a replicated volume with gluster 3.3 on 2 servers, i did "volume create kstorage replica 2 glusternode1:/storage glusternode2:/storage" , i tried rebalance, i dont see the files from glusternode1 on glusternode2, do i have to sync them manually first?
18:57 joshcarter semiosis: bluefoxxx: just catching up on earlier discussion. Has there been any thought towards a combined distribute+stripe xlator, which could (for example) distribute 1GB hunks of a file based on hash(filename + hunk #)?
18:58 semiosis joshcarter: not afaik
18:58 bluefoxxx not sure on the use
18:59 bluefoxxx at a point you have to accept that one solution makes sense
18:59 semiosis joshcarter: see ceph
18:59 bluefoxxx I mean, you're talking about what, striping files bigger than 1GB?
19:00 joshcarter right, allow distribute to break up large files, so it can distribute the pieces more evenly.
19:00 joshcarter semiosis: I'm aware of ceph. For my use, I need the layered design of Gluster.
19:01 semiosis joshcarter: thats an interesting idea
19:01 joshcarter anyway, it seems like bluefoxxx's questions about distribute + large files + bad luck would be very common questions.
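
To make joshcarter's suggestion concrete (purely hypothetical -- no such distribute option exists in gluster as of 3.3/3.4): place each fixed-size hunk of a big file by hashing the filename plus the hunk number, so a single large image spreads across the distribute subvolumes instead of landing on one:

    # hypothetical placement of an 8 GB file, in 1 GB hunks, across 4 distribute subvolumes
    file=bigvm.img
    hunks=8
    for i in $(seq 0 $((hunks - 1))); do
        h=$(printf '%s:%d' "$file" "$i" | cksum | cut -d' ' -f1)
        echo "hunk $i -> subvol $(( h % 4 ))"
    done
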
19:01 bluefoxxx hmm.
19:01 semiosis people with few files usually use pure replicate volumes & make huge bricks afaik
19:01 bluefoxxx hunk_size option huh
19:02 bluefoxxx I mean that's all it comes down to
19:02 a2 joshcarter, saw your msg on #gluster-dev (and responded)
19:02 bluefoxxx it's striping with a big hunk size
19:02 bluefoxxx well, okay.  It's distributed, with striping... ok you got it.
19:02 joshcarter in my case, I'm using small bricks for one tier of storage, so I may run into the same problem with large files and bad luck.
19:02 amccloud joined #gluster
19:03 joshcarter a2: sorry, I must have missed it. lemme hop back there...
19:04 joshcarter anyway, on the distribute thing: it would, unfortunately, add a pile of complexity to edge cases in distribute, e.g. when a read/write crosses the hunk boundary.
19:05 semiosis joshcarter: and if that were to happen i'd love to see load balancing between replicas per-chunk
19:05 joshcarter maybe there's an elegant way to share the split-n-combine algorithm that stripe needs to use, but I can't think of it offhand.
19:06 joshcarter maybe stripe + distribute + replicate need to merge into one huge Voltron-like super-translator. ;)
19:06 cst joined #gluster
19:06 bluefoxxx no
19:06 bluefoxxx replicate is definitely cool where it is.
19:07 bluefoxxx there needs to be an "out of space" edge case
19:08 bluefoxxx oh god.  More command-based configuration?
19:08 * bluefoxxx was hoping he could drop a configuration file on the system and it would magically work.
19:09 rwheeler joined #gluster
19:10 piotrektt_ johnmark,  so will 3.5 support direct connection with windows? (without any client) and is that solution as efficient as current gluster?
19:14 bluefoxxx are all the relevant glusterfs config files kept in one place?
19:14 semiosis bluefoxxx: use the cli
19:15 semiosis it writes out state to /var/lib/glusterd, but you should not edit those files
19:15 bluefoxxx semiosis, I'm using puppet and it analyzes the existing configuration and if it's different it takes action.
19:15 semiosis you should do everything through the command
19:15 semiosis see ,,(puppet)
19:15 glusterbot (#1) https://github.com/semiosis/puppet-gluster, or (#2) https://github.com/purpleidea/puppet-gluster
19:15 bluefoxxx purpleidea's module flatly does not work.
19:15 semiosis but keep in mind that gluster does its own config mgmt & orchestration
19:16 bluefoxxx nod
19:16 bluefoxxx I handled this in pacemaker by having crm dump the configuration to a file, which I then use against my generated config
19:16 bluefoxxx if it's different, it reloads it.  This involves stopping all resources and then loading the config, though
19:18 bluefoxxx (also purpleidea has scoping problems and is trying to make his module do everything from partitioning/formatting to installing the firewall, which is the wrong way to handle this by all refined practices)
19:20 bluefoxxx Bah, I can't find a command that returns the current GlusterFS configuration.
19:20 semiosis gluster volume info?
19:20 semiosis have you ,,(rtfm) ?
19:20 glusterbot Read the fairly-adequate manual at http://goo.gl/E3Jis
19:20 bluefoxxx I am reading the manual as we speak
19:21 bluefoxxx it's 144 pages long
19:21 semiosis and still not enough!
19:21 bluefoxxx yeah
19:21 semiosis you can interrogate the cli using help... gluster help, gluster volume help, gluster volume set help, ...
19:21 JoeJulian And there's undocumented stuff
19:21 JoeJulian like the --xml switch (which might interest you)
19:22 bluefoxxx oh god XML
19:22 semiosis you can try to interrogate JoeJulian as well, though that's less straightforward
19:22 johnmark haha!
19:22 * bluefoxxx has become a YAML fan where it's applicable
19:22 JoeJulian hehe
19:22 * johnmark listens
19:22 bluefoxxx semiosis,  I am always a pest
19:22 bluefoxxx well, not always
19:22 JoeJulian If I had another 48 hours in a day, I think I could convert the xml plugin to yaml or json (leaning toward json)
19:22 bluefoxxx I'm alternately a pest and answering questions, though usually the second part only happens after some period of the first part :)
19:23 bluefoxxx JoeJulian, not the point ;)
19:23 semiosis bbiab
19:23 bluefoxxx ok that's actually worse
19:23 * JoeJulian raises an eyebrow
19:24 bluefoxxx JoeJulian, all I need to know is how to get the complete configuration.  I can work backwards from there.  From my senses, 'gluster volume list' and 'gluster peer status' shows absolutely everything that affects the configuration?
19:24 JoeJulian "gluster volume info" and "gluster volume list"
19:24 JoeJulian er
19:24 sashko joined #gluster
19:24 JoeJulian "gluster volume info" and "gluster peer status"... mostly...
19:25 bluefoxxx ... mostly.
19:25 bluefoxxx Oh hell it's a start.
19:25 JoeJulian gluster peer status doesn't tell you about itself though.
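
For anyone trying to do the same from config management, the commands mentioned in this exchange cover most of it; --xml is the switch JoeJulian refers to and gives machine-parseable output (the volume name below is hypothetical):

    gluster volume info             # options, bricks and settings for every volume
    gluster volume info myvol       # or just one volume
    gluster volume list             # volume names only
    gluster peer status             # the other peers, as seen from this server
    gluster volume info --xml       # the same data in XML, easier to parse from a script
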
19:26 bluefoxxx Ironically I can't stand purpleidea's module but I'll probably take a similar interface direction with the defined types.
19:27 bluefoxxx JoeJulian, I could be totally asinine and make each export a peer resource and just collect the resources from puppetDB
19:27 bluefoxxx so when you add things to the cluster, they reconfigure
19:27 bluefoxxx there's a module that does that I think
19:28 bluefoxxx though, that means that I have to get the thing to tell me about itself.
19:28 purpleidea bluefoxxx: thanks for the compliments, why don't you write something constructive or submit a patch instead? using every single part of my module is optional. some people want it to do more, and some don't. stop complaining :P
19:29 bluefoxxx hah
19:29 bluefoxxx purpleidea, I reserve the right to attack every single part of your code without attacking you as a person ;)
19:29 bluefoxxx I've been flamed by Linus Torvalds a few times
19:29 purpleidea bluefoxxx: fine, but my comments about being constructive still stand. in case it's not obvious, i was paid $0 to write that puppet-gluster stuff.
19:30 purpleidea if you'd like to fund my development efforts, and you have some useful suggestions, maybe i could make it better for you
19:30 bluefoxxx I think I'll fund my own with sweat and blood
19:31 purpleidea bluefoxxx: please feel free to fork my code
19:31 bluefoxxx I tried that with camptocamp's pacemaker module
19:31 bluefoxxx Eventually i wound up tearing the whole thing out
19:32 bluefoxxx The Percona module I managed to extend though, so apparently I don't hate everybody's code.  Just almost everybody's.
19:32 purpleidea bluefoxxx: i'm not 100% awesome at remembering what everything on the internet means, but isn't that NIH?
19:32 bluefoxxx Potentially
19:32 bluefoxxx Linux itself is NIH
19:32 purpleidea bluefoxxx: still waiting for you to say something constructive. you know
19:32 Mo____ joined #gluster
19:32 bluefoxxx BSD wasn't good enough
19:32 purpleidea ... for progress
19:33 bluefoxxx also see Dragonfly BSD which  was a huge pile of "The FreeBSD guys are stupid so I'm going to go do this right while they screw around"
19:33 bluefoxxx Dragonfly is awesome though, you can freeze programs
19:33 purpleidea bluefoxxx: i really couldn't care much about having an NIH discussion. sometimes you need to rewrite, and sometimes you don't. i really dislike bsd though.
19:33 bluefoxxx like you can freeze Gnome, reboot into a new Dragonfly BSD kernel, then thaw Gnome and continue running o.O
19:34 bluefoxxx heh BSD is slow and annoying
19:34 bluefoxxx But was relevant to the discussion :p
19:34 purpleidea bluefoxxx: so..., constructive?
19:34 bluefoxxx shrug
19:34 bluefoxxx the only thing that comes out of me that's ever constructive is code
19:36 purpleidea bluefoxxx: that's what i thought. fwiw, i tried a bunch of different ways to construct the puppet-gluster module. none of them were optimal, i think in the end you'll realize you're wasting your time, but if you come up with some good code, let me know, and please release it with a license that's compatible with mine so i can merge in anything good. but i'm not optimistic :P
19:36 bluefoxxx BSD license tbh
19:36 purpleidea is it compatible with agplv3+ ?
19:36 bluefoxxx I don't much care what happens to my code once I've released it.
19:36 bluefoxxx yeah it basically says "don't take my name off it and claim you invented the thing"
19:37 bluefoxxx affero is like the most restrictive license
19:37 bluefoxxx I think MIT or BSD is the least but it's a toss-up
19:37 bluefoxxx there's CC-0 but it's somehow not valid in the whole world
19:37 purpleidea the problem with puppet-gluster is actually gluster. it would be much easier to have a perfect puppet module, but it would take some changes to gluster configuration
19:37 bluefoxxx yes well nothing was ever really easy
19:38 bluefoxxx the only intuitive interface is the nipple
19:38 bluefoxxx everything else is learned
19:38 purpleidea bluefoxxx: restrictive? preserves freedom? whichever, i also don't want to have a license fight. we can do vim vs. emacs if you like though :)
19:38 bluefoxxx you could but vim won long ago
19:38 purpleidea or my favourite, tabs vs spaces!
19:38 purpleidea agreed about vim!
19:38 bluefoxxx git vs bzr tbh
19:38 purpleidea git
19:38 bluefoxxx take that one up with shuttleworth
19:38 rwheeler joined #gluster
19:39 bluefoxxx also legit linus quote
19:39 purpleidea although i started on bzr, and i didn't like it
19:39 bluefoxxx "You're a moron"
19:39 bluefoxxx That was his sign-off on his second message in a thread on a Github pull request, talking about Github's shortcomings to the guy who wrote Git I think
19:39 bluefoxxx er, the guy who wrote Github
19:39 bluefoxxx Linus is like a teenager tbh
19:40 bluefoxxx he's what 50 years old?  And he throws a tantrum at every opportunity
19:40 purpleidea http://www.emacswiki.org/emacs/TabsSpacesBoth
19:40 glusterbot Title: EmacsWiki: Tabs Spaces Both (at www.emacswiki.org)
19:40 bluefoxxx https://github.com/torvalds/linux/pull/17 there you go
19:40 glusterbot Title: Add support for AR5BBU22 [0489:e03c] by WNeZRoS · Pull Request #17 · torvalds/linux · GitHub (at github.com)
19:41 bluefoxxx https://github.com/torvalds/linux/pull/17#issuecomment-5659970
19:41 glusterbot <http://goo.gl/1sqji> (at github.com)
19:48 nueces joined #gluster
19:48 _br_ joined #gluster
19:52 _br_ joined #gluster
19:55 _br_ joined #gluster
20:00 semiosis @qa releases
20:00 glusterbot semiosis: The QA releases are available at http://bits.gluster.com/pub/gluster/glusterfs/ -- RPMs in the version folders and source archives for all versions under src/
20:02 Staples84 joined #gluster
20:10 plarsen joined #gluster
20:25 elyograg johnmark: I have that highlight problem too.  using irssi.  been looking for a solution.
20:31 jgillmanjr joined #gluster
20:33 elyograg i might have fixed the highlighting thing.  i'll need someone to put my nick in the middle of a line to test, though.
20:37 samppah hello elyograg, did you fix it?
20:37 bluefoxxx which happens first:  Replication or striping?
20:37 bluefoxxx There's no diagram in the docs
20:37 bluefoxxx oh, nm there is.
20:37 elyograg samppah: yes!  I did.  irssi command: /highlight -nick elyograg
20:38 elyograg s/\/highlight/\/hilight/
20:38 badone joined #gluster
20:39 elyograg actual command: /hilight -nick elyograg
20:39 samppah cool :)
20:40 elyograg now to figure out how to put that in the config file.
20:42 balunasj joined #gluster
20:43 samppah elyograg: hmm, i think you could save your current config with /save
20:43 samppah i'm not sure.. it's a long time since i configured irssi
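
The two irssi commands from this exchange, put together (the nick is hypothetical -- use your own):

    /hilight -nick elyograg    # match the nick anywhere in a line, not only at the start
    /save                      # persist the current settings, including hilights, to ~/.irssi/config
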
20:43 fghaas joined #gluster
20:44 fghaas left #gluster
20:45 nhm joined #gluster
20:46 bluefoxxx ok so.  wait.  What?
20:46 bluefoxxx It distributes first, then replicates?  I guess that makes sense since it replicates across the distributed groups.
20:47 a2 bluefoxxx, the other way
20:47 a2 there are replicated pairs
20:47 JoeJulian first, last.... depends which direction the data's flowing... ;)
20:47 bluefoxxx brick1 brick2 brick3 brick4 => [brick1 brick2]<=replicate=>[brick3 brick4]
20:47 a2 *replicated server pairs
20:47 bluefoxxx a2, well, it's weird.
20:47 a2 and you distribute files across such units of replication
20:48 bluefoxxx it looks like if I say replicate=2 and I give it 4 servers, instead of going [replicate pair 1] [replicate pair 2] it goes [distribute 1] [distribute 2]
20:48 a2 what do you mean "going" ?
20:48 JoeJulian brick1 brick2 brick3 brick4 => [brick1 = brick2] + [brick3 = brick4] ("=" is replicate, "+" is distribute)
20:48 bluefoxxx JoeJulian, that's NOT what the documentation says.
20:48 a2 JoeJulian is right
20:49 a2 then it's a doc bug (or misinterpretation)
20:49 JoeJulian Where in the doc?
20:49 bluefoxxx page 16
20:49 bluefoxxx 26 in the PDF proper, numbered 16
20:49 bluefoxxx it shows file 1 replicated across exp1 and exp3
20:50 bluefoxxx in replicated volume 0 being exp1 and exp2, and replicated volume 1 being exp3 and exp4
20:50 bluefoxxx sorry I will find link to pdf
20:50 JoeJulian I'm loading it now
20:50 bluefoxxx 3.3.0 administration guide
20:50 JoeJulian #@@$% I need to change the damned default away from adobe's reader.
20:50 bluefoxxx lol
20:51 semiosis JoeJulian: okular
20:51 bluefoxxx evince
20:52 JoeJulian I usually use evince. Haven't heard of okular, will check that out.
20:52 bluefoxxx hint:  All PDF readers suck
20:53 * JoeJulian hates pdf documentation. A well known and oft repeated fact.
20:53 bluefoxxx Evince is pleasant but sometimes has rendering bugs.  Adobe is accurate but is slow, bloated, and often causes severe security problems.
20:53 semiosis JoeJulian: one of the kde apps that actually rocks
20:53 bluefoxxx I like pdf and ps
20:53 bluefoxxx usually evince is fine.  Just it's a fact that all PDF viewers are terrible.
20:53 JoeJulian It's text on a page. Just make a damned html page.
20:53 bluefoxxx All word processors are terrible too
20:54 bluefoxxx no, it's pagination.  It's text and images on a page rendered exactly identically, in an exactly identical layout.
20:54 semiosis bluefoxxx: we've established that you hate all software, so thx for that astute review :P
20:54 JoeJulian lol
20:54 bluefoxxx semiosis, there is a lot to hate about software.
20:55 semiosis for those of you who are inclined to hate, you have the widest selection in history
20:55 balunasj joined #gluster
20:55 bluefoxxx database software is interesting because a lot of it sucks, but typically the VAST majority of the problem is that databases are hard.
20:56 JoeJulian Oh, right... I've seen that image somewhere else. I never knew it came from the AG. Noticed it was wrong before.
20:56 bluefoxxx JoeJulian, yeah that is saying that 1 <-> 3 is a replication pair.
20:56 amccloud joined #gluster
20:56 JoeJulian heh
20:57 bluefoxxx I was thinking it would be 1 <-> 2, and each time you add another pair it adds a distributed target.
20:57 JoeJulian Your intuition is correct.
20:57 bluefoxxx ok cool.
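
Spelling out the ordering JoeJulian confirms (server and brick paths hypothetical): with replica 2, bricks are grouped into replica pairs in the order given on the command line, and distribute then spreads files across those pairs -- i.e. [brick1 = brick2] + [brick3 = brick4], not the pairing the 3.3.0 admin-guide diagram suggests:

    # replica pairs are (server1,server2) and (server3,server4); files are distributed across the two pairs
    gluster volume create myvol replica 2 \
        server1:/export/brick1 server2:/export/brick1 \
        server3:/export/brick1 server4:/export/brick1
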
20:58 bluefoxxx is the stripe-replicated image right?
20:58 bluefoxxx on page 18
20:58 jskinner_ joined #gluster
20:59 bluefoxxx That would indicate striped sets, which become replicated sets, which I assume become distributed sets but you have no diagram for that.
20:59 bluefoxxx er. other way
20:59 bluefoxxx no, that's right.  At least that's what the diagram says
21:00 bluefoxxx [1 ~ 3] = [2 ~ 4]
21:00 bluefoxxx I think at this stage I don't care but it's useful info.  Also fix the docs please :)
21:01 JoeJulian bluefoxxx: Here's the end of all my knowledge about ,,(stripe).
21:01 glusterbot bluefoxxx: Please see http://goo.gl/5ohqd about stripe volumes.
21:01 JoeJulian That's as far as I intend to know.
21:01 bluefoxxx haha
21:01 bluefoxxx so the docs have instructions that you don't actually know how they apply
21:02 bluefoxxx just this will work, it does some voodoo
21:02 JoeJulian I haven't even read the docs instructions.
21:02 bluefoxxx ;)
21:02 bluefoxxx ah
21:02 bluefoxxx you are not the doc writer then
21:02 JoeJulian Unless you already know how it all works, I never recommend stripe.
21:02 bluefoxxx i must find the git
21:02 johnmark hrm. there was a good blog post about how to rejigger the brick order
21:02 bluefoxxx is gluster aware of what a node is/was?
21:03 semiosis ,,(glossary)
21:03 glusterbot A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
21:03 hagarth joined #gluster
21:03 bluefoxxx cool.
21:35 flrichar joined #gluster
21:41 tryggvil joined #gluster
21:42 Ryan_Lane joined #gluster
21:51 luckybambu_ joined #gluster
21:57 polenta joined #gluster
22:04 amccloud joined #gluster
22:04 stigchri_ joined #gluster
22:18 georgeh I'm experiencing a strange issue with a replicated volume using gluster 3.3.1-1...the files appear to matchup across both bricks, if I start a 'volume heal volume1 full' then it shows a lot of files being healed (thousands, although it only ever displays 1023), the glustershd.log has lots of messages saying 'no gfid present skipping' for every file that is supposedly healed, if I check the trusted.gfid attribute it is the same on both bricks and the md5s
22:19 georgeh um checks out, the thing is, once the heal is done, if I do it again, I get the same error messages and show the same files being healed...is this a bug or is there something else I need to check/fix?
22:27 balunasj joined #gluster
22:35 JoeJulian That sounds like a bug. If the files are the same and the xattrs don't show pending transactions, I'm not sure where it would be getting a heal list from.
22:36 JoeJulian georgeh: I would file a bug report at the link glusterbot's about to spew, and include state dumps from glusterd and the glusterfs instance that's using the glustershd volume config (ps ax | grep glustershd). To get those state dumps, kill -USR1 $pid. The dumps will end up in /tmp.
22:36 glusterbot http://goo.gl/UUuCq
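
The statedump procedure JoeJulian describes, as a shell sketch (the process names follow his ,,(processes) note earlier; on 3.3 the dump files appear under /tmp):

    # find the self-heal daemon instance and send it SIGUSR1 to trigger a state dump
    pid=$(ps ax | grep '[g]lustershd' | awk '{print $1}')
    kill -USR1 "$pid"
    # dump glusterd itself the same way, then collect the files that show up in /tmp
    pkill -USR1 -x glusterd
    ls -lt /tmp | head
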
22:39 jjnash left #gluster
22:39 georgeh thanks, I'll do that
22:41 mtanner joined #gluster
22:51 raven-np joined #gluster
22:55 slowe joined #gluster
22:55 sashko joined #gluster
22:56 slowe New Gluster user here...I have two nodes up and running and everything appears to be working, but files aren't getting replicated from node 1 to node 2. I've checked peer status, volume status, and even issued a heal--but it doesn't change. What else should I check?
23:09 JoeJulian slowe: You're not writing directly to the brick, are you?
23:10 slowe JoeJulian: I don't know, I could be. (Sorry, Gluster is new to me.) I'm assuming that if I write the mounted file system (say, at /export/brick1), then I'm writing directly to the brick?
23:11 cicero yeah just make sure you're writing to the gluster mount point
23:11 cicero not the backend file dir
23:11 slowe So, if /export/brick1 is the XFS filesystem mount point, where is the Gluster mount point? (Sorry for the newbie questions.)
23:12 cicero $ mount | grep glusterfs
23:12 JoeJulian Once a filesystem is part of a volume, the glusterfs software should be the only thing writing to that filesystem. Mount the volume using either the fuse client or nfs and write to that directory.
23:12 cicero on the gluster daemon -- that's the dir you should be writing to
23:12 cicero er, on the gluster server.
23:12 cicero not daemon.
23:13 cicero # mount | grep glusterfs
23:13 cicero glusterfs1:/gluster-vol00 on /data type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
23:13 cicero whoops, spacey paste
23:13 JoeJulian http://www.gluster.org/community/documentation/index.php/QuickStart#Step_7_.E2.80.93_Testing_the_Gluster_volume
23:13 cicero but in that example, /data is what you should be writing to
23:13 glusterbot <http://goo.gl/2o3Mo> (at www.gluster.org)
23:13 cicero (and gluster-vol00 is the name of the volume)
23:14 cicero yeah or that :)
23:15 slowe Ah, OK--I see my mistake. Thanks for the clarification. Let me see if I can get this worked out.
23:16 cicero glhf
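
slowe's fix in command form: mount the volume through the fuse client and write under that mount point, never under the brick filesystem itself (server and volume names follow cicero's example; substitute your own):

    # mount the gluster volume with the fuse client and write through it
    mount -t glusterfs glusterfs1:/gluster-vol00 /data
    cp somefile /data/        # this write is replicated to both bricks
    # writing into /export/brick1 directly bypasses glusterfs and will not be replicated
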
23:17 georgeh thanks JoeJulian for the previous response, out of curiosity, if I did find a file with no trusted.gfid attribute (on either brick) is there a way to fix that?
23:17 sjoeboo joined #gluster
23:20 JoeJulian If it's only one you /could/ setfattr to the same value as the other...
23:20 JoeJulian I would have expected it to heal though.
23:20 JoeJulian Or treat it as a split-brain and delete the "bad" one.
23:21 georgeh okay, but what if neither copy has a gfid, is there a way to set it (ie figure it out)
23:21 hattenator joined #gluster
23:21 elyograg copy the file, delete it, and put it back via the client mount?
23:22 JoeJulian ^^
23:22 JoeJulian Or just stat the file through a client. That should fix it too.
23:22 JoeJulian I suppose unless it's split-brain.
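
The two repair options from the last few lines, with a hypothetical path on a client mount:

    # option 1: a lookup through the client is often enough to (re)assign the gfid
    stat /mnt/myvol/path/to/file
    # option 2: copy out, delete, and copy back through the client mount
    cp /mnt/myvol/path/to/file /tmp/file.bak
    rm /mnt/myvol/path/to/file
    cp /tmp/file.bak /mnt/myvol/path/to/file
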
