
IRC log for #gluster, 2013-10-02


All times shown according to UTC.

Time Nick Message
00:04 micu1 joined #gluster
00:15 purpleidea joined #gluster
00:50 Jayunit100 joined #gluster
00:50 Jayunit100 what happens when bricks fill up?
00:51 roo9 io stalls
00:51 roo9 more usefully, reads don't work either so you can't even list/rm content
00:51 Jayunit100 oh wow
00:52 roo9 i suggest you don't let your bricks fill up.
00:52 Jayunit100 I'm surprised it doesn't just forward or rebalance or do some other magic
00:54 roo9 well rebalance is not trivial, i'm not sure why it doesn't just return with ENOSPC
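
One way to guard against the full-brick situation roo9 describes is to keep new file placement off nearly-full bricks and to watch per-brick usage. A minimal sketch, assuming a volume named myvol (the volume name and threshold are placeholders; min-free-disk only influences where new files are created, it does not move existing data):

    # Ask DHT to avoid placing new files on bricks with less than 10% free space.
    gluster volume set myvol cluster.min-free-disk 10%

    # Check per-brick capacity from the gluster side.
    gluster volume status myvol detail
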
01:37 jcsp joined #gluster
01:38 DV joined #gluster
01:41 cyberbootje1 joined #gluster
01:51 dtyarnell joined #gluster
01:54 harish joined #gluster
01:58 aliguori joined #gluster
02:29 pengembara joined #gluster
02:31 pengembara using gluster 3.4.1-2, the gluster NFS server always crashes
02:32 pengembara the topology: replica 2, with multiple virtual IPs on each node; clients connect to different virtual IPs for NFS
02:33 pengembara is it okay to run the gluster NFS server on multiple IPs on the same node?
02:33 pengembara creating an HA and load-balancing gluster NFS setup using CTDB
02:36 bala joined #gluster
03:08 uebera|| joined #gluster
03:30 bala joined #gluster
03:45 nasso joined #gluster
03:45 emil joined #gluster
04:02 lalatenduM joined #gluster
04:23 glusterbot New news from newglusterbugs: [Bug 1005526] All null pending matrix <http://goo.gl/tu7Eh0>
04:43 Shdwdrgn joined #gluster
05:15 davinder joined #gluster
05:18 arusso joined #gluster
05:49 kPb_in joined #gluster
06:25 vimal joined #gluster
06:26 jtux joined #gluster
06:35 glusterbot New news from resolvedglusterbugs: [Bug 904005] tests: skip time consuming mock builds for code-only changes <http://goo.gl/h0kIz>
06:42 khushildep joined #gluster
06:56 rgustafs joined #gluster
07:01 rotbeard joined #gluster
07:03 ctria joined #gluster
07:11 ricky-ticky joined #gluster
07:16 ekuric joined #gluster
07:19 _ndevos joined #gluster
07:19 Bluefoxicy joined #gluster
07:36 eseyman joined #gluster
07:37 davinder joined #gluster
07:56 andreask joined #gluster
08:33 dneary joined #gluster
08:38 Rocky___2 left #gluster
08:54 ekuric joined #gluster
08:54 manik joined #gluster
09:10 Dga joined #gluster
09:12 manik left #gluster
09:15 mgebbe joined #gluster
09:39 msciciel joined #gluster
09:47 rgustafs joined #gluster
09:52 Debolaz joined #gluster
10:05 shylesh joined #gluster
10:21 sac_ joined #gluster
10:24 ffr1 joined #gluster
10:26 harish joined #gluster
10:43 gluslog joined #gluster
10:49 kwevers joined #gluster
10:59 andreask joined #gluster
11:03 B21956 joined #gluster
11:11 tziOm joined #gluster
11:24 glusterbot New news from newglusterbugs: [Bug 986775] file snapshotting support <http://goo.gl/ozgmO>
11:41 eseyman joined #gluster
11:42 failshell joined #gluster
11:47 Maxence joined #gluster
12:17 fyxim joined #gluster
12:32 bala joined #gluster
12:32 failshell joined #gluster
13:08 manik joined #gluster
13:10 noob21 joined #gluster
13:11 noob21 gluster: anyone know if distributed georep is still being worked on?  The last commit i see was back in June
13:13 dtyarnell joined #gluster
13:23 bet_ joined #gluster
13:23 PatNarciso noob21, I'm not the authority, however I do believe it continues to be worked on.
13:23 noob21 ok
13:23 cfeller joined #gluster
13:23 noob21 yeah i found a redhat bug tracker that shows much more current progress
13:23 noob21 https://bugzilla.redhat.com/show_bug.cgi?id=847839
13:23 glusterbot <http://goo.gl/l4Gw2> (at bugzilla.redhat.com)
13:23 glusterbot Bug 847839: unspecified, unspecified, ---, csaba, ASSIGNED , [FEAT] Distributed geo-replication
13:24 PatNarciso noob21, the ability to do master-master replication is something I'd like to see, and I understand is in the works.
13:25 noob21 yeah I know of several hacked together ways to do it but the 'official' way would be nice
13:26 kkeithley yes, it's being worked on. Currently targeted to land in 3.7.
13:26 kkeithley multi-master or master-master that is
13:26 noob21 wow, not 3.5?
13:27 kkeithley beatings will increase until morale improves
13:27 noob21 haha
13:28 social_ =]
13:28 noob21 no i understand it's quite a difficult problem to solve
13:28 noob21 i've tried a few hacked together solutions myself.  multiple rsyncs, zeromq+inotify, etc
13:29 social_ I'd love to ask how insecure I should feel if I tried --fopen-keep-cache -
13:29 social_ -attribute-timeout=5 --entry-timeout=5
13:29 noob21 kkeithley: so what's the time frame for 3.7?  end of next year maybe?
13:29 social_ (sorry bad paste)
13:29 PatNarciso glusterbot: do you cringe when someone says Dropbox?
13:29 glusterbot PatNarciso: Error: You must be registered to use this command. If you are already registered, you must either identify (using the identify command) or add a hostmask matching your current hostmask (using the "hostmask add" command).
13:31 kkeithley six month release cadence: 3.5 soon, 3.6 maybe in April or May, 3.7 about this time next year perhaps? I never make promises.
13:31 noob21 gotcha.  yeah i know it's a super rough estimate :)
13:31 kkeithley feature freeze for 3.5 is supposed to be this week.
13:31 kkeithley we'll see
13:32 noob21 cool
13:32 * PatNarciso hears kkeithley
13:32 PatNarciso Is there a list of currently requested features?  May I see it?
13:32 kkeithley you may, it should be on gluster.org. hang on for a link
13:34 kkeithley http://www.gluster.org/community/documentation/index.php/Planning35
13:34 glusterbot <http://goo.gl/l2gjSh> (at www.gluster.org)
13:34 kkeithley Big "Roadmaps" button on the front page. ;-)
13:35 Jayunit100 is it a bug that, if RDMA rpms aren't installed, the only error message you get is a low-level one in /var/log/ … ?  not sure whether I should file it or not.  but it seems like there could be more graceful treatment of missing rpms… maybe checking on glusterd startup
13:41 PatNarciso thanks kkeithley
13:41 kkeithley What missing RPMs? If you install glusterfs-rdma, e.g. with YUM, yum will install the required dependencies. If you install with rpm, rpm will tell you what the additional RPMs are that you need to install.
13:41 kkeithley You're not still developing on f16 with `make install` by any chance, are you?
13:43 ekuric joined #gluster
13:43 Jayunit100 kkeithley: haha
13:43 kkeithley I had to ask. ;-)
13:43 Jayunit100 I'm on f19 now !
13:43 kkeithley excellent
13:44 Jayunit100 kkeithley: http://jayunit100.blogspot.com/2013/10/totally-ephemeral-gluster-development.html <-- my new dev environment recipe.  super simple.  posted on mailing list the other day.
13:44 glusterbot <http://goo.gl/ePbZt3> (at jayunit100.blogspot.com)
13:50 mohankumar joined #gluster
13:51 kaptk2 joined #gluster
13:53 plarsen joined #gluster
13:54 rwheeler joined #gluster
14:02 bugs_ joined #gluster
14:02 Nezdali joined #gluster
14:04 badone joined #gluster
14:08 zaitcev joined #gluster
14:12 Guest34797 joined #gluster
14:17 jbautista joined #gluster
14:19 wgao_ joined #gluster
14:19 squizzi joined #gluster
14:23 dtyarnell joined #gluster
14:26 squizzi joined #gluster
14:40 dneary joined #gluster
14:43 dtyarnell joined #gluster
14:52 keytab joined #gluster
14:53 squizzi joined #gluster
14:53 johnmark @channelstats
14:53 glusterbot johnmark: On #gluster there have been 189314 messages, containing 7822238 characters, 1306279 words, 5236 smileys, and 700 frowns; 1160 of those messages were ACTIONs. There have been 75294 joins, 2330 parts, 72968 quits, 23 kicks, 170 mode changes, and 7 topic changes. There are currently 200 users and the channel has peaked at 239 users.
14:57 cfeller How is GlusterFS 3.4.1 from a bugfix/stability perspective, or more specifically, from a "production" perspective? I notice that http://www.gluster.org/community/documentation/index.php/Main_Page still states that "3.3.2 is the latest version of GlusterFS recommended for production environments.", but by the same token, it looks as if the page hasn't been updated to reflect that 3.4.1 is out...
14:57 glusterbot <http://goo.gl/eAVvs> (at www.gluster.org)
14:57 cfeller ...of QA...
14:57 cfeller In short, for production, 3.3.2, or 3.4.1?
14:59 social_ cfeller: we use 3.4.0 in production with nearly the same patchset as 3.4.1, and we HAD to jump to it because it was more stable than 3.3.2
15:03 B21956 left #gluster
15:05 dneary joined #gluster
15:06 B21956 joined #gluster
15:06 kkeithley At this point I'd personally recommend using 3.4.1. If I had to guess, I'd say anything you find in 3.3.2 will (only) be fixed in 3.4.2 or 3.5.
15:08 DV joined #gluster
15:08 ricky-ticky joined #gluster
15:10 cfeller Thanks for the feedback!
15:24 johnmark kkeithley: we need to find a maintainer for 3.3.x
15:24 johnmark kkeithley: this sort of lifecycle management is something all projects do
15:29 ricky-ticky joined #gluster
15:41 DV joined #gluster
15:48 phox joined #gluster
15:54 dneary joined #gluster
16:06 sprachgenerator joined #gluster
16:07 ffr1 joined #gluster
16:31 Mo__ joined #gluster
16:33 noob21 left #gluster
16:39 chirino joined #gluster
16:41 dneary joined #gluster
16:42 kkeithley johnmark: sure, but how long did we promise we were going to maintain 3.3?
16:45 dtyarnell joined #gluster
16:55 davinder2 joined #gluster
16:55 edward2 joined #gluster
16:55 satheesh1 joined #gluster
17:08 Staples84 joined #gluster
17:10 XpineX joined #gluster
17:13 diogo joined #gluster
17:13 diogo heya guys
17:14 diogo i have a question... maybe someone can help me out
17:14 diogo I just found a lot of big files on /brick/.glusterfs hidden directory
17:14 diogo is it safe to delete them?
17:15 diogo example: -rwxrwxrwx 2 1017 1017 3.2G Jul 24 05:57 ./f8/ae/f8ae7b90-3a18-4278-9830-01ec6c5c8eaf
17:15 diogo another one: -rw-r--r-- 1 500 500 2.2G Jul 25  2012 ./b2/8d/b28dea1a-cb6d-4165-9dfd-7b13c5f0b483
17:15 semiosis diogo: those are hard-links to files in the brick, they do not take up any extra space
17:16 semiosis you shouldn't delete them (except as part of manual split-brain recovery work)
17:16 semiosis @lucky unix hard links
17:16 glusterbot semiosis: http://kb.iu.edu/data/aibc.html
17:16 semiosis maybe i should've tested that google before running it
17:17 semiosis ah but indeed i was lucky
17:17 diogo semiosis: ahh ok! thanks for the tip
17:17 diogo I will leave them alone then
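
Semiosis's point about the .glusterfs entries being hard links can be checked with stat: the gfid-named entry and the regular file in the brick share one inode, so the data is only stored once. A hedged sketch using diogo's example gfid (the brick path and the named file are hypothetical):

    # Both names should report the same Inode number and "Links: 2".
    stat /brick/.glusterfs/f8/ae/f8ae7b90-3a18-4278-9830-01ec6c5c8eaf
    stat /brick/videos/some-large-file.mp4   # hypothetical named path of the same data
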
17:18 diogo last question... I have a system with 4 bricks, but the 1st brick is consuming a lot more space than the others
17:18 diogo here:
17:19 diogo 493G  463G   31G  94% /vol01
17:19 diogo 493G  363G  125G  75% /vol02
17:19 diogo 493G  349G  140G  72% /vol03
17:19 diogo 493G  342G  146G  71% /vol04
17:19 diogo any idea why that happens?
17:19 semiosis ok please use a ,,(paste) site for multiline
17:19 glusterbot For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
17:19 semiosis and try with df -i
17:19 semiosis that will give inode counts
17:20 diogo inode counts seems equal
17:20 semiosis glusterfs distributes files, not bytes.  so if you have a small number of files whose size varies widely it is more likely that the bytes will be distributed unevenly
17:20 semiosis kwim?
17:20 diogo we have lots of files
17:20 diogo mostly video/media files
17:21 diogo thousands of them
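
A quick way to tell whether an imbalance like diogo's comes from file count or file size is to compare byte usage and inode usage per brick, as semiosis suggests with df -i. A minimal sketch, assuming the bricks are mounted at /vol01 through /vol04 as in the paste above:

    # Roughly equal inode counts with unequal byte usage means the hash
    # distribution is fine and a few large files are skewing the bytes.
    df -h /vol01 /vol02 /vol03 /vol04
    df -i /vol01 /vol02 /vol03 /vol04
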
17:21 ndk joined #gluster
17:22 diogo is there a way to fix that up?
17:22 diogo I had to move a few files to another gluster volume
17:22 diogo that seems to happen in setups with more than 3 bricks
17:23 diogo I have a few setups with 2 bricks and the used space is perfectly split
17:24 semiosis what is the range of sizes of your files?
17:24 diogo 50MB-400MB
17:26 semiosis well i guess you could try a rebalance, but so far nothing makes me think that will help
17:26 semiosis it's possible, i just dont know how likely it is that it will
17:26 diogo yep, I was thinking about that, but the problem is that a rebalance takes ages to complete
17:26 semiosis true
17:27 diogo and I cant stop this fs that long
17:27 semiosis well rebal works online, but still expensive
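
If a rebalance is attempted anyway, it runs online and can be monitored and stopped, though it does add I/O load while it runs. A hedged sketch with a placeholder volume name:

    # Recompute the layout and migrate data in the background on the servers.
    gluster volume rebalance myvol start

    # Watch progress per brick, and stop it if the load becomes a problem.
    gluster volume rebalance myvol status
    gluster volume rebalance myvol stop
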
17:27 semiosis what version of gluster?
17:28 diogo 3.3.1
17:29 diogo but doing online is gonna slow down the entire filesystem right?
17:30 johnmark diogo: first thing I would do is upgrade to 3.3.2 or 3.4.1
17:30 rdo_ joined #gluster
17:30 johnmark diogo: because the rebal perf is supposed to be better
17:30 diogo ahh cool
17:31 johnmark there are also some nasty bugs in 3.3.1, eg. memory leaks and other random things, that were fixed in 3.3.2 and 3.4.1
17:31 diogo but is it safe to upgrade to a newer version without losing any bricks/data ?
17:32 johnmark diogo: anything I say there is suspect because I haven't done it personally
17:32 rdo_ hi all, I am wondering about gluster + cinder integration... obviously when using a "block device" that's backed by a file you're going to be concerned that said file is replicated for availability. Can someone please confirm how this works with using glusterfs and Cinder? Ideally I'd want to make sure that it could handle the loss of a node and that it would be seamless to the end user
17:32 diogo johnmark: ahh ok, but really thanks for the tip
17:33 johnmark diogo: *but* there is a blog post about upgrading, let me see if I can find it
17:33 diogo johnmark: ok, cool!
17:33 semiosis diogo: usually upgrading to a new patch/point release should be safe.  best practice is to upgrade all your servers, one at a time, before upgrading any of your clients.
17:34 diogo semiosis: my current setup is 1 server only
17:34 semiosis diogo: i recommend rebooting servers after upgrade (but beware of the fsck if your bricks are ext!)
17:34 diogo shouldnt be so hard then
17:34 * semiosis cries a little bit
17:34 johnmark lol
17:34 johnmark diogo: http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
17:34 glusterbot <http://goo.gl/SXX7P> (at vbellur.wordpress.com)
17:34 semiosis diogo: also note that clients must be disconnected & reconnected to upgrade
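
A rough outline of the per-server steps semiosis describes, assuming Debian/Ubuntu-style packages (the package, service, host, volume, and mount point names here are assumptions and differ by distro; the blog post johnmark linked above covers the 3.3-to-3.4 specifics):

    # On each server, one at a time:
    service glusterfs-server stop               # or the equivalent init command on other distros
    apt-get update && apt-get install glusterfs-server
    reboot                                      # recommended; mind the fsck time on ext bricks

    # On each client, once all servers are upgraded (clients must reconnect):
    umount /mnt/myvol
    apt-get install glusterfs-client
    mount -t glusterfs server1:/myvol /mnt/myvol
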
17:34 johnmark rdo_: the replication is handled on the GlusterFS end of things
17:34 diogo semiosis: sure, makes sense
17:34 semiosis srsly i never understood the point of a single server gluster volume
17:35 semiosis but hey, if it works for you (is it?!)
17:35 johnmark rdo_: what version of openstack are you using?
17:35 rdo_ johnmark: this would be RHOS 3.0
17:35 johnmark rdo_: is that grizzly-based?
17:35 diogo semiosis: not my choice... it was the customer's choice
17:35 diogo nothing I could do about it
17:35 rdo_ johnmark: it is and I understand the limitations until Havana
17:35 johnmark rdo_: aha, ok
17:35 diogo johnmark: thanks for the link!
17:35 johnmark you knew what I was going to ask next :)
17:35 johnmark diogo: yw
17:36 rdo_ johnmark: I'm looking more into the long-term benefits of Gluster
17:36 dneary joined #gluster
17:36 diogo semiosis: and everything runs at AWS
17:36 semiosis AWS <3
17:37 diogo ;)
17:37 johnmark rdo_: ah, saw your podcast with @emeacloudguy
17:37 rdo_ johnmark: so just to clarify - the machine that is actually using the volume, in this case the nova compute host, would be responsible for the replication, and in that sense if we lost a back-end node, it wouldn't even care
17:37 rdo_ johnmark: yeah - not connected to corp, I'm usually just 'rdo' ;-)
17:37 johnmark heh
17:38 johnmark rdo_: so you have a gluster volume with replica 2?
17:38 rdo_ forgive my ignorance with gluster, thought I'd pop in here and ask a few questions and who better than to ask ;-)
17:38 johnmark rdo_: precisely :)
17:38 johnmark rdo_: it's ok, people forgive my Gluster ignorance all the time ;)
17:38 johnmark rdo_: so in Gluster-land, the backend servers are responsible for replication
17:39 rdo_ johnmark: so I'd be looking to stand up an environment that replicates pretty much everything, objects (and with that, glance images) and cinder volumes
17:39 johnmark but if when you've mounted a replicated volume, the GlusterFS client takes care of HA and routing
17:39 johnmark and with Grizzly, that means Gluster is mounted via the Fuse module and the GlusterFS client
17:40 rdo_ sure, makes sense in my head.. but does it actually stripe the "block devices" in cinder?
17:40 johnmark and on the backend storage servers, the replication is automatically taken care of when you stand up the gluster volume
17:40 johnmark rdo_: doesn't stripe
17:40 johnmark it should be transparent to cinder via the GlusterFS mount
17:40 semiosis [13:38] <johnmark> rdo_: so in Gluster-land, the backend servers are responsible for replication
17:40 semiosis huh?
17:40 johnmark even though you mount specifically via an IP or hostname, the GlusterFS client knows where all the replicas are
17:41 johnmark semiosis: sorry, worded that incorrectly
17:41 diogo semiosis and johnmark: thanks for the help guys
17:41 semiosis fuse clients replicate to the backend servers, which store the data
17:41 rdo_ indeed, but across say two bricks? my concern is that we're writing to a block device that may disappear and it would have to route to the replica, is it in sync?
17:41 diogo have a nice day guys! :)
17:41 johnmark semiosis: er? fuse clients mount the volume and can then serve data from any of the replicated storage nodes
17:41 johnmark diogo: you too :)
17:41 semiosis rdo_: the fuse client (mount) writes to all replicas in sync
17:42 rdo_ excellent
17:42 semiosis take care diogo
17:42 johnmark semiosis: right - synchronously
17:42 semiosis see ,,(mount server)
17:42 glusterbot The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds
17:42 rdo_ ok interesting, that clears up a lot
17:43 johnmark rdo_: yeah, what I was trying to say is that neither Cinder nor Nova have to worry about replication or HA if the Gluster volume is replicated
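
A small illustration of the mount behaviour glusterbot describes above: the host named in the mount command is only consulted for the volume definition, and a backup volfile server can be listed in case that host is down at mount time. A hedged sketch (hostnames, volume, and mount point are placeholders; backupvolfile-server is the option spelling I recall from the 3.3/3.4 fuse mount script):

    # server1 only supplies the volume definition; after that the client talks to
    # every brick in the volume and performs writes to all replicas itself, in sync.
    mount -t glusterfs server1:/myvol /mnt/myvol -o backupvolfile-server=server2
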
17:46 XpineX joined #gluster
18:02 davinder joined #gluster
18:29 lpabon joined #gluster
18:33 cleverfoo_work joined #gluster
18:35 cleverfoo_work ?
18:35 cleverfoo_work left #gluster
18:36 rferreira joined #gluster
18:37 rferreira hey fellas, my gluster storage just totally disappeared on me - I'm a tad freaked out right now. Has anyone seen these errors before? https://gist.github.com/rferreira/6798333
18:37 glusterbot Title: gist:6798333 (at gist.github.com)
18:37 semiosis rferreira: please ,,(pasteinfo) and also what version of glusterfs?  what distro?
18:38 glusterbot rferreira: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
18:39 rferreira https://dpaste.de/ijht
18:39 glusterbot Title: dpaste.de: Snippet #242381 (at dpaste.de)
18:40 rferreira volume info pasted
18:41 rferreira @semiosis we're on ubuntu 12.04 LTS and gluster 3.4.0-final
18:41 rferreira packages from the ppa repo
18:41 rferreira it almost looks like someone rm-rf * the volume
18:42 semiosis you mean the bricks are empty?
18:42 rferreira semiosis: yup
18:42 semiosis never seen that happen because of a bug
18:43 rferreira https://dpaste.de/b028
18:43 glusterbot Title: dpaste.de: Snippet #242382 (at dpaste.de)
18:44 rferreira df -h still shows disk in use
18:45 semiosis the .glusterfs directory has hard-links to files in the brick
18:45 rferreira I haven't really done anything with the gluster servers besides reading logs and getting volume status
18:45 semiosis if someone did rm -rf the bricks they forgot to hit the .glusterfs directory
18:46 rferreira hmmm
18:47 semiosis if things were deleted through a client mount point then glusterfs would've cleaned up the .glusterfs directory
18:47 rferreira I'm puzzled.
18:48 rferreira is there a way to recover it?
18:48 rferreira I had a team of engineers working on data residing on the gluster filesystem mounted on a few clients
18:49 semiosis backups?
18:49 rferreira semiosis: nope, we're in pre production with this
18:49 rferreira :(
18:51 semiosis well the data is still in the .glusterfs directory.  under there you'll find symlinks to directories (which could be used to recover directory names) but files are hard links with gfid (random ID) names, so no way intrinsically to recover the filenames
18:51 semiosis afaict
18:51 rferreira k
18:53 rferreira I think I have an idea
18:53 rferreira ...
18:56 rferreira @semiosis I did a find . | xargs file | grep -v directory in .glusterfs and there were only a handful of files
18:57 rferreira I was expecting more
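
For what it's worth, the hard-link relationship semiosis describes can be probed directly on a brick. A hedged sketch (the brick path is a placeholder, and the gfid value just reuses diogo's example from earlier): on a healthy brick a gfid entry maps back to its named file through the shared inode, and a regular gfid file with a link count of 1 means its named copy was removed behind gluster's back.

    # Placeholder gfid entry under the brick's .glusterfs directory.
    gfid_path=/export/brick1/.glusterfs/f8/ae/f8ae7b90-3a18-4278-9830-01ec6c5c8eaf

    # Map a gfid entry back to its real name (only works while the named file still exists):
    find /export/brick1 -samefile "$gfid_path"

    # List gfid entries whose named counterpart is gone (regular files with a single link):
    find /export/brick1/.glusterfs -type f -links 1
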
18:57 khushildep joined #gluster
19:14 flakrat joined #gluster
19:14 flakrat left #gluster
19:22 ricky-ticky joined #gluster
19:38 LoudNoises joined #gluster
19:46 badone joined #gluster
19:52 dtyarnell joined #gluster
20:06 badone joined #gluster
20:09 DV joined #gluster
20:17 rferreira joined #gluster
20:27 a2_ joined #gluster
20:34 rferreira joined #gluster
21:00 squizzi joined #gluster
21:32 andreask joined #gluster
21:34 glusterbot New news from newglusterbugs: [Bug 987555] Glusterfs ports conflict with qemu live migration <http://goo.gl/SbL8x>
21:39 chirino joined #gluster
21:46 rotbeard joined #gluster
22:12 chirino joined #gluster
22:18 rwheeler joined #gluster
22:36 rferreira joined #gluster
23:00 StarBeast joined #gluster
23:35 JoeJulian rferreira: I've been in meetings all day and just got back to my desk. "nfs.rpc-auth-allow: gluster-brick1-gdev01.log" ? That's not likely to be a very useful hostname.
23:36 rferreira @JoeJulian we're not even using it for nfs
23:37 rferreira all clients are mounting over the gluster fuse file system
23:37 JoeJulian ok
23:37 zeedon2 joined #gluster
23:38 zeedon2 Hey I am having a very strange issue with a simple 2 brick replicated setup
23:38 JoeJulian And both bricks are in the same state?
23:39 JoeJulian Could it be that one of your bricks was part of PRISM and now that it's part of the shut down your data is unavailable? ;)
23:40 zeedon2 both servers in the replicated setup mount the gluster volume locally, but I seem to intermittently lose the mount with the error "cannot access /data: Transport endpoint is not connected"
23:40 JoeJulian rferreira: Are both bricks in the same state?
23:40 zeedon2 umount and remount of the gluster volume resolves the issue
23:41 JoeJulian zeedon2: What version? Have you checked your client log when you get in that state?
23:42 zeedon2 3.4
23:42 zeedon2 which log file should i be looking at
23:42 rferreira JoeJulian: gluster volume status said so
23:42 rferreira https://dpaste.de/ijht
23:43 glusterbot Title: dpaste.de: Snippet #242381 (at dpaste.de)
23:43 rferreira unless I should be looking somewhere else
23:43 zeedon2 it seems like a network issue but it makes no sense since the volumes are mounted locally
23:43 JoeJulian rferreira: I mean are both bricks showing the same "used" but nothing there except .glusterfs?
23:43 JoeJulian ~mount server| zeedon2
23:44 JoeJulian @mount server
23:44 glusterbot JoeJulian: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds
23:44 rferreira @JoeJulian yup
23:45 zeedon2 sorry?
23:45 JoeJulian rferreira: yuck.
23:45 rferreira JoeJulian: yeah an to be honest we kinda gave up on it
23:46 JoeJulian zeedon2: Despite the fact that you're referencing a "local" server to perform the mount, the client is simply retrieving the volume information from that server. It then connects to all the servers in the volume. Replication is handled by the client.
23:46 JoeJulian rferreira: Ok, then I'll not waste any more thought on it. :D
23:47 rferreira @JoeJulian we're rebuilding the volume but I'm running into something weird, let's say I have two bricks again in replicated mode and I delete a file from brick1, we're seeing very strange behavior with regards to that deleted file being restored
23:47 rferreira sometimes gluster restores it and sometimes it doesn't
23:47 rferreira self heal is on
23:47 JoeJulian Not strange at all. You're doing it wrong. :P
23:48 rferreira JoeJulian: go on..
23:48 rferreira please...
23:48 JoeJulian You don't mess with the files on the bricks. The bricks are storage for GlusterFS. You should be doing that stuff through the mountpoint.
23:49 zeedon2 ok sure, however the replication still seems to be working fine i just lose the mount
23:49 zeedon2 is there a specific log file I should be looking at
23:49 rferreira so how can I test losing a brick and having it restored?
23:49 JoeJulian GlusterFS has no way of knowing that you messed with the brick. If the deleted file is accessed through a client, it'll get healed. Otherwise it has no idea.
23:49 JoeJulian You can force it with a "heal ... full"
23:50 rferreira hmmm
23:50 rferreira ok
23:50 JoeJulian zeedon2: /var/log/glusterfs/{mountpoint | tr '/' '-'}
23:50 JoeJulian zeedon2: /var/log/glusterfs/{mountpoint | tr '/' '-'}.log
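
Concretely (a hedged example; exactly how the leading slash is handled has varied between versions of the mount script): for zeedon2's mount at /data the client log lands under /var/log/glusterfs/ with the slashes of the mount point turned into hyphens.

    ls /var/log/glusterfs/*data*.log   # typically data.log here; a mount at /mnt/gv0 would similarly become mnt-gv0.log
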
23:51 rferreira @JoeJulian  so let's say I have tons of files on a brick and a file gets corrupted; shouldn't gluster be checking the health of those files and fixing them from the replica?
23:52 JoeJulian If you're referring to bit-rot, no. GlusterFS will keep track of files that have been changed if a brick is offline, and heal the offline brick once it comes back online. That's automatic.
23:52 zeedon2 also I have noticed log timestamps seem to be in UTC can this be changed?
23:52 JoeJulian If your whole brick gets toasted, you'll want to heal-full.
23:53 JoeJulian zeedon2: Not in this version. There's a new way that logs will be done in, I think 3.5. Then you can get a lot more creative.
23:54 JoeJulian Logs should always be in UTC anyway, IMHO. Translate them to something local in kibana as you read them from logstash. :D
23:55 JoeJulian I'm not really sure why they changed that. Made no difference to me as all my servers are in UTC anyway.
23:55 zeedon2 [2013-10-02 23:18:58.868361] W [socket.c:514:__socket_rwv] 0-gv0-client-1: readv failed (No data available) seems to occur a lot
23:56 JoeJulian Without context, the "W" would suggest that that's irrelevant.
23:57 JoeJulian If it were near an error (" E ") then I might consider it more meaningful.
23:57 JoeJulian maybe
23:59 rferreira @JoeJulian so I'm going to run a test and wipe a brick and run a heal full - I really hope to see all the files restored
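
For the test rferreira describes, the heal commands JoeJulian refers to look roughly like this (a hedged sketch; the volume name is a placeholder, and a full heal crawls the whole volume, so it can take a long time on large bricks):

    # Ask the self-heal daemon to walk the entire volume and repopulate the wiped brick:
    gluster volume heal myvol full

    # See what still needs healing and what has failed so far:
    gluster volume heal myvol info
    gluster volume heal myvol info heal-failed
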
