
IRC log for #gluster, 2013-03-26


All times shown according to UTC.

Time Nick Message
00:09 ramkrsna left #gluster
00:33 redbeard joined #gluster
00:46 zeedon joined #gluster
00:46 zeedon Is there any problem with taking file backups directly from underlying bricks?
00:48 vpshastry joined #gluster
01:00 yinyin joined #gluster
01:25 vpshastry joined #gluster
01:27 hjmangalam joined #gluster
01:27 hjmangalam1 joined #gluster
01:32 johnmark zeedon: as long as you're not writing directly to bricks
01:33 hchiramm_ joined #gluster
01:50 maxiepax joined #gluster
03:01 vshankar joined #gluster
03:47 twx joined #gluster
03:54 bulde joined #gluster
04:06 sgowda joined #gluster
04:13 rastar joined #gluster
04:20 bala1 joined #gluster
04:23 sripathi joined #gluster
04:24 bala1 joined #gluster
04:30 vshankar joined #gluster
04:36 hagarth joined #gluster
04:57 deepakcs joined #gluster
05:01 saurabh joined #gluster
05:03 samppah deepakcs: hey! seems that adding gluster domain into ovirt got merged into master :)
05:03 samppah is there still some work to do on it?
05:04 hchiramm_ joined #gluster
05:04 vpshastry joined #gluster
05:05 Humble joined #gluster
05:06 deepakcs samppah, Hi, yes.. it got merged few days back.
05:06 deepakcs samppah, for basic gluster storage domain usage, it should be it. Are u looking at something specific ?
05:07 shylesh joined #gluster
05:15 lalatenduM joined #gluster
05:21 satheesh joined #gluster
05:23 sripathi joined #gluster
05:25 mohankumar joined #gluster
05:26 raghu joined #gluster
05:27 satheesh joined #gluster
05:40 timothy joined #gluster
05:56 rastar joined #gluster
05:57 samppah deepakcs: i'd like to use glusterfs support in qemu
06:00 deepakcs samppah, yes.. that should be possible by using gluster storage domain
06:02 samppah deepakcs: that's great :)
06:08 deepakcs samppah, welcome :)
06:12 sgowda joined #gluster
06:14 sripathi joined #gluster
06:24 vpshastry joined #gluster
06:30 stickyboy joined #gluster
06:38 vimal joined #gluster
06:44 stickyboy Are there any tips to tune the FUSE client for higher-speed writes?
06:44 sahina joined #gluster
06:45 stickyboy I'm rsyncing a few dozen gigabytes (~10-20,000 files) into a FUSE mount over 1GbE and it's slow as molasses.
06:45 stickyboy The NFS client is much much faster, but the data has extended ACLs so I need to put them into FUSE.
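    (For context: carrying extended ACLs through the FUSE client means mounting with the acl option. A minimal sketch, with placeholder server/volume/mountpoint names:)
        # FUSE mount with POSIX ACL support (placeholder names)
        mount -t glusterfs -o acl server1:/homes /mnt/homes
        # equivalent /etc/fstab entry:
        #   server1:/homes  /mnt/homes  glusterfs  acl,_netdev  0 0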
06:46 satheesh1 joined #gluster
06:51 glusterbot New news from newglusterbugs: [Bug 924637] link file with NULL gfid pointing to a file with valid gfid <http://goo.gl/ZOSjh>
06:53 samppah stickyboy: what kind of data you are writing and what kind of speed you are seeing now?
06:57 hagarth joined #gluster
07:05 ngoswami joined #gluster
07:09 sgowda joined #gluster
07:10 jules_ joined #gluster
07:11 stickyboy samppah: I was rsyncing ~70 gigs of files.  Sizes varying from a few megs to a few hundred megs.
07:11 samppah oh, hmm..
07:11 stickyboy iftop on the backend shows less than 90 mbits.  Lemme look again.
07:12 samppah mbits? so that's around 9 mbytes/s? sounds very slow indeed
07:12 samppah what kind of backend and glusterfs setup you have??
07:13 stickyboy samppah: Two new servers with 12-disk RAID5 arrays.  Volume is composed of 2 bricks, one on each server.
07:13 samppah okay
07:13 samppah replica 2 i assume?
07:14 stickyboy samppah: Yeah, replica 2.
07:15 vpshastry joined #gluster
07:15 samppah stickyboy: what kind of performance you have on backend?
07:16 stickyboy samppah: Pretty high.  200+ MB/sec writes.
07:16 stickyboy I found that NFS was much better for migrating my data over (~2.5TB), but I need extended ACLs for some data, so I have to use FUSE for that.
07:17 ekuric joined #gluster
07:17 stickyboy I'm going to re-sync some data to measure FUSE performance over rsync again.  Or should I just try the venerable `dd if=/dev/zero of=/path/to/file bs=1M count=1024 oflag=direct`
07:18 samppah stickyboy: i think that rsync is good because you can compare it to nfs results
07:18 samppah but of course it would be good to do other performance tests as well
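    (A rough way to compare the two paths, assuming /mnt/fuse and /mnt/nfs are the FUSE and Gluster-NFS mounts of the same volume; a sketch, not a rigorous benchmark:)
        # sequential write; oflag=direct may not work on every FUSE setup, conv=fdatasync is an alternative
        dd if=/dev/zero of=/mnt/fuse/ddtest bs=1M count=1024 oflag=direct
        # many-small-file workload: time the same rsync against both mounts
        time rsync -a --stats /data/testset/ /mnt/fuse/testset/
        time rsync -a --stats /data/testset/ /mnt/nfs/testset/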
07:20 samppah stickyboy: does your raid controller have write-behind cache?
07:20 stickyboy samppah: I don't think so.  You mean battery-backed?
07:21 samppah stickyboy: or at all?
07:22 stickyboy Well I know it doesn't have BBU
07:22 stickyboy Lemme look at MegaCli...
07:23 stickyboy samppah: "Disk Cache Policy   : Disk's Default"
07:24 samppah sec.. i have to check megacli as well :)
07:24 samppah stickyboy: Current Cache Policy: WriteBack, ReadAdaptive, Cached, Write Cache OK if Bad BBU
07:25 stickyboy Ah yes
07:25 stickyboy Current Cache Policy: WriteThrough, ReadAhead, Direct, No Write Cache if Bad BBU
07:25 xavih joined #gluster
07:26 samppah stickyboy: hmm, i think it's something like this /opt/MegaRAID/MegaCli/MegaCli64 -ldsetprop wb -l1 -a0
07:26 samppah and this is needed if there is no battery /opt/MegaRAID/MegaCli/MegaCli64 -ldsetprop forcedwb -l1 -a0
07:26 samppah is it possible for you to test it?
07:26 ricky-ticky joined #gluster
07:27 stickyboy Sure, it's not in production yet.
07:29 stickyboy samppah: Ah, yes.  It forced writeback indeed.    Default Cache Policy: WriteBack, ReadAhead, Direct, Write Cache OK if Bad BBU
07:31 samppah stickyboy: i'm not sure if -ldsetprop cached is also needed.. can't remember right away what it means :)
07:31 samppah it may have been needed for cachecade
07:32 stickyboy samppah: Ah, to change Direct to Cached (like yours)?
07:33 samppah stickyboy: yeah
07:34 stickyboy samppah: Ah, now:  Default Cache Policy: WriteBack, ReadAhead, Cached, Write Cache OK if Bad BBU
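    (Pulling the MegaCli commands from this exchange into one place; the -l1 -a0 used above target one logical drive on one adapter, the -LAll/-aAll forms below hit everything. Verify against your own controller, and be aware of the no-BBU risk discussed below before forcing writeback:)
        MEGACLI=/opt/MegaRAID/MegaCli/MegaCli64
        # show current cache policy for all logical drives
        $MEGACLI -LDInfo -LAll -aAll | grep -i 'cache policy'
        # enable writeback; ForcedWB keeps it on even with a missing/bad BBU
        $MEGACLI -LDSetProp WB -LAll -aAll
        $MEGACLI -LDSetProp ForcedWB -LAll -aAll
        # optionally switch Direct -> Cached I/O as well
        $MEGACLI -LDSetProp Cached -LAll -aAll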
07:35 stickyboy I thought writeback was bad.  Or "dangerous"?
07:36 samppah yes.. it may cause data corruption if there's a power failure and data is still in wb cache
07:36 stickyboy Ok.
07:36 stickyboy I guess that's what the replica is for, eh?
07:37 stickyboy And tape backups. :P
07:37 samppah yeah.. it's somewhat safer as you are replicating data to two machines :)
07:37 samppah and remember backups ;)
07:38 stickyboy So essentially what I was doing before was writing to storage directly, and waiting for confirmation it was written before accepting more writes?
07:39 samppah yep
07:39 stickyboy Lemme enable writeback on my other replica.
07:43 stickyboy samppah: So you run all your controllers in writeback + caching mode?
07:44 hagarth joined #gluster
07:46 samppah stickyboy: most of them yes.. we also have battery backup on some of the controllers but definitely not all
07:47 stickyboy samppah: So this is probably a plus in the long run, but doesn't necessarily explain why FUSE is so slow for me :D
07:48 samppah stickyboy: i hope it helps a lot :)
07:48 samppah btw what glusterfs version you are using?
07:48 stickyboy samppah: Gluster 3.3.1 on CentOS 6.4
07:49 Nevan joined #gluster
07:52 samppah stickyboy: i have been testing 3.4 alpha2 as VM image storage.. for that purpose it seems to be much faster than 3.3
07:53 stickyboy Ah.
07:54 stickyboy samppah: I know 3.4 has some new qemu stuff, but not sure how it works.  Is it just FUSE mount like normal?
07:55 samppah stickyboy: i have been using mostly FUSE mount, but i'm not sure if it only improves in vm use or in general
07:55 stickyboy samppah: Ah.
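    (For reference, the 3.4 qemu integration is not a FUSE mount: a qemu built with glusterfs support talks to the volume directly over libgfapi using a gluster:// URL. A hedged sketch with placeholder host/volume names; the volume may also need options such as server.allow-insecure for non-root libgfapi clients:)
        # create an image directly on the volume (no FUSE mount involved)
        qemu-img create gluster://server1/vmstore/test.img 10G
        # boot a guest from it
        qemu-system-x86_64 -m 1024 -drive file=gluster://server1/vmstore/test.img,if=virtio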
07:56 stickyboy samppah: btw, are you using any performance.* options on your Gluster volumes?
07:59 guigui3 joined #gluster
08:01 andreask joined #gluster
08:01 vpshastry joined #gluster
08:01 samppah stickyboy: depends on use case.. on some volumes i have set performance.write-behind off and performance.client-io-threads on
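    (The corresponding volume-set commands, as a sketch with a placeholder volume name:)
        gluster volume set myvol performance.write-behind off
        gluster volume set myvol performance.client-io-threads on
        # inspect what is currently set
        gluster volume info myvol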
08:06 camel1cz joined #gluster
08:06 camel1cz left #gluster
08:07 ujjain joined #gluster
08:10 stickyboy samppah: What is the write-behind?  Is that like the RAID option?
08:11 cw joined #gluster
08:12 samppah stickyboy: afaik it combines small writes on client side before sending them to server and it should reduce network packets transferred between server and clients
08:15 rotbeard joined #gluster
08:22 stickyboy samppah: Ah.
08:22 stickyboy The documentation is quite lacking :P
08:22 stickyboy For those options.
08:29 stickyboy samppah: Maybe my problem could be solved with the write-behind
08:29 samppah stickyboy: did you do any tests after configuring raid controller?
08:30 stickyboy samppah: I'm re-syncing now.
08:30 samppah okay
08:31 stickyboy This particular subset of my data doesn't have large files per se.  Even over NFS it's actually only ~50mbits.
08:31 stickyboy Which makes me think grouping small writes on the client could be good.
08:41 stickyboy samppah: Presumably this is the amount of client-side writes to cache before writing?  performance.write-behind-window-size
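    (If write-behind stays enabled, that window is set the same way as the other options; the value below is only an example, the 3.3 default is on the order of 1MB:)
        gluster volume set myvol performance.write-behind-window-size 4MB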
09:20 _ilbot joined #gluster
09:20 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
09:28 test_ joined #gluster
09:36 puebele joined #gluster
09:44 stickyboy Man, NFS is fast as balls.
09:44 samppah stickyboy: still no luck with glusterfs? :(
09:45 stickyboy Hah, I thought I was on to something... but man, NFS is just crazy fast here.
09:46 test_ joined #gluster
09:48 Staples84 joined #gluster
09:48 stickyboy samppah: btw, this is Gluster's NFS.
09:48 xavih joined #gluster
09:55 stickyboy samppah: https://gist.github.com/alanorth/7e553cd1ee63f3bd75ee/raw/147f7e337666fa931df294d1d9375c0dc3ba9801/gistfile1.txt
09:55 glusterbot <http://goo.gl/bjYSN> (at gist.github.com)
09:55 stickyboy Check those stats
09:56 ricky-ticky joined #gluster
09:56 twx joined #gluster
09:57 eedfwchris joined #gluster
09:57 samppah stickyboy: nice :) there is 1 GigE betweeen client & servers?
09:57 eedfwchris I have a replicated volume in a "degraded" state. I can't get status because I think it's trying to poll the now online degraded server for volume info.
09:58 eedfwchris peer status returns only the new server, ip only
09:59 eedfwchris any idea what to do? volume is live
10:00 stickyboy samppah: Yeah, there's dedicated 1GbE switch between clients and servers.  But as you can see NFS is way faster. :P
10:00 stickyboy Sad
10:01 eedfwchris ah dammit i've had this happen before
10:01 eedfwchris okay… new server (web-3) says old server is already a part of a cluster, old server (web-4) probes web-3 no problem...
10:03 dobber joined #gluster
10:04 eedfwchris http://pastie.org/private/alp8abhofowodir1pfh3q
10:04 glusterbot Title: Private Paste - Pastie (at pastie.org)
10:06 maxiepax joined #gluster
10:08 vpshastry joined #gluster
10:08 eedfwchris following http://europe.gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server it seems peers is empty?
10:08 glusterbot <http://goo.gl/XL1pU> (at europe.gluster.org)
10:08 eedfwchris despite the uuid obviously being echoed by peer status
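    (Roughly, the gist of the replace-crashed-server page linked above is to give the rebuilt node its old UUID before re-probing. A sketch, assuming a stock 3.3 layout; the old UUID is the one the surviving node reports in `gluster peer status`:)
        # on the rebuilt node, after installing glusterfs
        service glusterd stop
        # edit /var/lib/glusterd/glusterd.info so it carries the dead node's old UUID, e.g.
        #   UUID=<old-uuid-from-surviving-peer>
        service glusterd start
        # re-establish the peering and pull the volume config from the survivor
        gluster peer probe <surviving-host>
        gluster volume sync <surviving-host> all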
10:08 eedfwchris ll
10:11 hateya joined #gluster
10:15 eedfwchris god i hate glusterfs
10:15 eedfwchris it's like a crapshoot what's going on
10:25 guigui1 joined #gluster
10:33 sahina joined #gluster
10:40 samppah eedfwchris: sorry to hear that.. i haven't hit such issues :(
10:40 samppah please stick around in case there is someone else who knows what to do
10:42 eedfwchris I got everything to talk to each other but now I can't trigger a heal (3.3)
10:42 eedfwchris just says operation failed
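    (For reference, the 3.3 self-heal commands look like this, with a placeholder volume name:)
        gluster volume heal myvol                    # heal entries that need it
        gluster volume heal myvol full               # crawl the whole volume and heal
        gluster volume heal myvol info               # list entries still pending heal
        gluster volume heal myvol info split-brain   # list split-brain entries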
10:43 jclift joined #gluster
10:44 sonne joined #gluster
10:45 eedfwchris looks like a lock is held
10:51 sgowda joined #gluster
10:52 glusterbot New news from newglusterbugs: [Bug 927616] root-squash: root-squashing does not get disabled dynamically <http://goo.gl/tZW0X>
10:52 eedfwchris well… hrm… i wonder if my heal is too "large"?
11:02 rotbeard joined #gluster
11:06 BSTR joined #gluster
11:06 tjikkun_work joined #gluster
11:12 GLHMarmot joined #gluster
11:14 plarsen joined #gluster
11:16 sgowda joined #gluster
11:20 shireesh joined #gluster
11:23 hagarth joined #gluster
11:36 satheesh joined #gluster
11:38 adriaaah joined #gluster
11:51 kkeithley1 joined #gluster
11:52 glusterbot New news from newglusterbugs: [Bug 927648] volume status command not providing host names for NFS and SHD <http://goo.gl/AoK0w>
11:56 andreask joined #gluster
11:57 hagarth joined #gluster
11:57 misuzu joined #gluster
11:58 brunoleon__ joined #gluster
11:59 stoile_ joined #gluster
12:00 twx_ joined #gluster
12:00 nhm_ joined #gluster
12:11 lalatenduM joined #gluster
12:23 hagarth joined #gluster
12:35 dustint joined #gluster
12:35 bennyturns joined #gluster
12:48 bala joined #gluster
12:51 awheeler_ joined #gluster
12:52 msmith_ joined #gluster
12:53 jskinner_ joined #gluster
12:55 robos joined #gluster
12:58 bala joined #gluster
12:58 mohankumar joined #gluster
13:06 hybrid512 joined #gluster
13:14 mynameisbruce joined #gluster
13:16 n8whnp joined #gluster
13:24 guigui3 joined #gluster
13:38 jruggiero left #gluster
13:45 rwheeler joined #gluster
14:00 rob__ joined #gluster
14:01 ctria joined #gluster
14:04 plarsen joined #gluster
14:09 sgowda joined #gluster
14:11 aliguori joined #gluster
14:15 NeatBasis joined #gluster
14:21 clag_ joined #gluster
14:24 zaitcev joined #gluster
14:26 rastar joined #gluster
14:27 vpshastry joined #gluster
14:28 piotrektt_ joined #gluster
14:32 bugs_ joined #gluster
14:36 hjmangalam joined #gluster
14:36 hjmangalam1 joined #gluster
14:46 vpshastry left #gluster
14:53 glusterbot New news from newglusterbugs: [Bug 918917] 3.4 Beta1 Tracker <http://goo.gl/xL9yF>
14:54 daMaestro joined #gluster
15:08 bitsweat joined #gluster
15:08 bitsweat left #gluster
15:10 saurabh joined #gluster
15:12 zetheroo joined #gluster
15:14 zetheroo we had a power outage over the weekend and one of the two KVM hosts configured with glusterfs replica 2 was not switching on initially. During this time the VMs on the host that was running were not able to start because the VM images (which were stored on the gluster volume) were not accessible
15:15 zetheroo the gluster was mounted on the host and I could see how much space was being used on the gluster but when I looked on the brick there were no files or folders
15:16 zetheroo so is the data on the brick unusable if the other brick on the other server is not online?
15:17 lh joined #gluster
15:17 lh joined #gluster
15:19 bala joined #gluster
15:19 luckybambu joined #gluster
15:22 lpabon joined #gluster
15:24 semiosis zetheroo: sounds like your brick wasnt mounted?
15:24 semiosis bricks are independent, and the idea of replication is that if one is down the other still works
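    (A quick sanity check for that situation, assuming the brick lives on its own filesystem; paths and volume name are placeholders:)
        # is the brick filesystem actually mounted, or are we looking at an empty mountpoint?
        mount | grep /export/brick1
        df -h /export/brick1
        # are the brick processes and NFS/self-heal daemons up?
        gluster volume status myvol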
15:25 zetheroo yeah - strange ... I had unmounted it and remounted manually and could still not see any data on the brick
15:25 zetheroo as soon as the second host was up and its gluster bricks mounted, all was good on both hosts
15:27 semiosis that doesnt sound right
15:28 zetheroo good to know :)
15:28 zetheroo maybe we should do another test with shutting down one host ... :P
15:29 zetheroo can't do it now ... but will have to try this at some point ...
15:38 ultrabizweb joined #gluster
15:44 eedfwchris joined #gluster
15:48 jag3773 joined #gluster
15:48 puebele1 joined #gluster
15:51 camel1cz joined #gluster
15:52 hagarth joined #gluster
16:01 glusterbot New news from resolvedglusterbugs: [Bug 923580] ufo: `swift-init all start` fails <http://goo.gl/F73bO>
16:03 jdarcy joined #gluster
16:05 plarsen joined #gluster
16:09 hjmangalam joined #gluster
16:09 hjmangalam1 joined #gluster
16:24 zetheroo left #gluster
16:28 sgowda joined #gluster
16:28 hjmangalam joined #gluster
16:28 hjmangalam1 joined #gluster
16:28 hateya joined #gluster
16:32 92AAAA2SQ joined #gluster
16:39 timothy joined #gluster
16:50 Rocky__ joined #gluster
16:51 puebele1 joined #gluster
16:51 camel1cz joined #gluster
16:53 rastar joined #gluster
17:02 venkatesh_ joined #gluster
17:06 nueces joined #gluster
17:07 vshankar joined #gluster
17:10 timothy joined #gluster
17:13 rastar joined #gluster
17:24 coredumb joined #gluster
17:29 bala joined #gluster
17:30 plarsen joined #gluster
17:31 premera joined #gluster
17:40 _br_ joined #gluster
17:44 theron joined #gluster
17:46 camel1cz left #gluster
17:48 _br_ joined #gluster
17:51 timothy joined #gluster
17:57 morse joined #gluster
17:59 tryggvil joined #gluster
18:01 rotbeard joined #gluster
18:02 stickyboy joined #gluster
18:05 venkatesh_ joined #gluster
18:06 Mo____ joined #gluster
18:07 disarone joined #gluster
18:19 georgeh|workstat joined #gluster
18:20 JoeJulian joined #gluster
18:25 isomorphic joined #gluster
18:36 ninkotech__ joined #gluster
18:56 JoeJulian @op
19:15 jclift @op
19:15 glusterbot jclift: Error: You don't have the #gluster,op capability. If you think that you should have this capability, be sure that you are identified before trying again. The 'whoami' command can tell you if you're identified.
19:15 jclift Heh, had to try. :)
19:15 JoeJulian hehe
19:16 ProT-0-TypE joined #gluster
19:16 JoeJulian I finally was able to register #glusterfs and set it to redirect people here.
19:18 stickyboy JoeJulian: Nice.
19:20 nueces joined #gluster
19:21 semiosis [471] semiosis #glusterfs Cannot join channel (+l) - channel is full, try again later
19:22 JoeJulian Had you left #gluster?
19:22 semiosis never!
19:22 JoeJulian I have joins and parts turned off...
19:22 JoeJulian Then that's expected.
19:23 JoeJulian hmm, still not working the way it says it's supposed to though...
19:24 semiosis i tried sending logstashbot in there but it said room is full too
19:29 piotrektt_ joined #gluster
19:31 Ryan_Lane joined #gluster
19:36 johnmark @channelstats
19:36 glusterbot johnmark: On #gluster there have been 104961 messages, containing 4531065 characters, 761237 words, 3111 smileys, and 392 frowns; 697 of those messages were ACTIONs. There have been 38098 joins, 1226 parts, 36910 quits, 15 kicks, 113 mode changes, and 5 topic changes. There are currently 188 users and the channel has peaked at 217 users.
19:36 johnmark woohoo
19:38 ProT-0-TypE joined #gluster
19:46 JoeJulian @op
19:48 JoeJulian mode
19:54 redirect_tester joined #gluster
19:56 JoeJulian Oh, that was interesting... I didn't notice that Ric was in the channel.
19:57 johnmark JoeJulian: he usually is
19:57 JoeJulian I mention him from time to time. He's never chimed in.
19:57 johnmark he's "here" but not actually paying much attention, I'm betting :)
19:58 JoeJulian I'm going to have to start making his notifications flash sometimes.
19:58 johnmark :)
19:59 stickyboy And send ^G to him too!
20:00 ramkrsna joined #gluster
20:08 redirect_tester left #gluster
20:08 redirect_tester joined #gluster
20:08 redirect_tester left #gluster
20:10 jag3773 joined #gluster
20:14 hateya joined #gluster
20:25 66MAAFUGT joined #gluster
20:25 hjmangalam joined #gluster
20:26 jdarcy joined #gluster
20:47 brunoleon joined #gluster
20:58 Mo___ joined #gluster
20:59 ramkrsna joined #gluster
21:18 ramkrsna joined #gluster
21:18 ramkrsna joined #gluster
21:22 msmith_ Running gluster 3.3.1 with XFS bricks.  Volume is setup 3x2 (2 copies).  volume is accessed via NFSv3 and is being used for dovecot mail storage.  I keep seeing various files go "Remote I/O error".
21:22 msmith_ The gluster nfs.log is reporting "[2013-03-25 15:58:31.277100] E [dht-helper.c:652:dht_migration_complete_check_task] 0-mail-dht: /mail/72/1000001650/index/mailboxes/INBOX/dovecot.index.log: failed to get the 'linkto' xattr Permission denied"
21:22 msmith_ ls in the directory shows the file with all ?.  -????????? ? ?                ?      ?            ? /mail/72/1000001650/index/mailboxes/INBOX/dovecot.index.log
21:23 msmith_ gfid is the same on the bricks that actually hold the file.  What else should I look at, or how do I fix?
21:25 JoeJulian Can you fpaste ~20ish lines either side of that error?
21:28 JoeJulian btw... If *I* were doing a dovecot deployment requiring any degree of scale, I would probably use the object storage plugin with ufo
21:28 msmith_ http://fpaste.org/NdNL/
21:28 glusterbot Title: Viewing Gluster nfs.log by msmith (at fpaste.org)
21:31 JoeJulian msmith_: Does "/mail/72/1000001650/index/mail​boxes/INBOX/dovecot.index.log" exist on the first or second brick?
21:32 JoeJulian Assuming it does, can you fpaste "getfattr -m . -d -e hex $brickpath/mail/72/1000001650/index/mailboxes/INBOX/dovecot.index.log"
21:33 msmith_ it does...
21:34 msmith_ trusted.gfid=0x1e898dfe3b3c4faa81ac55b2f0392f36
21:34 msmith_ trusted.glusterfs.dht.linkto=0x6d61696c2d7265706c69636174652d3100
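    (That linkto value is a hex-encoded, NUL-terminated string naming the subvolume the file actually lives on; it can be decoded with xxd, for example:)
        echo 6d61696c2d7265706c69636174652d3100 | xxd -r -p
        # -> mail-replicate-1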
21:34 tryggvil joined #gluster
21:35 JoeJulian Hmm, anything in the brick logs on those two servers about that file?
21:36 msmith_ the gfid is the same on brick1, brick2, brick3 and brick4.  not on brick5 or brick6.  the gfid is the same on brick1 thru brick4
21:36 JoeJulian Ew.
21:36 msmith_ brick1 and brick2 are 0 byte and have file perms of "---------T"
21:37 JoeJulian As they should.
21:38 msmith_ [2013-03-26 16:36:21.241302] I [server3_1-fops.c:823:server_getxattr_cbk] 0-mail-server: 52915974: GETXATTR /texas.net/72/1000001650/index/mailboxes/INBOX/dovecot.index.log (trusted.glusterfs.dht.linkto) ==> -1 (Permission denied)
21:39 JoeJulian That's the brick log?
21:39 JoeJulian hmm
21:39 msmith_ that's on both brick1 and brick2
21:40 JoeJulian selinux?
21:40 msmith_ disabled
21:47 msmith_ btw, didn't think dovecot had an object storage plugin for the free release?
21:48 JoeJulian Not sure.
21:48 JoeJulian I'm still using cyrus imap.
21:49 msmith_ I think you only get that with the $13K/yr subscription.
21:49 JoeJulian holy cow...
21:50 * JoeJulian is in the wrong business.
21:52 msmith_ I would love to get the storage working with gluster.  execs are starting to look at dropping a netapp in though, if I cant get these errors fixed.
21:52 JoeJulian Wait... they're willing to pay for netapp, but not dovecot? :P
21:53 msmith_ very used netapp, much cheaper than dovecot
21:54 ctria joined #gluster
21:55 msmith_ any other thoughts about this problem?
21:57 JoeJulian Yes... but I'm googling some things to try to clarify them. Hang in there. I have a few ideas.
21:58 _br_ joined #gluster
21:59 msmith_ hanging...
22:05 _br_ joined #gluster
22:08 JoeJulian There are 2 things that can return EPERM when trying to read xattrs as root (as far as I can tell). The immutable or append only bits being set. I'm not sure if that's the cause, but that's the only thing I've found so far.
22:09 nueces joined #gluster
22:10 msmith_ doesn't look like the immutable is set, based on lsattr.  It doesn't have any special attr's set
22:11 msmith_ lsattr shows --------------- /gluster/mail/72/1000001650/index/mailboxes/INBOX/dovecot.index.log
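    (For completeness: the bits JoeJulian mentions show up in lsattr as 'i' (immutable) and 'a' (append-only), and could be cleared with chattr on the brick itself. A sketch using the path from above:)
        f=/gluster/mail/72/1000001650/index/mailboxes/INBOX/dovecot.index.log
        lsattr "$f"        # look for 'i' or 'a' in the flags column
        chattr -i -a "$f"  # clear them if set (run against the brick, not through the mount)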
22:11 JoeJulian now... but is it a transient state?
22:14 msmith_ stat says the file hasn't been touched (access/modify/change) since 9:05am yesterday.  not sure if setfattr updates the times
22:14 msmith_ but the logs are still flooded with the perm denied errors
22:19 ramkrsna joined #gluster
22:23 JoeJulian What version of dovecot?
22:23 msmith_ ok, setfattr will update the change timestamp, access and modify stay the same.
22:24 JoeJulian When gluster's setting the attributes, though, that's all being managed. We admins don't want change timestamps changing.
22:24 msmith_ dovecot 2.2.rc3.  this was a problem with 2.0.9 as well
22:25 manik joined #gluster
22:26 JoeJulian which storage backend?
22:27 msmith_ we're using dovecot proxy to help ensure that a mailbox is only accessed from a single nfs client.  mdbox
22:31 JoeJulian Yeah, but you're also doing stats asynchronously through the kernel's fscache so I don't know how the state flushes work.
22:32 JoeJulian Have you checked to see if this is a problem through a fuse mount?
22:33 msmith_ not recently, since gluster 3.2.7.  but we were having non-stop split brain on that one.  Then we discovered that NFS was 2-3 times faster than fuse.
22:34 red_solar joined #gluster
22:35 msmith_ not to mention much less memory overhead
22:37 msmith_ we also kept running into issues with gluster fuse wanting to use the pop3s port
22:37 JoeJulian That's fixed using portreserve
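    (The portreserve fix amounts to reserving the service port before the gluster clients start, so their low-port scan skips it. A rough sketch; the config layout is an assumption, check portreserve(1) on your distribution:)
        # a file under /etc/portreserve/ holding one service name or port per line
        echo pop3s > /etc/portreserve/glusterfs-reserved
        service portreserve restart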
22:40 msmith_ what should the expected memory usage be for a 6 brick (3x2) layout 22TB cluster?  The servers have 12G of ram and we would have boxes start going OOM after 1 week.
22:41 JoeJulian I have 60 bricks on a 16gig server, though I set performance.cache-size=8MB
22:41 JoeJulian Default's 32MB, iirc.
22:42 JoeJulian That cache size is used for several different caches, so I think it's a multiple of that per brick.
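    (The knob mentioned there, as a one-line sketch with a placeholder volume name; as JoeJulian notes, it feeds several caches, so the effective footprint is some multiple of it per brick:)
        gluster volume set myvol performance.cache-size 8MB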
22:46 msmith_ it will probably take me a couple hours to switch to test fuse.  I'll likely need to move the host doing the nfs mount out of its VM and back to bare metal (for cpu and memory stress).
22:57 JoeJulian Well crap.. that line of thinking didn't pan out.
22:59 JoeJulian msmith_: I would strongly suggest you file a bug report. Include the nfs log and brick log(s). Include the 'getfattr -m . -d -e hex' of that file on all bricks. Include a 'getfattr -m . -d -e hex' of the parent directory of that index file.
22:59 glusterbot http://goo.gl/UUuCq
23:01 JoeJulian msmith_: If you can, you could also grab an strace of dovecot around that error happening.
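    (A sketch of collecting what's being asked for; brick root and the dovecot PID are placeholders:)
        BRICK=/export/brick1
        f=/mail/72/1000001650/index/mailboxes/INBOX/dovecot.index.log
        # on every brick server: xattrs of the file and its parent directory
        getfattr -m . -d -e hex "$BRICK$f"
        getfattr -m . -d -e hex "$BRICK$(dirname "$f")"
        # on the NFS client, while the error is reproducing
        strace -f -tt -o /tmp/dovecot.strace -p <dovecot-pid>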
23:05 zeedon am I meant to be able to see link targets if I am doing ls -l in the .glusterfs directory?
23:06 JoeJulian zeedon: http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
23:06 glusterbot <http://goo.gl/j981n> (at joejulian.name)
23:06 zeedon ive got a bunch of no active sinks for performing self-heal on file showing up in the glustershd log file
23:06 zeedon and I am trying to identify the files
23:06 zeedon I was actually just reading that page using getfattr from the bricks works fine
23:07 zeedon but when i try to
23:07 zeedon # ls -l /var/spool/glusterfs/b_home/.glusterfs/4b/fc/4bfc7da6-9000-4fe4-b54e-b0399984b712
23:07 zeedon lrwxrwxrwx 2 root root 22 Sep 21 09:42 /data/glusterfs/b_home/.glusterfs/4b/fc/4bfc7da6-9000-4fe4-b54e-b0399984b712 -> .fedora-server-ca.cert
23:07 zeedon that will not show me any link information though
23:07 zeedon for any files in .glusterfs
23:08 JoeJulian Which means that inode's probably a symlink.
23:08 JoeJulian "ls -li /var/spool/glusterfs/b_home/.glusterfs/4b​/fc/4bfc7da6-9000-4fe4-b54e-b0399984b712" to get the inode, then "find -inum " the inode number you just found.
23:11 zeedon just seems to point back to the same file
23:12 zeedon never mind
23:12 zeedon got it
23:13 zeedon heh
23:13 zeedon thanks.
23:13 JoeJulian That's a hardlink. Two directory entries that share an inode.
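    (Putting JoeJulian's two steps together, a small sketch for resolving a .glusterfs gfid file back to its real path on the brick; the brick root is taken from the exchange above and is otherwise a placeholder:)
        brick=/var/spool/glusterfs/b_home
        gfid_file=$brick/.glusterfs/4b/fc/4bfc7da6-9000-4fe4-b54e-b0399984b712
        # first column of ls -li is the inode number shared by the hardlinks
        inode=$(ls -li "$gfid_file" | awk '{print $1}')
        # find the other directory entry, skipping the .glusterfs tree itself
        find "$brick" -inum "$inode" -not -path '*/.glusterfs/*'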
23:15 ProT-0-TypE joined #gluster
23:26 zeedon yeah I found it. so I have a quite simple 2-brick replicated setup and about 9 files showing that error "no active sinks for performing self-heal on file". the file seems to exist on both bricks and is accessible from both fuse mounts. any ideas?
23:29 JoeJulian fpaste.org the ,,(extended attributes) from an example file that's producing that error from both bricks.
23:30 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
23:31 zeedon http://fpaste.org/bc07/
23:31 glusterbot Title: Viewing Paste #287862 (at fpaste.org)
23:33 rubbs joined #gluster
23:50 yinyin joined #gluster
23:50 dustint joined #gluster
23:52 lh joined #gluster
23:52 lh joined #gluster
