
IRC log for #gluster, 2013-09-04


All times shown according to UTC.

Time Nick Message
00:19 glusterbot New news from newglusterbugs: [Bug 1004100] mbd crashes in libglusterfs under heavy load <http://goo.gl/XRcTC1>
00:39 StarBeast joined #gluster
00:49 glusterbot New news from newglusterbugs: [Bug 1004100] smbd crashes in libglusterfs under heavy load <http://goo.gl/XRcTC1>
00:54 yongtaof joined #gluster
00:57 dmojoryder is there a downside to using fopen-keep-cache when mounting glusterfs (e.g. possible stale cache)? Seems to really improve perf (and reduce network bandwidth) when repeatedly accessing files over gluster. Kinda almost seems it should be enabled by default
01:02 a2_ dmojoryder, there was an upstream patch in fuse kernel module which purged a cache on detecting an mtime bump, which was submitted by Brian Foster. Not having that patch can be dangerous and serve stale data. If you are sure your distro has the patch, you can turn on fopen-keep-cache flag safely
01:02 a2_ upstream kernel commit id eed2179efe1aac145bf6d54b925b750976380fa6
01:03 dmojoryder a2_: thanks!
01:03 a2_ oh wait, i see that brian's patch adds FUSE_AUTO_INVAL_DATA flag.. hmm, we could detect this in fuse-bridge and turn on fopen-keep-cache safely
01:04 a2_ *safely automatically
01:04 a2_ dmojoryder, do you have perf numbers for comparison of how much improvement you get by turning on the cache?
01:05 dmojoryder a2_: I could get some. It was in an admittedly synthetic test requesting the same file repeatedly, but w/o that flag I could peg the cpu in iowait, and with it the iowait disappeared
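
For reference, the option under discussion is passed at mount time; a minimal sketch, with server and volume names as placeholders (only safe if the client kernel's fuse module carries the FUSE_AUTO_INVAL_DATA patch mentioned above):

    mount -t glusterfs -o fopen-keep-cache server1:/testvol /mnt/testvol
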
01:07 robo joined #gluster
01:07 yongtaof how to change file mode in xlator?
01:08 yongtaof for example if i try to open a file and if the file has sticky permission I want to remove the sticky permission and change it to normal permission like rwxrwxrwx?
01:11 plarsen joined #gluster
01:12 dalekurt joined #gluster
01:20 yongtaof any one know how to use io-stats-dump
01:20 yongtaof ?
01:25 kevein joined #gluster
01:30 a2_ yongtaof, you will need to write a new translator for changing permissions that way
01:40 a2_ dmojoryder, http://review.gluster.org/5770
01:40 glusterbot Title: Gerrit Code Review (at review.gluster.org)
01:40 yongtaof a2_, thank you yes but I don't know how to implement the new xlator
01:42 yongtaof what I want to do is change the file permission before opening the file, and I want to add a client-side xlator for that
02:04 bulde joined #gluster
02:12 hchiramm_ joined #gluster
02:42 jporterfield joined #gluster
02:56 lalatenduM joined #gluster
02:59 jporterfield joined #gluster
03:02 kshlm joined #gluster
03:04 saurabh joined #gluster
03:09 sprachgenerator joined #gluster
03:09 bharata-rao joined #gluster
03:20 rjoseph joined #gluster
03:23 shubhendu joined #gluster
03:24 asias joined #gluster
03:26 jporterfield joined #gluster
03:30 nonsenso joined #gluster
03:38 nightwalk joined #gluster
03:39 itisravi joined #gluster
03:46 jag3773 joined #gluster
03:55 RameshN joined #gluster
03:57 ccha3 joined #gluster
03:57 jporterfield joined #gluster
03:57 glusterbot` joined #gluster
03:59 morsik_ joined #gluster
03:59 bivak joined #gluster
04:03 dalekurt_ joined #gluster
04:04 kanagaraj joined #gluster
04:11 shylesh joined #gluster
04:11 vshankar joined #gluster
04:13 bulde joined #gluster
04:18 jbrooks joined #gluster
04:19 dblack joined #gluster
04:19 kkeithley joined #gluster
04:27 saurabh joined #gluster
04:29 dusmant joined #gluster
04:33 spandit joined #gluster
04:35 jporterfield joined #gluster
04:37 ppai joined #gluster
04:42 lalatenduM joined #gluster
04:43 jporterfield joined #gluster
04:44 lalatenduM joined #gluster
04:49 bala joined #gluster
04:58 bharata-rao joined #gluster
05:02 mohankumar joined #gluster
05:03 CheRi_ joined #gluster
05:10 ajha joined #gluster
05:15 satheesh joined #gluster
05:20 raghu joined #gluster
05:24 psharma joined #gluster
05:27 bulde joined #gluster
05:28 sgowda joined #gluster
05:30 bulde joined #gluster
05:31 shastri joined #gluster
05:36 mjrosenb ok, I asked this in the past: I want to change the datastore for a brick, I presumably want to use rsync -a -X --what-else cur_loc new_loc.
05:36 mjrosenb oh, something with hardlinks, I bet.
05:37 mjrosenb -H
05:37 mjrosenb well, I can always rsync again to fix it, right? :-p
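
Spelled out, the rsync flags being reasoned about above look roughly like this; the paths are placeholders, and the copy should be done with the brick out of service:

    # -a archive, -H preserve hardlinks (the .glusterfs gfid links need this),
    # -A preserve ACLs, -X preserve extended attributes (trusted.gfid etc.)
    rsync -aHAX /old/brick/path/ /new/brick/path/
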
05:39 RedShift joined #gluster
05:39 vpshastry1 joined #gluster
05:41 hagarth joined #gluster
05:47 nshaikh joined #gluster
05:49 ababu joined #gluster
05:56 yongtaof joined #gluster
06:01 psharma joined #gluster
06:01 yongtaof joined #gluster
06:05 vshankar joined #gluster
06:06 jikz joined #gluster
06:07 jiku joined #gluster
06:10 yongtaof joined #gluster
06:11 nshaikh joined #gluster
06:11 shruti joined #gluster
06:18 shubhendu joined #gluster
06:20 glusterbot New news from newglusterbugs: [Bug 1002907] changelog binary parser not working <http://goo.gl/UB57mL>
06:21 ndarshan joined #gluster
06:22 yongtaof joined #gluster
06:24 yongtaof joined #gluster
06:24 dusmant joined #gluster
06:26 jtux joined #gluster
06:28 anands joined #gluster
06:28 yongtaof joined #gluster
06:31 yongtaof joined #gluster
06:36 yongtaof joined #gluster
06:40 tziOm joined #gluster
06:42 davinder joined #gluster
06:50 satheesh joined #gluster
06:55 ngoswami joined #gluster
06:56 vimal joined #gluster
07:00 semiosis joined #gluster
07:01 yongtaof joined #gluster
07:02 ricky-ticky joined #gluster
07:03 vpshastry1 joined #gluster
07:03 dusmant joined #gluster
07:05 hchiramm_ joined #gluster
07:10 yongtaof a2_, I find there's no .chmod defined in xlator.h
07:10 yongtaof how to trigger chmod in .open?
07:12 rastar joined #gluster
07:17 bala joined #gluster
07:23 ricky-ticky joined #gluster
07:23 dusmant joined #gluster
07:25 yongtaof joined #gluster
07:26 theron joined #gluster
07:39 ctria joined #gluster
07:45 ricky-ticky joined #gluster
07:48 shubhendu joined #gluster
07:50 ricky-ticky joined #gluster
07:50 bharata-rao joined #gluster
07:51 DV__ joined #gluster
07:51 eseyman joined #gluster
07:57 yongtaof joined #gluster
07:58 ricky-ticky joined #gluster
08:02 yongtaof seems chmod called fsetattr
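
A quick way to confirm this from a glusterfs source checkout (path assumes the upstream tree layout): the fop table in xlator.h has setattr/fsetattr entries but no chmod, so mode changes reach an xlator through those hooks:

    grep -n 'setattr\|chmod' libglusterfs/src/xlator.h
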
08:08 vpshastry1 joined #gluster
08:09 jcsp joined #gluster
08:09 kkeithley joined #gluster
08:11 mattf joined #gluster
08:12 satheesh1 joined #gluster
08:14 portante joined #gluster
08:15 Humble joined #gluster
08:25 dblack joined #gluster
08:26 jporterfield joined #gluster
08:26 vpshastry joined #gluster
08:28 bfoster joined #gluster
08:29 mattf joined #gluster
08:35 StarBeast joined #gluster
08:52 dusmant joined #gluster
08:55 vincent_vdk joined #gluster
08:56 bharata-rao joined #gluster
09:04 meghanam joined #gluster
09:04 meghanam_ joined #gluster
09:09 jtux joined #gluster
09:10 ricky-ticky joined #gluster
09:11 edward1 joined #gluster
09:31 jtux joined #gluster
09:32 Humble joined #gluster
09:33 ricky-ticky joined #gluster
09:47 tryggvil joined #gluster
09:50 yongtaof joined #gluster
09:54 yongtaof joined #gluster
09:56 mgebbe joined #gluster
09:57 jurrien_ joined #gluster
10:02 yongtaof joined #gluster
10:04 davinder joined #gluster
10:13 Humble joined #gluster
10:15 yongtaof joined #gluster
10:16 RameshN joined #gluster
10:27 mooperd_ joined #gluster
10:34 johnmwilliams joined #gluster
10:36 dusmant joined #gluster
10:37 spresser joined #gluster
10:39 rastar_ joined #gluster
10:40 mooperd_ joined #gluster
10:41 vpshastry joined #gluster
11:11 failshell joined #gluster
11:13 Humble joined #gluster
11:23 mohankumar joined #gluster
11:24 hagarth joined #gluster
11:25 nonsenso joined #gluster
11:27 ppai joined #gluster
11:33 Humble joined #gluster
11:50 kkeithley joined #gluster
11:56 aib_007 joined #gluster
11:57 an joined #gluster
11:59 Humble joined #gluster
12:00 bennyturns joined #gluster
12:03 mooperd_ joined #gluster
12:06 masterzen joined #gluster
12:09 hybrid512 joined #gluster
12:15 chirino joined #gluster
12:17 eseyman joined #gluster
12:19 rcheleguini joined #gluster
12:26 jclift joined #gluster
12:43 ppai joined #gluster
12:47 DV__ joined #gluster
12:52 bennyturns joined #gluster
12:53 awheeler joined #gluster
12:54 awheeler joined #gluster
12:55 B21956 joined #gluster
12:58 ndarshan joined #gluster
13:02 samsamm joined #gluster
13:06 hchiramm_ joined #gluster
13:07 bulde joined #gluster
13:15 robo joined #gluster
13:18 robo joined #gluster
13:22 glusterbot New news from newglusterbugs: [Bug 1002556] running add-brick then remove-brick, then restarting gluster leads to broken volume brick counts <http://goo.gl/YqOYSj>
13:24 tqrst I was hoping for new new news
13:25 sprachgenerator joined #gluster
13:27 dusmant joined #gluster
13:30 andreask joined #gluster
13:31 jkroon joined #gluster
13:32 jkroon hi guys, just wondering whether it's a known issue that certain operations are almost 200 times slower than on native file systems?
13:33 jkroon for example, an ls somenameprefix.* takes around 0.9s on gluster, and 0.005s directly on an ext4 equivalent.
13:33 jkroon where somenameprefix.* matches exactly one file
13:33 jkroon I'm assuming bash uses glob(3), which uses readdir underneath, which is what's causing the slowness.
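
One way to confirm that readdir is the cost, sketched with placeholder filenames: a glob has to list the whole directory, while a direct lookup of the exact name is a single LOOKUP from the client:

    time bash -c 'ls somenameprefix.*'    # forces a full readdir of the directory
    time stat somenameprefix.exact-name   # substitute the one file the glob matches
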
13:34 mooperd_ joined #gluster
13:38 vpshastry left #gluster
13:39 hagarth joined #gluster
13:40 robo joined #gluster
13:41 kkeithley jkroon: what version? native or nfs? what's an ext4 equivalent; is it really just ext4? what linux distro might be good to know too.
13:49 manik joined #gluster
13:50 jkroon native
13:51 jkroon well, badly stated at ext4, I've got an ext4 filesystem on a local disk.
13:52 jkroon the glusterfs filesystem also sits on an ext4 brick.  different server at this stage
13:52 plarsen joined #gluster
13:53 bugs_ joined #gluster
13:53 bala joined #gluster
13:55 an joined #gluster
13:56 bala1 joined #gluster
13:57 jtux joined #gluster
13:57 kkeithley jkroon: what version of glusterfs?
14:07 jkroon 3.3.1
14:08 jkroon just noticed the kernel on the one client is pretty old @ 2.6.37
14:08 jkroon server at least is at 3.7.3
14:08 jkroon if that makes a difference.
14:08 jkroon there are around 6500 files in the folder - if that makes a difference.
14:11 sgowda joined #gluster
14:14 dmojoryder I am getting "unable to self-heal contents of '/' (possible split brain)". Doubt I can just delete that on one of the replicas given it's the root dir. Is there another way to tell gluster to treat one or the other replica as 'correct' and use that?
14:15 lpabon joined #gluster
14:18 saurabh joined #gluster
14:18 jbrooks joined #gluster
14:26 wushudoin joined #gluster
14:37 robo joined #gluster
14:43 nshaikh joined #gluster
14:56 satheesh joined #gluster
14:59 kaptk2 joined #gluster
15:02 dmojoryder With a conflict on / in a replicated setup, can I just stop glusterd on one of the replicas reporting the conflict, clear out the dir, restart glusterd, and have it self heal?
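
For reference, a sketch of the usual inspection for a directory split-brain, with the volume name and brick path as placeholders; which xattr to reset depends on which copy blames which, so check the split-brain documentation for your release before changing anything:

    # on each server, look at the AFR changelog xattrs on the brick root
    getfattr -d -m trusted.afr -e hex /export/brick1
    # on the copy you have decided is stale, zero the counter it holds against
    # the good copy so the heal runs from good to stale, then trigger a heal
    setfattr -n trusted.afr.myvol-client-0 -v 0x000000000000000000000000 /export/brick1
    gluster volume heal myvol
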
15:10 jkroon kkeithley, any additional hints?
15:11 kkeithley Are you able to give 3.4.0 a try? IIRC there are performance improvements in readdir. (And 3.4 is compatible with 3.3 so a server upgrade should be transparent)
15:12 jkroon ok, will test compatibility in a test env first and then look at it again.
15:12 jkroon is there a "minimum recommended kernel version" somewhere?
15:14 kkeithley The ext4 readdir bug is fixed in 3.4.0, so the kernel version shouldn't matter
15:15 LoudNoises joined #gluster
15:19 zerick joined #gluster
15:22 Humble joined #gluster
15:23 ddp23 joined #gluster
15:24 ddp23 Hi, I'm having trouble accessing gluster nfs from another subnet via nat. I get: "mount.nfs: access denied by server while mounting" for mount.nfs -o vers=3,mountproto=tcp host:vol /dir
15:24 tryggvil joined #gluster
15:24 wushudoin left #gluster
15:24 ddp23 anyone have any ideas what to tweak please?
15:25 ddp23 tried nfs.rpc-auth-allow and nfs.addr-namelookup settings without success. Nothing in the gluster logs...
15:27 ddp23 from a VM within the same subnet this just works...
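
A sketch of the knobs usually involved here, with the volume name as a placeholder; the allow list must match the client address as the server sees it after NAT, and NAT frequently rewrites the source port above 1023, which Gluster's NFS server rejects unless told otherwise:

    gluster volume set myvol nfs.addr-namelookup off
    gluster volume set myvol nfs.rpc-auth-allow '*'    # tighten once the mount works
    gluster volume set myvol nfs.ports-insecure on     # allow non-privileged source ports
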
15:30 plarsen joined #gluster
15:33 jbrooks Hey guys, I'm working on a howto about using gluster with ovirt, and I'm wondering whether, in a "get it up and running to kick the tires" install,  it's crucial to include all the business about creating a new xfs partition vs. just using a dir on the existing filesystem for your brick
15:34 tryggvil joined #gluster
15:37 aliguori joined #gluster
15:39 jag3773 joined #gluster
15:41 torbjorn___ joined #gluster
15:43 kkeithley just to kick the tires? Go ahead and use a dir on the existing file system. Just don't go writing blog articles about poor benchmark results.
15:47 jbrooks kkeithley, Yeah, cool, thanks
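
For comparison, a sketch of both paths, with device and brick paths as placeholders (newer releases may ask for an explicit "force" if the brick sits on the root filesystem):

    # the usual recommendation for a real deployment: a dedicated XFS brick
    mkfs.xfs -i size=512 /dev/sdb
    mkdir -p /export/brick1 && mount /dev/sdb /export/brick1

    # for tire-kicking, a directory on the existing filesystem works
    mkdir -p /export/brick1
    gluster volume create testvol ovirt-host:/export/brick1
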
15:47 Humble joined #gluster
15:49 torbjorn___ hey .. I'm having some problems connecting to my Gluster servers from some new clients. I'm seeing a little bit of network traffic, then the client process waits forever on an epoll_wait(3 system call
15:50 torbjorn___ I've tried quite a few Gluster and kernel versions, but pretty much the same results on everything
15:50 dmojoryder to address the split brain I was seeing I identified the replicas involved (replica 2), stopped glusterd on one, wiped it clean and then restarted glusterd. However I still see the split brain error for that replica in the client logs. Any ideas?
15:50 sprachgenerator joined #gluster
15:50 torbjorn___ The working clients are on kernel 2.6.something, which won't work for hardware reasons on my new clients
15:53 torbjorn___ No error messages from the client, it just sits there .. the mount directory is inaccessible, i.e. it hangs everything that touches it
15:53 torbjorn___ using ctrl-c on the foreground client frees those up, though
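
The usual first checks for a client that hangs like this, with hostnames and the mount point as placeholders: the client log is named after the mount point, and the client needs both the glusterd port and the brick ports reachable:

    tail -f /var/log/glusterfs/mnt-gluster.log   # for a mount at /mnt/gluster
    nc -zv server1 24007                         # glusterd / volfile fetch
    nc -zv server1 24009                         # brick ports: 24009+ on 3.3, 49152+ on 3.4
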
15:54 andreask joined #gluster
15:58 Humble joined #gluster
16:10 jkroon kkeithley, there was an ext4 readdir bug?  I assume that could have caused a lot of havoc :)
16:12 kkeithley yes, there was; and yes, it did
16:20 ProT-0-TypE joined #gluster
16:21 ProT-0-TypE joined #gluster
16:21 ProT-0-TypE joined #gluster
16:22 ProT-0-TypE joined #gluster
16:23 ProT-0-TypE joined #gluster
16:32 TuxedoMan joined #gluster
16:37 Technicool joined #gluster
16:38 manik joined #gluster
16:39 kPb_in_ joined #gluster
16:41 Mo__ joined #gluster
16:46 mooperd_ joined #gluster
16:59 compbio sbenz: /data/GrayCellLineProject/RNAseq/fastq/HCC1143_42JVHAAXX_2_*
16:59 compbio sorry, wrong channel
17:01 an joined #gluster
17:06 Mo__ joined #gluster
17:07 plarsen joined #gluster
17:08 bulde joined #gluster
17:09 dusmant joined #gluster
17:11 _pol joined #gluster
17:19 lalatenduM joined #gluster
17:26 daMaestro joined #gluster
17:32 plarsen joined #gluster
17:35 mooperd_ joined #gluster
18:01 t35t0r joined #gluster
18:01 t35t0r joined #gluster
18:11 ThatGraemeGuy joined #gluster
18:16 tziOm joined #gluster
18:18 vpshastry joined #gluster
18:25 vpshastry joined #gluster
18:26 zaitcev joined #gluster
18:28 vpshastry1 joined #gluster
18:38 andreask joined #gluster
18:40 vpshastry joined #gluster
18:48 vpshastry1 joined #gluster
18:51 vpshastry1 left #gluster
18:59 sprachgenerator joined #gluster
18:59 robo joined #gluster
19:04 aliguori joined #gluster
19:09 plarsen joined #gluster
19:10 bennyturns joined #gluster
19:18 fyxim joined #gluster
19:21 manik joined #gluster
19:21 manik joined #gluster
19:27 robo joined #gluster
19:28 theron joined #gluster
19:38 failshel_ joined #gluster
19:42 P0w3r3d joined #gluster
19:53 glusterbot New news from newglusterbugs: [Bug 990330] geo-replication fails for longer fqdn's <http://goo.gl/X4adNQ>
19:54 an__ joined #gluster
19:56 failshell joined #gluster
20:13 tryggvil joined #gluster
20:15 c_layton joined #gluster
20:16 _pol joined #gluster
20:17 awheele__ joined #gluster
20:23 glusterbot New news from newglusterbugs: [Bug 1004519] SMB:smbd crashes while doing volume operations <http://goo.gl/DMsNHh> || [Bug 986429] Backupvolfile server option should work internal to GlusterFS framework <http://goo.gl/xSA6n>
20:40 chirino_m joined #gluster
20:44 badone joined #gluster
21:07 ahomolya joined #gluster
21:10 aliguori joined #gluster
21:17 sprachgenerator joined #gluster
21:31 theron_ joined #gluster
21:47 realdannys joined #gluster
21:48 realdannys Hi guys, could someone tell me the best way to upgrade GlusterFS 3.3 on my CentOS 6 installation to 3.4? I have the 3.3 repos added, which are giving me HTTP errors when I run yum update now. Is there an easy way to remove them and replace with the 3.4 repo and then update gluster hassle-free?
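
A sketch of the usual swap, assuming the repo file name used by the Gluster packages for EL6; the exact download.gluster.org path for the 3.4 .repo file has moved between releases, so browse the site for it rather than trusting an old URL, and upgrade servers before clients:

    rm /etc/yum.repos.d/glusterfs-epel.repo    # drop the dead 3.3 repo file
    # place the 3.4 glusterfs-epel.repo for EL6 in /etc/yum.repos.d/, then:
    yum clean metadata
    yum update glusterfs\*
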
21:53 glusterbot New news from newglusterbugs: [Bug 1004546] peer probe can deadlock in "Sent and Received peer request" for both servers after server build <http://goo.gl/2RhTva>
21:57 tjstansell JoeJulian: there's the bug ... hopefully someone can make sense of it.
21:57 andrewklau joined #gluster
21:57 tjstansell turns out it's not related to the initial RJT as I initially thought...
21:58 theron__ joined #gluster
21:59 andrewklau Hi, quick question. I have a 2 brick replica, if I reinstall the OS of one node but keep the brick intact what's the best way to rejoin the volume
22:00 tjstansell andrewklau: restore the uuid in /var/lib/glusterd/glusterd.info from before so the uuid is the same ... then just peer probe the other node and it *should* work.  but i just filed bug 1004546 about peer probe issues around that.
22:00 glusterbot Bug http://goo.gl/2RhTva unspecified, unspecified, ---, kparthas, NEW , peer probe can deadlock in "Sent and Received peer request" for both servers after server build
22:01 andrewklau tjstansell: Cheers
22:02 tjstansell we modelled it after this doc: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server
22:02 glusterbot <http://goo.gl/60uJV> (at gluster.org)
22:03 tjstansell our kickstart stuff used to work 100% on 3.3.2, but with 3.4.0 we're seeing these occasional issues with peer probe
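
tjstansell's procedure, sketched with placeholders for the UUID, peer, and volume names; the linked Brick Restoration page has the full version:

    # on the rebuilt server, before peering, make sure glusterd.info carries the
    # UUID this node had before the rebuild
    grep UUID /var/lib/glusterd/glusterd.info   # should read UUID=<old-uuid>
    # then probe the surviving peer and kick off a full heal once the volume appears
    gluster peer probe <surviving-node>
    gluster volume heal <volname> full
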
22:04 andrewklau looks easier than I thought, thanks.
22:05 robo joined #gluster
22:05 tjstansell yeah, it's really not hard... when it works :)
22:14 jporterfield joined #gluster
22:33 theron_ joined #gluster
22:34 StarBeast joined #gluster
22:45 MugginsM joined #gluster
22:49 tryggvil joined #gluster
22:52 niximor joined #gluster
23:24 jones_d joined #gluster
23:41 nueces joined #gluster
23:45 StarBeast joined #gluster
23:51 sprachgenerator joined #gluster
