IRC log for #gluster, 2013-04-30


All times shown according to UTC.

Time Nick Message
08:18 ilbot_bck joined #gluster
08:18 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
08:21 gbrand_ joined #gluster
08:30 bchilds joined #gluster
08:33 atrius_ joined #gluster
08:35 piotrektt joined #gluster
08:40 bchilds joined #gluster
08:41 aravindavk joined #gluster
08:51 vpshastry joined #gluster
08:51 bharata joined #gluster
08:52 vpshastry joined #gluster
08:55 duerF joined #gluster
09:00 bchilds joined #gluster
09:10 bchilds joined #gluster
09:29 premera_x joined #gluster
09:39 E_T_ joined #gluster
09:39 E_T_ joined #gluster
09:46 ollivera left #gluster
09:50 bchilds joined #gluster
09:54 bharata joined #gluster
10:11 renihs joined #gluster
10:17 renihs hmm might be a bit ot, but if anyone knows a good paper on glusterfs / hadoop / lustre i would be curious
10:17 renihs might be my misunderstanding but they all seem to aim for the same targets
10:18 maxiepax left #gluster
10:50 bchilds joined #gluster
10:55 jtux joined #gluster
10:57 renihs joined #gluster
11:04 edward1 joined #gluster
11:06 ekuric joined #gluster
11:12 Staples84 joined #gluster
11:16 vpshastry joined #gluster
11:20 kkeithley1 joined #gluster
11:22 aravindavk joined #gluster
11:32 andreask joined #gluster
11:34 joeto joined #gluster
11:49 sahina joined #gluster
11:50 vimal joined #gluster
11:50 bchilds joined #gluster
11:51 flrichar joined #gluster
11:52 hagarth joined #gluster
11:53 hagarth @channelstats
11:53 glusterbot hagarth: On #gluster there have been 119256 messages, containing 5137746 characters, 862671 words, 3586 smileys, and 443 frowns; 772 of those messages were ACTIONs. There have been 43795 joins, 1403 parts, 42418 quits, 19 kicks, 119 mode changes, and 5 topic changes. There are currently 196 users and the channel has peaked at 217 users.
11:54 partner oh, didn't realize there's that many people around
12:00 rcheleguini joined #gluster
12:00 bchilds joined #gluster
12:03 spider_fingers what are the binary core* files in / for?
12:04 spider_fingers i don't like anything messing with the posix FHS structure)
12:08 kkeithley| are you asking why core files were dropped in / instead of `cat /proc/sys/kernel/core_pattern`?
12:09 spider_fingers yup
12:10 spider_fingers i found em on my fedora 18, a couple of centos'es show no sign of these files
12:10 spider_fingers hi kkeithley|
12:10 bchilds joined #gluster
12:12 kkeithley| hi. and just because you saw core files on f18, why do you think they'd be there on centos boxes? (/me is trying to avoid asking a certain question)
12:16 spider_fingers kkeithley| what question?)
12:17 kkeithley| why do you expect there to be core files on the centos boxes? You only get core files when something crashes.
12:17 glusterbot New news from newglusterbugs: [Bug 958108] Fuse mount crashes while running FSCT tool on the Samba Share from a windows client <http://goo.gl/BdJvh>
12:19 kkeithley| The fact that there are core files on the f18 boxes means you've had things crash and drop a core file. What crashed?  `file /core.xxxxx` will usually tell you what crashed. Use gdb to find out where it crashed.
12:19 kkeithley| you can install the -debuginfo rpm to get more detail from gdb
12:20 kkeithley| And if what crashed was a glusterfs process like gluster, glusterd, glusterfs, glusterfsd, then you should file a bug report.
12:20 glusterbot http://goo.gl/UUuCq
12:24 plarsen joined #gluster
12:24 spider_fingers thank you, that's obvious, but why are they dumped in root?
12:24 sahina joined #gluster
12:25 kkeithley| yes, I thought it was obvious too, but I didn't understand why you were asking why there were no core files on the centos boxes.
12:27 kkeithley| They're in / because that's the pwd/cwd of the gluster daemons when they run. f18's /proc/sys/kernel/core_pattern defaults to |/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e, so I'd look into why the abrtd didn't handle it.
12:27 vpshastry joined #gluster
12:28 kkeithley| Maybe gluster daemons are registered with abrtd. Probably they shouldn't be, because automatically sending bug reports about gluster crashes to the Fedora project probably isn't very useful.
12:28 kkeithley| s/are/aren't/
12:28 glusterbot What kkeithley| meant to say was: The fact that there aren't core files on the f18 boxes means you've had things crash and drop a core file. What crashed?  `file /core.xxxxx` will usually tell you what crashed. Use gdb to find out where it crashed.
12:28 kkeithley| no glusterbot, you blew it
12:29 kkeithley| Maybe gluster daemons aren't registered with abrtd. Probably they shouldn't be, because automatically sending bug reports about gluster crashes to the Fedora project probably isn't very useful.
12:30 kkeithley| rhel6 (and centos too I presume)  has the same /proc/sys/kernel/core_pattern, so maybe you just haven't had any crashes on your centos boxes
12:34 spider_fingers i see now
12:35 spider_fingers :) thank you
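
A minimal sketch of the core-file triage kkeithley| describes above, assuming a Fedora/EL box; the core filename and the glusterfsd path are illustrative:

    cat /proc/sys/kernel/core_pattern     # a plain pattern means cores land in the daemon's cwd (/); an |...abrt-hook-ccpp pipe means abrtd should catch them
    file /core.12345                      # reports which binary dumped the core
    debuginfo-install glusterfs           # optional: pull in -debuginfo so gdb can resolve symbols
    gdb /usr/sbin/glusterfsd /core.12345  # then `bt` at the (gdb) prompt shows where it crashed
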
12:40 ngoswami joined #gluster
12:40 nickw joined #gluster
12:47 piotrektt hey. I have this funny issue
12:47 piotrektt glusterFShV001v ~ # gluster volume start glusterfs
12:47 piotrektt Volume glusterfs already started
12:47 piotrektt glusterFShV001v ~ # gluster volume status
12:47 piotrektt Volume glusterfs is not started
12:47 piotrektt and one server was out of sync. how can I repair that?
12:48 piotrektt volume heal does not work, cause it says volume is not started but it is
12:48 piotrektt can anyone help me with that? its urgent :(
12:54 vshankar joined #gluster
13:01 bchilds joined #gluster
13:01 mohankumar joined #gluster
13:11 hagarth joined #gluster
13:12 vpshastry left #gluster
13:16 chirino joined #gluster
13:18 mohankumar joined #gluster
13:19 piotrektt ok its solved. needed to stop gluster on first server, stop on second, start on second, start on first and its up and running :) thx for help :)
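
A sketch of the restart sequence piotrektt describes; the service name is an assumption (glusterfs-server on Debian/Ubuntu, glusterd on RHEL/Fedora), and the volume name is taken from the paste above:

    service glusterfs-server stop     # on server 1
    service glusterfs-server stop     # on server 2
    service glusterfs-server start    # on server 2
    service glusterfs-server start    # on server 1
    gluster volume status glusterfs   # should now report the volume as started
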
13:21 bchilds joined #gluster
13:29 rastar joined #gluster
13:29 golan joined #gluster
13:35 bennyturns joined #gluster
13:37 golan Hi, are the debian control files that have been used to build the debian packages available anywhere?
13:40 kkeithley| I'm pretty sure they're (exactly) the same as the control files semiosis uses to build the .debs in his ppa.
13:40 kkeithley| @ppa
13:40 glusterbot kkeithley|: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
13:40 semiosis @later tell san i updated the ubuntu-glusterfs-3.4 ppa recently with *alpha3* -- https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.4
13:40 glusterbot semiosis: The operation succeeded.
13:40 golan ah, thanks. /me has a look
13:41 semiosis golan: whose debian packages?
13:41 golan http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/
13:41 glusterbot <http://goo.gl/ZO2y1> (at download.gluster.org)
13:41 golan those
13:42 semiosis oh those :)
13:42 semiosis kkeithley|: do you still have those lying around somewhere?
13:43 golan shall I have a look at the ppa?
13:43 semiosis golan: what distro are you working on?
13:43 golan squeeze, and potentially wheezy
13:43 kkeithley| I'm just firing up my wheezy vm to have a look see. Probably still have the whole kit-and-kaboodle. Give me a sec
13:43 semiosis kkeithley|: great, thanks.  it would be the .debian.tar.gz file that was produced by debuild
13:44 golan basically I want to put those in a private repo I have, to have them more at hand, but because both wheezy and squeeze packages are named the same, reprepro can't add them to the same repo.
13:45 kkeithley| O j
13:47 kkeithley| I have a glusterfs_3.3.1-1.debian.tar.gz, contents listed at http://paste.fedoraproject.org/9501/29591136.   Does that look like what you're looking for?
13:47 golan presumably, don't you have those available somewhere in git/elsewhere?
13:48 kkeithley| I don't
13:48 kkeithley| Probably should, but I don't
13:49 kkeithley| I can't but that tar file on download.gluster.org easily enough
13:49 kkeithley| sheesh. I can put that tar file on d.g.o easily enough
13:49 golan that'd be appreciated :)
13:53 kkeithley| get it at http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/Debian/glusterfs_3.3.1-1.debian.tar.gz
13:53 glusterbot <http://goo.gl/MZ1ec> (at download.gluster.org)
13:53 golan thanks :)
13:53 kkeithley| I hope that's what you wanted.
13:53 golan I'll have a look. At least it'll give me something to play with
13:58 jskinner_ joined #gluster
14:03 Supermathie StarBeast: Your problem sounds like mine: http://imgur.com/Nemkjs6
14:03 glusterbot Title: imgur: the simple image sharer (at imgur.com)
14:05 Supermathie Although in this case, I had a gluster process get bigger and bigger until it crapped out at 66GB RSS
14:06 partner it would be most perfect if the source package and dsc file would be uploaded with the packages, that would ease up work on "this" end quite a bit
14:06 ihs joined #gluster
14:11 golan partner: agreed :)
14:11 spider_fingers left #gluster
14:12 partner i mean it would allow many of us to do quick patches to problems we're encountering, the rebalance bug i was hit by was fixed hmm yesterday but i guess its going to be months before i actually have a package on my table unless i do it myself
14:14 partner but the tar.gz kicks speed into that nicely, downloaded it to safe already
14:14 partner thank you
14:15 partner golan: reading your request i actually have the very same problem, i can't have them both in reprepro repository due to identical names
14:15 Supermathie partner: I know that they're uploaded to semiosis's repo: http://ppa.launchpad.net/semiosis/ubuntu-glusterfs-3.3/ubuntu/pool/main/g/glusterfs/
14:15 glusterbot <http://goo.gl/tmhNu> (at ppa.launchpad.net)
14:15 semiosis Supermathie: launchpad \o/
14:15 partner Supermathie: yeah i see them, thanks
14:16 Supermathie s/repo/website on the intertubes/
14:16 glusterbot What Supermathie meant to say was: partner: I know that they're uploaded to semiosis's website on the intertubes: http://goo.gl/tmhNu
14:16 kkeithley| I just put the .dsc file on download.gluster.org, same place as the debian.tar.gz.
14:16 partner i and golan need to do exactly that same "glue" for squeeze/wheezy as done for lucid/precise/quantal
14:16 Supermathie So… where do I look to find the solution for 'cannot find stripe size'
14:17 partner kkeithley|: for that to be of any use it would require the two mentioned files along with it. or the missing orig.tar.gz actually, then it would be *perfect*
14:18 semiosis partner: glue?
14:18 hagarth joined #gluster
14:19 semiosis partner: the orig.tar.gz is the glusterfs release tarball, you can find it at ,,(latest)
14:19 glusterbot partner: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
14:21 bchilds joined #gluster
14:21 aliguori joined #gluster
14:22 theron joined #gluster
14:22 bulde joined #gluster
14:22 kkeithley| besides the orig.tar.gz, which is on d.g.o. as semiosis indicated, what's the other file that's needed?
14:23 semiosis kkeithley|: imho none.  with the orig & the debian tarballs you can generate the rest
14:23 semiosis s/you/one/
14:23 glusterbot What semiosis meant to say was: kkeithley|: imho none.  with the orig & the debian tarballs one can generate the rest
14:23 kkeithley| okay
14:23 sgowda joined #gluster
14:24 semiosis however the .dsc file is usually included.  i believe this would have your signature, allowing verification of the others
14:24 semiosis but i doubt they would do that
14:24 semiosis launchpad does that
14:27 partner semiosis: ah got it, named differently but the hash matched, all pieces at hand, thank you very much
14:27 semiosis partner: welcome to the world of debian packaging
14:27 semiosis may your journey through be smoother than mine :)
14:28 partner semiosis: well if it were available for debian the same way you have it for ubuntu then no searching for anything :)
14:29 bulde1 joined #gluster
14:29 partner not a complaint by any means, just can't go and mv anything :)
14:31 bchilds joined #gluster
14:37 jclift joined #gluster
14:37 bugs_ joined #gluster
14:38 partner shouldn't be tough from here on, i just need those mentioned files and i can issue one command and wait for the package to appear in our repo for whichever flavors i selected
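
Putting the pieces from this thread together, a sketch of rebuilding the 3.3.1 deb from the upstream tarball plus kkeithley|'s debian.tar.gz; the upstream tarball location is assumed to be the same 3.3.1 directory, and build dependencies must already be installed:

    wget http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/glusterfs-3.3.1.tar.gz
    wget http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/Debian/glusterfs_3.3.1-1.debian.tar.gz
    cp glusterfs-3.3.1.tar.gz glusterfs_3.3.1.orig.tar.gz   # Debian naming convention for the orig tarball
    tar xf glusterfs-3.3.1.tar.gz && cd glusterfs-3.3.1
    tar xf ../glusterfs_3.3.1-1.debian.tar.gz               # unpacks the debian/ directory
    debuild -us -uc                                         # builds unsigned .deb/.dsc/.changes files
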
14:51 bchilds joined #gluster
14:52 Supermathie Is there a solution for the 3.3.1 striped volume problem? (cannot find stripe size) or do we need to go to HEAD?
14:57 vpshastry joined #gluster
14:58 jag3773 joined #gluster
14:59 BSTR joined #gluster
15:01 bchilds joined #gluster
15:03 luckybambu joined #gluster
15:11 bchilds joined #gluster
15:13 jdarcy joined #gluster
15:13 jdarcy_ joined #gluster
15:27 luckybambu left #gluster
15:31 bchilds joined #gluster
15:33 daMaestro joined #gluster
15:41 sandeen_ joined #gluster
15:45 rotbeard joined #gluster
15:47 gdavis33 is there any manual command i can run on a master to force geo-rep to stay in sync
15:50 gdavis33 i've set up a one to many system that mirrors one file repository to several WAN connected edge sites and the file counts keep coming up different
15:51 gdavis33 replication is to/from distributed mirrors
15:55 DEac- joined #gluster
15:57 gdavis33 anyone?
16:02 aliguori joined #gluster
16:09 bala1 joined #gluster
16:10 jag3773 joined #gluster
16:11 bchilds joined #gluster
16:16 SteveCooling Is there a reason the "glusterfs-fuse" rpm does not depend on the "fuse" package?
16:19 Supermathie SteveCooling: On my RHEL6 it depends on fuse.so.0()(64bit)
16:20 Supermathie Oh wait - that's provided by glusterfs-fuse...
16:21 Supermathie Hmmmm glusterfs-fuse may provide its own fuse library and not depend on the system's
16:21 bchilds joined #gluster
16:25 Supermathie Yeah it may not need the fuse package.
16:28 vpshastry joined #gluster
16:28 vpshastry left #gluster
16:33 gbrand_ joined #gluster
16:34 Mo___ joined #gluster
16:36 ekobox joined #gluster
16:39 SteveCooling it does need the fuse package. that's how I notices
16:39 SteveCooling it failed mounting a volume until i installed the fuse package.
16:40 SteveCooling this is RHEL5 though
16:40 SteveCooling s/notices/noticed/
16:40 glusterbot What SteveCooling meant to say was: it does need the fuse package. that's how I noticed
16:41 zaitcev joined #gluster
16:41 bchilds joined #gluster
16:47 luckybambu joined #gluster
16:48 JoeJulian kkeithley|: ^ Looks like EL5 is missing a dependency?
16:49 JoeJulian SteveCooling: If kkeithley doesn't respond here, please file a bug report.
16:49 glusterbot http://goo.gl/UUuCq
16:51 vpshastry joined #gluster
16:51 vpshastry left #gluster
16:54 hagarth joined #gluster
16:54 waldner joined #gluster
16:54 waldner joined #gluster
16:59 jag3773 joined #gluster
17:06 thomasl__ joined #gluster
17:08 Supermathie SteveCooling: Did installing the fuse package have the side effect of loading the fuse.ko kernel module?
17:14 nueces joined #gluster
17:19 georgeh|workstat joined #gluster
17:29 gdavis33 will setting read-only on a geo-replicated slave interfere with replication?
17:32 luckybambu joined #gluster
17:45 gdavis33 does read-only even work?
17:45 gdavis33 i just set it on a vol and i can still write from the client after unmount and remount
17:56 edong23 joined #gluster
17:57 bulde joined #gluster
18:06 gdavis33 do my posts even appear in this room?
18:06 Supermathie gdavis33: Yeah, can be pretty quiet though
18:06 Supermathie Hmm... this is simple enough :/ http://www.websequencediagrams.com/cgi-bin/cdraw?lz=dGl0bGUgR2x1c3RlckZTIFNFVEFUVFIgY2FsbCBzZXF1ZW5jZQoKcGFydGljaXBhbnQgY2xpZW50IGFzIEMACw0ibmZzMy5jOlxubmZzM19zZXRhdHRyIiBhcyAzc2EACCNfcmVzdW1lACoIcgBNES1nZW5lcmljcwBeCABYDQBREy1mb3AAIwpmb3AAgQkNZgAGLF9jYmsALghjAIIDDVNUQUNLX1dJTkQgYXMgU1cAgXobc3ZjAD8RMwBAEACBVBZ0cnVuY2F0AIIYBm50AIFIIwAkDmYABywAgWQJbmZ0AIELEgCBQg4AIxEzc3RjCgpDLT4zc2E6AIRICVJQQwCEUAVcbihjaG1vZD00NDA
18:06 glusterbot <http://goo.gl/5TbkZ> (at www.websequencediagrams.com)
18:07 Supermathie whoops, thought I had already shortened that :/
18:08 gdavis33 That's unfortunate
18:09 gdavis33 is there a better forum for inquiries?
18:10 semiosis hello
18:10 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
18:10 Supermathie Try asking on the gluster-users mailing list
18:10 semiosis gdavis33: see what glusterbot just said
18:10 gdavis33 i've been in here for days asking for some guidance and you are the first person to acknowledge
18:10 semiosis really?
18:10 gdavis33 really
18:11 gdavis33 i know everyone is a volunteer. i dont expect anything more than some friendly advice
18:11 semiosis has your nick changed?  i dont see any messages from gdavis33 before about an hour ago
18:11 gdavis33 no
18:12 gdavis33 nope been here using same name same comp same client
18:13 gdavis33 again if everyone is busy or no one has a good answer that's fine
18:13 semiosis well i'll give it a shot
18:13 gdavis33 thanks
18:14 semiosis in re: will setting read-only on a geo-replicated slave interfere with replication?
18:14 gdavis33 does read-only flag work?
18:14 semiosis read-only flag?  where are you setting this?  have a command you can paste here?
18:14 Supermathie What read-only flag?
18:14 gdavis33 yep
18:14 gdavis33 gluster volume set EDGE_RESOURCE read-only on
18:15 gdavis33 and info reports: features.read-only: on
18:17 bfoster_ joined #gluster
18:17 gdavis33 using debian version 3.3.1 btw
18:18 aliguori joined #gluster
18:18 nocterro_ joined #gluster
18:18 rwheeler_ joined #gluster
18:20 gdavis33 additionally trying to set ro on the mount in fstab causes the mount to fail
18:21 Supermathie mounted via nfs or fuse?
18:21 gdavis33 i'd like to use fuse
18:21 Supermathie So you've tried both and they behave the same?
18:21 gdavis33 nfs means implementing my own ha
18:22 bchilds joined #gluster
18:22 gdavis33 well i'm assuming that nfs ro will work
18:22 gdavis33 though i havent tried
18:23 H__ joined #gluster
18:23 H__ joined #gluster
18:23 gdavis33 testing it now
18:23 semiosis gdavis33: you want your geo-rep slave to be read-only?
18:24 gdavis33 ummm yeah
18:24 gdavis33 i want it to be read only when resharing to clients
18:24 gdavis33 i would think that would be the desired behavior for a slave filesystem
18:26 Supermathie Huh yeah, I get same failure (3.3.1 RHEL6) trying to mount ro
18:26 semiosis gdavis33: if its read-only, how can glusterfs write into it?
18:26 semiosis that doesnt add up
18:26 semiosis i doubt you want a read-only filesystem, seems more like you would want access control/authorization
18:26 gdavis33 the goal was to have a master fs replicate out to remote facilities to provide a resource repo for my edge systems
18:27 Supermathie semiosis: Yeah, but if the read-only ACLs have to come from the master, you wouldn't be able to write to the master either
18:27 semiosis Supermathie: what?
18:27 semiosis acls?
18:27 gdavis33 this is geo-replication between two distrib/mirrors
18:28 semiosis it's one way
18:28 gdavis33 well between one master and 5 slaves
18:28 semiosis heh
18:28 gdavis33 geo doesnt do two way yet
18:28 semiosis yeah
18:28 gdavis33 thats why i need read only slaves
18:28 semiosis uhh
18:29 semiosis how about geo-rep writing to a slave volume (that is not read-only) then have a read-only mount of that volume for people to use at the remote location
18:29 semiosis master -> slave -> slave-client-mount(ro)
18:29 gdavis33 if the slave can be modified then its xattrs would be newer than the master's and that would cause that file to be orphaned no?
18:30 gdavis33 lol
18:30 gdavis33 that is part of the issue
18:30 semiosis why can't you just make a client mount of the slave read-only?
18:30 gdavis33 you cant mount ro
18:30 semiosis what?
18:30 semiosis i can
18:30 semiosis i do
18:30 gdavis33 using nfs or fuse?
18:30 semiosis fuse, but i'm sure nfs ro would work too
18:31 gdavis33 no i want to use fuse
18:31 semiosis ok fine
18:31 gdavis33 if i specify ro in my mount line the mount fails
18:31 semiosis mount -t glusterfs slave:vol /mount/point -o ro
18:31 semiosis thats not normal
18:31 semiosis get client logs
18:31 Supermathie https://bugzilla.redhat.com/show_bug.cgi?id=853895
18:31 gdavis33 1 sec
18:31 glusterbot <http://goo.gl/xCkfr> (at bugzilla.redhat.com)
18:31 semiosis pastie.org them
18:31 glusterbot Bug 853895: medium, medium, ---, csaba, ON_QA , CLI: read only glusterfs mount fails
18:31 semiosis Supermathie: thats a bummer
18:31 gdavis33 i was using fstab but i'll try manual
18:32 Supermathie "Tag v3.4.0qa3 and later contain commit 702b2912970e7cc19416aff7d3696d15977efc2f"
18:32 bchilds joined #gluster
18:33 duerF joined #gluster
18:33 gdavis33 so it's broken up to 3.4
18:34 semiosis awww
18:34 semiosis that sucks
18:34 Supermathie gdavis33: You can probably cherry-pick the fix and try it out
18:34 gdavis33 yeah i need this for a prod env
18:34 gdavis33 kind of a large one
18:34 gdavis33 i need a stable release
18:35 semiosis gdavis33: define stable
18:35 semiosis if read-only mounts dont work.....
18:35 gdavis33 hahaha
18:35 * semiosis is saddened by this news :(
18:35 gdavis33 i see your line of reasoning
18:35 Supermathie 3.3.1 is riddled with bugs, wouldn't exactly call it stable.
18:35 semiosis granted i've been happily using 3.1 in prod for ~2 years
18:36 gdavis33 well it is labeled as current stable no?
18:36 semiosis well 3.4 should be out this year
18:36 semiosis it is
18:36 * Supermathie wonders if you can just put the ro clients on the 3.3.1 codebase with the cherry-picked patch
18:37 gdavis33 i have seen a lot of ppl trying to do things with gluster that it wasn't meant to do
18:37 Supermathie WHAT'S THE WORST THAT COULD HAPPEN? :)
18:37 gdavis33 is this one of them?
18:37 Supermathie gdavis33: Did you read the bug? It's marked as a regression.
18:38 gdavis33 yes
18:38 gdavis33 i meant the whole setup
18:38 Supermathie so it's *supposed* to work, but doesn't.
18:38 semiosis gdavis33: i assure you, glusterfs is meant to support read-only mounts.  this is a bug.
18:39 semiosis gdavis33: the setup seems reasonable to me, and my impression of the road map (last time i saw it ~1 year ago) is that more support for these kinds of setups is coming
18:39 gdavis33 i know that. in fact i got as far as setting up and configuring 12 servers, completely taking it for granted
18:40 gdavis33 i understand
18:40 gdavis33 appreciate the help guys
18:49 glusterbot New news from newglusterbugs: [Bug 955753] NFS SETATTR call with a truncate and chmod 440 fails <http://goo.gl/fzF6r>
18:52 partner hmm that's good to know, was thinking of ro-mounts for some of our datamining activity to prevent users/services from modifying any and all of the data...
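
For reference, the two forms being discussed; per bug 853895 both fail on 3.3.x until the 3.4 fix lands, and the volume name, mount point, and extra fstab options here are illustrative:

    mount -t glusterfs slave1:/EDGE_RESOURCE /mnt/edge -o ro         # manual mount, semiosis's form
    slave1:/EDGE_RESOURCE  /mnt/edge  glusterfs  ro,_netdev  0  0    # fstab equivalent of gdavis33's entry
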
18:53 bchilds joined #gluster
19:14 y4m4 joined #gluster
19:14 neofob joined #gluster
19:23 bchilds joined #gluster
19:28 jag3773 joined #gluster
19:33 bchilds joined #gluster
19:35 SteveCooling Supermathie: the kernel module seems to have loaded the next time i tried to mount after installing the fuse rpm ("kernel: fuse init (API version 7.10)" is timestamped in the log not when i installed the package but when i mounted the filesystem)
19:37 SteveCooling JoeJulian: filing a bug report...
19:37 SteveCooling (fuse package dependency)
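
Quick checks for the dependency SteveCooling is describing, assuming an EL5 box with the stock package names:

    rpm -q --requires glusterfs-fuse | grep -i fuse   # does the rpm declare any dependency on the fuse package?
    lsmod | grep fuse                                 # is fuse.ko loaded?
    modprobe fuse                                     # load it by hand if not; mounting should then work
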
19:53 bchilds joined #gluster
19:58 andreask joined #gluster
20:01 sandeen joined #gluster
20:13 bchilds joined #gluster
20:16 duerF joined #gluster
20:23 bchilds joined #gluster
20:32 chirino joined #gluster
20:43 bchilds joined #gluster
20:52 piotrektt joined #gluster
20:56 ctria joined #gluster
20:58 san joined #gluster
20:59 san can anyone give me some idea how to check brick and volume consistency after one or more nodes recover from failure in a distributed replicated volume?
21:02 Supermathie san: Are you asking how to check for files that need healing? volume heal VOL info
21:03 bchilds joined #gluster
21:04 san Supermathie : thanks for the response. I had the glusterfs-server service stopped on only one node for a couple of minutes, but now the file count does not match across the gluster volume. Should I care about the file count on individual bricks?
21:05 san the command volume heal VOL info shows: number of entries 0 for the node which had the service stopped for some time.
21:05 san the other three nodes show the self-heal daemon is not running
21:07 Supermathie self-heal daemon not running - probably a problem. You can force a heal with 'volume heal VOL full'
21:08 san Self-heal daemon is not running. Check self-heal daemon log file. - where do I look for the log file?
21:09 Supermathie /var/log/glusterfs
21:10 piotrektt joined #gluster
21:11 san Supermathie : I see some errors in glustershd.log file. Is any other file helpful too ?
21:12 Supermathie san: All of them are :D
21:14 san Supermathie : any idea how to check the pid of the self-heal daemon?
21:15 Supermathie http://pastie.org/7744965
21:15 glusterbot Title: #7744965 - Pastie (at pastie.org)
21:20 san it looks like, out of four nodes, only one node has the self-heal daemon running. any idea how to resolve this?
21:21 Supermathie san: From what I understand, 'volume start VOL force' will restart any missing daemons
21:23 san Supermathie : thanks for the help, now all four nodes have all pids running, I will monitor the directory to see if it is in sync.
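
The sequence from this exchange in one place; VOL stands in for the real volume name:

    gluster volume heal VOL info     # entries still pending heal, per brick
    gluster volume status VOL        # shows whether the Self-heal Daemon is online on each node, and its pid
    gluster volume start VOL force   # respawns missing daemons (shd/nfs/bricks) without disturbing running ones
    gluster volume heal VOL full     # then trigger a full heal
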
21:23 bchilds joined #gluster
21:29 jiqiren joined #gluster
21:32 san Supermathie : I have some missing files on two bricks but not the other two.
21:38 elyograg joined #gluster
21:43 Supermathie avati: PING
21:43 bchilds joined #gluster
21:48 portante joined #gluster
21:49 a2 Supermathie, pong
21:50 Supermathie a2: "all our releases have this weird inefficient handling of nfs stable writes" I was looking for the commit that addressed this and I think I found it: https://github.com/gluster/glusterfs/commit/7645411f134c2b7ae004f0a8478449965e424a97.
21:50 glusterbot <http://goo.gl/CYp99> (at github.com)
21:50 a2 Supermathie, right
21:51 a2 Supermathie, you also need fdb05c6f84054ca640e3da1c19ea7d536d2751e0 in conjunction
21:51 Supermathie I'm switching my testing to v3.4.0a3 with a few patches on top of it... was looking for that one and any others that may be suitable
21:51 a2 that bug got exposed after fixing nfs :)
21:51 Supermathie Excellent, thanks
21:53 Supermathie Any other suggestions? :)
21:54 a2 054c1d7eb3782c35fc0f0ea3a5fd25337d080294 if you want to avoid passing -o vers=3 at nfs mount time :)
21:55 Supermathie Ugh, nfsv4 systems that don't bother with portmap
21:56 a2 does oracle issue ACCESS nfs method? 1a5979dc09e15dbc83aada0b7647d2482e431884 fixed an ACCESS bug which made AIX systems crib
21:56 Supermathie a2: Yep, but it's fine with the additional info
21:57 a2 oh wait.. there were fixes in AFR for working well with nfs
21:58 a2 6f6744730e34fa8a161b5f7f2a8ad3f8a7fc30fa and bunch (in case you use replication behind NFS)
21:59 a2 hmm, no "and bunch", that alone is sufficient, i think
21:59 a2 we were keeping write-behind disabled in the NFS graph and therefore eager locking wouldn't work
22:00 Supermathie That's not included in v3.4.0a3?
22:00 a2 that patch should improve general iops performance (even random IO)
22:00 a2 hmmm, let's see
22:01 Supermathie apparently not
22:01 a2 oh yeah, it's backported by jeff as c37546cf11555678be6fefdbfec0007272aeb336
22:01 a2 oh no, that was a different patch
22:01 a2 it's missing
22:01 a2 sorry, got confused :)
22:03 Supermathie gah cherry-pick fails on 6f6744730e34fa8a161b5f7f2a8ad3f8a7fc30fa, nuts
22:04 a2 brb
22:05 Supermathie https://github.com/Supermathie/glusterfs/tree/release-3.4-oracle is my selected-fixes-on-top-of-latest-alpha if you feel nice enough to figure that one out ;)
22:05 glusterbot <http://goo.gl/WCKLV> (at github.com)
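
A sketch of the workflow Supermathie describes, using the commit ids a2 listed; the tag name is written as it appears in the log and may be spelled differently in the real repo, and a failing pick (like 6f67447...) leaves conflicts to resolve before continuing:

    git clone https://github.com/gluster/glusterfs.git && cd glusterfs
    git checkout -b release-3.4-oracle v3.4.0a3                  # tag name as given in the log
    git cherry-pick 7645411f134c2b7ae004f0a8478449965e424a97     # stable-write handling fix
    git cherry-pick fdb05c6f84054ca640e3da1c19ea7d536d2751e0     # companion fix a2 mentioned
    git cherry-pick 054c1d7eb3782c35fc0f0ea3a5fd25337d080294     # avoids needing -o vers=3 at nfs mount time
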
22:05 Supermathie I gotta get home, but thanks for the help
22:09 a2 does it work?
22:09 a2 behind oracle?
22:10 Supermathie It's working more and more :) Found one Oracle bug and a few GlusterFS bugs that I've patched/worked around/pulled fixes for
22:11 Supermathie that stable write fix hit me HARD when Oracle went to start up after I crashed it hard... it issues writes on DB recovery as stable, glusterfsd was choking on a processor at 100% and the NFS daemon chewed up memory until it died with 66GB RSS
22:11 a2 that sounds like a memleak
22:12 Supermathie yeah, it was pretty bad.
22:12 Supermathie I'll take note if I see it again, I gave up on 3.3.1
22:12 a2 don't recall fixing a memleak
22:13 Supermathie I *think* it was the NFS daemon that died. Some essential daemon in the path died.
22:13 a2 ok.. curious to know if oracle at least starts working!
22:13 bchilds joined #gluster
22:15 Supermathie a2: It works fairly well. Already getting times better than NetApp (SSDs help :) but the kicker this time was the can't-find-stripe-size bug
22:15 theron Hi folks.. looking for the syntax to get a striped replicated volume configured.
22:15 theron have four bricks per server.
22:16 a2 Supermathie, hmm i think that's the readdirplus issue
22:16 a2 let me check
22:17 Supermathie [2013-04-29 15:57:48.351193] W [nfs3-helpers.c:3389:nfs3_log_common_res] 0-nfs-nfsv3: XID: 8fe8dbd, CREATE: NFS: 17(File exists), POSIX: 0(Success)
22:17 Supermathie [2013-04-29 16:28:00.022461] E [rpc-clnt.c:208:call_bail] 0-gv0-client-6: bailing out frame type(GlusterFS 3.1) op(WRITE(13)) xid = 0x15986x sent = 2013-04-29 15:57:59.603435. timeout = 1800
22:17 Supermathie [2013-04-29 16:28:00.022526] W [client3_1-fops.c:821:client3_1_writev_cbk] 0-gv0-client-6: remote operation failed: Transport endpoint is not connected
22:17 Supermathie volume died at 16:28ish
22:17 Supermathie [2013-04-29 16:28:00.022740] E [rpc-clnt.c:208:call_bail] 0-gv0-client-6: bailing out frame type(GlusterFS 3.1) op(WRITE(13)) xid = 0x14529x sent = 2013-04-29 15:57:57.326209. timeout = 1800
22:17 Supermathie [2013-04-29 16:28:00.022762] W [client3_1-fops.c:821:client3_1_writev_cbk] 0-gv0-client-6: remote operation failed: Transport endpoint is not connected
22:17 Supermathie OK, REALLY have to get home.
22:17 a2 k ttyl
22:18 Supermathie but yeah, check please, lemme know. May revert to 3.3.1 for testing native client if we figure out striping issue
22:19 Supermathie theron: volume create gv0 stripe 4 replica 2 (8 bricks)
22:19 Supermathie theron: volume create gv0 stripe 4 replica 2 brick1 brick2 brick3...
22:19 theron Supermathie thanks.
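
A fuller form of Supermathie's answer, assuming two servers with four bricks each (hostnames and paths are illustrative); with replica 2, adjacent bricks in the list form the replica pairs, so each pair spans both servers:

    gluster volume create gv0 stripe 4 replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server1:/bricks/b2 server2:/bricks/b2 \
        server1:/bricks/b3 server2:/bricks/b3 \
        server1:/bricks/b4 server2:/bricks/b4
    gluster volume start gv0
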
22:20 a2 hmm, readdirplus seems to be doing the right thing
22:24 bchilds joined #gluster
22:30 jag3773 joined #gluster
22:34 bchilds joined #gluster
22:50 glusterbot New news from newglusterbugs: [Bug 958324] Take advantage of readdir-plus efficiencies for handling container GET responses <http://goo.gl/PBfbC> || [Bug 958325] For Gluster-Swift integration, enhance quota translator to return count of objects as well as total size <http://goo.gl/E5K0R>
22:59 sandeen joined #gluster
23:02 jikz joined #gluster
23:04 bchilds joined #gluster
23:24 bchilds joined #gluster
23:32 cyberbootje joined #gluster
23:50 glusterbot New news from newglusterbugs: [Bug 947774] [FEAT] Display additional information when geo-replication status command is executed <http://goo.gl/Bpg3O>
