
IRC log for #gluster, 2012-11-08


All times shown according to UTC.

Time Nick Message
00:24 blendedbychris joined #gluster
00:24 blendedbychris joined #gluster
00:35 jbrooks joined #gluster
00:43 stefanha joined #gluster
00:47 mnaser Can you setup gluster with 1 node and add on from there?
00:48 semiosis yes that's possible
00:48 mnaser yay! that's good news.  thanks, semiosis
00:48 semiosis mnaser: test it before you go to production though :)
00:49 mnaser semiosis: yes, i was just wondering if there was a requirement, as we don't need to grow beyond 1 server for now
00:49 semiosis i'd consider using nfs instead of glusterfs then
00:49 mnaser my only concern is the data exposed, i don't want all the data exposed to all the servers (but want distributed storage), i'm reading more about this, i think i might do a volume for each "server"
00:49 semiosis until you need the complexity of a distributed cluster filesystem
00:50 mnaser semiosis: i have a huge amount of local storage that i want to leverage (and have as a centralized redundant storage)
00:50 semiosis cool
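
A minimal sketch of the single-node-then-grow workflow semiosis is describing, using the gluster CLI; the hostnames (server1, server2), volume name (myvol) and brick paths are placeholders, not anything from this conversation:

    # create and start a one-brick distribute volume on the first node
    gluster volume create myvol server1:/export/brick1
    gluster volume start myvol

    # later, when a second node is available: probe it, add its brick,
    # then rebalance so existing data spreads onto the new brick
    gluster peer probe server2
    gluster volume add-brick myvol server2:/export/brick1
    gluster volume rebalance myvol start
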
00:50 andreask semiosis: any idea what the "timeout" parameter for geo-replication means?
00:52 semiosis nope, haven't used geo-rep yet
00:59 kevein joined #gluster
00:59 andreask hard to find someone who has ....
01:20 glusterbot New news from newglusterbugs: [Bug 874348] mount point broke on client when a lun from a storage backend offline or missing . After there the data are scrap <http://goo.gl/CjwrE>
01:24 nightwalk joined #gluster
02:05 blendedbychris why is nfs 38465-38467 ?
02:05 blendedbychris not just one port?
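
As I understand it, the range exists because Gluster's built-in NFS server registers several RPC programs (the MOUNT- and NFS-related services) with the portmapper, each on its own port, rather than a single one. A hedged firewall sketch for letting NFS clients reach them, assuming iptables; adjust to your own policy:

    # portmapper plus the Gluster NFS service range mentioned above
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 38465:38467 -j ACCEPT
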
02:29 sunus joined #gluster
02:44 nightwalk joined #gluster
03:04 sripathi joined #gluster
03:31 shylesh joined #gluster
03:31 shylesh_ joined #gluster
03:31 bharata joined #gluster
03:56 pranithk joined #gluster
04:15 ika2810 joined #gluster
04:17 mdarade1 joined #gluster
04:26 pranithk joined #gluster
04:30 vpshastry joined #gluster
04:48 zaitcev joined #gluster
04:49 sripathi joined #gluster
04:50 zaitcev ok, a FAQ undoubtedly: ./rfc.sh generated Change-ID, but failed to push. No error messages. What to do?
04:51 glusterbot New news from newglusterbugs: [Bug 874390] Swift test fails <http://goo.gl/aMwmj>
05:04 zaitcev actually, n/m. Cloned from a header repo instead of directly and with ssh.
05:27 faizan joined #gluster
05:40 ankit9 joined #gluster
05:46 sunus writing a large file, like 4gb, to a gluster volume mount point fails, why?
06:11 shapemaker joined #gluster
06:15 mdarade1 joined #gluster
06:16 mdarade1 joined #gluster
06:19 sripathi joined #gluster
06:26 raghu joined #gluster
06:30 mdarade1 left #gluster
06:37 mtanner joined #gluster
06:38 sripathi joined #gluster
06:38 mtanner joined #gluster
06:40 khushildep joined #gluster
06:52 bala joined #gluster
07:05 bala joined #gluster
07:07 bala joined #gluster
07:09 guigui1 joined #gluster
07:11 seanh-ansca joined #gluster
07:12 ramkrsna joined #gluster
07:21 sshaaf joined #gluster
07:22 vpshastry joined #gluster
07:25 ngoswami joined #gluster
07:26 rgustafs joined #gluster
07:30 Triade joined #gluster
07:33 xiaolin joined #gluster
07:40 zhangxiaolins joined #gluster
07:43 fitzdsl left #gluster
07:54 ekuric joined #gluster
07:55 zhangxiaolins joined #gluster
07:56 ankit9 joined #gluster
07:56 hagarth joined #gluster
07:58 xiaolins joined #gluster
08:00 ctria joined #gluster
08:01 Azrael808 joined #gluster
08:17 oscailt left #gluster
08:18 sripathi joined #gluster
08:26 TheHaven joined #gluster
08:29 harshpb joined #gluster
08:30 sripathi joined #gluster
08:38 harshpb joined #gluster
08:40 shireesh joined #gluster
08:45 andreask joined #gluster
08:47 manik joined #gluster
08:58 JoeJulian sunus: With the information given, I can only guess that it's because your brick is less than 4gb.
08:58 sunus JoeJulian: i didn't set any size volume to any brick
09:00 sunus or is there any default value?
09:02 JoeJulian sunus: Of course you set the size of the brick. You have a drive of a fixed size. You probably created a partition table. Another fixed size. You installed a filesystem on that partition...
09:02 sunus JoeJulian: that partition is big enough
09:03 joeto joined #gluster
09:03 JoeJulian So what's the error that you get? How are you writing this file? What does the client log show when this fails?
09:04 sunus i get no errors.. the file just didn't show up in that brick of the server
09:06 JoeJulian Please answer the next two questions as well.
09:09 sunus JoeJulian: sorry, but, where is client log?
09:09 sunus JoeJulian: i am , log name?
09:09 JoeJulian /var/log/glusterfs
09:09 sunus JoeJulian: and?
09:09 JoeJulian The name of the log depends on the mount point. Slashes ('/') will be replaced with dashes ('-').
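
To make that concrete: for a hypothetical client mount at /mnt/myvol, the client log is the mount path with its slashes turned into dashes, under /var/log/glusterfs:

    # mount point /mnt/myvol -> client log mnt-myvol.log (illustrative)
    tail -f /var/log/glusterfs/mnt-myvol.log
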
09:10 sunus ok, wait
09:12 harshpb joined #gluster
09:13 TheHaven joined #gluster
09:13 sunus JoeJulian: http://fpaste.org/Xwsx/ it seems all right
09:13 glusterbot Title: Viewing tt-log by sunus (at fpaste.org)
09:14 JoeJulian Wait, you're running trunk?
09:16 z00dax so, that rsync filled up 2 of the 3 bricks.
09:16 z00dax it decided to completely ignore one of them
09:17 z00dax why would it do that ?
09:18 z00dax or put differently, under what circumstances would gluster decide it was going to put everything on 2 of 3 bricks available ( these are thousands of files )
09:19 JoeJulian because it was added after the fact and a rebalance (or at least a rebalance...fix-layout) wasn't done.
09:19 z00dax after what fact ?
09:19 JoeJulian After the volume was created, typically.
09:20 z00dax thats not correct
09:20 JoeJulian Was this the replace-brick thing?
09:20 z00dax the volume was created with 1 brick
09:20 z00dax and the other 2 were added later, a rebalance runs typically every 7 to 10 days
09:20 z00dax JoeJulian: yeah
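
For reference, the two rebalance variants JoeJulian is distinguishing, sketched for a hypothetical volume named myvol:

    # fix-layout only: recompute directory layouts so new files can land on the new brick
    gluster volume rebalance myvol fix-layout start

    # full rebalance: fix the layout and also migrate existing files onto the new brick
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status
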
09:24 sunus JoeJulian: found anything useful:(
09:24 sunus ?
09:25 vshastry joined #gluster
09:25 JoeJulian sunus: Sorry, you're running the active development branch. I don't use that so I can't support it.
09:26 z00dax fwiw, my setup is 3.3.0
09:26 sunus JoeJulian: hahahaha ok,  i will just do some research, then:)
09:26 z00dax didn't want to upgrade during this pull-back-to-one-machine-replace-machines-push-out process
09:27 JoeJulian Oh, right... I really strongly suggest doing so anyway. There's some pretty significant bugs squashed.
09:28 sunus JoeJulian: i opened a network traffic monitor on the server; when cp'ing a large file, there is no traffic to the server, or at least no obviously larger traffic
09:28 z00dax once i've got everything back in, I'll rebalance again and do the 3.3.1 upgrade
09:29 z00dax btw, the two brick hosts I'm dropping were centos-5 based, the new setup will be centos-6 all through ( noticed much better perf and features running under c6 )
09:29 DaveS joined #gluster
09:29 JoeJulian My personal opinion on running development branches is: 1) They're broken. 2) You should only be running it to look for and fix or report bugs. You should have enough knowledge to isolate and identify said bugs.
09:30 JoeJulian I'll be happy to discuss theory, but as for actually diagnosing that specific problem, you're kind-of on your own.
09:31 JoeJulian z00dax: I just did the same thing.
09:31 z00dax personally, I think one should only run stuff in production once its been a year since release :D
09:32 JoeJulian z00dax: I hear ya... but then again, we're running 3.3 in production. There were too many advantages to be able to wait.
09:34 z00dax same
09:35 JoeJulian And just to be clear, sunus, I am a ,,(volunteer). I do not represent any official entity. There may be other support options available to you, but if so I'm just not aware of them.
09:35 glusterbot A person who voluntarily undertakes or expresses a willingness to undertake a service: as one who renders a service or takes part in a transaction while having no legal concern or interest or receiving valuable consideration.
09:36 sunus JoeJulian: okok, yeah i know that:)
09:36 JoeJulian cool
09:37 sunus JoeJulian: i am just testing qemu-gluster now and found a bug, and in order to locate that bug, which leads me to this one.. so i think they might be related
09:37 z00dax btw, if a brick is full, any brick in the distributed set, I would have thought it's safe to assume the client would get notified
09:37 z00dax rather than glusterfs commit suicide as a whole
09:38 JoeJulian Could be. I'd point you at gluster-dev but even they're going to expect that you'll know where to find logs.
09:38 z00dax me ?
09:39 JoeJulian z00dax: If there's still room to put stuff, it'll do so and create dht links. If one of your three isn't even being touched, though, it ...
09:39 z00dax expecting every client app to parse logs on the glusterfs head-node is a bit... odd
09:39 JoeJulian no, z00dax, not you...
09:39 z00dax right
09:39 z00dax so how it started off was : 60% 60% 20% of 500gb in each
09:40 z00dax how it ended was 100% 100% 30%
09:40 z00dax if it decided nothing from brick3 was to be used, I'd have not been as confused.
09:40 JoeJulian Right...
09:40 z00dax i wodner if its a number of files thing
09:40 z00dax wonder even
09:41 z00dax maybe it did put the same number of files in each, just the larger ones ( there are some large ISOS in there ) onto 1  & 2
09:41 JoeJulian I suppose it's possible but it should be statistically unlikely.
09:41 JoeJulian Eh, could be that, sure.
09:42 JoeJulian Even so, once a file can't fit, it's supposed to put it on a brick with room and create the sticky-pointer.
09:42 tripoux joined #gluster
09:42 JoeJulian unless there wasn't even room for a 0 size file.
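
The "sticky-pointer" here is DHT's zero-length link file: a file with only the sticky bit set (mode 1000, shown as ---------T) whose xattr names the brick that actually holds the data. One way to inspect them, assuming a placeholder brick path of /export/brick1 and that getfattr is installed:

    # list DHT link files on a brick: zero bytes, sticky bit only
    find /export/brick1 -perm 1000 -size 0 -ls

    # show which subvolume a given link file points to
    getfattr -n trusted.glusterfs.dht.linkto -e text /export/brick1/path/to/file
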
09:42 z00dax machine load on the head-node did goto 30'ish ( its a 4 core machine ) and stayed there for ~ 6 hrs ( while I was asleep )
09:42 JoeJulian were you doing rsync --inplace?
09:43 * JoeJulian needs to do that soon too. Sleep that is...
09:43 andreask joined #gluster
09:43 harshpb joined #gluster
09:44 z00dax --ignore-existing --min-size=1 --exclude .glusterfs -av
09:45 z00dax ( so it does not pick up those 0-byte files and truncate what's on the volume )
09:45 z00dax and i'm running this from xfs on the old brick mount point, writing into the glusterfs vol
09:46 JoeJulian I just do the "find -perm 1000 -size 0 -print0 | xargs -0 /bin/rm" before I mess with it.
09:47 JoeJulian Use --inplace otherwise you're creating tempfiles (dht hash points to one brick) then renaming the file to the target filename where the hash will point to a different brick and a sticky pointer will be created.
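
Putting the two suggestions together, a sketch: clear the stale zero-byte pointers first (JoeJulian's find above), then copy with --inplace so each file is written under its final name and therefore hashes straight to its target brick. Paths are placeholders:

    # after the pointer cleanup, copy from the old brick's xfs mount into the volume
    rsync -av --inplace --ignore-existing --min-size=1 --exclude .glusterfs \
        /data/oldbrick/ /mnt/glustervol/
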
09:47 z00dax good point
09:47 sripathi joined #gluster
09:48 * z00dax restarts
09:51 z00dax that made a bit of a diff to speed as well
09:52 JoeJulian Well goodnight. I need to get some sleep and the project I was working on is going to take longer to finish than I'm willing to wait. See you all in about 5 hours (that's all the sleep I get tonight).
09:55 z00dax nite
10:09 z00dax -bash: start_pipeline: pgrp pipe: Too many open files in system
10:09 z00dax is prolly not a good sign
10:10 ngoswami joined #gluster
10:12 hagarth left #gluster
10:25 harshpb joined #gluster
10:34 manik joined #gluster
10:40 khushildep joined #gluster
10:40 manik joined #gluster
10:41 tjikkun_work joined #gluster
10:52 glusterbot New news from newglusterbugs: [Bug 874498] execstack shows that the stack is executable for some of the libraries <http://goo.gl/NfsDK>
10:57 unalt_ joined #gluster
10:57 manik joined #gluster
11:03 y4m4 joined #gluster
11:06 * jdarcy o_O
11:13 manik joined #gluster
11:25 pjefferson joined #gluster
11:27 tryggvil joined #gluster
11:45 pjefferson Hello, I have a query regarding the cluster.min-free-disk option
11:46 harshpb joined #gluster
11:48 jdarcy pjefferson: What's the question?
11:48 pjefferson I have set up a 2 brick volume, each with 1GB of total space
11:49 pjefferson I've configured cluster.min-free-disk option to be 100MB
11:50 pjefferson I've then written several files (~60MB in size) which is all fine
11:51 pjefferson And looking at the volume status for gv0, it's looking quite good
11:51 pjefferson Brick1: 48.7MB free space
11:51 pjefferson Brick 2: 24.0MB free space
11:52 pjefferson If I try to write another 60MB file, this fails - as expected
11:52 pjefferson I then add another brick, again with 1GB available
11:52 pjefferson I issue a rebalance fix-layout
11:53 pjefferson From what I understand of cluster.min-free-disk, brick 3 should now receive my new 60MB file if I retry
11:53 pjefferson as Brick1 and Brick2 have less then 100MB free
11:53 pjefferson however, it still wants to write to brick1
11:54 jdarcy Let me take a look at how that's implemented.
11:54 pjefferson cool, thanks
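
pjefferson's repro steps, expressed as CLI commands for the gv0 volume he mentions (server and brick names are placeholders):

    # cap how full a brick may get before DHT should prefer other bricks
    gluster volume set gv0 cluster.min-free-disk 100MB

    # once the first bricks are near full: add a third brick and fix the layout
    gluster volume add-brick gv0 server3:/export/brick3
    gluster volume rebalance gv0 fix-layout start
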
11:55 saz joined #gluster
11:55 grzany joined #gluster
11:55 grzany_ joined #gluster
11:56 hagarth joined #gluster
11:58 jdarcy OK, so it looks like it does check whether the initially-picked brick is over its limit, then if so it will choose one that's not.
11:59 jdarcy Is the client that's attempting to write still mounted from before the brick was added, or has it remounted?
12:00 pjefferson the volume was left mounted when the new brick was added
12:00 jdarcy It would be useful to know if it still happens with a newly mounted client.
12:00 pjefferson OK, i will try that now
12:04 pjefferson ok, i've re-mounted the volume, tried my write and it still wants to pick the old brick
12:04 jdarcy Hmmm.
12:05 jdarcy This is a write to a new file, or to an existing file?
12:05 pjefferson when i see the file has failed to write, i delete that file
12:05 pjefferson i then try to write the same file again
12:05 pjefferson same filename, path
12:06 jdarcy OK.  Seems like there must be a bug in the brick-assignment code.  Would you mind filing a report on bugzilla.redhat.com?
12:07 pjefferson sure thing. i'd be happy to.
12:07 jdarcy Let's see if I remember the glusterbot syntax: ,,(bug)
12:07 glusterbot jdarcy: Error: No factoid matches that key.
12:07 glusterbot jdarcy: Error: No factoid matches that key.
12:07 jdarcy Apparently not.
12:08 pjefferson I'll post the report now and track it on the web site.
12:08 jdarcy https://bugzilla.redhat.com/en​ter_bug.cgi?product=GlusterFS
12:08 glusterbot <http://goo.gl/UUuCq> (at bugzilla.redhat.com)
12:10 pjefferson should i file under the "distribute" component?
12:10 jdarcy Yes.
12:12 pjefferson OK, I'm using version 3.3.1
12:12 pjefferson Which I don't see in the version list
12:12 pjefferson hmmm... am i using a beta version? :/
12:12 pjefferson I thought that was the latest stable
12:12 pjefferson version*
12:13 jdarcy I think 3.3.1 is technically beta, but IMO it is actually more stable than 3.3 at this point.
12:13 pjefferson OK, I'll select 3.3.0 in the version list and note in the description what I'm actually using
12:14 jdarcy Good enough.  Usually it doesn't actually matter because the code's the same anyway.
12:14 pjefferson cool :)
12:18 jdarcy I don't see any current bugs that seem related, BTW.
12:18 kkeithley1 joined #gluster
12:20 jdarcy Time to get ready for my "commute".  BBIAB.
12:23 pjefferson Righto. Many thanks, jdarcy. I'll get this posted up shortly.
12:28 tjikkun_work joined #gluster
12:30 andreask has anyone an idea of the meaning of that "timeout" parameter for geo-replication setup?
12:45 pjefferson_ joined #gluster
12:50 harshpb joined #gluster
12:52 glusterbot New news from newglusterbugs: [Bug 874554] cluster.min-free-disk not having an effect on new files <http://goo.gl/xbQQC>
12:56 edward1 joined #gluster
13:03 vimal joined #gluster
13:06 hagarth joined #gluster
13:14 mohankumar joined #gluster
13:16 puebele1 joined #gluster
13:23 manik joined #gluster
13:24 hackez joined #gluster
13:25 faizan joined #gluster
13:41 harshpb joined #gluster
13:53 ika2810 left #gluster
14:02 guigui1 joined #gluster
14:02 ctria joined #gluster
14:07 plarsen joined #gluster
14:08 Dave2 joined #gluster
14:12 puebele joined #gluster
14:14 saz joined #gluster
14:22 sag47 joined #gluster
14:27 ekuric joined #gluster
14:32 puebele1 joined #gluster
14:34 pjefferson__ joined #gluster
14:35 saz joined #gluster
14:35 pjefferson joined #gluster
14:40 mitchbcn joined #gluster
14:41 mitchbcn left #gluster
14:42 raghu joined #gluster
14:46 Technicool joined #gluster
14:47 faizan joined #gluster
14:57 hagarth joined #gluster
15:00 ctria joined #gluster
15:03 stopbit joined #gluster
15:07 Nr18 joined #gluster
15:11 tryggvil_ joined #gluster
15:12 manik joined #gluster
15:20 ika2810 joined #gluster
15:34 ika2810 left #gluster
15:36 harshpb_ joined #gluster
15:40 mohankumar joined #gluster
15:49 tryggvil joined #gluster
15:51 jbrooks joined #gluster
15:55 semiosis :O
15:57 36DACEMPO joined #gluster
16:00 andreask no geo-replication user here?
16:02 rodlabs joined #gluster
16:09 daMaestro joined #gluster
16:15 wushudoin joined #gluster
16:20 nightwalk joined #gluster
16:20 seanh-ansca joined #gluster
16:31 seanh-ansca joined #gluster
16:43 harshpb_ joined #gluster
16:46 nightwalk joined #gluster
17:01 saz joined #gluster
17:04 dstywho joined #gluster
17:08 18WACEP95 joined #gluster
17:10 shylesh joined #gluster
17:10 shylesh_ joined #gluster
17:19 tryggvil joined #gluster
17:26 harshpb_ joined #gluster
17:26 shylesh joined #gluster
17:26 shylesh_ joined #gluster
17:31 Mo__ joined #gluster
17:34 zaitcev joined #gluster
17:37 Bullardo joined #gluster
17:40 puebele1 joined #gluster
17:50 JoeJulian andreask: It looks like it's passed directly to rsync, so it should be the I/O timeout in seconds.
17:56 sshaaf joined #gluster
17:57 tryggvil_ joined #gluster
18:00 wushudoin| joined #gluster
18:15 andreask JoeJulian: thanks!
18:31 nueces joined #gluster
18:33 raghu joined #gluster
18:35 hagarth joined #gluster
18:42 Nr18 joined #gluster
18:43 layer3 joined #gluster
18:44 sr71 joined #gluster
18:58 dberry joined #gluster
18:58 dberry joined #gluster
19:01 plarsen joined #gluster
19:11 DaveS_ joined #gluster
19:15 mohankumar joined #gluster
19:20 Bullardo joined #gluster
19:24 TheHaven joined #gluster
19:24 Bullardo_ joined #gluster
19:26 Bullardo joined #gluster
19:26 y4m4 joined #gluster
19:28 Bullardo_ joined #gluster
19:29 Bullardo joined #gluster
19:32 Bullardo_ joined #gluster
19:38 Psi-Jack OKay, well, GlusterFS has failed me, unfortunately.
19:38 Psi-Jack I ripped out GlusterFS, and went straight NFSv4, and my access times were 0.5s, consistently.
19:39 nightwalk joined #gluster
19:39 jdarcy Well, if that works for you, that's great.
19:40 Psi-Jack Yeah, it's just sad I can't get GlusterFS to perform anywhere near that good.
19:41 Psi-Jack But, 770% performance decrease is way too much.
19:41 H__ anyone here tried ceph for performance ? or is that a dirty word in here ? ;-)
19:41 Psi-Jack H__: I'd say, it's, off topic. :)
19:41 Psi-Jack Besides, Ceph isn't even stable yet, because it depends on an unstable filesystem, btrfs.
19:42 andreask no, it runs fine on XFS
19:42 andreask you can use btrfs, but its not mandatory
19:42 jdarcy I haven't run any performance tests on Ceph for too long.  Need to do that some time soon.
19:42 gcbirzan Psi-Jack: How do you measure 'access time'?
19:42 Psi-Jack gcbirzan: Hitting the website that's serving up the content.
19:43 Psi-Jack We went from 0.5s access times to a 6s average.
19:43 gcbirzan What... were you storing on gluster?
19:43 jdarcy GlusterFS is a hammer, Ceph is a screwdriver, some people need a saw.
19:43 H__ heh :)
19:44 Psi-Jack gcbirzan: All of the PHP content, the php session files.
19:45 gcbirzan first of all, dear God in heaven, don't do that :P
19:46 jdarcy I'm going to write a translator that returns ETOOMANYREFS whenever someone tries to open a .php file.
19:46 gcbirzan second, you might want to turn on stat caching. not sure gluster supports that, but fuse does
19:46 Psi-Jack gcbirzan: It was reasonably tuned for it all. apc.stat = 0; Zend Framework's autoloader's require_once all commented out in the framework, it /SHOULD/ have been fine.
19:46 gcbirzan also, outside the scope, but you should use one of them accelerators thingies, and... session files
19:46 gcbirzan use a database of sorts :P
19:47 jdarcy gcbirzan: It looks like he *was* using APC, and for some reason it wasn't doing its job.
19:47 Psi-Jack gcbirzan: Already am using sessions into a 3-tier directory storage hash. Which is faster than using a database by far.
19:47 Psi-Jack Instead of 1 monolithic directory.
19:47 gcbirzan we had a customer once trying to use this perl thingy with a trillion modules on gluster, with 8 instances of it on 50 nodes. that was fun, it took 10 minutes just to start the thing
19:48 Psi-Jack Hmmm, how would I enable this stat caching?
19:48 Psi-Jack Looking that up, I see very little usable information.
19:49 jdarcy Psi-Jack: I think stat-cache was superseded by md-cache, which is on by default.
19:51 gcbirzan the one in gluster, afair, was caching fstats
19:52 jdarcy Psi-Jack: What you really need, for PHP and similar workloads, is negative-lookup caching.
19:53 jdarcy negative-lookup caching = "I asked a second ago, wasn't there then, damn little chance it has appeared since, say no right away"
19:54 gcbirzan aha, it's called negative_timeout for mount options
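
As jdarcy notes below, glusterfs support for this was only just landing at the time, so treat the following purely as a sketch of what the option looks like on client builds that have it; the name may well not exist in 3.3:

    # hypothetical: mount with a 10-second negative-lookup cache, if the client supports it
    mount -t glusterfs -o negative-timeout=10 server1:/myvol /mnt/myvol
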
19:54 XmagusX joined #gluster
19:55 * gcbirzan does a little jig.
19:55 gcbirzan First time ever I was able to reboot a host and not have the volume die horribly, with 3.3.1
19:55 jdarcy gcbirzan: Yes, there's some limited support for it in FUSE, which we only very recently enabled in GlusterFS.  Not even sure if the patch has been merged yet.
19:56 Psi-Jack jdarcy: Well, I'm on Ubuntu 10.04, which is Linux 2.6.32
19:56 gcbirzan I just wrote an .so that overrode stat/stat64 and LD_PRELOADED it
19:56 JoeJulian @php
19:56 Psi-Jack So, a bit dated. ;)
19:56 glusterbot JoeJulian: php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
19:57 JoeJulian I've got a pretty good list of things that most people should do even if they're not using gluster on there.
19:57 gcbirzan yes, I know, not everything is a nail, but it was easier at the time to just use a hammer :P
19:57 Psi-Jack JoeJulian: Yep. Already been there, done that, everything PHP-wise has been tuned.
19:57 jdarcy I'm only seeing the patch on master, not 3.3 etc.
20:01 TSM2 joined #gluster
20:02 y4m4 joined #gluster
20:05 esm_ joined #gluster
20:07 JoeJulian I changed all my php to use absolute pathing so with apc and stat=0 my app pretty much never touches storage for the scripts. I should try a little test app that uses a search path and see if, once cached, it still does the negative lookups. If so, that might be a good feature request for apc.
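
A condensed sketch of that PHP-side tuning (absolute include paths plus APC with stat checks off), so cached scripts stop touching the Gluster mount on every request; the ini path is a placeholder and varies by distro:

    # with apc.stat=0, APC will not re-stat cached scripts per request, so code
    # changes need a cache clear or PHP reload to show up
    printf 'apc.enabled=1\napc.stat=0\n' >> /etc/php.d/apc.ini
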
20:09 Psi-Jack JoeJulian: Yeah, we're already effectively doing that as well.
20:10 nightwalk joined #gluster
20:11 JoeJulian Then there's broken cms's that, rather than using includes/requires for plugins, use open. That, of course, eliminates any improvements that apc would give.
20:12 Psi-Jack Nope. Beyond a few specific libraries, including Zend Framework, this codebase is completely custom stuff, and I've gone through it myself with a fine toothed comb looking for performance tweaks.
20:12 JoeJulian cool
20:12 JoeJulian aack. I'm supposed to have left already... ttfn
20:13 Psi-Jack I can find 0 reasons why GlusterFS, specifically, would be running as slow as it does, just for this application. Everything else I've done appears to work very nicely. Even CIFS mounted from a Windows-based DFS resource.
20:14 jdarcy Psi-Jack: I think the answers will only be found via strace or some sort of profiling, to find what calls that application is actually generating that take so long.
20:14 Psi-Jack Hmmm, well, most of our applications are already hooked up to newrelic, which does stuff like that.
20:14 jdarcy Psi-Jack: Actually wireshark would be another possibility.
20:15 jdarcy Bit painful matching up the requests and responses, but it can be done.
20:16 Psi-Jack Or, as I said, could hook up this test platform to newrelic, and it would do it all in nice pretty readable formats that can be drilled down into. ;)
20:20 jdarcy I don't know enough about newrelic to say whether it would provide the necessary information, but I'll take your word on the pretty part.
20:24 badone_ joined #gluster
20:29 lh joined #gluster
20:30 badone joined #gluster
20:39 hackez joined #gluster
20:40 Azrael808 joined #gluster
20:42 gcbirzan newrelic won't really tell you if requiring stuff is slow, but then never used it for php
20:46 noob2 joined #gluster
20:46 noob2 looks like installing oracle on a gluster fuse mount causes a few warnings to be thrown
20:47 noob2 i'm seeing a bunch of these in the logs: http://fpaste.org/BQ3I/
20:47 glusterbot Title: Viewing Paste #250609 (at fpaste.org)
20:47 noob2 i can't say for sure what the oracle admins were doing at the time this happened.  they said oracle threw an error about unzipping a log file
20:48 noob2 i revised it a little with the other warnings
20:52 sshaaf joined #gluster
20:53 elyograg just noticed that gluster and gluster-swift are now up to 3.3.1-2 ... is there a changelog from -1 somewhere?
20:57 noob2 elyograg: you can use rpm -ql to get the change log i believe
20:57 noob2 i think i asked the same question here some weeks ago :)
20:58 noob2 elyograg: sorry it's rpm -q --changelog glusterfs-3.3.1-2
20:58 ola` joined #gluster
20:59 mohankumar joined #gluster
21:02 Bullardo joined #gluster
21:02 elyograg one of the things in the glusterfs-swift changelog says "save swift .conf files correctly during upgrade" but this one also clobbered my changes.  i planned ahead and saved them elsewhere, though. :)
21:03 noob2 here's the changelog: http://fpaste.org/8dgh/
21:03 glusterbot Title: Viewing Paste #250618 (at fpaste.org)
21:04 elyograg oh, that was on -1, which is where i noticed the problem (and filed a bug).
21:09 atrius joined #gluster
21:13 TSM2 joined #gluster
21:17 ola` Hi folks!
21:18 ola` is somen using gluster as filebackend for serving big loads of webpages?
21:18 ola` some one*
21:21 nightwalk joined #gluster
21:33 Psi-Jack ola`: Your question is insufficient, as-is.
21:37 Bullardo joined #gluster
21:43 atrius joined #gluster
21:48 ola` Psi-Jack: im sorry for that, whats lacking?
21:48 Psi-Jack Substance.
21:49 ola` oh.. thats bad :/
21:51 ola` i will give it a new try tomorrow after some well-needed sleep :)
21:53 tryggvil joined #gluster
21:55 nightwalk joined #gluster
22:01 Nr18 joined #gluster
22:03 ctria joined #gluster
22:11 atrius joined #gluster
22:23 ladd left #gluster
22:41 nightwalk joined #gluster
22:51 stefanha joined #gluster
23:19 HavenMonkey joined #gluster
23:23 JordanHackworth joined #gluster
23:26 thekev joined #gluster
23:26 saz joined #gluster
23:26 bulde joined #gluster
23:27 zaitcev joined #gluster
23:27 xymox joined #gluster
23:27 Shdwdrgn joined #gluster
23:36 hattenator joined #gluster
23:55 arusso joined #gluster
