IRC log for #gluster, 2012-12-17


All times shown according to UTC.

Time Nick Message
00:12 yinyin joined #gluster
00:36 kwevers joined #gluster
00:40 yinyin joined #gluster
00:51 kevein joined #gluster
01:32 hchiramm_ joined #gluster
01:47 wN joined #gluster
01:55 mohankumar joined #gluster
02:22 __Bryan__ joined #gluster
02:33 wN joined #gluster
02:52 bharata joined #gluster
03:00 hchiramm_ joined #gluster
03:03 grade__ joined #gluster
03:03 grade__ hi guys! I already installed gluster 3.3.1 and it was successful :)
03:04 grade__ is there a way that I could configure my client to use autofs for automounting my gluster volume?
03:20 m0zes grade__: you can, but why not mount it on boot?
03:22 grade__ mozes: thanks for your reply. im using ubuntu but it can't mount it at boot time.
03:22 m0zes @ppa
03:22 glusterbot m0zes: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
03:23 m0zes grade__: putting _netdev in the fstab options doesn't fix it?
03:24 m0zes /home -fstype=bind fileserver:home
03:24 m0zes s/bind/glusterfs
03:24 m0zes s/bind/glusterfs/
03:24 glusterbot What m0zes meant to say was: /home -fstype=glusterfs fileserver:home
03:25 m0zes I think that is the correct syntax for autofs...
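For reference, a minimal sketch of the two approaches discussed above; the hostname, volume name and mount points are illustrative, not taken from this log:
```sh
# /etc/fstab — mount at boot; _netdev tells the init scripts to wait for networking
fileserver:/home   /home   glusterfs   defaults,_netdev   0 0

# autofs alternative, using a direct map
# /etc/auto.master:
/-      /etc/auto.direct
# /etc/auto.direct:
/home   -fstype=glusterfs   fileserver:/home
```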
03:29 hchiramm_ joined #gluster
03:29 mohankumar joined #gluster
03:29 grade__ mozes/gluserbot: thanks for that reply. I'll try the syntax in my autofs config. again thank you :)
03:54 hchiramm_ joined #gluster
03:59 shylesh joined #gluster
04:21 glusterbot New news from newglusterbugs: [Bug 887711] Cannot delete directory when special characters are used. <http://goo.gl/MOc1N> || [Bug 887712] Cannot delete directory when special characters are used. <http://goo.gl/OSCCR> || [Bug 887713] Cannot delete directory when special characters are used. <http://goo.gl/uw7Us> || [Bug 887714] Cannot delete directory when special characters are used. <http://goo.gl/N2GsK>
04:26 kevein joined #gluster
04:26 m0zes someone was bug happy...
04:28 m0zes and it isn't even a gluster bug. 'rm -rf -- -4oEo5VkALI' works, it is how rm parses its arguments.
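The `--` marker ends option parsing, so a filename beginning with a dash is treated as an operand rather than as options; a quick illustration:
```sh
touch -- -4oEo5VkALI        # create a file whose name starts with a dash
rm -4oEo5VkALI              # fails: rm tries to parse "-4oEo5VkALI" as options
rm -rf -- -4oEo5VkALI       # works: everything after -- is a filename
rm ./-4oEo5VkALI            # also works: the leading ./ hides the dash from option parsing
```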
04:45 isomorphic joined #gluster
04:46 kevein joined #gluster
05:04 rastar joined #gluster
05:04 mohankumar joined #gluster
05:08 yinyin joined #gluster
05:14 flakrat joined #gluster
05:22 sgowda joined #gluster
05:25 vpshastry joined #gluster
05:30 bulde joined #gluster
05:32 hagarth joined #gluster
05:38 hagarth1 joined #gluster
05:42 jbrooks joined #gluster
05:52 koodough joined #gluster
06:03 hchiramm_ joined #gluster
06:15 kevein joined #gluster
06:17 glusterbot New news from resolvedglusterbugs: [Bug 806851] [glusterfs-3.3.0qa31] - fileop failed in striped-replicated volume <http://goo.gl/rxvWu>
06:18 vimal joined #gluster
06:20 shireesh joined #gluster
06:30 raghu joined #gluster
06:34 overclk joined #gluster
06:34 ramkrsna joined #gluster
06:34 ramkrsna joined #gluster
06:35 mooperd joined #gluster
06:49 bala joined #gluster
06:53 rgustafs joined #gluster
07:11 Nevan joined #gluster
07:13 jtux joined #gluster
07:18 ngoswami joined #gluster
07:23 kshlm joined #gluster
07:23 kshlm joined #gluster
07:38 alan_ joined #gluster
07:39 Guest75739 hi, i have questions about data balancing between bricks.
07:39 Guest75739 i have 2 servers and 4 bricks (4+4 replica 2) 3x3TB and one 300G
07:41 Guest75739 and i have a problem: the 300G disk is full, what am i to do? for now i am removing the brick with 300G
07:44 Guest75739 here is a problem like the one i have http://gluster.org/pipermail/gluster-users/2010-April/004327.html
07:44 glusterbot <http://goo.gl/RSr48> (at gluster.org)
07:48 ctria joined #gluster
07:50 deepakcs joined #gluster
07:53 hagarth joined #gluster
07:56 samu60 joined #gluster
07:57 passie joined #gluster
08:05 andreask joined #gluster
08:07 yinyin joined #gluster
08:11 mdarade joined #gluster
08:12 jtux joined #gluster
08:14 mdarade left #gluster
08:15 ekuric joined #gluster
08:16 xavih joined #gluster
08:19 inodb joined #gluster
08:19 sripathi joined #gluster
08:21 samu60 hi all
08:21 samu60 I've been testing 3.4.0qa5
08:21 samu60 with a Type: Striped-Replicate
08:21 samu60 Number of Bricks: 1 x 2 x 2 = 4
08:22 samu60 and write performance is quite amazing
08:22 samu60 but i'm having horrible read performance
08:22 samu60 when I edit an existing file, it takes 3 seconds to open it
08:23 samu60 and there's a ruby website serving files and I got timeout constantly
08:23 samu60 Options Reconfigured:
08:23 samu60 performance.quick-read: on
08:23 samu60 performance.io-thread-count: 32
08:23 samu60 performance.cache-max-file-size: 128MB
08:23 samu60 performance.cache-size: 256MB
08:23 samu60 performance.io-cache: on
08:23 samu60 cluster.stripe-block-size: 2MB
08:23 samu60 nfs.disable: on
08:23 samu60 any hint how to debug read performance?
08:23 samu60 was kicked by glusterbot: message flood detected
08:23 samu60 joined #gluster
08:24 samu60 previously I was using distributed instead of striped and read performance was acceptable
08:24 samu60 although write was relatively slow
08:26 samu60 could it be a CPU problem in the reading node, which isn't capable of providing the cpu power required to retrieve the striped parts of a file and recreate the original file?
08:26 samu60 I do not see CPU peak
08:26 samu60 or memory issues...
08:28 nissim joined #gluster
08:29 yinyin_ joined #gluster
08:32 sripathi1 joined #gluster
08:35 rudimeyer joined #gluster
08:48 glusterbot New news from resolvedglusterbugs: [Bug 887268] glusterfsd process crashed <http://goo.gl/TSYCO>
08:54 sripathi joined #gluster
08:57 shireesh joined #gluster
09:00 bala joined #gluster
09:06 yinyin joined #gluster
09:08 gbrand_ joined #gluster
09:09 mohankumar joined #gluster
09:26 hagarth joined #gluster
09:29 DaveS joined #gluster
09:31 yinyin joined #gluster
09:36 DaveS joined #gluster
09:49 tryggvil joined #gluster
09:51 puebele joined #gluster
10:07 vpshastry joined #gluster
10:08 xinkeT joined #gluster
10:14 shireesh joined #gluster
10:16 bala joined #gluster
10:18 dobber joined #gluster
10:24 vpshastry joined #gluster
10:48 dobber_ joined #gluster
10:49 lh joined #gluster
10:49 lh joined #gluster
10:54 shireesh joined #gluster
11:10 Norky is it possible to change the transport options of a volume (from "tcp,rdma" to just "rdma"), or do I have to delete then recreate the volume?
11:18 manik joined #gluster
11:18 xinkeT joined #gluster
11:20 glusterbot New news from resolvedglusterbugs: [Bug 887712] Cannot delete directory when special characters are used. <http://goo.gl/OSCCR>
11:24 romero joined #gluster
11:30 tryggvil joined #gluster
11:41 edward1 joined #gluster
11:42 guest2012 joined #gluster
11:50 glusterbot New news from resolvedglusterbugs: [Bug 887713] Cannot delete directory when special characters are used. <http://goo.gl/uw7Us> || [Bug 887714] Cannot delete directory when special characters are used. <http://goo.gl/N2GsK>
11:59 shireesh joined #gluster
12:00 hagarth joined #gluster
12:04 mohankumar joined #gluster
12:25 andreask joined #gluster
12:25 mohankumar joined #gluster
12:30 peterlin Still having inconsistent results with nfs client caching. I could really use the help of an expert to understand whats going on..
12:31 H__ I see (using strace) find crawl at 2 directories per second, with about 10 files per directory on a local 3.2.5 glusterfs mount. The drives do about 30% of their IOPS. Network usage about 10%. There is no paging. What can I look at to determine the cause ?
12:38 khushildep joined #gluster
12:39 guest2012 joined #gluster
12:42 rwheeler joined #gluster
12:55 balunasj joined #gluster
12:56 nissim Hi Just installed CentOS 6 + kkeithle glusterfs using epel repo and mount RDMA stuck, it didnt happen on fedora 17
12:56 nissim can anyone shed some light here?
12:57 nissim BTW, using XFS
13:06 vpshastry left #gluster
13:22 Norky nissim, have you got RDMA hardware?
13:22 tryggvil joined #gluster
13:22 Norky are the modules for RMDA loaded on CentOS?
13:25 Staples84 joined #gluster
13:38 Kins joined #gluster
13:39 passie left #gluster
13:47 nissim sure I have an RDMA hardware
13:48 nissim I could only mount using vol.rdma when creating the volume with both tcp,rdma
13:54 x4rlos Anyone know if the auth.allow accepts /24 notation?
13:55 khushildep joined #gluster
13:56 Kins Anyone know where I can begin debugging this issue? [posix.c:658:posix_lookup] 0-gv0-posix: lstat on / failed: Input/output error
13:56 Kins .
13:56 Kins I just completed the quickstart, had everything working well, and when I rebooted one of my nodes, that started happening after mounting.
13:57 Kins Everything looks fine in peer status and volume info
13:57 x4rlos sounds to me like you have been editing the files/folders on the server itself directly rather than via a client. :-)
13:58 Kins Well, that is kind of confusing, had no idea I couldn't do that.
13:58 Kins So I need a separate server just for gluster?
13:59 x4rlos Or mount it on itself :-)
13:59 Kins Thats what I was doing, I think
14:00 Kins On the server -> mount -t glusterfs serverhostname:/gv0 /gluster
14:00 passie joined #gluster
14:00 hagarth joined #gluster
14:00 andreask joined #gluster
14:03 Kins x4rlos, does that sound right?
14:03 x4rlos what version you running? Send the peer status and volume status to hastebin or the likes.
14:03 x4rlos I have to nip over the road, but will be back in a few mins if noone answered you by then.
14:04 Kins Thanks
14:05 Kins http://dpaste.com/847246/
14:05 glusterbot Title: dpaste: #847246 (at dpaste.com)
14:05 Kins I guess the package might be somewhat out of date.
14:08 x4rlos yeah, the new versions 3.3 i find are much better. Are the folders "/export/brick1" empty on each of your hosts?
14:11 Kins ls: cannot access /export/brick1: Input/output error
14:11 Kins iirc, that happened when it was working too, but I could be wrong.
14:13 x4rlos Usually, it means that you have added the peer, added the volume, and then you constructed files/folders under there. the 3.3 versions handle this better - but i would advise removing the entire gluster package before adding the new version if you plan to.
14:14 x4rlos input output error - are you ls'ing the actual folder on the gluster server? or a mounted gluster folder?
14:14 Kins mounted
14:14 Kins I never even touched the /export/brick1 mountpoint
14:14 x4rlos hmmm. I would delete the volume and start again to be honest. Sounds a bit fruity.
14:15 x4rlos I assume that this is a test box you have set up?
14:15 Kins Yes
14:15 Kins No way to recover anything though?
14:16 Kins I'm working on a server image I can work from, and I had some config files in there that I'd rather not rewrite.
14:16 x4rlos if you need the things in there, you could move them on th physical server, and then once set up correctly, copy them into the mounted gluster sharepoint and it will re-distribute.
14:17 Kins I'm on the physical server
14:17 x4rlos So move them to server1:/backupfolder for now
14:18 Kins Both the mounted gluster volume, and the brick /export/brick1 are giving me input/output errors
14:19 dblack joined #gluster
14:19 x4rlos stop the gluster service, and check the folder itself again.
14:19 x4rlos on server2.
14:19 x4rlos server1 sorry.
14:19 Kins Same thing
14:20 Kins From my logs. web02 is the server, as web01 is my production server
14:20 rwheeler joined #gluster
14:21 Kins Still have input/access error on /export/brick1 even after stopping gluster service.
14:22 mohankumar joined #gluster
14:22 x4rlos hmmm. So this is suggesting a problem with the actual hdd then?
14:22 x4rlos - maybe :-./
14:22 Kins :S
14:22 Kins Oh
14:23 Kins I wonder if it could be a network problem?
14:23 glusterbot New news from newglusterbugs: [Bug 885281] Swift integration Gluster_DiskFIle.unlink() performs directory listing before deleting file ... scalable? <http://goo.gl/pR86P> || [Bug 886730] File object is opened and closed when not requested <http://goo.gl/R6xq0> || [Bug 887301] Container listing of objects is sorted resulting is higher latency in response time <http://goo.gl/789il>
14:23 Kins [2012-12-17 08:04:58.907588] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:1018)
14:23 glusterbot Kins: That's just a spurious message which can be safely ignored.
14:23 Kins Oh ok
14:23 x4rlos hahaha.
14:23 x4rlos It means that its not connected to the gluster endpoint because of one reason or another.
14:24 Kins Could that be the cause?
14:27 x4rlos i would discount gluster for the time being and check the disk is okay if possible.
14:27 Staples84 joined #gluster
14:28 x4rlos then i would remove the gluster version you have, and install 3.3.1 maybe.
14:35 stopbit joined #gluster
14:35 H__ Question: I see find (using strace) crawl at 2 directories per second, with about 10 files per directory on a local 3.2.5 glusterfs mount. The drives do about 30% of their IOPS. Network usage to other peer is about 10% of local gigabit. There is no paging. What can I look at to determine the cause ?
14:36 x4rlos Kins: What client and server version you running?
14:36 x4rlos Or more - what client version you running?
14:36 Kins 3.2.5
14:37 Kins I got it working
14:37 Kins Not sure how
14:37 Kins I umounted brick1 and ran xfs_check on it (nothing)
14:38 Kins After remounting, it magically started working.
14:38 x4rlos I have just set up a new one. I had an old client rather than the new one, and it had problem trying to connect. thought may have been similar.
14:38 Kins Client is just recently cloned from an image of the server
14:38 x4rlos yeah, but i thought you got the input/output even after stopping the service, and checking the server directly?
14:39 Kins Yeah
14:39 Kins I did
14:39 Kins Im still getting that problem on the client though, I bet the same thing will fix it
14:40 x4rlos good to know :-)
14:40 Kins Know of a good PPA for gluster 3.3 on ubuntu serveR?
14:40 x4rlos they not have an experimental one?
14:40 ndevos ~ppa | Kins
14:40 glusterbot Kins: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
14:41 bennyturns joined #gluster
14:41 x4rlos haha, there ya go :-)
14:41 Kins Thanks ;)
14:41 plarsen joined #gluster
14:47 Bonaparte joined #gluster
14:48 VSpike I'm setting up gluster as a back-end for webservers serving wordpress... looks like I should be using the NFS client instead of the gluster one.. is that correct?
14:48 Bonaparte Hello #gluster. We have a situation here. I am unable to delete or move certain files in a directory mounted using glusterfs
14:49 Bonaparte lsattr says: lsattr: Input/output error While reading flags on <file>
14:49 Bonaparte The permissions on file are: -rwxr--r--
14:50 VSpike Also, because I'm running on cruddy virtualization software and making/managing servers is *hard*, I've created two webservers that are also both gluster clients and gluster servers. Is there likely to be an issue with that arrangement or should I be OK?
14:54 m0zes Bonaparte: @split-brain ?
14:54 m0zes @split-brain
14:54 glusterbot m0zes: (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://goo.gl/FPFUX .
14:55 m0zes VSpike: mount nfs via localhost can cause deadlocks, even with in-kernel nfs.
14:56 noob2 joined #gluster
14:56 m0zes ~php | VSpike
14:56 glusterbot VSpike: php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
14:56 m0zes there *might* be some tuning you can do to make the glusterfs client more useable in your situation.
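One tuning that often comes up for PHP on GlusterFS, assuming APC is the opcode cache in use — these settings and values are illustrative, not a recommendation made in this exchange:
```ini
; php.ini / apc.ini sketch
apc.stat = 0               ; skip the stat() on every include once the opcode is cached
                           ; (code changes then require an APC cache clear)
realpath_cache_size = 256k ; cache resolved include paths longer
realpath_cache_ttl  = 3600
```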
14:57 kkeithley1 @ext4
14:57 glusterbot kkeithley1: Read about the ext4 problem at http://goo.gl/PEBQU
15:00 VSpike m0zes: would an alternative be to use nfs client from each box to the other?
15:01 m0zes VSpike: that should work. were you planning a replicated or distributed volume. if replicated you have no HA in that setup...
15:01 VSpike @xfs
15:01 noob2 JoeJulian: i did an upgrade from 3.3.0 to 3.3.1 on my development gluster over the weekend.  it was painless :)  time for the real thing once i can schedule it
15:02 m0zes noob2: fsck it, we'll do it live!
15:02 noob2 :D
15:02 noob2 haha
15:04 VSpike m0zes: replicated. I see your point though - either server failing will take down the site
15:06 twx_ utlize a third party clustering software for managing the IP failover based on underlying resources (think veritas cluster server)
15:06 twx_ or just simple as hell - ucarp, as described on google
15:06 twx_ there's probably better solutions but those two come to my mind
15:07 VSpike Oh cool, didn't know about ucarp. I'm used to carp because we use pfsense
15:07 VSpike I would have though of haproxy, but that means yet another server :)
15:08 VSpike So I'd be much better separating the webservers from the gluster servers, it seems, and using nfs
15:10 VSpike Is XFS the recommended underlying file system btw?
15:11 twx_ AFAIK, yes
15:12 VSpike Ah, but I have lots of small files so EXT4 would be better probably
15:12 rwheeler joined #gluster
15:13 twx_ sure, however I wonder how big of a diff it really makes with the gluster layers and networking on top of it
15:15 ekuric joined #gluster
15:16 x4rlos "" http://joejulian.name/blog/gluste​rfs-bit-by-ext4-structure-change/ ""
15:16 glusterbot <http://goo.gl/PEBQU> (at joejulian.name)
15:16 x4rlos I think this is the ext4 problem
15:23 romero hello all, is it possible to find out on which disk/brick my files are placed? thanks
15:26 wushudoin joined #gluster
15:26 m0zes romero: not without checking the bricks themselves, from what I've found.
15:27 x4rlos gluster volume status not tell ya?
15:34 JoeJulian romero, m0zes: Actually there is a way by reading a pseudo xattr... Let's see if I can find it again...
15:34 m0zes JoeJulian: can you see that from the client mount?
15:35 m0zes I guess that is what I assumed romero wanted...
15:35 JoeJulian I believe it's trusted.glusterfs.pathinfo
15:35 JoeJulian Right, from the client mount.
15:37 JoeJulian Yep, that's right. So "getfattr -n trusted.glusterfs.pathinfo $file" where $file is a file on your client mount.
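A quick way to try it; the path is illustrative, and the virtual xattr is read from the client mount, not from a brick:
```sh
getfattr -n trusted.glusterfs.pathinfo /mnt/glustervol/some/file
# the returned value names the brick (server:/brick/path) holding the file —
# or several bricks, for replicated volumes
```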
15:37 wica__ Hi, is it possible to let glusterfs log direct to syslog?
15:38 plarsen joined #gluster
15:38 JoeJulian wica__: No, I wish it was though. Please file a bug report listing that as a feature request.
15:38 glusterbot http://goo.gl/UUuCq
15:39 JoeJulian ... although, wica, I suppose if you specified the log file as /dev/log it would probably work.
15:40 wica JoeJulian: I will file a bug/feature
15:40 glusterbot http://goo.gl/UUuCq
15:40 JoeJulian I haven't tried it but I hate saying no... ;)
15:40 wica Yes, that is a way to do it, but I'd like it if the program itself could do it.
15:41 wica I can also run a script to parse the log and send it to syslog :)
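Untested, as noted above, but the log target can at least be pointed elsewhere at mount time or on the daemon command line; paths and the volume name are illustrative:
```sh
# client mount: redirect the client log
mount -t glusterfs -o log-file=/dev/log server1:/myvol /mnt/myvol

# glusterd itself takes -l / --log-file
glusterd -l /dev/log
```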
15:41 mdarade1 joined #gluster
15:41 JoeJulian I'm about to do the same but sending it to logstash.
15:41 romero JoeJulian, m0zes , x4rlos thank you :)
15:41 mdarade1 left #gluster
15:41 JoeJulian You're welcome, romero.
15:44 x4rlos Its all JoeJulian:
15:45 circut joined #gluster
15:47 Bonaparte left #gluster
15:47 johnmark wica: thanks. would like to see that as well
15:49 x4rlos will gluster support notifications on any failures?
15:52 jbrooks joined #gluster
15:54 wica x4rlos: If it support syslog, it will :)
15:54 wica sorry :)
15:55 wica Bug request done
15:55 x4rlos wica: I like things like email notification and pushover/text messages. Could build it into a nagios check or write a daemon - but didnt want to reinvent the wheel. With the selfheal etc seems like there'll be a few things to add in the future and
15:55 x4rlos then maybe they will wish they had the bits there from the start.
15:56 wica x4rlos: I like it in syslog, so I can have a nagios on 1 place, instead of on every server
15:58 x4rlos How do you implement that check?
15:59 x4rlos (if you dont mind me asking)
15:59 johnmark x4rlos: I think that would require something like nagios
16:00 wica x4rlos: 1 moment
16:00 x4rlos johnmark: Yeah, i'm wondering if its a simple cat, which required nagios to have ssh access passwordless, or exported a script over custom snmp oid with any details, etc.
16:01 x4rlos a gluster_check for nagios (or the others) would perhaps be good.
16:02 wica x4rlos: http://justpaste.it/1my6
16:02 glusterbot Title: #!/bin/bash # Gluster hail-failed file checker # get list... - justpaste.it (at justpaste.it)
16:02 wica not beautiful, but does the job
16:03 x4rlos aaah, i see what you do there.
16:04 daMaestro joined #gluster
16:04 x4rlos Have you added nagios into the peer group to be able to check the status?
16:05 wica it is a nrpe running on 1 of the peers
16:06 wica anyway, me is gone to $home
16:06 mdarade1 joined #gluster
16:07 x4rlos wica: Thanks for that. Saves me a little time.
16:08 wica np
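wica's paste is not reproduced here, but an NRPE check in that spirit might look roughly like this; the volume name is illustrative, and the output format of `gluster volume heal ... info heal-failed` (a 3.3 command) varies between releases:
```sh
#!/bin/bash
# Nagios/NRPE-style check: count heal-failed entries on a volume
VOL=myvol
failed=$(gluster volume heal "$VOL" info heal-failed \
         | awk '/^Number of entries:/ {sum += $4} END {print sum+0}')
if [ "$failed" -gt 0 ]; then
    echo "CRITICAL: $failed heal-failed entries on $VOL"
    exit 2
fi
echo "OK: no heal-failed entries on $VOL"
exit 0
```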
16:08 passie left #gluster
16:13 manik joined #gluster
16:14 x4rlos Interesting. Just firewalled off the two gluster servers. And server1 on a volume status <VOL> says: operation failed whereas on server2 it just sees itself as the gluster server. thought would have been the other way.
16:20 RobertLaptop joined #gluster
16:23 nightwalk joined #gluster
16:23 glusterbot New news from newglusterbugs: [Bug 887924] Logging to syslog. <http://goo.gl/6skUw>
16:26 nissim joined #gluster
16:31 mdarade joined #gluster
16:47 Nevan joined #gluster
16:50 isomorphic joined #gluster
16:56 zaitcev joined #gluster
17:03 nissim joined #gluster
17:12 mdarade left #gluster
18:05 nullck joined #gluster
18:21 y4m4 joined #gluster
18:25 jermudgeon joined #gluster
18:31 jermudgeon joined #gluster
18:31 Mo__ joined #gluster
18:36 andreask joined #gluster
18:42 jermudge_ joined #gluster
18:44 eightyeight when running 'gluster volume status', i see that my bricks show 'N/A' for the 'Port' and 'Pid'. further, i don't see the data replicated into my bricks. how can i resolve this?
18:45 eightyeight recently, i had to reinstall this peer. i have restored the UUID, and can see the rest of the peers
18:46 eightyeight i can client mount the volume, and access all the data in it, and place data in it, even
18:47 JoeJulian ~pasteinfo | eightyeight
18:47 glusterbot eightyeight: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
18:48 eightyeight JoeJulian: http://ae7.st/p/63a
18:48 glusterbot Title: Pastebin on ae7.st » 63a (at ae7.st)
18:49 JoeJulian Which server is giving you the problem?
18:49 eightyeight eightyeight
18:50 eightyeight http://ae7.st/p/8mo
18:50 glusterbot Title: Pastebin on ae7.st » 8mo (at ae7.st)
18:50 JoeJulian peer status looks good from that server and any one other?
18:50 eightyeight yes
18:51 andreask left #gluster
18:51 JoeJulian stop glusterd, truncate /var/log/glusterfs/etc-glusterfs-glusterd.vol.log, start glusterd, wait 10 seconds and paste the log file please.
18:51 JoeJulian sorri, on eightyeight
18:52 JoeJulian sorri? I can't even blame that on lack of caffeine...
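The sequence spelled out as commands; the init-script name is an assumption (sysvinit-style service), so adjust for the distro in use:
```sh
service glusterd stop
> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   # truncate the log
service glusterd start
sleep 10
# then paste the (now short) log file
```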
18:54 eightyeight http://ae7.st/p/9mc
18:54 mooperd joined #gluster
18:54 glusterbot Title: Pastebin on ae7.st » 9mc (at ae7.st)
18:55 JoeJulian ... did that just fix it?
18:55 eightyeight it apears so, and it seems that data is being replicated to the bricks
18:56 JoeJulian It looks like it did. I was kind-of hoping it wouldn't though so I could figure out why this is happening sometimes.
18:56 eightyeight so just a HUP of the server?
18:56 JoeJulian It was the restart of glusterd. It triggered the start of the missing brick servers.
18:56 eightyeight interesting
18:57 eightyeight so, i'm guessing restoring a node back to the cluster isn't straight forward? or buggy?
18:58 JoeJulian I know of a couple bugs that could be related that are currently fixed in qa.
19:21 y4m4 joined #gluster
19:27 Technicool joined #gluster
19:30 * jdarcy_ @_0
19:37 nissim Hello all
19:38 nissim Finally I can say my Infiniband is working at 40Gbps
19:39 nissim I still need to understand how can I increase glusterfs fuse mount around tcp of infiniband
19:40 nissim glusterfs still has an impact of filesystem performance
19:41 m0zes nissim: hooray. what was the issue?
19:41 y4m4 joined #gluster
19:41 nissim Now I can get up to 750MB/s on a single node over RDMA or tcp
19:41 nissim same test on local filesystem provides 1.7Gbps
19:42 wushudoin joined #gluster
19:42 nissim I want to be able to optimize gluster performance over RDMA and tcp
19:43 nissim I want results of dd command I ran to be closer to local filesystem on localhost
19:45 nissim m0zes ??
19:46 ode2k joined #gluster
19:47 ode2k Hey guys, does anyone know if there is a bug relating to a gluster rebalance (using version 3.3.0 on CentOS 6.3 x64) where it leaves 'open files' and then the server gives the error "VFS: file-max limit 1608348 reached"
19:48 m0zes nissim: I thought you were another user that had brought infiniband questions in here the other day.
19:48 ode2k it's usually after about 2-3 days of running the rebalance that it does this
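To watch how close a box gets to that limit while a rebalance runs (the new value below is illustrative):
```sh
cat /proc/sys/fs/file-nr          # allocated handles, unused handles, system-wide maximum
sysctl fs.file-max                # the same maximum via sysctl
sysctl -w fs.file-max=3000000     # raise it temporarily; persist in /etc/sysctl.conf
```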
19:48 nissim I am that user
19:49 nissim I dont see anyone here in the last few days rasing infiniband questions but me
19:49 nissim still hoping to get some answers :(
19:49 m0zes yeah, I see that in my backlog now. why wasn't your infiniband performing so well?
19:50 nissim first, I ran fedora which is not supported by Mellanox OFED distro
19:50 nissim so I moved to CentOS 6.3
19:51 JoeJulian Really? That's interesting. I wonder why the older kernel works better than a newer one.
19:51 nissim next I removed all distibution related infiniband rpms and build the latest OFED package
19:51 JoeJulian Ah, ok.
19:51 JoeJulian That's the difference then.
19:52 JoeJulian Using the blob.
19:52 nissim disabled ServerSpeed service
19:52 nissim disabled BIOS hyperthreading
19:52 ode2k I have to stop the rebalance for about 10 minutes before it closes all of the 'open files' then resume the rebalance... There are only about 16000 open files after I restart the rebalance until a couple days go by. I use the Distributed gluster setting.
19:52 nissim disabled BIOS power mgmt
19:53 nissim ran ib_write_test and got 5000MB/s
19:53 nissim got 5000MB/s on localhost
19:54 m0zes yeah, I've seen wild performance variation with the automatic overclocking stuff enabled in bios, now that I think about.
19:54 nissim now after I know infiniband is working very well on the low level, I am still asking myself why Gluster/Redhat did such a bad job on FUSE that I get only 700MB/s while I can get much more on localhost through RDMA
19:55 nissim so back to my question, does anyone know how can I optimize performance on glusterfs client
19:55 nissim on both tcp/rdma
19:55 nissim ?
19:59 nissim sorry people, but I see so many users connected and its so difficult to get few answers
20:00 nissim I guess, performance is important to all of us
20:00 nissim thats the main reason we choose Gluster, isnt it?
20:00 JoeJulian fuse performance is limited by memory, cpu, and bus performance due to context switching.
20:01 JoeJulian nissim: Don't be so negative. :P
20:01 nissim I am not
20:01 Technicool nissim, i think for performance in gluster, its important to properly define it
20:01 nissim I am just trying to talk to some people and no one answers :(
20:01 JoeJulian "I see so many users connected and its so difficult to get few answers" is a negative statement.
20:01 Technicool for example, which is faster, a semi truck, or a ferrari?
20:01 Technicool most people say ferrari
20:02 Technicool but then add the requirement "to haul 40 tons to Fargo"
20:02 Technicool and the semi starts looking a bit faster
20:02 JoeJulian Btw, nissim, you do realize that we aren't paid to sit here and answer questions, right? I work for Ed Wyse & Co, Inc, not Red Hat. For direct Red Hat support, see ,,(commercial)
20:02 glusterbot Commercial support of GlusterFS is done as Red Hat Storage, part of Red Hat Enterprise Linux Server: see https://www.redhat.com/wapps/store/catalog.html for pricing also see http://www.redhat.com/products/storage/ .
20:03 johnmark JoeJulian: beat me to it
20:03 nissim ok, Fuse is limited by system spec, but I have 12 cores cpu + 64GB ram + RDMA backbone, so ...
20:03 kkeithley1 Ouch. Gluster wrote their fuse stuff before Red Hat acquired them. And to be fair, FUSE just isn't going to be fast. The FUSE bridge is pretty lean code. It's speed, or lack thereof, is not a reflection on Gluster devels.
20:03 JoeJulian If you're willing to hang out and chat with fellow sysadmins, that's what we're here for. If you're going to demand immediate answers, I'll go back to diagnosing why my kickstart script broke.
20:03 kkeithley1 s/It's speed/Its speed/
20:03 glusterbot What kkeithley1 meant to say was: Ouch. Gluster wrote their fuse stuff before Red Hat acquired them. And to be fair, FUSE just isn't going to be fast. The FUSE bridge is pretty lean code. Its speed, or lack thereof, is not a reflection on Gluster devels.
20:04 nissim I know that JoeJulian, but we are here to share information, that's what makes a community stronger
20:04 semiosis @joe's performance metric
20:04 glusterbot semiosis: nobody complains.
20:04 nissim My remarks are not personal, I just want to encourage people to speak their mind
20:05 Technicool nissim, there are a few volume options that can help, maybe start by showing us your gluster volume info output?
20:05 semiosis ,,(pasteinfo)
20:05 glusterbot Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
20:05 nissim I am currently using default settings
20:06 nissim tried to play with some performance options but nothing really improved the performance over RDMA nor nfs
20:06 Technicool ok, well, for RDMA, there is at least one option that can be set to increase performance
20:06 Technicool what OS/kernel?
20:06 nissim CentOS 6.3 default kernel
20:07 Technicool nissim, take a gander at this: http://community.gluster.org/a/linux-kernel-tuning-for-glusterfs/
20:07 glusterbot <http://goo.gl/URHmU> (at community.gluster.org)
20:08 chirino joined #gluster
20:08 Technicool ignore the part about vfs_cache_pressure
20:09 Technicool as in, do not change the value to > 100
20:09 nissim ok paste the configuration http://fpaste.org/ScY2/
20:09 glusterbot Title: Viewing Gluster Over TCP,RDMA by nissim (at fpaste.org)
20:09 nissim yep
20:11 rwheeler joined #gluster
20:11 bfoster nissim: one option you can try, if you haven't, is the gid-timeout mount option
20:12 tryggvil joined #gluster
20:13 nissim what exactly is the gid-timeout?
20:14 Technicool nissim, thought you said everything was default?
20:14 Technicool for the three options you set, are those the only values you have tried?
20:14 nissim not really
20:14 nissim plays with the cache size etc ...
20:14 Technicool also, have you tried more than one brick?
20:15 nissim got same results with or without these options
20:15 Technicool of course a distributed filesystem is going to perform poorly if you only give it one node to work with
20:15 bfoster nissim: it determines how long gluster will cache group information about the calling process into gluster. without it, gluster does some work that has shown to impact performance
20:15 nissim I know
20:15 bfoster ... on each request
20:15 nissim how come I dont see it in the admin guide?
20:16 bfoster it may or may not make a difference for you, but it's easily tested with '-o gid-timeout=1' (for 1 second)
20:17 nissim I am going to run a distributed volume over 5 different nodes, but I dont see how it will increase my performance since a file will always be written to a single node
20:17 nissim where do I put this option?
20:18 nissim is it in the mount line??
20:18 Technicool why will it only be written to one node?
20:18 bfoster it's a mount option, i.e.: mount -t glusterfs my.gluster.host:myvol /mnt -o gid-timeout=1
20:18 nissim because its distributed
20:18 nissim a single file is written to a single node
20:18 Technicool nissim, sorry i missed how those are related
20:18 nissim its not striped
20:19 Technicool a single node at a time
20:19 Technicool so, one out of five
20:19 nissim yep
20:19 Technicool and you don't see how having five available can be faster than one?
20:19 nissim one second let me try what you provided here
20:19 nissim I see that
20:19 nissim one sec ...
20:21 nissim gid-timeout=1 unknown option (ignored)
20:22 bfoster hmm, maybe you need a newer client version
20:23 semiosis nissim: ip over ib?
20:24 andreask joined #gluster
20:24 nissim what client are talking about? I am using glusterfs version 3.3.1-4
20:24 bfoster the upstream commit is 59ff893d11844eb52453ce4f7f098df05fcde174, not sure if/when it made it into a release...
20:24 nissim can you give me more details?
20:25 * bfoster is downloading the latest tarball to look...
20:27 kkeithley It's not in 3.3.1
20:27 bfoster yeah, unfortunately I don't see it in the 3.3.1 tarball. you'd probably need to download the upstream repo w/ git to try it out
20:28 nissim is there an easy way to build rpms from git?
20:28 bfoster nissim: another suggestion if you're really trying to break down client performance, run your test with fuse using some other filesystem on top of your local storage
20:29 bfoster (libfuse packages an example filesystem that does nothing but pass through operations)
20:29 nissim I am using XFS
20:29 kkeithley It is in 3.4.0qa5 though
20:29 bfoster that probably gives you an upper bound on what to expect from anything driven via fuse
20:29 nissim Thanks kkeithley
20:30 kkeithley easy way to build rpms? ;-) There's a way. It's easier than building .debs, I can tell you that much.
20:30 * Technicool agrees with kkeithley ^
20:31 kkeithley You could take the 3.3.1-4.src.rpm from my repo. Install it. This will populate ~/rpmbuild/.... Replace the glusterfs-3.3.1.tar.gz with your own, then do `cd rpmbuild; rpmbuild -bb SPECS/glusterfs.spec`
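Roughly, the steps kkeithley describes; the src.rpm filename is illustrative, and the spec's Version/Source fields also need to match the replacement tarball:
```sh
rpm -ivh glusterfs-3.3.1-4.el6.src.rpm      # unpacks into ~/rpmbuild/SOURCES and SPECS
cp glusterfs-3.4.0qa5.tar.gz ~/rpmbuild/SOURCES/
# edit ~/rpmbuild/SPECS/glusterfs.spec so Version and Source match the new tarball
cd ~/rpmbuild
rpmbuild -bb SPECS/glusterfs.spec
```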
20:32 nissim can you provide the link for that version?
20:32 y4m4 joined #gluster
20:32 kkeithley 3.4.0qa5?
20:32 nissim yep
20:33 kkeithley download.gluster.org? Hang on a sec
20:33 ode2k Actually, I'm using glusterfs 3.3.1-1 x86_64 versions... Anyone have any ideas?
20:34 kkeithley http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.0qa5.tar.gz
20:34 glusterbot <http://goo.gl/mC424> (at bits.gluster.org)
20:34 nissim one sec ... checking
20:34 semiosis ,,(qa-releases)
20:34 glusterbot I do not know about 'qa-releases', but I do know about these similar topics: 'qa releases'
20:34 semiosis ,,(qa releases)
20:34 glusterbot The QA releases are available at http://bits.gluster.com/pub/gluster/glusterfs/ -- RPMs in the version folders and source archives for all versions under src/
20:36 nissim I see that are RPMS available
20:36 nissim so no need to rebuild them
20:37 nissim anyone know what changes were made to these releases?
20:37 nissim any performance changes? rdma ? etc...?
20:38 daMaestro joined #gluster
20:41 ode2k Are there changelogs available for the qa-releases?
21:01 rudimeyer_ joined #gluster
21:01 semiosis ode2k: if we're lucky things get tagged in git.
21:01 semiosis for example... https://github.com/gluster/glusterfs/commits/v3.4.0qa5
21:01 glusterbot <http://goo.gl/J7mjt> (at github.com)
21:02 ode2k cool, thanks
21:02 semiosis yw
21:09 H__ Question: find (using strace) crawls at 2 directories per second, with about 10 files per directory on a local 3.2.5 glusterfs mount. The drives do about 30% of their peak IOPS. Local gigabit network usage to other peer is about 10% of peak. There is no paging on the systems. What can I do to determine the cause ?
21:12 nissim upgrading my RPMS to v3.4.0qa5
21:12 nissim lots of bug fixes to RDMA + performance, etc ...
21:12 nissim although not stable
21:17 nueces joined #gluster
21:25 torrancew joined #gluster
21:26 torrancew Does anyone know at what version of gluster the 'volume heal' subcommand was added?
21:26 JoeJulian torrancew: 3.3.0, but 3.3.1 is recommended over that.
21:27 torrancew Gotcha. Anyway to achieve similar functionality on 3.2? Doing a quick POC before I go further
21:27 torrancew 3.2.7, specifically
21:28 JoeJulian Sort-of. Any reason to use 3.2.7 though?
21:28 JoeJulian Esp. for a poc.
21:29 torrancew It was what was packaged in the vm I spun up ;)
21:29 torrancew Basically, this is a pre-POC POC - I was given a really, really harebrained idea, and am testing feasibility on a small local vm before putting it on a server in the dev environment
21:32 torrancew JoeJulian: how can I get the sort-of behaviour so I can move this on to real-hardware with 3.3.1?
21:38 tryggvil joined #gluster
21:41 semiosis torrancew: just use the latest, 3.3.1
21:41 JoeJulian torrancew: There's a script on my blog (go way back under the glusterfs category) http://joejulian.name and there's a python script that checks for dirty xattrs. Would have to be run on each brick.
21:41 glusterbot Title: JoeJulian.name (at joejulian.name)
21:41 JoeJulian Or some variation thereof.
21:42 JoeJulian Would be easiest just to upgrade the vm using either of the ,,(repos)
21:42 glusterbot I do not know about 'repos', but I do know about these similar topics: 'repository', 'yum repository'
21:42 JoeJulian really? damn.
21:42 JoeJulian @learn repos as See @yum repo, @apt repo or @git repo
21:42 glusterbot JoeJulian: The operation succeeded.
21:43 JoeJulian @forget repos
21:43 glusterbot JoeJulian: The operation succeeded.
21:43 JoeJulian @learn repos as See @yum, @ppa or @git repo
21:43 glusterbot JoeJulian: The operation succeeded.
22:01 torrancew JoeJulian: thanks, good advice all around
22:12 bauruine joined #gluster
22:17 Ryan_Lane joined #gluster
22:17 Ryan_Lane any idea when download.gluster.org is going to be back up?
22:17 Ryan_Lane I have a memory leak that very, very badly needs to be fixed: http://ganglia.wikimedia.org/latest/?r=hour&cs=&ce=&s=by+name&c=Glusterfs%2520cluster%2520pmtpa&tab=m&vn=
22:17 glusterbot <http://goo.gl/g0bWk> (at ganglia.wikimedia.org)
22:22 JoeJulian What're you looking for on that site (that I didn't know was down: johnmark, Technicool)
22:23 Ryan_Lane the newest ubuntu package download
22:23 JoeJulian @ppa
22:23 glusterbot JoeJulian: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
22:23 Technicool which site?
22:23 Technicool JoeJulian ^
22:23 JoeJulian Technicool is scrollback challenged... download.gluster.org
22:23 Ryan_Lane uuuuugggghhhhhhhh
22:23 Ryan_Lane are there seriously no lucid packages?
22:24 JoeJulian semiosis: Another vote for lucid.
22:24 JoeJulian semiosis: and a rather big one imo
22:24 Technicool JoeJulian, its true, i am
22:24 Technicool checking on .org, i saw that .com wasn't working which was more cosmetic
22:24 Technicool earlier i mean
22:24 JoeJulian right
22:25 Technicool how convenient since I was going to log in after lunch anyway
22:25 Ryan_Lane ah. org works
22:25 Ryan_Lane but mentions they've been moved to the ppa
22:26 Ryan_Lane if I upgraded my servers to 3.3.1 while my clients were running 3.3.0, would it be a problem?
22:28 johnmark Ryan_Lane: I've been burned by this before, but I want to say this should be ok
22:29 Ryan_Lane well, basically at this point my cluster is failing
22:29 johnmark incidentally, you'll be happy to note this: http://www.gluster.org/community/documentation/index.php/Features/Opversion
22:29 glusterbot <http://goo.gl/L29fA> (at www.gluster.org)
22:29 johnmark ah, well. that's not good
22:29 Ryan_Lane that giant spike in memory I linked to was from a single rsync running
22:29 Ryan_Lane a small amount of data
22:34 johnmark Ryan_Lane: I don't understand why your setup is so troublesome
22:34 johnmark it really shouldn't be
22:34 johnmark you're doing things that, frankly, aren't that far out of the ordinary
22:34 johnmark ie. a single rsync running
22:35 johnmark glusterfs client or NFS?
22:36 Ryan_Lane glusterfs
22:36 Ryan_Lane I am, of course, running 300 volumes
22:37 Ryan_Lane so I may not be totally ordinary
22:40 Ryan_Lane I think this is very much related to the memory leak in my version of gluster
22:40 Ryan_Lane of course I can't upgrade because my servers are running lucid
22:40 Ryan_Lane my processes are being OOM killed
22:44 johnmark Ryan_Lane: is there a bug tied to this particular memory leak?
22:44 Ryan_Lane I was told there was definitely a large memory leak in 3.3.0
22:45 johnmark ok
22:45 Ryan_Lane this is pretty frustrating. there's honestly nothing I can do about this
22:45 johnmark I think there was - just wishing it was tied to a specific bug so that I could make sure it was fixed in 3.3.1
22:45 Ryan_Lane I guess I'll just have an outage till whenever a package is available
22:46 johnmark Ryan_Lane: have you ever requested lucid builds before?
22:46 Ryan_Lane well, I didn't ask for lucid to be dropped, it's an LTS, after all
22:47 Ryan_Lane it's been available for a very long time. I assumed you guys would continue to support LTS
22:47 Ryan_Lane I made a poor assumption
22:47 johnmark ah, ok
22:47 johnmark didn't realize lucid was the last LTS
22:47 Ryan_Lane it's not
22:48 johnmark oh
22:48 Ryan_Lane precise is
22:48 * johnmark has lost track
22:48 Ryan_Lane but, it's generally a good idea to support the current and last LTS
22:48 Ryan_Lane as people move from one to the other
22:48 rudimeyer_ joined #gluster
22:49 johnmark that sounds relatively reasonable. we're going to have to evaluate which releases we support
22:53 johnmark which doesn't help you..
22:54 Ryan_Lane this will be a problem for every single ubuntu user
22:54 Ryan_Lane anyone who is using ubuntu in production will have a mix of LTS versions
22:56 johnmark Ryan_Lane: that's good to know
22:57 johnmark I always used the latest LTS, but that's just me :)
22:57 Ryan_Lane well, right
22:57 Ryan_Lane but...
22:57 Ryan_Lane when you have like 1000+ systems, that's not going to work so well ;)
22:58 johnmark agreed :)
22:59 semiosis o_O
22:59 johnmark semiosis: thoughts?
22:59 semiosis i'd rather be precise than lucid
22:59 johnmark lol
22:59 Ryan_Lane I would too
22:59 johnmark I'd rather be either than dapper
22:59 semiosis hahaha
23:00 Ryan_Lane and about 60% of our systems are, and all new installs are precise
23:00 Ryan_Lane but that's still like 400 systems that are lucid ;)
23:00 semiosis i would rather not make a glusterfs-server package for lucid because the upstartified stuff was too hacky in lucid
23:01 semiosis a pure client package wouldn't be a problem though
23:01 semiosis Ryan_Lane: do you need glusterfs-server packages for lucid?
23:02 Ryan_Lane I can live without that
23:02 Ryan_Lane I'll need to upgrade my servers, but that's fine
23:03 semiosis or maybe i'll just go for it & let the chips fall where they may
23:03 Ryan_Lane heh
23:03 Ryan_Lane I wonder what triggers this memory leak
23:03 Ryan_Lane because it's stable, stable, then BAM all memory gone
23:05 johnmark does the memory leak only happen on the lucid machines?
23:05 * johnmark is fishing
23:05 Ryan_Lane semiosis: thanks :)
23:05 Ryan_Lane well, all of my servers are lucid :)
23:05 johnmark oh!
23:05 johnmark heh
23:05 Ryan_Lane so, I have no clue if it works better on precise
23:05 johnmark ok
23:05 johnmark semiosis: you are a rock star
23:06 johnmark ...or ninja... or... choose your preferred hipster reference
23:06 Ryan_Lane they all leak together, as well
23:07 Ryan_Lane if I restart gluster (also killing all glusterfsd processes) node by node, then things go back to being stable
23:13 elyograg I have a couple of gluster peers with no bricks.  I want to remove them from the gluster cluster that they're in right now so I can add them to another one.  After detaching them and killing the processes, what files/dirs do I need to delete?
23:15 semiosis Ryan_Lane: uploading now... with NO support for mounting clients from localhost servers at boot time
23:15 Ryan_Lane thankfully I no longer need that :)
23:15 semiosis Ryan_Lane: and i've not had any time to test this on lucid, so please let me know if you have any trouble
23:15 Ryan_Lane thanks :)
23:15 semiosis Ryan_Lane: you're welcome
23:16 semiosis that localhost-mount-at-boot was the big sticking point for lucid, as upstart didn't have good support for it at that time
23:16 semiosis and my solution was major kludge
23:16 semiosis that i'd rather let die :)
23:17 Ryan_Lane :D
23:17 Ryan_Lane yeah, I remember it. I was using it for a while
23:18 Ryan_Lane I don't run gluster on my compute nodes anymore, though
23:19 JoeJulian elyograg: /var/lib/glusterd/*
23:20 elyograg JoeJulian: thank you.  might just get this thing into production very soon.
23:21 semiosis hmm launchpad is taking a while to ack my uploads
23:24 semiosis uh oh, just got a rejection email saying the ppa doesn't exist
23:24 semiosis that's odd
23:24 semiosis it clearly does exist
23:24 johnmark Ryan_Lane: have you had a chance to try out the qa5 release for 3.4?
23:25 johnmark I'm particularly interested in your take on the QEMU integration
23:25 Ryan_Lane nope
23:25 Ryan_Lane I have no plans on using gluster again for virtual machines
23:25 Ryan_Lane I had an entire day outage from it in the past, so I think I'll avoid it :)
23:26 Ryan_Lane *where I had to restore all of the data from the backend
23:26 semiosis ok, so launchpad rejected my upload because some other user named trevormosey doesn't have a ppa called ubuntu-glusterfs-3.3
23:26 semiosis wtf!
23:26 semiosis i re-uploaded and now it's been accepted, as it should be
23:26 Ryan_Lane hahaha
23:26 Ryan_Lane that sounds bad...
23:29 semiosis woo, no queue on launchpad, builds started immediately!
23:30 plarsen joined #gluster
23:30 JoeJulian semiosis: That's just as bad as the puppet problem I was having. Odd errors in an if, elsif, else statement. I typed it in by hand, verbatim, and it works now.
23:30 semiosis aw nuts
23:31 semiosis dependency fail
23:33 Ryan_Lane semiosis: oh, on a packaging related note...
23:34 Ryan_Lane semiosis: it would be nice if the upstart had a way to modify ulimits
23:34 Ryan_Lane recently I started running out of sockets since I'm running so many volumes
23:35 Ryan_Lane seems the only way to fix that is via the upstart
23:37 semiosis /etc/security/limits.d/files.conf
23:38 semiosis add two lines...
23:38 semiosis <user> nofile soft NNNN
23:38 semiosis <user> nofile hard NNNN
23:38 semiosis iirc
23:39 elyograg ncindex         hard    nofile  65535
23:39 elyograg ncindex         soft    nofile  49151
23:39 elyograg this is from my Solr installation.
23:39 Ryan_Lane gluster runs as root
23:39 semiosis so root then
23:39 Ryan_Lane I tried that
23:39 elyograg I also have:
23:39 elyograg root            hard    nofile  65535
23:39 elyograg root            soft    nofile  49151
23:39 Ryan_Lane it doesn't seem to work
23:40 Ryan_Lane the only thing that worked was adding it to the upstart
23:40 elyograg Ryan_Lane: this is on centos.  i know nothing about upstart.  I'm a debian user, but have not really ever touched ubuntu.
23:41 Ryan_Lane ubuntu docs indicate that's how to handle it
23:41 Ryan_Lane I'm wondering if upstart does something to screw that up
23:41 tryggvil joined #gluster
23:42 semiosis Ryan_Lane: i think you may need to reboot, or maybe there's some other command to run
23:43 semiosis it "just worked" when i did that for a regular user recently, for elasticsearch
23:43 semiosis but for root, maybe not so simple
23:43 semiosis have you rebooted since editing the limits config?
23:43 JoeJulian http://posidev.com/blog/2009/06/04/set-ulimit-parameters-on-ubuntu/ maybe?
23:43 glusterbot <http://goo.gl/KrhqE> (at posidev.com)
23:44 Ryan_Lane I hadn't. let me see if they are applied as the root user now
23:45 Ryan_Lane as root: ulimit -Sn
23:45 Ryan_Lane 1024
23:45 Ryan_Lane ah, but I no longer have it set that way anyway
23:45 Ryan_Lane I'm just using the upstart
23:52 m0zes the upstart ulimit stuff is broken iirc. you have to call ulimit directly to adjust it before starting glusterd in the <script> section...
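A sketch of what m0zes describes, for an upstart job; the file path, limit value and pidfile are illustrative:
```sh
# /etc/init/glusterd.conf (fragment)
# note: newer upstart also has a "limit nofile 65536 65536" stanza
script
    ulimit -n 65536                 # raise the open-file limit before glusterd starts
    exec /usr/sbin/glusterd -p /var/run/glusterd.pid
end script
```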
23:52 noob2 joined #gluster
23:54 hattenator joined #gluster
23:54 glusterbot New news from newglusterbugs: [Bug 888072] Some files are getting deleted from distributed gluster volume <http://goo.gl/fudB0>
23:55 Ryan_Lane m0zes: in lucid at least, yes
23:55 Ryan_Lane I think I found what's causing my memory/load issue
23:56 Ryan_Lane when I run gluster volume start or stop, my gluster daemon becomes unresponsive
23:56 Ryan_Lane and my memory/load/cpu spike
23:56 Ryan_Lane till I restart the process
23:56 Ryan_Lane I run those commands a *lot*
