
IRC log for #gluster, 2013-08-26


All times shown according to UTC.

Time Nick Message
00:32 yinyin joined #gluster
00:42 plarsen joined #gluster
01:11 aknapp joined #gluster
01:18 kevein joined #gluster
01:40 SteveWatt joined #gluster
01:41 bala joined #gluster
01:48 SteveWatt left #gluster
01:51 lyang0 joined #gluster
02:04 SteveWatt joined #gluster
02:04 SteveWatt left #gluster
02:15 social is there way to statedump gluster mount?
02:18 kevein_ joined #gluster
02:30 DV joined #gluster
02:36 robo joined #gluster
02:49 yinyin joined #gluster
02:51 saurabh joined #gluster
02:53 awheeler joined #gluster
03:02 vshankar joined #gluster
03:20 bharata-rao joined #gluster
03:43 shubhendu joined #gluster
03:43 jporterfield joined #gluster
03:51 jporterfield joined #gluster
03:55 ajha joined #gluster
03:56 itisravi joined #gluster
04:06 ppai joined #gluster
04:07 Birnie joined #gluster
04:07 shylesh joined #gluster
04:09 rjoseph joined #gluster
04:10 davinder joined #gluster
04:11 davinder2 joined #gluster
04:23 MadSeb joined #gluster
04:25 dusmant joined #gluster
04:27 MadSeb Hi ... quick question on GlusterFS from a newbie... let's say we have a number of data nodes storing X GB of data and some new nodes join the distributed file system (but we still store X GB of data, so the new users upload no data) ... does the fact that these new nodes have joined the file system imply an increase in upload/download speed for everyone? To put it a different way, does
04:27 MadSeb performance improve as new nodes join the distributed fs while the amount of GB stays the same? Thanks.
04:30 ndarshan joined #gluster
04:33 jporterfield joined #gluster
04:39 jporterfield joined #gluster
04:47 kanagaraj joined #gluster
04:52 davinder joined #gluster
04:54 hagarth joined #gluster
04:54 ppai MadSeb, I'm guessing here... I may be wrong though... after you add new nodes (add bricks to the existing volume) and run rebalance, then theoretically, if ALL X GB of files are accessed, you should see an improvement.
04:57 Humble joined #gluster
04:58 hchiramm_ joined #gluster
04:58 MadSeb ppai thanks
05:00 ppai MadSeb, it depends on the pattern in which files are accessed, because in a pure distributed setup a file can be present on any one of those nodes
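A minimal sketch of the expand-and-rebalance workflow ppai is describing, assuming GlusterFS 3.3/3.4, a distributed volume named myvol, and hypothetical new servers and brick paths:

    # add bricks on the new nodes to the existing volume
    gluster volume add-brick myvol server3:/export/brick1 server4:/export/brick1
    # spread the existing files across the enlarged brick set
    gluster volume rebalance myvol start
    # check progress until the rebalance completes
    gluster volume rebalance myvol status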
05:14 raghu joined #gluster
05:14 sgowda joined #gluster
05:19 psharma joined #gluster
05:21 lalatenduM joined #gluster
05:21 lalatenduM joined #gluster
05:27 ndarshan joined #gluster
05:30 sahina joined #gluster
05:37 guigui joined #gluster
05:37 RameshN joined #gluster
05:40 spandit joined #gluster
05:41 ndarshan joined #gluster
05:44 RameshN clear
05:53 plarsen joined #gluster
05:55 sahina joined #gluster
06:03 bala joined #gluster
06:06 mohankumar__ joined #gluster
06:09 andreask joined #gluster
06:11 shireesh joined #gluster
06:11 vpshastry joined #gluster
06:12 anands joined #gluster
06:15 shubhendu joined #gluster
06:21 bulde joined #gluster
06:21 jtux joined #gluster
06:24 satheesh1 joined #gluster
06:27 ababu joined #gluster
06:28 StarBeast joined #gluster
06:33 rastar joined #gluster
06:39 vimal joined #gluster
06:46 bala joined #gluster
06:47 rgustafs joined #gluster
06:50 davinder I didn't find a fuse rpm for SUSE
06:50 davinder anyone know...?
06:51 hagarth joined #gluster
06:59 ngoswami joined #gluster
07:03 eseyman joined #gluster
07:04 jtux joined #gluster
07:05 jporterfield joined #gluster
07:10 jporterfield joined #gluster
07:13 ctria joined #gluster
07:21 mgebbe joined #gluster
07:22 hybrid5121 joined #gluster
07:23 jporterfield joined #gluster
07:25 ricky-ticky joined #gluster
07:26 clag_ joined #gluster
07:27 hagarth joined #gluster
07:27 clag_ hi, after a gluster volume stop there are no more processes for this volume, but the status is still "started". Further gluster volume stop|start attempts give an "operation failed" error message. Any way to change this status?
07:30 DV joined #gluster
07:33 wgao joined #gluster
07:36 andreask joined #gluster
07:44 satheesh1 joined #gluster
07:46 jporterfield joined #gluster
07:49 clag_ found it: a glusterd problem, restarted it and it's ok
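A minimal sketch of the recovery clag_ describes, assuming an EL-style init system; service names and syntax may differ per distribution:

    # restart the management daemon that got into a bad state
    service glusterd restart
    # confirm the volume status and brick processes look sane again
    gluster volume status
    gluster volume info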
07:58 psharma joined #gluster
08:18 ninkotech joined #gluster
08:18 ninkotech_ joined #gluster
08:19 jporterfield joined #gluster
08:21 morse joined #gluster
08:22 davinder joined #gluster
08:27 spider_fingers joined #gluster
08:31 glusterbot New news from newglusterbugs: [Bug 1000131] Users Belonging To Many Groups Cannot Access Mounted Volume <http://goo.gl/JOatTA>
08:39 jporterfield joined #gluster
08:43 nightwalk joined #gluster
08:49 satheesh joined #gluster
08:50 nshaikh joined #gluster
09:02 jebba joined #gluster
09:04 jporterfield joined #gluster
09:09 ndarshan joined #gluster
09:22 clag_ left #gluster
09:24 jporterfield joined #gluster
09:26 Elendrys joined #gluster
09:29 Elendrys Hi there i need some help to resolve a "Peer rejected (Connected)" problem. If someone can help me to diagnose this.
09:36 rjoseph joined #gluster
09:37 SOLDIERz_ joined #gluster
09:37 SOLDIERz_ hi there
09:37 SOLDIERz_ i want to ask if there is somebody here with experience using glusterfs to build a web server cluster
09:39 SOLDIERz_ i often read about poor write performance and i'm evaluating at the moment whether glusterfs is a good solution for our firm
09:43 ndevos SOLDIERz_: http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ should give you some ideas
09:43 glusterbot <http://goo.gl/uDFgg> (at joejulian.name)
09:45 jporterfield joined #gluster
09:45 ngoswami joined #gluster
09:46 andreask joined #gluster
09:53 SOLDIERz_ well
09:53 SOLDIERz_ ndevos
09:53 SOLDIERz_ that's an answer
09:54 SOLDIERz_ but i really need some first-hand experience
09:54 SOLDIERz_ from someone who has set it up for the same purpose as I
09:54 SOLDIERz_ will
09:55 Elendrys do you have any idea why, when i do a "gluster peer status" on one of the two servers, the hostname field is filled with the IP address rather than the hostname?
09:55 ndevos ~hostnames | Elendrys
09:55 glusterbot Elendrys: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
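A minimal sketch of the re-probe procedure glusterbot describes, with hypothetical peer hostnames server1 and server2:

    # from any other peer, probe the peer currently listed by IP, using its hostname
    gluster peer probe server1
    # from server1, probe one of the others by name so server1 itself is also recorded by hostname
    gluster peer probe server2
    # verify that hostnames now show up instead of IPs
    gluster peer status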
09:56 ndevos SOLDIERz_: well, I am not such a person, but I guess there are some in this channel, just hang around and wait?
09:57 SOLDIERz_ okay
09:57 SOLDIERz_ ndevos
09:58 SOLDIERz_ is there no other distributed filesystem that works block-oriented?
09:59 ndevos a block oriented filesystem? not sure what you mean...
09:59 Elendrys ndevos: Probe on host orque-deleg port 24007 already in peer list
09:59 NuxRo SOLDIERz_: have a look at ceph (not sure it's what you mean)
10:00 ndevos Elendrys: tried the probe on the other host?
10:00 SOLDIERz_ ndevos glusterfs works file-based, whereas drbd works block-based
10:01 SOLDIERz_ so you get less overhead with drbd
10:01 ndevos SOLDIERz_: drbd is not a filesystem, that is a distributed+replicated block-device
10:01 SOLDIERz_ but i'm already using drbd and it's not the right solution; I want a multi-master scenario
10:01 SOLDIERz_ ndevos
10:02 SOLDIERz_ that's right, that's what I wanted to tell you
10:02 SOLDIERz_ but the question was asked a bit clumsily, sry
10:05 Elendrys Correct me if i'm wrong: on the server "Orque-deleg" i get the hostname of the other server, "orque-vjf". On "Orque-vjf" i get the IP instead of the hostname of "orque-deleg". So i entered a "gluster peer probe orque-deleg" on it
10:05 Elendrys and i got this
10:06 ndevos hmm, okay, thats indeed what I would do too and expect that 'gluster peer status' would get updated with the hostname
10:07 Elendrys each volume is in state ok and everything seems to be replicating ok; i just have the gluster peer rejected message with this hostname bug
10:08 nshaikh joined #gluster
10:09 Elendrys i think it's related, but why is that happening, given that a ping to "orque-deleg" from the other node works fine?
10:13 ndevos Elendrys: you also need to make sure that both servers have a different UUID - @cloned servers
10:13 ndevos ,,(cloned servers)
10:13 glusterbot Check that your peers have different UUIDs ('gluster peer status' on both). The uuid is saved in /var/lib/glusterfs/glusterd.info - that file should not exist before starting glusterd the first time. It's a common issue when servers are cloned. You can delete the /var/lib/glusterfs/peers/<uuid> file and /var/lib/glusterfs/glusterd.info, restart glusterd and peer-probe again.
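A hedged sketch of the reset glusterbot describes, to be run only on the clone that carries the duplicate UUID; on most installs the state directory is /var/lib/glusterd rather than /var/lib/glusterfs, so check which path exists before deleting anything:

    # stop the management daemon before touching its state
    service glusterd stop
    # remove the duplicated identity and the stale peer entry (adjust the path to your install)
    rm -f /var/lib/glusterd/glusterd.info
    rm -f /var/lib/glusterd/peers/<uuid>
    # start glusterd so it generates a fresh UUID, then probe the peer again by hostname
    service glusterd start
    gluster peer probe otherserver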
10:14 Elendrys they are different, the servers are in production for 6 month already
10:22 jporterfield joined #gluster
10:30 stigchri_ joined #gluster
10:30 plarsen joined #gluster
10:32 Alpinist joined #gluster
10:33 satheesh joined #gluster
10:35 Elendrys is there a risk in manually updating the /var/lib/glusterfs/peers/<uuid> file while glusterd is stopped?
10:39 ndevos very little, I would say - but maybe you need to update the .vol files too
10:42 Elendrys all .vol files contain the hostname as it was originally set
10:43 ndevos how awkward
10:43 Elendrys yes
10:44 Elendrys volume Synchrone-client-0
10:44 Elendrys type protocol/client
10:44 Elendrys option remote-host orque-deleg
10:46 Elendrys you know if there is something to do to subscribe to the mailing list ?
10:55 spandit joined #gluster
11:03 nshaikh left #gluster
11:04 nshaikh joined #gluster
11:04 failshell joined #gluster
11:06 ngoswami joined #gluster
11:07 aravindavk joined #gluster
11:09 darshan joined #gluster
11:10 vimal joined #gluster
11:12 psharma joined #gluster
11:14 kkeithley left #gluster
11:16 jclift joined #gluster
11:19 hagarth joined #gluster
11:21 satheesh joined #gluster
11:23 ppai joined #gluster
11:23 dusmant joined #gluster
11:31 darshan joined #gluster
11:37 kkeithley joined #gluster
11:55 Han joined #gluster
11:56 Han Can I put data on a partition and later on turn it into a glusterfs without losing the data?
11:56 Han Or do I first have to set it up with gluster and then add the data?
11:57 Han The other server isn't ready to be glustered since it's running production right now.
11:57 lyang0 joined #gluster
12:00 hagarth joined #gluster
12:01 NuxRo Han: you need to set up the volume, mount it and then put data on it, so the latter
12:01 NuxRo if you only have one server you can statrt with a simple volume then when you later add another server can introduce replication
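A minimal sketch of NuxRo's suggestion, assuming a single hypothetical server server1 with a brick at /export/brick1 and a volume called myvol:

    # create and start a plain single-brick (distribute) volume
    gluster volume create myvol server1:/export/brick1
    gluster volume start myvol
    # mount it with the native client and copy the existing data in through the mount point
    mkdir -p /mnt/myvol
    mount -t glusterfs server1:/myvol /mnt/myvol
    cp -a /path/to/existing/data/. /mnt/myvol/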
12:03 jcsp joined #gluster
12:04 bulde1 joined #gluster
12:04 dusmant joined #gluster
12:05 andreask joined #gluster
12:06 rjoseph joined #gluster
12:10 SOLDIERz_ joined #gluster
12:11 tobias- Hmm, I'm on a vagrant virtual machine and I try to mount glusterfs through that. All ports and everything are open but it fails to mount. No logs are written by the server, although I see that connections have been made (netstat) and the client log file says Transport endpoint is connected. Is there any obvious error i'm making?
12:12 tobias- (the gluster-cluster is on another network, but accessible with all ports)
12:12 NuxRo tobias-: check the local logs maybe?
12:12 tobias- the logs say transport endpoint is not connected and reading from socket failed. "EOF from peer".. as if it isn't allowed i guess
12:17 Han NuxRo, sounds like a great plan. Thanks.
12:19 NuxRo Han: you're welcome. replication level can be changed when doing add-brick AFAIK, check the docs
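A hedged sketch of adding replication later, as NuxRo suggests; the syntax below follows the 3.3/3.4 CLI and the server and brick names are hypothetical:

    # turn the single-brick volume into a replica 2 volume by adding a matching brick
    gluster volume add-brick myvol replica 2 server2:/export/brick1
    # self-heal should then populate the new brick; watch it with
    gluster volume heal myvol info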
12:22 tobias- think i found the problem with strace on glusterfsd on the server; [pid  2254] write(4, "[2013-08-26 12:21:57.221662] E [rpcsvc.c:491:rpcsvc_handle_rpc_call] 0-glusterd: Request received from non-privileged port. Failing request\n", 140) = 140
12:23 NuxRo tobias-: are you trying to mount that as a non-root user?
12:24 sahina joined #gluster
12:26 ppai joined #gluster
12:28 tobias- nope
12:29 rgustafs joined #gluster
12:32 glusterbot New news from newglusterbugs: [Bug 990089] do not unlink the gfid handle upon last unlink without checking for open fds <http://goo.gl/V4YGIB>
12:34 Han Any known drawbacks with using ext4?
12:35 NuxRo Han: there were some serious problems at some point, not sure whether they're fixed or not, but they recommend XFS and that's what I use
12:37 Han http://www.gluster.org/category/howtos/
12:37 glusterbot Title: CentOS 6 upstart overrides: How to make something happen before something else | Gluster Community Website (at www.gluster.org)
12:37 ndarshan joined #gluster
12:38 Han NuxRo, centos6.4 has problems with xfs: http://serverfault.com/questions/497049/the-xfs-filesystem-is-broken-in-rhel-centos-6-x-what-can-i-do-about-it
12:38 glusterbot <http://goo.gl/aAxIWv> (at serverfault.com)
12:39 shubhendu joined #gluster
12:41 NuxRo Han: yep, I'd recommend to use XFS (with inode size of 512, i.e. mkfs.xfs -i size=512)
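A minimal sketch of preparing an XFS brick as NuxRo recommends, with a hypothetical device and brick path:

    # format the brick device with 512-byte inodes (room for GlusterFS extended attributes)
    mkfs.xfs -i size=512 /dev/sdb1
    # mount it where the brick will live
    mkdir -p /export/brick1
    mount /dev/sdb1 /export/brick1
    echo '/dev/sdb1 /export/brick1 xfs defaults 0 0' >> /etc/fstab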
12:43 Han This was fixed (quietly) by Red Hat April 23, 2013 in RHEL kernel-2.6.32-358.6.1.el6 as part of the 6.4 errata updates...
12:44 bulde joined #gluster
12:44 rcheleguini joined #gluster
12:44 NuxRo is ext4 a sine qua non for you?
12:44 jurrien_ joined #gluster
12:45 Han I just read that thread and found the relieving news that the xfs problem has been fixed.
12:46 rcheleguini joined #gluster
12:46 Han I have no problem with xfs... I can format a 2 terabyte hd in 10 seconds. With ext4 it took 2 hours. :P
12:47 cicero http://www.gluster.org/2012/08/glusterfs-bit-by-ext4-structure-change/
12:47 LoudNoises joined #gluster
12:47 glusterbot <http://goo.gl/86YVi> (at www.gluster.org)
12:47 kkeithley Yes, despite what the one wag said in that blog post about xfs not being maintained, I can tell you that xfs is definitely maintained.
12:48 kkeithley Gluster's ext4 issue was fixed in 3.4.0 and 3.3.2
12:48 rcheleguini joined #gluster
12:49 Han So both are supported now. That's nice to hear. I still go with the recommended xfs.
12:49 cicero i've been running gluster on ext4 on 3.2.0, so far so good
12:50 kkeithley Yes, Red Hat requires xfs for RHS and we generally recommend it in the community for GlusterFS; but there's nothing wrong, per se, with ext4.
12:50 Han I'd love to use it on jfs though. No other fs has had zero problems with tons of hard system crashes.
12:51 Han For me, that is. ymmv
12:51 msvbhat joined #gluster
12:51 kkeithley Gluster should work just fine on any fs that has extended attributes. If you like jfs you could try it and write up your experiences.
12:53 awheeler joined #gluster
12:53 Elendrys hi. I have a "Peer rejected (Connected)" problem. If someone can help me check what went wrong.
12:53 awheeler joined #gluster
12:55 Han kkeithley, jfs is not supported on redhat/centos afaik
12:57 robo joined #gluster
12:58 bennyturns joined #gluster
13:00 mohankumar__ joined #gluster
13:01 psharma joined #gluster
13:01 ctria joined #gluster
13:08 spider_fingers joined #gluster
13:14 manik joined #gluster
13:14 ninkotech joined #gluster
13:16 ninkotech_ joined #gluster
13:20 shubhendu joined #gluster
13:20 kkeithley Han: not _supported_, as in there's nobody to help you — paid or otherwise — when things don't work?  Yes, that's probably true. And there's no support in the 2.6.32-358-14.1 kernel for jfs that's true. I'll have to see if that's changed for the 6.5 kernel, but jfsutils  is in Fedora 19 and a jfs fs seems to work fine with glusterfs in a very quick, cursory test.
13:24 hagarth joined #gluster
13:25 Han ok... I remember reading a thread which explained jfs wasn't even in the kernel.
13:25 Han This is news to me.
13:26 Han jfsutils is not installable with yum.
13:26 Han oh well.
13:27 plarsen joined #gluster
13:31 kkeithley No, not installable with yum. On 6.4 I had to compile jfsutils from the (fedora) src.rpm. (and jfs is in the fedora 3.10 kernel. The fedora kernel is a pure upstream kernel)
13:32 Dga joined #gluster
13:33 Han What can I say. It's not in the default install on redhat/centos.
13:35 kkeithley You said you liked it. Beyond that, no need to say much of anything.
13:36 rwheeler joined #gluster
13:38 Han Let me not say anything else on the matter indeed.
13:38 johnmorr joined #gluster
13:42 hagarth joined #gluster
13:45 jebba left #gluster
13:54 aliguori joined #gluster
13:54 bugs_ joined #gluster
13:55 jdarcy joined #gluster
13:55 MadSeb joined #gluster
13:56 robos joined #gluster
13:58 y4m4 joined #gluster
13:58 davinder joined #gluster
13:58 kaptk2 joined #gluster
14:00 vimal joined #gluster
14:00 jclift joined #gluster
14:00 MrNaviPacho joined #gluster
14:06 Han Why can I mount with the native client on one host, but not on another?  output from /var/log/gluster/gluster.log on failing host: http://hastebin.com/buwixajeho.vhdl
14:06 glusterbot Title: hastebin (at hastebin.com)
14:09 vpshastry left #gluster
14:09 crashmag joined #gluster
14:13 jruggiero joined #gluster
14:14 jruggiero left #gluster
14:15 dusmant joined #gluster
14:15 jporterfield joined #gluster
14:22 bulde joined #gluster
14:22 PeteUAR joined #gluster
14:28 Han http://gluster.org/community/documentation/index.php/Translators/features talks about root-squashing, 3.4.0 wants root-squash though.
14:28 glusterbot <http://goo.gl/zmJ1TD> (at gluster.org)
14:32 robo joined #gluster
14:34 zaitcev joined #gluster
14:37 spider_fingers left #gluster
14:38 Han I want to use root-squash for all clients, but not on the server itself. How can I do that?
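A hedged sketch of enabling root squashing on a 3.4 volume, assuming the option name server.root-squash and a hypothetical volume; it applies to all clients of the volume rather than per host:

    # turn on root squashing for the volume
    gluster volume set myvol server.root-squash on
    # confirm the option is recorded
    gluster volume info myvol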
14:42 gmcwhistler joined #gluster
14:45 tobias- Fix for running the glusterfs client through NAT: downloaded the fuse.vol for the storage volume, modified it by setting port 24009, and that actually solved it (if anyone here was wondering how it went for me with this)
14:45 tobias- (using 3.3.2)
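For comparison with tobias-'s volfile edit, a hedged sketch of the allow-insecure route that is often suggested when clients connect from non-privileged ports (for example through NAT); treat the option names as something to verify on your version:

    # let the bricks of a volume accept client connections from ports above 1024
    gluster volume set myvol server.allow-insecure on
    # for glusterd itself, add the following line to /etc/glusterfs/glusterd.vol and restart it:
    #   option rpc-auth-allow-insecure on
    service glusterd restart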
14:46 gmcwhistler joined #gluster
14:46 gmcwhistler joined #gluster
14:50 grharry2 joined #gluster
14:53 vpshastry joined #gluster
14:57 ricky-ticky joined #gluster
14:59 TuxedoMan joined #gluster
15:01 sprachgenerator joined #gluster
15:04 itisravi joined #gluster
15:04 saurabh joined #gluster
15:06 nightwalk joined #gluster
15:08 manik joined #gluster
15:11 daMaestro joined #gluster
15:12 grharry2 Hey there I need some help here ... after a volume create on clean partitions gluster says : GLVol: failed: /export/brick02 or a prefix of it is already part of a volume
15:12 glusterbot grharry2: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
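The fix glusterbot links to amounts to clearing the leftover GlusterFS metadata on the brick; a hedged sketch, to be run on each affected brick path only if you are sure its contents should be reused:

    # remove the volume-id and gfid markers left behind by a previous volume
    setfattr -x trusted.glusterfs.volume-id /export/brick02
    setfattr -x trusted.gfid /export/brick02
    # remove the internal .glusterfs directory if it exists
    rm -rf /export/brick02/.glusterfs
    # restart glusterd and retry the volume create
    service glusterd restart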
15:13 zerick joined #gluster
15:15 avati joined #gluster
15:27 Technicool joined #gluster
15:30 hagarth_ joined #gluster
15:35 Guest53741 joined #gluster
15:44 Excolo joined #gluster
15:45 Excolo is there any way to move what directory the volume is stored in? for instance on the brick change it from /export/dir1 to /export/dir2?
15:46 duerF joined #gluster
15:48 bennyturns joined #gluster
15:50 purpleidea joined #gluster
15:50 purpleidea joined #gluster
15:55 bala joined #gluster
16:07 rotbeard joined #gluster
16:15 tziOm joined #gluster
16:16 sjoeboo joined #gluster
16:16 dbruhn joined #gluster
16:22 jclift__ joined #gluster
16:25 jclift joined #gluster
16:28 andreask joined #gluster
16:32 ctria joined #gluster
16:39 jclift_ joined #gluster
16:40 Mo__ joined #gluster
16:43 davinder joined #gluster
16:48 bulde joined #gluster
16:52 manik joined #gluster
17:10 grharry2 left #gluster
17:11 MrNaviPacho joined #gluster
17:13 nightwalk joined #gluster
17:15 dseira joined #gluster
17:19 JerryM joined #gluster
17:19 purpleidea joined #gluster
17:39 MrNaviPacho joined #gluster
17:43 B21956 joined #gluster
17:57 jporterfield joined #gluster
18:01 robos joined #gluster
18:02 vincent_1dk joined #gluster
18:02 edong23_ joined #gluster
18:05 sticky_afk joined #gluster
18:06 stickyboy joined #gluster
18:08 gluslog_ joined #gluster
18:12 badone joined #gluster
18:17 jclift___ joined #gluster
18:23 syntheti_ joined #gluster
18:23 bulde joined #gluster
18:30 compbio joined #gluster
18:30 compbio left #gluster
18:34 SOLDIERz_ joined #gluster
18:34 duerF joined #gluster
18:47 robos joined #gluster
18:48 sashko joined #gluster
18:57 jporterfield joined #gluster
18:58 gmcwhistler joined #gluster
18:59 andreask joined #gluster
19:04 andreask joined #gluster
19:07 robo joined #gluster
19:07 andreask joined #gluster
19:10 SOLDIERz_ left #gluster
19:12 rwheeler joined #gluster
19:22 robo joined #gluster
19:32 JoeJulian johnmark: How many people are signed up for New Orleans?
19:34 semiosis i'd like to make it up there
19:34 robo joined #gluster
19:35 JoeJulian Excolo: Use replace-brick. Here's what I do: kill the brick process for that one brick that I'm moving. Mount a scratch monkey to the target location. Do the "replace-brick...commit force". Kill the brick again. Unmount the scratch and mount the actual brick. "start force" the volume again.
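A hedged sketch of the two gluster commands in JoeJulian's brick-move recipe, with a hypothetical volume and the paths from Excolo's question; the "scratch monkey" is just a throwaway filesystem mounted at the new path so the commit has an empty target:

    # point the volume at the new brick path (the data itself is moved out of band)
    gluster volume replace-brick myvol server1:/export/dir1 server1:/export/dir2 commit force
    # after remounting the real data at the new path, restart the brick process
    gluster volume start myvol force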
19:38 JoeJulian I wonder if this is actually supposed to be an error or not: E [io-cache.c:557:ioc_open_cbk] 0-mysql1-io-cache: inode context is NULL (0a6534ff-826f-46a8-b837-a4ed0df11589)
19:39 JoeJulian a2_: ? It looks to me like it's just something that's not cached yet.
19:41 syntheti_ JoeJulian: what are the implications of killing a brick's process? is that brick inactive until doing a stop/start on the volume? or restarting glusterd?
19:42 JoeJulian Yes, that brick is inactive unless it's started again. It can be restarted using a "gluster volume start $vol force" command.
19:42 JoeJulian or restarting glusterd
19:42 syntheti_ thanks, good to know
19:50 jporterfield joined #gluster
19:55 robo joined #gluster
19:57 primusinterpares joined #gluster
20:03 robo joined #gluster
20:22 jporterfield joined #gluster
20:25 zerick joined #gluster
20:33 robo joined #gluster
20:39 jdarcy joined #gluster
20:54 robo joined #gluster
21:46 glusterdude joined #gluster
22:02 fidevo joined #gluster
22:08 atrius joined #gluster
22:09 plarsen joined #gluster
22:12 awheele__ joined #gluster
22:20 avati joined #gluster
22:28 JoeJulian johnmark: are you around?
22:28 JoeJulian Technicool?
22:31 hagarth_ joined #gluster
22:32 avati joined #gluster
22:40 robo joined #gluster
22:42 jporterfield joined #gluster
22:47 a2_ JoeJulian, did you get that io-cache error on a rhel-6.4+ client?
22:47 JoeJulian yes
22:52 a2_ JoeJulian, it's a "safe" error.. it is fixed in master branch and requires a backport to 3.4
22:52 a2_ related to readdirplus code changes
22:53 JoeJulian Ah, ok.
22:53 a2_ you specifically need commit 2991503d014f634da5cd10bcb851e986a3dcd5c2
22:54 JoeJulian We're shooting for a 3.4.1 soon anyway, aren't we?
22:54 a2_ yes
22:54 a2_ there's a decent pile up of patches already
22:57 sashko New Orleans sounds good right about now
22:57 sashko JoeJulian semiosis you guys going?
22:57 semiosis i dont have any plans, just seems like it would be fun
22:58 sashko new orleans is tons of fun
22:59 sashko bummer it's only a day :(
23:01 sashko there's one tomorrow in SF, that's close enough
23:01 syntheti_ are there still openings for tomorrow's event in SF?
23:02 sashko it doesn't say registration is closed, so I assume so
23:02 sashko a2_: you going to SF? :)
23:07 a2_ sashko, yes, i plan to
23:07 a2_ who else coming tomorrow?
23:07 syntheti_ sashko: where are you seeing registration? rsvp is closed at http://www.meetup.com/GlusterFS-Silicon-Valley/events/124722542/
23:07 glusterbot <http://goo.gl/MbMVmj> (at www.meetup.com)
23:08 a2_ syntheti_, i'm not sure if they're gonna be super anal about rsvp acks.. johnmark ?
23:08 syntheti_ ah, thanks :)
23:09 johnmark JoeJulian: hey - you rang?
23:10 johnmark @channelstats
23:10 glusterbot johnmark: On #gluster there have been 174852 messages, containing 7383099 characters, 1233471 words, 4927 smileys, and 657 frowns; 1077 of those messages were ACTIONs. There have been 67190 joins, 2093 parts, 65106 quits, 21 kicks, 165 mode changes, and 7 topic changes. There are currently 199 users and the channel has peaked at 226 users.
23:10 johnmark syntheti_: doh - we're not going to be super anal :)
23:10 johnmark but I'll open back up the reg until tomorrow am
23:12 johnmark syntheti_: ok, done
23:12 johnmark you should be able to rsvp now
23:13 johnmark syntheti_: incidentally, I've also created an eventbrite page: http://glusterday-sfo.eventbrite.com/
23:13 glusterbot Title: Gluster Community Day San Francisco- Eventbrite (at glusterday-sfo.eventbrite.com)
23:14 johnmark JoeJulian: for New Orleans, there are >40 people registered
23:14 syntheti_ great, thanks! SF is a bit of a hike for me and I wasn't sure what my schedule would be like until today
23:14 johnmark and I expect upwards of 100
23:14 johnmark syntheti_: cool :)
23:17 a2_ johnmark, how many registered for SF?
23:18 johnmark a2_: about 45
23:19 johnmark syntheti_: nice - hope to see you there
23:19 JoeJulian Eww... that is not the hero I want to imagine...
23:20 a2_ johnmark, cool!
23:20 johnmark yeah :)
23:20 johnmark JoeJulian: ?
23:20 JoeJulian Super anal.
23:39 edong23 joined #gluster
23:40 jporterfield joined #gluster
23:41 johnmark ha
23:44 GomoX joined #gluster
23:44 GomoX Hello
23:44 glusterbot GomoX: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
23:44 GomoX How do you enable a translator when using new-style volumes?
23:44 asias joined #gluster
23:45 GomoX All the stuff I've found uses configuration files, and I have none
23:45 GomoX I create my volumes through the cli
23:46 semiosis GomoX: using 'gluster volume set' command.  you can see standard options with 'gluster volume set help' and there are also some ,,(undocumented options)
23:46 glusterbot GomoX: Undocumented options for 3.4: http://goo.gl/Lkekw
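A minimal sketch of tuning through the CLI as semiosis describes, with a hypothetical volume and option:

    # list the documented, settable options and their defaults
    gluster volume set help
    # change one option on a volume
    gluster volume set myvol performance.cache-size 256MB
    # show which options have been explicitly set on the volume
    gluster volume info myvol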
23:47 semiosis been seeing this question, "how do I enable a translator," a lot lately
23:48 GomoX Is there a way to get the existing conf value?
23:48 GomoX Or is anything different than default listed under volume info?
23:49 semiosis 'gluster volume info' shows any options that have been changed (even changed back to default)
23:49 semiosis right
23:50 GomoX Looks like all the performance magic tricks are enabled out of the box
23:50 ninkotech joined #gluster
23:51 JoeJulian I think that should be changed. The default should be to run as slow as possible so that people feel like they've accomplished something when they enable all the performance translators that currently are on by default.
23:51 ninkotech_ joined #gluster
23:51 GomoX I was kind of hoping that would be the case
23:51 GomoX I just set up a gluster volume as a document root for a PHP app and the performance is dismal
23:51 JoeJulian People are always asking to "tune" the volumes. This would give them the opportunity!
23:51 JoeJulian @php
23:51 sashko a2_: sounds good, i'll see tonight if I can make it
23:51 glusterbot JoeJulian: php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
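A hedged sketch of the APC side of that advice; the ini path is hypothetical and varies by distribution:

    # in /etc/php.d/apc.ini (or php.ini), stop APC from stat()ing every include:
    #   apc.stat=0
    # then reload the web server / PHP so the setting takes effect
    service httpd reload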
23:51 sashko JoeJulian: you coming to SF?
23:52 GomoX JoeJulian: I did the APC trick already
23:52 GomoX JoeJulian: still super slow
23:52 JoeJulian sashko: Sorry, no. I was invited to New Orleans.
23:53 JoeJulian GomoX: Odd. What's the application?
23:53 GomoX JoeJulian: It's something we built
23:54 GomoX I was hoping i could tune (heh) the volume to get near-local performance on those files as they hardly ever change
23:55 GomoX Without resorting to rsyncing them to a local volume or other nasty stuff like that
23:55 JoeJulian That's what apc.stat=0 does.
23:55 a2_ it could be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache for PHP workload
23:55 a2_ at least worth measuring how much that can help improve
23:56 GomoX JoeJulian: my APC hit rates don't look too great
23:56 GomoX JoeJulian: not really sure why
23:56 JoeJulian a2_: That's new with 3.4?
23:56 a2_ JoeJulian, none of them are
23:56 sashko JoeJulian: i see, bummer
23:56 GomoX I will investigate some more in the morning I guess
23:56 GomoX Thanks guys
23:56 a2_ maybe --fopen-keep-cache.. let me double check
23:56 JoeJulian I learn something new every day.
23:57 a2_ by HIGH i mean a big number (of seconds)
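A hedged sketch of a2_'s suggestion as a direct glusterfs client invocation, with hypothetical server, volume, mount point and timeout values:

    # cache attributes, dentries and negative lookups for 10 minutes, keep page cache across opens
    glusterfs --volfile-server=server1 --volfile-id=myvol \
        --attribute-timeout=600 --entry-timeout=600 \
        --negative-timeout=600 --fopen-keep-cache /mnt/myvol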
23:58 JoeJulian Guess I need to update that blog post.
23:58 JoeJulian @learn php as It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
23:58 glusterbot JoeJulian: The operation succeeded.
