IRC log for #gluster, 2014-08-13


All times shown according to UTC.

Time Nick Message
00:00 plarsen joined #gluster
00:02 zerick joined #gluster
00:17 mariusp joined #gluster
00:28 shubhendu joined #gluster
01:00 gildub joined #gluster
01:04 sputnik13 joined #gluster
01:08 coredump joined #gluster
01:14 bala joined #gluster
01:17 mariusp joined #gluster
01:20 semiosis joined #gluster
01:33 diegows joined #gluster
01:38 overclk joined #gluster
01:57 haomaiwa_ joined #gluster
01:59 _pol joined #gluster
02:06 haomai___ joined #gluster
02:07 jobewan joined #gluster
02:32 harish_ joined #gluster
02:35 BossR joined #gluster
02:37 BossR_ joined #gluster
02:38 Pupeno_ joined #gluster
03:04 _Bryan_ joined #gluster
03:04 sputnik13 joined #gluster
03:09 BossR joined #gluster
03:15 bharata-rao joined #gluster
03:19 mariusp joined #gluster
03:24 Pupeno joined #gluster
03:29 ira_ joined #gluster
03:59 itisravi joined #gluster
04:06 spandit joined #gluster
04:08 nishanth joined #gluster
04:10 kanagaraj joined #gluster
04:11 nbalachandran joined #gluster
04:14 Rafi_kc joined #gluster
04:16 jiku joined #gluster
04:16 anoopcs joined #gluster
04:16 RameshN joined #gluster
04:19 mariusp joined #gluster
04:25 ndarshan joined #gluster
04:26 nishanth joined #gluster
04:37 kanagaraj_ joined #gluster
04:40 ppai joined #gluster
04:41 BossR_ joined #gluster
04:42 atinmu joined #gluster
04:47 RameshN joined #gluster
04:48 _ndevos joined #gluster
04:49 ramteid joined #gluster
04:49 kshlm joined #gluster
04:58 sputnik13 joined #gluster
05:02 sputnik13 joined #gluster
05:06 jiffin joined #gluster
05:07 kdhananjay joined #gluster
05:08 sputnik13 joined #gluster
05:10 nshaikh joined #gluster
05:11 mariusp joined #gluster
05:14 prasanth_ joined #gluster
05:25 bala joined #gluster
05:25 sputnik13 joined #gluster
05:27 sas joined #gluster
05:29 lalatenduM joined #gluster
05:29 rastar joined #gluster
05:30 hagarth joined #gluster
05:33 _pol joined #gluster
05:36 kanagaraj_ joined #gluster
05:51 overclk joined #gluster
05:55 glusterbot New news from newglusterbugs: [Bug 1129527] DHT :- data loss - file is missing on renaming same file from multiple client at same time <https://bugzilla.redhat.com/show_bug.cgi?id=1129527>
06:00 dusmant joined #gluster
06:03 _pol_ joined #gluster
06:05 atalur joined #gluster
06:08 raghu joined #gluster
06:13 jaytee My .htaccess file doesn't replicate across all clients unless I manually go and ls -la that directory. Does anyone else run into that?
06:14 LebedevRI joined #gluster
06:19 sahina joined #gluster
06:36 meghanam joined #gluster
06:38 saurabh joined #gluster
06:46 dusmant joined #gluster
06:46 ricky-ticky joined #gluster
06:52 nishanth joined #gluster
06:52 ndarshan joined #gluster
06:53 sahina joined #gluster
06:55 glusterbot New news from newglusterbugs: [Bug 1129541] [DHT:REBALANCE]: Rebalance failures are seen with error message " remote operation failed: File exists" <https://bugzilla.redhat.com/show_bug.cgi?id=1129541>
06:55 karnan joined #gluster
06:57 ekuric joined #gluster
06:57 ekuric joined #gluster
07:05 andreask joined #gluster
07:05 BossR joined #gluster
07:06 deepakcs joined #gluster
07:20 nbalachandran joined #gluster
07:21 keytab joined #gluster
07:22 ctria joined #gluster
07:27 glusterbot New news from resolvedglusterbugs: [Bug 854156] Support for hadoop distros <https://bugzilla.redhat.com/show_bug.cgi?id=854156>
07:32 fsimonce joined #gluster
07:35 lalatenduM "http://www.gluster.org/" is opening for me , got an "Error 503 Service Unavailable"
07:39 gildub joined #gluster
07:41 fraggeln left #gluster
07:45 TvL2386 joined #gluster
07:53 _ndevos lalatenduM: hmmm, same for me
07:54 lalatenduM ndevos, just sent a mail to gluster-infra
07:54 ndevos JustinClift: you know whats up with the gluster.org website?
07:54 ndevos lalatenduM: ah, thanks!
07:54 ndevos lalatenduM++
07:54 glusterbot ndevos: lalatenduM's karma is now 1
07:54 ndevos lol
07:54 lalatenduM ndevos, the website sometimes works for me, but the wiki is not working at all
07:55 glusterbot New news from newglusterbugs: [Bug 1058300] VMs do not resume after paused state and storage connection to a gluster domain (they will also fail to be manually resumed) <https://bugzilla.redhat.com/show_bug.cgi?id=1058300>
07:55 * ndevos hits refresh like a madman
07:55 keytab joined #gluster
07:56 ndevos lalatenduM: nope, the website doesnt like me :-/
07:56 lalatenduM ndevos, there is something wrong for sure, it is very slow
07:57 ndevos lalatenduM: I'll send a txt to JustinClift, maybe he's awake already
07:57 lalatenduM ndevos, cool, you might want to add it to my mail :)
07:57 lalatenduM brb
08:00 nishanth joined #gluster
08:01 ndarshan joined #gluster
08:06 atalur joined #gluster
08:07 TvL2386 hi guys! I've got a server that has mounted a glusterfs volume over nfs. The mountpoint is not owned by root, but after changing the auth.allow property of this volume the permissions of the mountpoint are reset. Anybody got an idea why and if this can be changed?
08:08 TvL2386 with reset I mean the permissions were changed to root:root 700
08:08 TvL2386 I don't know if the 700 was already present before though
08:17 cristov_mac joined #gluster
08:17 dusmant joined #gluster
08:22 bharata-rao joined #gluster
08:24 suliba joined #gluster
08:27 mariusp joined #gluster
08:28 RameshN joined #gluster
08:28 nbalachandran joined #gluster
08:30 karnan joined #gluster
08:34 sahina joined #gluster
08:41 atalur joined #gluster
08:45 ricky-ticky joined #gluster
08:46 ppai joined #gluster
08:50 mariusp joined #gluster
08:53 dockbram joined #gluster
08:57 rastar joined #gluster
08:58 gildub joined #gluster
09:04 Slashman joined #gluster
09:05 vimal joined #gluster
09:15 calum_ joined #gluster
09:18 maniac joined #gluster
09:18 maniac hi guys
09:18 maniac if i want to migrate production to gluster and run this on the joining 2nd node: gluster volume sync <1st node ip> volname
09:19 maniac will it lock access to the data until the volume syncs ?
09:20 atinmu maniac, are you talking about I/O ?
09:21 maniac yes .. i mean i want to migrate production to gluster … and wanted to know if this directory will be completely inaccessible during the sync ?
09:22 maniac because there are some files that need to be accessible by an application
09:23 atinmu gluster volume sync <volname> syncs a volume to a particular peer, so it has nothing to do with the I/O, with volume sync you are not modifying your bricks
09:24 ndevos maniac: 'gluster volume sync' synchronizes the .vol files (volume layout) to other peers in case it was out-of-sync, you normally do not need to run it
09:25 ndevos maniac: the sync'ing of the data is done online, and automatically
09:25 maniac hm but what if i did this :
09:26 maniac gluster volume create volname transport tcp 192.168.0.41:/path/to/share force
09:26 maniac and then
09:26 maniac gluster volume add-brick volname replica 2 192.168.0.42:/path/to/share
09:26 maniac in that case it will start sync automatically too ?
09:27 maniac why am i asking: because i cannot see any network activity from which i could tell that the sync has started
09:28 ndevos yes, it should, or you can use 'gluster volume heal'
09:28 maniac what i want to know .. will it affect the application's access to the data ?)
09:28 maniac read/write
09:33 dockbram joined #gluster
09:52 ninkotech__ joined #gluster
09:53 ndevos maniac: well, during the sync there will be some increased bandwidth usage and load on the servers, you might notice that, but mostly it goes pretty unnoticed
09:54 qdk joined #gluster
09:55 mariusp joined #gluster
09:59 maniac [2014-08-13 09:53:36.233252] I [client-handshake.c:1474:client_setvolume_cbk] 0-smart-main-client-1: Server and Client lk-version numbers are not same, reopening the fds
09:59 maniac [2014-08-13 09:53:36.233519] I [client-handshake.c:450:client_set_lk_version_cbk] 0-smart-main-client-1: Server lk version = 1
09:59 maniac [2014-08-13 09:53:36.233916] I [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 0-smart-main-replicate-0: Another crawl is in progress for smart-main-client-1
09:59 maniac [2014-08-13 09:53:36.236115] I [afr-self-heald.c:1690:afr_dir_exclusive_crawl] 0-smart-main-replicate-0: Another crawl is in progress for smart-main-client-1
09:59 glusterbot maniac: This is normal behavior and can safely be ignored.
09:59 maniac this means sync in progress ?
10:00 ndevos maniac: you can check the glustershd.log (shd = self-heal-daemon), that should indicate whats is happening
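A minimal sketch of checking heal progress, assuming the volume seen in the log output above is named smart-main (adjust to your own volume name):

    # list files still pending self-heal on each brick
    gluster volume heal smart-main info

    # follow the self-heal daemon log that ndevos mentions
    tail -f /var/log/glusterfs/glustershd.log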
10:01 RameshN joined #gluster
10:02 deepakcs joined #gluster
10:07 Pupeno joined #gluster
10:07 nage joined #gluster
10:07 ghenry joined #gluster
10:07 d-fence joined #gluster
10:07 ninkotech joined #gluster
10:07 sputnik13 joined #gluster
10:12 dusmant joined #gluster
10:16 TvL2386 hi guys! I've got a server that has mounted a glusterfs volume over nfs. The mountpoint is not owned by root, but after changing the auth.allow property of this volume the permissions of the mountpoint are reset. Anybody got an idea why and if this can be changed?
10:16 TvL2386 with reset I mean the permissions were changed to root:root 700
10:16 TvL2386 I don't know if the 700 was already present before though
10:16 TvL2386 ** repost due to no answer, could not find it myself **
10:24 atalur joined #gluster
10:25 ndevos TvL2386: there was a bug where the ownership got reset, it could be worked around by setting the storage.owner-uid and storage.owner-gid volume options
10:26 glusterbot New news from newglusterbugs: [Bug 1117822] Tracker bug for GlusterFS 3.6.0 <https://bugzilla.redhat.com/show_bug.cgi?id=1117822>
10:26 ndevos TvL2386: the bug had nothing to do with nfs, but was happening on a (re)start of brick processes
10:26 qdk joined #gluster
10:27 ndevos TvL2386: that was bug 1040275
10:27 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1040275 high, high, ---, vbellur, CLOSED CURRENTRELEASE, Stopping/Starting a Gluster volume resets ownership
10:28 edward1 joined #gluster
10:29 ndevos TvL2386: hmm, that seems to have been fixed in 3.4.3 and master (including 3.6), but not in 3.5 :-/
10:29 ndevos TvL2386: what version are you running?
10:30 Pupeno joined #gluster
10:30 nage joined #gluster
10:30 ghenry joined #gluster
10:30 d-fence joined #gluster
10:30 ninkotech joined #gluster
10:56 glusterbot New news from newglusterbugs: [Bug 1129609] Values for cache-priority should be validated <https://bugzilla.redhat.com/show_bug.cgi?id=1129609>
11:02 tdasilva joined #gluster
11:02 ndevos TvL2386: oh, in fact, it has been fixed in 3.5 too already, that was bug 1095971
11:02 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1095971 high, high, ---, ndevos, CLOSED CURRENTRELEASE, Stopping/Starting a Gluster volume resets ownership
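A hedged example of the workaround ndevos describes, assuming a volume called myvol whose mountpoint should be owned by uid/gid 33 (both the volume name and the ids are placeholders):

    # pin the ownership of the volume root so brick restarts do not reset it
    gluster volume set myvol storage.owner-uid 33
    gluster volume set myvol storage.owner-gid 33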
11:05 BossR_ joined #gluster
11:07 ppai joined #gluster
11:12 mariusp joined #gluster
11:13 harish_ joined #gluster
11:14 maniac guys, i have a question .. if i have 40GB of data and start to sync it with gluster across 2 nodes … it should start syncing right after i create the volume ? because i've been waiting for about 1 hour and the 2nd node brick size is not increasing
11:19 diegows joined #gluster
11:20 dusmant joined #gluster
11:22 hagarth joined #gluster
11:23 mator joined #gluster
11:23 mator hello
11:23 glusterbot mator: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
11:24 mator i have quite an odd situation on an older glusterfs cluster, we have 6 rhel-6.3 servers in the cluster, with gluster-3.2.7 installed
11:27 mator the first server is reporting a 50G size for my glusterfs, while the others are reporting the full size (273T)
11:28 glusterbot New news from resolvedglusterbugs: [Bug 1095971] Stopping/Starting a Gluster volume resets ownership <https://bugzilla.redhat.com/show_bug.cgi?id=1095971> || [Bug 1040275] Stopping/Starting a Gluster volume resets ownership <https://bugzilla.redhat.com/show_bug.cgi?id=1040275>
11:28 mator http://pastebin.com/Fyq2rwqP
11:28 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
11:30 shylesh__ joined #gluster
11:31 mariusp joined #gluster
11:41 kkeithley Gluster Community Meeting in ~20 minutes over in #gluster-meeting
11:42 maniac left #gluster
11:44 ira_ joined #gluster
11:46 chirino joined #gluster
11:57 keytab joined #gluster
11:57 ccha2 is there a way to change logs timestamps to use system localtime ?
11:58 ndevos ccha2: sorry, no, it's UTC only
11:59 ccha2 why it is UTC only ?
11:59 ccha2 all other logs are localtime
11:59 ndevos not sure, but it helps when geo-replication is used, otherwise you dont know what time matches on master/slave
11:59 rsavage joined #gluster
12:00 rsavage morning
12:00 Pupeno_ joined #gluster
12:00 ccha2 geo-replication should use UTC and logs localtime :)
12:01 rsavage In gluster 3.5, I have a 3 brick replica volume.  I added 2 more replicas... and now I want to do a rebalance... but it's not working.   gluster volume rebalance testvol start
12:01 rsavage volume rebalance: storageprod: failed: Volume testvol is not a distribute volume or contains only 1 brick.
12:01 rsavage testvol has more than 1 brick, what am I doing wrong?
12:01 rsavage Status: Started
12:01 rsavage Number of Bricks: 1 x 5 = 5
12:01 rsavage Transport-type: tcp
12:03 ccha2 rsavage: not rebalance
12:03 rsavage ccha2: ok?
12:03 ccha2 just use gluster heal to do the job
12:03 rsavage ccha2: ok...
12:03 ccha2 glusterfs heal full
12:04 ccha2 if you want to rebalance now
12:04 mikedep333 joined #gluster
12:04 ccha2 or just wait client access files to balances the files
12:04 rsavage ccha2: oh gotcha
12:05 ccha2 rebalance is for distribute volume
12:05 hchiramm__ joined #gluster
12:05 rsavage ccha2: Ahhh.. makes perfect sense now
12:05 rsavage ccha2: thanks!
12:06 rolfb joined #gluster
12:07 rsavage cchas:I got this error message when I tried a heal:  Error: Self-heal daemon is not running.
12:07 keytab joined #gluster
12:07 mikedep333 Hi, should I report a bug against gluster-smb for this samba packaging bug? http://supercolony.gluster.org/pipermail/gluster-users/2014-August/018374.html
12:07 glusterbot Title: [Gluster-users] Problem with EPEL6 samba-winbind-modules-4.1.9-2.el6.x86_64.rpm (at supercolony.gluster.org)
12:08 ccha2 gluster status to check if self-heal daemon is running
12:09 ccha2 and see why your self-heal daemon not running
12:10 rsavage ccha2: it's not running, how do I find out why it's not?
12:10 Humble joined #gluster
12:11 ccha2 are these test servers ?
12:11 rsavage ccha2: nope
12:12 B21956 joined #gluster
12:12 ccha2 try stop and start the volume ?
12:12 ccha2 and read log
12:12 rsavage ccha2: ok
12:12 ccha2 any firewall ?
12:13 rsavage ccha2: I am wrong, Self-Heal is on
12:14 rsavage I was lookig at Self-heal Daemon on NFS-SERVER1a N/A Y 10627
12:14 rsavage as the NA and thinking it was off
12:14 rsavage my bad
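Spelled out, ccha2's suggestion above looks roughly like the following (testvol is a placeholder volume name; exact output columns vary by release):

    # confirm the self-heal daemon shows up as running on every node
    gluster volume status testvol

    # trigger a full heal so newly added replica bricks get populated
    gluster volume heal testvol full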
12:14 mator can someone write a rewrite rule for the http://gluster.org site to redirect /pipermail/ to supercolony.gluster.org ? the remake of gluster.org created 404 errors for almost every google search for the mailing lists
12:14 glusterbot Title: Write once, read everywhere Gluster (at gluster.org)
12:15 Humble mator, afaik the concerned team is working on the resolution..
12:16 an joined #gluster
12:18 itisravi_ joined #gluster
12:18 rsavage ccha2: thanks again :)
12:26 glusterbot New news from newglusterbugs: [Bug 1119827] Brick goes offline unexpectedly <https://bugzilla.redhat.com/show_bug.cgi?id=1119827>
12:33 rastar joined #gluster
12:33 surabhi joined #gluster
12:35 dusmant joined #gluster
12:35 ira joined #gluster
12:36 nullck_ joined #gluster
12:50 julim joined #gluster
12:53 julim joined #gluster
12:55 bala joined #gluster
12:57 mariusp joined #gluster
12:58 theron joined #gluster
12:59 theron_ joined #gluster
13:07 bennyturns joined #gluster
13:08 sahina joined #gluster
13:11 mariusp joined #gluster
13:22 nishanth joined #gluster
13:22 _pol joined #gluster
13:23 plarsen joined #gluster
13:29 _pol_ joined #gluster
13:31 mikedep333 ignore my above message.
13:35 rsavage
13:37 ramteid joined #gluster
13:43 plarsen joined #gluster
13:49 sputnik13 joined #gluster
13:54 coreping joined #gluster
13:55 _pol joined #gluster
13:58 _pol_ joined #gluster
13:59 shubhendu joined #gluster
14:04 plarsen joined #gluster
14:06 rwheeler joined #gluster
14:08 plarsen joined #gluster
14:17 wushudoin| joined #gluster
14:21 kkeithley semiosis: ping, who is the debian maintainer?
14:27 nshaikh joined #gluster
14:33 jdarcy joined #gluster
14:34 msmith_ joined #gluster
14:37 xandrea joined #gluster
14:37 xandrea hello
14:37 glusterbot xandrea: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:37 julim joined #gluster
14:38 xandrea I really think I have set everything up correctly
14:39 xandrea I have two servers with CentOS 6.5, and I set up a replica between them
14:39 xandrea but I'm supposed to see the same files on both servers… right??
14:40 xandrea if I do “gluster peer status” it looks ok
14:40 xandrea if I do “gluster volume info” it looks ok
14:41 lmickh joined #gluster
14:41 xandrea why can't I see the files replicated?
14:44 mator left #gluster
14:45 mariusp joined #gluster
14:45 kkeithley "activated a replica"?   Does that mean that you're running geo-rep, or that when you created your volume that you specified "replica 2" in the volume create command?
14:46 kkeithley If both servers are in the same data center, you should not use geo-rep.
14:47 xandrea I used replica 2
14:47 xandrea gluster volume create testvol replica 2 transport tcp vm1.kvmlab:/store/data vm2.kvmlab:/store/data
14:48 kkeithley then yes, when the client writes, it writes to both/all the replicas in the replica set, and you should see the file(s) written to both/all the backing "bricks"
14:49 xandrea mmm.. I didn’t connect a client
14:49 xandrea I just tried to create a file from the first server
14:49 xandrea touch testfile
14:50 nbalachandran joined #gluster
14:50 kkeithley never write directly to the backing "bricks"
14:50 kkeithley All I/O is done from a client mount point
14:50 xandrea cannot I??
14:50 kkeithley you cannot. That's not how gluster works
14:51 xandrea I need to set different permission to my files?
14:51 kkeithley All I/O is done from a client mount point
14:52 xandrea I need to move some files from a directory to the brick
14:52 kkeithley mount the volume locally, then copy to the mountpoint
14:54 kkeithley there are some 133t users who know how to cheat. Until then, do everything through a client mount point
14:54 kkeithley ;-)
14:55 xandrea how can I mount the volume locally?
14:55 xandrea with NFS ?
14:55 kkeithley with nfs or fuse.  `mount -t glusterfs $myhostname:$volname /mnt`
14:56 kkeithley same way you'd mount it from any other client
14:57 rsavage left #gluster
14:57 xandrea I don’t need to share the brick with NFS… right?
14:57 xandrea is it already shared?
14:57 kkeithley If you don't want to, no
14:57 kkeithley To disable NFS you have to explicitly turn it off before you start the volume
14:58 xandrea ok… I’ll try
14:58 xandrea thanks
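A sketch of kkeithley's advice, using the names from xandrea's create command above; the /mnt mountpoint and the source path are placeholders:

    # mount the volume locally on one of the servers
    mount -t glusterfs vm1.kvmlab:/testvol /mnt

    # copy data in through the mountpoint, never directly into /store/data
    cp -a /path/to/existing/files/. /mnt/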
14:58 bala joined #gluster
15:06 xandrea it works !!!
15:06 xandrea thanks
15:06 xandrea :D
15:06 Philambdo joined #gluster
15:08 bennyturns joined #gluster
15:20 sputnik13 joined #gluster
15:22 overclk joined #gluster
15:23 bennyturns joined #gluster
15:34 mariusp joined #gluster
15:37 mbukatov joined #gluster
15:47 Slashman joined #gluster
15:48 RameshN joined #gluster
16:05 gmcwhistler joined #gluster
16:06 dtrainor joined #gluster
16:06 msmith_ joined #gluster
16:07 mojibake joined #gluster
16:12 hagarth joined #gluster
16:17 jackdpeterson joined #gluster
16:20 theron joined #gluster
16:20 dtrainor joined #gluster
16:22 dtrainor joined #gluster
16:25 jackdpeterson Hello all, I’m attempting to configure Gluster ~3.5 on Ubuntu 12.04 mounted on clients via NFS through an elastic IP (using the public DNS record); however, failover seems to be problematic. I have a script running on each GlusterFS server where it claims the (AWS Elastic IP) should the IP become unresponsive to pings. Once the EIP has been transitioned to the new server the client(s) fail...
16:25 jackdpeterson ...to perform any file/folder actions.  My assumption is that this is related to either: DNS caching or due to some kind of NFS security magic where authentication occurs and a standard “mount –t nfs EIP:/pod1 /var/www” doesn’t re-authenticate.  Has anyone had any success in this scenario? Thanks!
16:25 jackdpeterson Some other details:
16:26 theron joined #gluster
16:26 jackdpeterson Everything's on AWS. 2x GlusterFS servers configured in replica 2 mode.... A few (but potentially quite a few more) web-servers hosting PHP code.
16:29 PeterA joined #gluster
16:30 semiosis jackdpeterson: do you need nfs?  HA is automatic with the fuse client
16:30 jackdpeterson @semiosis, I would prefer it simply for the performance comparison -- I was able to approximately double my throughput on web-servers by using NFS
16:30 pdrakeweb joined #gluster
16:31 semiosis jackdpeterson: are your instances in VPC?
16:31 jackdpeterson but if HA is impossible on NFS then that kind of makes it a non-starter... reliability > performance
16:31 jackdpeterson no, ec2 classic
16:32 PeterA1 joined #gluster
16:36 semiosis jackdpeterson: i'd need to try this out & watch the network traffic to see whats going wrong in your EIP setup.  it seems like it could work
16:37 semiosis jackdpeterson: an alternative i've considered, but also never tried, is basically the same thing in VPC using ENI instead of EIP
16:37 semiosis jackdpeterson: but in general I recommend the fuse client
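For reference, one possible fstab line for the fuse client with a fallback mount server; the hostnames are placeholders, and the option is spelled backupvolfile-server in older releases and backup-volfile-servers (colon-separated list) in newer ones. It only matters at mount time; once mounted, the fuse client talks to all replicas directly:

    # /etc/fstab - fuse mount that can fetch the volfile from either server
    gluster1.example.com:/pod1  /var/www  glusterfs  defaults,_netdev,backupvolfile-server=gluster2.example.com  0 0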
16:37 jackdpeterson Right, the ENI side of things would make that part of the infrastructure simpler; however, I'm not too fond of the idea of having to configure an HA-NAT configuration for all traffic out
16:38 semiosis ah
16:38 jackdpeterson as well as a few other 'gotchas' that come along with VPC -- needing to configure route tables, etc.
16:38 semiosis interesting.  i havent tried vpc yet.  i'm in classic, using fuse clients, very happy with that
16:38 semiosis what kind of web app?  php?
16:41 jackdpeterson Yes, pretty vanilla... PHP (wordpress) using APC, various caching mechanisms (to alleviate the database side of things) ... can't quite do full-page caching yet due to some adaptive elements
16:41 daMaestro joined #gluster
16:42 semiosis see ,,(php)
16:42 glusterbot (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
16:44 jackdpeterson Hmm... the APC.stat part might do the needful
16:44 semiosis you'll need to reload apache when your php changes.  i think you can restrict the stat=0 to certain pages, been a while since i looked into apc
16:45 jackdpeterson oh, that's not a problem... we can reload apache on a rebuild (when the content updates)...
16:45 jackdpeterson I'd just need to figure out a way to orchestrate that across multiple servers
16:45 semiosis you'll find there are too many ways to do that
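A sketch of a fuse mount using the flags from the glusterbot note above; the hostname, volume name, and the 600-second timeouts are only illustrative values:

    glusterfs --volfile-server=gluster1.example.com --volfile-id=pod1 \
      --attribute-timeout=600 --entry-timeout=600 --negative-timeout=600 \
      --fopen-keep-cache /var/www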
16:47 dtrainor joined #gluster
16:47 jackdpeterson What are your thoughts on this scenario... Stick glusterfs server on each web-server although it wouldn't be hosting any volumes... then mount -t nfs localhost:/somemount /var/www
16:48 semiosis no
16:48 semiosis localhost nfs mount is a bad idea
16:49 jackdpeterson Okay... what's particularly bad about it? PErformance, stability?
16:49 semiosis can cause a deadlock in the kernel :(
16:50 jackdpeterson Well.. that'd be no bueno :-\
16:51 ninthBit joined #gluster
16:55 msmith_ joined #gluster
17:09 jackdpeterson Thanks @Semiosis. I'll continue to do my research here with the fuse client side of things plus optimizing PHP stuff via the apc.stat=0 parts and so forth. results pending :-)
17:10 rotbeard joined #gluster
17:10 semiosis yw
17:11 msmith_ joined #gluster
17:11 luckyinva joined #gluster
17:11 jackdpeterson Ah, one more question for you... I have my gluster FQDNs set up as gluster1.useast1.someDomain.com and gluster2; however, when mounting I'm getting DNS resolution errors (makes sense... they aren't in DNS or anything yet)... How do you handle this given the dynamic nature of AWS and its IP address re-assignability
17:13 semiosis i always recommend making dedicated gluster hostnames (just like you're doing) and CNAMEing them to the public-hostname of your ec2 instances
17:13 kkeithley the-me: ping. in my copious spare time I want to add a -dev sub-package to the debian packaging. Maybe also a -geo-rep sub-package. Do you have any thoughts on the matter?
17:13 jackdpeterson ... but the public hostname of the instance changes on reboot
17:13 semiosis this resolves to the local-ipv4 from inside EC2 and the public-ipv4 from outside
17:13 semiosis when you need to replace a server, just update the CNAME to the new public-hostname
17:13 C_Kode joined #gluster
17:13 semiosis i recommend against EIP
17:14 jackdpeterson Okay, sounds good enough :-)... Yeah... I don't want to consume unnecessary EIPs from that standpoint. Furthermore w/ that recommendation it seems straightforward enough to dump the EIP and just go with the dynamic magic and accept a certain amount of risk there
17:14 semiosis the public-hostname changes when you stop/start, not a warm reboot
17:15 jackdpeterson Ah, good to know :-)
17:15 jackdpeterson most of my simulations have involved start/stopping boxes to see what failover looks like
17:15 semiosis fwiw, i also recommend to use EBS for your bricks, and to snapshot your glusterfs servers using the CreateImage API call regularly
17:15 semiosis this creates AMIs which you can launch to restore from backup
17:16 semiosis makes recovery really easy
17:16 C_Kode Hi, I just started looking into gluster and I had a question.  The default gluster share type.  Is that a shared filesystem like NFS/CIFS, or is it a block-level share like iSCSI?  I'm not clear on what type of share it is
17:16 semiosis C_Kode: it's a filesystem kinda like nfs
17:16 jackdpeterson @semiosis, thanks and Cheers!
17:16 semiosis yw, good luck
17:16 diegows joined #gluster
17:16 C_Kode semiosis, Thanks!
17:16 semiosis yw
17:18 andreask joined #gluster
17:25 zerick joined #gluster
17:28 _Bryan_ joined #gluster
17:32 bene2 joined #gluster
17:37 calum_ joined #gluster
17:47 _pol joined #gluster
17:51 ira joined #gluster
17:55 an joined #gluster
17:57 ndk joined #gluster
18:02 portante joined #gluster
18:02 kkeithley the-me: ping. in my copious spare time I want to add a -dev sub-package to the debian packaging. Maybe also a -geo-rep sub-package. Do you have any thoughts on the matter?
18:03 kkeithley semiosis: ping, ^^^.  Do you have any thoughts?
18:03 kkeithley opinions?
18:04 semiosis whats the geo-rep package for?
18:04 JoeJulian geo-replication
18:04 kkeithley the python scripts
18:04 kkeithley decouple the python dependency
18:05 semiosis JoeJulian--
18:05 glusterbot semiosis: JoeJulian's karma is now 9
18:05 xandrea joined #gluster
18:05 semiosis ha!
18:05 JoeJulian hehe
18:06 JoeJulian And are we going to have a group ppa?
18:06 semiosis and for -dev, what use cases are there where you'd want headers but not the actual libs/executables?
18:06 semiosis that is the point of -dev right?
18:06 JoeJulian other way around.
18:06 JoeJulian You don't want the headers wasting space on your thin client.
18:07 semiosis i see
18:08 JoeJulian Doesn't seem like much until every package and their brother all installs their headers.
18:08 Xanacas joined #gluster
18:10 semiosis right
18:18 BossR_ left #gluster
18:25 kkeithley headers and libfoo.so->libfoo.so.1 symlinks.  I guess Debian/Ubuntu -dev have both, like -devel RPMs
18:25 kkeithley are the .h files in the -common .deb now?
18:25 semiosis yes
18:25 kkeithley oh
18:25 semiosis although i think glfs.h may be the only one :O
18:26 nishanth joined #gluster
18:26 semiosis whatever the libgfapi header file is, glfs.h iirc
18:26 kkeithley yes
18:26 andreask joined #gluster
18:28 semiosis JoeJulian: https://launchpad.net/~gluster :)
18:28 glusterbot Title: Gluster in Launchpad (at launchpad.net)
18:28 kkeithley then maybe there's no real urgency. Although maybe there's something to be said for making a -dev and getting the libfoo.so symlinks and .h files out of -common?
18:28 semiosis i'll create the PPAs & upload packages later today
18:28 semiosis no urgency afaict
18:29 kkeithley meh, I'll leave well enough alone then
18:29 semiosis no one has ever complained (that i've heard) about bloated packages
18:29 semiosis although I agree in principle/policy they should be separated
18:29 kkeithley nobody feels strongly in Debian or Ubuntu about dependencies on python for geo-rep?
18:29 discretestates joined #gluster
18:30 semiosis no one's mentioned it to me
18:30 kkeithley but there's no strong policing of policy in Debian like there is in Fedora, e.g.
18:30 semiosis i guess not, since it's never come up
18:30 caiozanolla hello all, if i mount gluster at the client just using the hostname and defaults on /etc/fstab, gluster builds the config based on the server's conf. not using a hostname will make the client use the file at /etc/gluster/glusterd.vol. How do I set a client only config, such as a subvolume for performance but keeping all the options at the servers? I mean, how do i push specific options from the servers to the clients?
18:31 kkeithley meh, then I drop it. (for now)
18:31 semiosis kkeithley: cathedral vs. bazaar :)
18:31 kkeithley I dare you to post that in a #fedora group
18:31 semiosis lmao
18:33 T0aD- joined #gluster
18:34 semiosis caiozanolla: when you mount a glusterfs client using hostname:volume the client is dynamically configured by the cluster.  when you change ,,(options) the clients are reconfigured dynamically
18:34 glusterbot caiozanolla: See config options and their defaults with 'gluster volume set help'; you can see the current value of an option, if it has been modified, with 'gluster volume info'; see also this page about undocumented options: http://goo.gl/mIAe4E
18:36 julim_ joined #gluster
18:36 skippy features.worm didn't seem to work for me:  it seemed to effectively make the volume completely read-only.
18:36 mjrosenb_ joined #gluster
18:37 dockbram_ joined #gluster
18:39 skippy aside from "option rpc-auth-allow-insecure on", what other options can go in /etc/glusterfs/glusterd.vol ?
18:39 _dist joined #gluster
18:39 semiosis kkeithley: ubuntu's tiered repository structure is a response to debian's epic sprawl of a repo.  as we saw recently when i pushed to get glusterfs into Main
18:40 caiozanolla semiosis, by setting the options on the server using "set" wont it be active on both the server and the client? or am I misunderstanding it? at the gluster page: "The IO-Cache translator is useful on both the client and server sides of a connection."… "If this translator is loaded on the client side, it may help reduce the load" … "If this translator is loaded on the server side, it will allow the server to keep data that is being accessed"
18:40 Lee- joined #gluster
18:40 atrius_ joined #gluster
18:40 jobewan joined #gluster
18:40 caiozanolla semiosis, I understand it is dynamically created, I'm just not sure how it separates client-specific from server-specific configs
18:40 portante_ joined #gluster
18:40 atoponce joined #gluster
18:42 semiosis caiozanolla: you can see the configs generated by the gluster command in /var/lib/gluster/vols/$volname, the -fuse.vol is the client config, the bricks/*.vol files are the server configs
18:42 theron joined #gluster
18:43 caiozanolla semiosis, so if I edit the -fuse.vol files directly it will push configs to clients?
18:43 semiosis no
18:44 dtrainor joined #gluster
18:45 caiozanolla semiosis, ok, so let me ask differently. how do I tell gluster: use "this setting" at the server and "that setting" at the client, given the setting applies to the io-cache translator?
18:45 caiozanolla semiosis, Im having a hard time trying to understand this separation.
18:46 power8 joined #gluster
18:47 LebedevRI joined #gluster
18:50 semiosis caiozanolla: can I have the link to that doc you're reading?
18:50 litwol joined #gluster
18:50 litwol Hello
18:50 glusterbot litwol: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
18:50 litwol heh
18:51 litwol joined #gluster
18:51 semiosis caiozanolla: i suspect the page you're reading has a big red box at the top saying it's outdated.  in any case, i just checked on my 3.4.2 cluster and io-cache is enabled on the client side by default.  so you're done
18:51 litwol oops
18:52 semiosis caiozanolla: generally speaking, you're not meant to mess with volfiles or xlators, except through the standard ,,(options)
18:52 glusterbot caiozanolla: See config options and their defaults with 'gluster volume set help'; you can see the current value of an option, if it has been modified, with 'gluster volume info'; see also this page about undocumented options: http://goo.gl/mIAe4E
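As an illustration of tuning through the CLI rather than editing volfiles, two io-cache related options (myvol and the values are placeholders; check gluster volume set help for the full list on your release):

    gluster volume set myvol performance.cache-size 256MB
    gluster volume set myvol performance.cache-refresh-timeout 10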
18:52 Pupeno joined #gluster
18:54 litwol I am researching glusterfs and have not found a solution to my need yet. i need to be able to add more replica nodes to a volume on demand... not grow volume storage, but only make new nodes replicate the existing data in a new geographical location.
18:54 litwol If anyone knows of material i can read about this please send me a link.
18:54 semiosis litwol: with writes going into the shared volume at all those locations?
18:55 semiosis or are they just read-only replicas?
18:55 litwol semiosis: writes going to all locations.
18:55 semiosis litwol: that's a very hard problem
18:55 semiosis and glusterfs has not solved it yet (although it may in a future release)
18:55 litwol is there a page i can read about it ?
18:56 semiosis probably several books have been written about it
18:56 litwol files to be written will almost always (high probability) be unique and not conflict with other nodes (ie, no split brain on single files)
18:56 andreask joined #gluster
18:57 litwol I will do my own testing to ensure results/reliability. i just need information on how to join new node dynamically to a volume
18:58 litwol specifically... from the new client. must not have admin interaction to probe new node from existing pool
18:59 semiosis doubt it's going to work out for you.  even if you can solve that problem (doubtful) you probably will be unhappy with the performance
18:59 litwol my dataset is very small
18:59 _dist litwol: I have (without issue) added new bricks to a replicate volume it's really easy. However, I believe there are still a few bugs if you're doing it online.
19:00 _dist Also, all your writes/fsyncs will be as slow as your network connection
19:00 litwol i just need availability of local storage, which is replicated off to some master, and from there distributed to the rest of the nodes... although i read gluster sends to all nodes in parallel...
19:00 semiosis litwol: just store it in one central place & have your remote locations connect there
19:00 litwol semiosis: that is what i am looking for.. i think... something like "star" layout.
19:01 xandrea joined #gluster
19:01 semiosis litwol: but the data lives in the center, not replicated at the points
19:01 xandrea hello… how does gluster work with big files?? like virtual disk of kvm ?
19:01 semiosis that would be far easier
19:01 _BryanHM_ joined #gluster
19:01 litwol i need data to be local...
19:01 litwol or at least cached
19:01 semiosis litwol: why?
19:01 skippy schedule an rsync to distribute data?
19:01 litwol because getting small files across continents is slow
19:01 semiosis it sure is
19:02 semiosis litwol: could each location have its own volume which replicates (one-way, read only) to the others?
19:02 semiosis that might be doable
19:02 litwol to put it simply (though i'm sure implementation is anything but simple) i am looking for a master-master replication for filesystem.
19:02 litwol where i can add as many masters to the set as i need on demand
19:03 litwol while system is online.
19:03 semiosis and that is not latency sensitive
19:03 semiosis gluster can meet all those criteria except the last one you forgot to mention
19:04 _dist semiosis: I recently did a gz extract of like 100k files through the fuse client (all < 32k or so) and it was very fast, but all our servers are 10gbe linked
19:04 semiosis you *could* do this with gluster, but the performance would probably disappoint you
19:04 semiosis ...across wan links
19:04 litwol for example mariadb/galera offers clusterring where a new node can join existing "pool", it joins to any node, but you can choose a "master" point to connect to get pool details
19:04 litwol then the new node grabs configuration from master, and connects to the remaining nodes
19:04 litwol then it replicates all data using either rsync or mysqldump or w/e you configure
19:04 semiosis interesting
19:05 litwol once in sync it comes online and becomes writable
19:05 litwol i have tested this yesterday
19:05 litwol tested local. then sent my mv across continent, spun it up there and all came online
19:05 litwol good stuff :)
19:05 litwol now if i can only get fs todo the same...
19:05 litwol :-D
19:06 litwol i don't mind having single point of failure by having single "arbitrator master" into which all other nodes will connect to read/write
19:07 litwol but i definitely need 1) full replication at local node. 2) connect new nodes on demand to online pool. 3) local file reads are local, assuming replication finished.
19:07 litwol 4) node local writes are written at least to master before sending to the other nodes.
19:07 skippy Configuration management tools like Puppet or Chef can help with #2 in your list.
19:07 caiozanolla semiosis, no red box here: http://www.gluster.org/community/documentation/index.php/Translators/performance/io-cache
19:07 glusterbot Title: Translators/performance/io-cache - GlusterDocumentation (at www.gluster.org)
19:08 semiosis caiozanolla: ah you're right
19:09 semiosis litwol: if you find something that meets all that, please let me know!
19:09 litwol :'(
19:10 litwol unison is great for additive synchronization of what i said above.
19:10 litwol but it's absolutely terrible for file removals
19:30 coredump joined #gluster
19:32 _pol_ joined #gluster
19:37 etaylor_ joined #gluster
19:43 _pol joined #gluster
19:48 PeterA1 what does this mean?
19:48 PeterA1 E [rpcsvc.c:547:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully
19:49 litwol left #gluster
19:49 plarsen joined #gluster
19:54 tom][ joined #gluster
20:09 etaylor_ Hello all, I have a two node cluster that appears to configured correctly.  I can mount the testvol on each host.  I build a client node with gluster and gluster-fuse software.  When I mount I get the errors below
20:09 etaylor_ [2014-08-13 19:21:23.941969] I [fuse-bridge.c:4977:fuse_graph_setup] 0-fuse: switched to graph 0
20:09 etaylor_ [2014-08-13 19:21:23.942275] I [fuse-bridge.c:3914:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.22
20:09 etaylor_ [2014-08-13 19:21:23.942569] W [fuse-bridge.c:747:fuse_attr_cbk] 0-glusterfs-fuse: 2: LOOKUP() / => -1 (Transport endpoint is not connected)
20:09 etaylor_ [2014-08-13 19:21:23.944307] I [fuse-bridge.c:4818:fuse_thread_proc] 0-fuse: unmounting /mnt
20:09 etaylor_ [2014-08-13 19:21:23.944519] W [glusterfsd.c:1095:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x7ff149ed3e9d] (-->/lib64/libpthread.so.0(+0x7f18) [0x7ff14a580f18] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xd5) [0x4053e5]))) 0-: received signum (15), shutting down
20:09 glusterbot etaylor_: ('s karma is now -17
20:09 etaylor_ [2014-08-13 19:21:23.944531] I [fuse-bridge.c:5475:fini] 0-fuse: Unmounting '/mnt'.
20:09 glusterbot etaylor_: ('s karma is now -18
20:09 glusterbot etaylor_: ('s karma is now -19
20:10 etaylor_ My probe output looks like below
20:10 etaylor_ sudo gluster volume info
20:10 etaylor_
20:10 etaylor_ Volume Name: testvol
20:10 etaylor_ Type: Replicate
20:10 etaylor_ Volume ID: c16d8f70-f6cf-4f24-a5bc-b930beb00ef7
20:10 etaylor_ Status: Started
20:10 etaylor_ Number of Bricks: 1 x 2 = 2
20:10 etaylor_ Transport-type: tcp
20:10 etaylor_ Bricks:
20:10 etaylor_ Brick1: 172.16.3.242:/data/brick
20:10 etaylor_ Brick2: 172.16.7.25:/data/brick
20:10 skippy etaylor_: are you NATing from the client to the server?
20:10 etaylor_ and
20:10 etaylor_ udo gluster volume info
20:10 etaylor_
20:10 etaylor_ Volume Name: testvol
20:10 etaylor_ Type: Replicate
20:10 etaylor_ Volume ID: c16d8f70-f6cf-4f24-a5bc-b930beb00ef7
20:10 etaylor_ Status: Started
20:10 etaylor_ Number of Bricks: 1 x 2 = 2
20:10 etaylor_ Transport-type: tcp
20:10 etaylor_ Bricks:
20:10 etaylor_ Brick1: 172.16.3.242:/data/brick
20:10 etaylor_ Brick2: 172.16.7.25:/data/brick
20:11 coreping joined #gluster
20:11 etaylor_ I can mount from the two gluster nodes.  But I can't mount from the client.  The client is on the same subnet as one of the gluster nodes.
20:13 coreping joined #gluster
20:13 semiosis etaylor_: dont do multiline pastes in channel.  please use pastie.org or similar
20:13 etaylor_ Sorry about that.
20:14 coreping joined #gluster
20:15 semiosis etaylor_: possibly iptables?  can you telnet from the client machine to the servers on port 24007 & 49152?
20:15 semiosis see also ,,(ports)
20:15 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
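A quick way to test those ports from the client, using the brick host addresses from etaylor_'s paste above:

    # management port and the first brick port (one brick per server here)
    telnet 172.16.3.242 24007
    telnet 172.16.3.242 49152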
20:15 neofob joined #gluster
20:19 etaylor_ The client can't talk to 2400[8-9] and 49152.
20:19 etaylor_ On either gluster node.
20:19 etaylor_ Checking AWS security groups.
20:20 qdk joined #gluster
20:20 semiosis if you're using glusterfs 3.4 or newer (you really should be) then you need 24007 & 49152+ (not 24008 or 9)
20:21 semiosis @forget ports
20:21 glusterbot semiosis: The operation succeeded.
20:21 etaylor_ I'm using glusterfs 3.5.2.
20:22 semiosis @learn ports as glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
20:22 glusterbot semiosis: The operation succeeded.
20:22 semiosis @ports
20:22 glusterbot semiosis: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
20:22 semiosis etaylor_: good
20:23 etaylor_ I'm using gluster native client.
20:23 etaylor_ Is it true that the gluster native client supports failover?
20:24 _pol_ joined #gluster
20:24 etaylor_ Should I open ports 38465-38467/tcp?  My security group rules allows 49152.
20:25 semiosis only if you will be using nfs clinets
20:25 semiosis s/clinet/client/
20:25 glusterbot What semiosis meant to say was: only if you will be using nfs clients
20:29 etaylor_ Thanks.  I managed to get the client mounted.  I'm going to shutdown one of the gluster nodes.  Hopefully the other node will take over
20:30 etaylor_ I just stopped one of the gluster nodes.  I'm not able to access the mount point on the client
20:31 skippy wait 42 seconds
20:31 etaylor_ Damn.
20:31 etaylor_ Its back.
20:31 skippy ,,(ping)
20:31 glusterbot I do not know about 'ping', but I do know about these similar topics: 'ping-timeout'
20:31 etaylor_ This is awesome.
20:31 skippy ,,(ping-timeout)
20:31 glusterbot The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
20:31 mariusp joined #gluster
20:32 mariusp joined #gluster
20:34 skippy given a large physical array of disks in RAID 6, under what circumstances would you recommend expanding a Gluster volume with LVM and extending LVs; and when would you recommend making new bricks?
20:34 mariusp joined #gluster
20:35 mariusp joined #gluster
20:35 etaylor_ So, if a gluster node crashes, we should expect at least 1 minute delay/reconnect?
20:35 jackdpeterson joined #gluster
20:35 clyons joined #gluster
20:36 mariusp joined #gluster
20:36 etaylor_ Restarted node and syncing is complete.
20:37 mariusp joined #gluster
20:37 mariusp joined #gluster
20:38 mariusp joined #gluster
20:39 skippy etaylor_: you can adjust the ping timeout
20:39 mariusp joined #gluster
20:39 mariusp joined #gluster
20:39 etaylor_ Thanks guys.
20:39 skippy gluster volume set <volname> network.ping-timeout 5
20:40 mariusp joined #gluster
20:40 mariusp joined #gluster
20:41 mariusp joined #gluster
20:42 mariusp joined #gluster
20:43 mariusp joined #gluster
20:43 mariusp joined #gluster
20:44 _Bryan_ joined #gluster
20:44 mariusp joined #gluster
20:44 luckyinva left #gluster
20:45 mariusp joined #gluster
20:45 mariusp joined #gluster
20:45 mariusp joined #gluster
20:47 mariusp joined #gluster
20:47 mariusp joined #gluster
20:48 mariusp joined #gluster
20:48 mariusp joined #gluster
20:48 semiosis glusterbot: help kick
20:48 glusterbot semiosis: (kick [<channel>] <nick>[, <nick>, ...] [<reason>]) -- Kicks <nick>(s) from <channel> for <reason>. If <reason> isn't given, uses the nick of the person making the command as the reason. <channel> is only necessary if the message isn't sent in the channel itself.
20:49 semiosis glusterbot: whoami
20:49 glusterbot semiosis: semiosis
20:49 mariusp joined #gluster
20:49 semiosis glusterbot: kick mariusp please rejoin once you resolve your connectivity issues.
20:49 glusterbot semiosis: Error: mariusp is not in #gluster.
20:49 mariusp joined #gluster
20:49 semiosis glusterbot: kick mariusp please rejoin once you resolve your connectivity issues.
20:49 mariusp was kicked by glusterbot: please rejoin once you resolve your connectivity issues.
20:50 mariusp joined #gluster
20:50 altmariusp joined #gluster
20:50 semiosis oh come on
20:51 semiosis glusterbot: help kban
20:51 glusterbot semiosis: (kban [<channel>] [--{exact,nick,user,host}] <nick> [<seconds>] [<reason>]) -- If you have the #channel,op capability, this will kickban <nick> for as many seconds as you specify, or else (if you specify 0 seconds or don't specify a number of seconds) it will ban the person indefinitely. --exact bans only the exact hostmask; --nick bans just the nick; --user bans just the user, and --host bans just the
20:51 glusterbot semiosis: host. You can combine these options as you choose. <reason> is a reason to give for the kick. <channel> is only necessary if the message isn't sent in the channel itself.
20:52 mariusp joined #gluster
20:53 mariusp joined #gluster
20:54 mariusp joined #gluster
20:54 semiosis glusterbot: kban --exact mariusp@188.26.230.232 180 flooding channel with join/quit
20:54 glusterbot semiosis: Error: mariusp@188.26.230.232 is not in #gluster.
20:54 coreping joined #gluster
20:54 mariusp joined #gluster
20:54 semiosis glusterbot: kban --exact mariusp@188.26.230.232 180 flooding channel with join/quit
20:54 glusterbot semiosis: Error: mariusp@188.26.230.232 is not in #gluster.
20:54 semiosis glusterbot: kban --exact ~mariusp@188.26.230.232 180 flooding channel with join/quit
20:54 glusterbot semiosis: Error: ~mariusp@188.26.230.232 is not in #gluster.
20:55 semiosis meh
20:55 mariusp joined #gluster
20:56 mariusp joined #gluster
20:57 mariusp joined #gluster
20:58 mariusp joined #gluster
20:58 mariusp joined #gluster
20:59 sage joined #gluster
20:59 mariusp joined #gluster
20:59 mariusp joined #gluster
21:00 mariusp joined #gluster
21:01 mariusp joined #gluster
21:02 mariusp joined #gluster
21:04 mariusp joined #gluster
21:04 mariusp joined #gluster
21:05 mariusp joined #gluster
21:05 mariusp joined #gluster
21:06 mariusp joined #gluster
21:07 mariusp joined #gluster
21:07 mariusp joined #gluster
21:08 mariusp joined #gluster
21:09 altmariusp joined #gluster
21:10 altmariusp joined #gluster
21:10 altmariusp joined #gluster
21:11 mariusp_ joined #gluster
21:12 altmariusp joined #gluster
21:12 Lee-- joined #gluster
21:13 msciciel1 joined #gluster
21:13 altmariusp joined #gluster
21:13 capri joined #gluster
21:13 harish_ joined #gluster
21:13 coreping joined #gluster
21:13 Andreas-IPO joined #gluster
21:13 mariusp joined #gluster
21:13 ron-slc joined #gluster
21:14 mariusp joined #gluster
21:15 mariusp joined #gluster
21:16 altmariusp joined #gluster
21:16 jackdpeterson @semiosis -- configured DNS as suggested earlier, switched to Fuse, implemented APC.stat=0, and upped my realpath cache size .... performance is terrible -- looks CPU bound by glusterfs  on the client machines
21:16 jackdpeterson oh, using the native adapter instead of NFS
21:17 jackdpeterson I'm getting like 8 pages per second this way with two c3.large servers -- pretty bad
21:17 mariusp joined #gluster
21:17 semiosis jackdpeterson: client instance type?
21:17 rturk joined #gluster
21:17 mariusp joined #gluster
21:18 jackdpeterson @semiosis -- what do you mean? glusterfs 3.5.2 - ubuntu 12.04, mount -t glusterfs VIP:/pod1
21:18 mariusp joined #gluster
21:18 semiosis you mentioned c3.large servers, i was wondering what the clients are
21:18 semiosis hope not t1.micro
21:18 jackdpeterson all are c3.large
21:18 semiosis oh gotcha
21:18 jackdpeterson so 4x servers
21:18 jackdpeterson 2x gluster, 2x web-servers
21:18 semiosis right
21:19 semiosis sure your apc stat config is working?
21:19 jackdpeterson Are there any caching mechanisms offered by the gluster client?
21:19 mariusp joined #gluster
21:19 jackdpeterson yes, verified apc.stat is "off" in php.ini
21:19 neofob left #gluster
21:20 mariusp joined #gluster
21:21 jackdpeterson and in the phpinfo() output
21:21 mariusp joined #gluster
21:22 mariusp joined #gluster
21:22 mariusp joined #gluster
21:24 mariusp joined #gluster
21:25 semiosis jackdpeterson: is your include path optimized?
21:25 mariusp joined #gluster
21:25 altmariusp joined #gluster
21:26 semiosis does wordpress use autoloading or does it use require in the files?
21:26 jackdpeterson Hard to say with 100% confidence -- we're leveraging WordPress as our core. The plugins that we write are fairly optimized (they perform reasonably well on regular systems w/o Gluster); the other plugins would require further analysis.
21:26 mariusp joined #gluster
21:27 jackdpeterson that being said... there is some dynamic nature to the plugin include path for each plugin
21:27 _pol joined #gluster
21:28 mariusp joined #gluster
21:28 mariusp joined #gluster
21:29 mariusp joined #gluster
21:29 skippy aside from "option rpc-auth-allow-insecure on", what other options can go in /etc/glusterfs/glusterd.vol ?
21:29 skippy are they documented anywhere?
21:30 mariusp joined #gluster
21:31 semiosis idk
21:31 mariusp joined #gluster
21:32 skippy dk if there are any, or if they're documented? ;)
21:32 mariusp joined #gluster
21:32 semiosis yes
21:32 mariusp joined #gluster
21:32 skippy whose on first?
21:32 semiosis ok
21:33 mariusp joined #gluster
21:33 mariusp joined #gluster
21:34 mariusp joined #gluster
21:35 skippy any guidance on when to grow Gluster volumes by extending logical volumes versus adding bricks?  Are there compelling situations for either option?
21:35 skippy seems to me that extending a logical volume is easy enough; but are there non-obvious benefits to adding more bricks instead?
21:35 mariusp joined #gluster
21:36 mariusp joined #gluster
21:36 semiosis skippy: add-brick requires rebalance which is expensive (prohibitively so, imho)
21:37 skippy ah. good to know.
21:37 mariusp joined #gluster
21:37 semiosis skippy: if you're running out of bandwidth on a server, then you need to add-brick to expand out to more servers (or if you planned ahead with many bricks per server, you could use replica-brick to move bricks out to new servers, my preferred plan)
21:37 semiosis skippy: but if you just need to add capacity (not bandwidth) then growing the bricks is a great option
21:38 mariusp joined #gluster
21:38 semiosis s/replica-brick/replace-brick/
21:38 glusterbot What semiosis meant to say was: skippy: if you're running out of bandwidth on a server, then you need to add-brick to expand out to more servers (or if you planned ahead with many bricks per server, you could use replace-brick to move bricks out to new servers, my preferred plan)
21:38 mariusp joined #gluster
21:38 skippy as i'm still new to Gluster, can you quantify how one might know they're running out of bandwidth?
21:39 mariusp joined #gluster
21:39 semiosis server nics are saturated
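A hedged sketch of the grow-the-bricks option skippy is asking about, assuming each brick is an XFS filesystem on an LVM logical volume (all device and mount paths are placeholders):

    # extend the LV backing the brick, then grow the filesystem online
    lvextend -L +500G /dev/vg_bricks/lv_brick1
    xfs_growfs /bricks/brick1

    # repeat on every replica; clients pick up the larger size automatically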
21:40 mariusp joined #gluster
21:41 mariusp joined #gluster
21:41 siel joined #gluster
21:42 mariusp joined #gluster
21:42 mariusp joined #gluster
21:43 mariusp joined #gluster
21:43 mariusp joined #gluster
21:44 mariusp joined #gluster
21:44 mariusp joined #gluster
21:45 mariusp joined #gluster
21:47 mariusp joined #gluster
21:47 mariusp joined #gluster
21:48 mariusp joined #gluster
21:49 altmariusp joined #gluster
21:49 mariusp joined #gluster
21:50 altmariusp joined #gluster
21:51 msmith_ joined #gluster
21:52 mariusp joined #gluster
21:52 mariusp joined #gluster
21:53 mariusp joined #gluster
21:53 mariusp joined #gluster
21:55 mariusp joined #gluster
21:55 mariusp joined #gluster
21:56 mariusp joined #gluster
21:56 mariusp joined #gluster
21:57 mariusp joined #gluster
21:57 mariusp joined #gluster
21:58 DV joined #gluster
21:58 mariusp joined #gluster
21:58 mariusp joined #gluster
21:59 mariusp joined #gluster
22:00 mariusp joined #gluster
22:01 etaylor_ what are the benefits of using NFS over the gluster native mount, if any?
22:01 mariusp joined #gluster
22:01 skippy no client to install?
22:01 skippy other than that, I don't see many.  I intend to use the native client.
22:01 etaylor_ With NFS, there is no failover correct?
22:01 mariusp joined #gluster
22:02 mariusp joined #gluster
22:03 mariusp joined #gluster
22:03 mariusp joined #gluster
22:04 mariusp joined #gluster
22:04 msmith_ joined #gluster
22:05 mariusp joined #gluster
22:05 altmariusp joined #gluster
22:06 mariusp joined #gluster
22:06 mariusp joined #gluster
22:07 mariusp joined #gluster
22:07 skippy i dont believe so, but I'm still learning and haven't been looking at NFS at all.
22:07 mariusp joined #gluster
22:08 mariusp joined #gluster
22:08 mariusp joined #gluster
22:09 mariusp joined #gluster
22:09 mariusp joined #gluster
22:10 mariusp joined #gluster
22:10 mariusp joined #gluster
22:11 mariusp joined #gluster
22:12 mariusp joined #gluster
22:12 mariusp joined #gluster
22:14 mariusp joined #gluster
22:15 _pol joined #gluster
22:16 mariusp joined #gluster
22:17 mariusp joined #gluster
22:18 jackdpeterson hey all, how do I verify that I have the correct translators loaded on the client side? e.g. performance io cache
22:18 mariusp joined #gluster
22:19 mariusp joined #gluster
22:19 mariusp joined #gluster
22:20 mariusp joined #gluster
22:20 mariusp joined #gluster
22:21 mariusp joined #gluster
22:21 semiosis correct?
22:21 semiosis you can see the generated configs (but dont edit them) in /var/lib/gluster/vols
22:22 mariusp joined #gluster
22:22 semiosis jackdpeterson: of course all configuration should be done through ,,(options)
22:22 glusterbot jackdpeterson: See config options and their defaults with 'gluster volume set help'; you can see the current value of an option, if it has been modified, with 'gluster volume info'; see also this page about undocumented options: http://goo.gl/mIAe4E
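One way to eyeball the client-side graph, assuming the pod1 volume mentioned earlier; run this on a server and adjust the path to wherever the generated volfiles live on your install (commonly /var/lib/glusterd/vols):

    # show the io-cache section of the generated fuse (client) volfile
    grep -B1 -A4 io-cache /var/lib/glusterd/vols/pod1/pod1-fuse.vol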
22:22 mariusp joined #gluster
22:23 mariusp joined #gluster
22:23 mariusp joined #gluster
22:24 jackdpeterson @semiosis -- any suggestions for small file performance at this point? I'd like to use the native gluster client mounting option for the failover bit.... but the performance is so bad. NFS performance is within reason and seems satisfactory -- but the failover issues are so bad that I'm very hesitant to take a big SPOF for the sake of performance if it can be avoided.
22:24 mariusp joined #gluster
22:25 mariusp joined #gluster
22:25 Ch3LL_ joined #gluster
22:25 mariusp joined #gluster
22:26 mariusp joined #gluster
22:27 mariusp joined #gluster
22:27 mariusp joined #gluster
22:28 mariusp joined #gluster
22:29 mariusp joined #gluster
22:29 mariusp joined #gluster
22:30 mariusp joined #gluster
22:30 mariusp joined #gluster
22:31 mariusp joined #gluster
22:31 altmariusp joined #gluster
22:31 semiosis jackdpeterson: hard to say, i dont know what your bottleneck is
22:32 semiosis i used to run php apps on gluster (even wordpress!), in ec2, and performance was acceptable
22:32 mariusp joined #gluster
22:32 semiosis i didnt really do any tuning
22:32 Pupeno_ joined #gluster
22:33 mariusp joined #gluster
22:33 mariusp joined #gluster
22:34 mariusp joined #gluster
22:36 mariusp joined #gluster
22:36 semiosis what distro are you using?
22:37 mariusp joined #gluster
22:37 mariusp joined #gluster
22:39 mariusp joined #gluster
22:39 mariusp joined #gluster
22:40 mariusp joined #gluster
22:41 jackdpeterson ubuntu 12.04, presumably your ppa repository
22:41 jackdpeterson w/ the 3.5 branch
22:42 gildub joined #gluster
22:42 mariusp joined #gluster
22:42 jackdpeterson everything else is pretty much the standard rigamarole, replica 2, xfs, inode size 512, yada yada
22:42 mariusp joined #gluster
22:43 jackdpeterson gets cpu bound when using native client... no load when using nfs
22:43 semiosis jackdpeterson: you might try trusty (14.04) for a client machine.  that newer kernel might perform better for the fuse client
22:44 mariusp joined #gluster
22:44 semiosis but wow, cpu bound.  are you hammering it with a load test?  if not, that suggests some issue with your setup.
22:45 mariusp joined #gluster
22:45 mariusp joined #gluster
22:45 jackdpeterson yes, hammering w/ a load test -- pretty simple -- hit the index page w/ 100 simultaneous clients... surprisingly glusterfs pops to the top of the load and not apache
22:46 msmith joined #gluster
22:46 jackdpeterson alright, anyways... gotta bounce for now
22:47 mariusp joined #gluster
22:47 mariusp joined #gluster
22:48 mariusp joined #gluster
22:49 mariusp joined #gluster
22:49 mariusp joined #gluster
22:50 mariusp joined #gluster
22:50 altmariusp joined #gluster
22:51 mariusp joined #gluster
22:51 mariusp joined #gluster
22:52 mariusp joined #gluster
22:53 mariusp joined #gluster
22:53 mariusp joined #gluster
22:55 mariusp joined #gluster
22:55 mariusp joined #gluster
22:56 altmariusp joined #gluster
22:57 mariusp joined #gluster
22:57 mariusp joined #gluster
22:58 mariusp joined #gluster
22:59 mariusp joined #gluster
23:00 mariusp joined #gluster
23:01 mariusp joined #gluster
23:01 mariusp joined #gluster
23:02 mariusp joined #gluster
23:03 mariusp joined #gluster
23:03 mariusp joined #gluster
23:04 mariusp joined #gluster
23:06 mariusp joined #gluster
23:08 mariusp joined #gluster
23:08 mariusp joined #gluster
23:09 mariusp joined #gluster
23:11 mariusp joined #gluster
23:11 mariusp joined #gluster
23:12 mariusp joined #gluster
23:13 mariusp joined #gluster
23:13 mariusp joined #gluster
23:15 mariusp joined #gluster
23:16 mariusp joined #gluster
23:16 mariusp joined #gluster
23:17 mariusp joined #gluster
23:18 mariusp joined #gluster
23:18 mariusp joined #gluster
23:22 mariusp joined #gluster
23:22 _Bryan_ joined #gluster
23:22 mariusp joined #gluster
23:23 mariusp joined #gluster
23:25 mariusp joined #gluster
23:26 sputnik13 joined #gluster
23:27 mariusp joined #gluster
23:28 mariusp joined #gluster
23:29 mariusp joined #gluster
23:29 mariusp joined #gluster
23:30 mariusp joined #gluster
23:31 mariusp joined #gluster
23:32 mariusp joined #gluster
23:33 jobewan joined #gluster
23:33 mariusp joined #gluster
23:33 mariusp joined #gluster
23:34 mariusp joined #gluster
23:35 mariusp joined #gluster
23:35 mariusp joined #gluster
23:36 mariusp joined #gluster
23:36 mariusp joined #gluster
23:38 mariusp joined #gluster
23:38 sputnik13 joined #gluster
23:39 mariusp joined #gluster
23:39 altmariusp joined #gluster
23:39 mariusp joined #gluster
23:41 mariusp joined #gluster
23:41 altmariusp joined #gluster
23:42 mariusp joined #gluster
23:42 mariusp joined #gluster
23:43 mariusp joined #gluster
23:43 mariusp joined #gluster
23:45 mariusp joined #gluster
23:46 mariusp joined #gluster
23:46 mariusp joined #gluster
23:47 mariusp joined #gluster
23:48 mariusp joined #gluster
23:49 mariusp joined #gluster
23:50 mariusp joined #gluster
23:51 mariusp joined #gluster
23:51 mariusp joined #gluster
23:52 mariusp joined #gluster
23:53 gEEbusT joined #gluster
23:53 mariusp joined #gluster
23:54 mariusp joined #gluster
23:55 mariusp joined #gluster
23:55 mariusp joined #gluster
23:56 sputnik13 joined #gluster
23:56 mariusp joined #gluster
23:58 mariusp joined #gluster
23:58 mariusp joined #gluster
