IRC log for #gluster, 2014-09-07

All times shown according to UTC.

Time Nick Message
00:02 DV joined #gluster
00:32 aronwp joined #gluster
00:37 aronwp hey guys, The other day I was asking what a good setup would be as a fileserver for about 35 websites (50 soon) I was asking if it would be better to run two 2GB (ram) servers in replicate or 4 x 1GB (ram) servers setup replicate-2
00:38 aronwp I was told the extra ram in the 2GB servers would probably be better
00:38 msolo how many disks will you have?
00:38 msolo and how many iops do you need?
00:39 aronwp these are all linode vps servers and each will have 1 ssd disk attached (probably virtual)
00:40 aronwp not sure about iops but i never run high on those
00:40 aronwp my issue is cpu usage
00:40 msolo linux has some fixed overhead, let's say 250-500MB, so in the 2 x 2GB you would end up with more usable buffercache
00:41 msolo cpu usage in gluster itself?
00:41 aronwp i'm now thinking of running 2 2gb and 2 1gb
00:41 aronwp ya
00:41 aronwp high cpu usage from gluster
00:41 aronwp not all the time
00:41 msolo interesting, are you mostly doing small file reads?
00:42 aronwp yes probably, i currently have 35 WordPress sites and I'm using gluster for the www folder
00:43 aronwp so with all the small WordPress php files and also the cache folders it's a lot of files
00:44 msolo right. i'm fairly new to gluster tuning, so i don't know where the cache lives in gluster. is it the case that your clients are also servers?
00:44 aronwp I'm actually running gluster on 2 backend web servers in replicate and I want to move the pool to separate servers
00:44 msolo i see
00:45 msolo are you using the NFS client, or the FUSE client?
00:45 aronwp i have a loadbalancer in front of 2 web servers (LEMP) and gluster is running on those, so it was not an ideal setup but worked to get started
00:45 xleo joined #gluster
00:46 msolo from what i can tell, the gluster servers (those running glusterfsd) will have the files in their page cache
00:46 msolo what i can't say for certain, is whether clients will also cache that data
00:46 msolo i suspect that is tunable to some degree, but my NFS foo is a bit weak at the moment
00:47 msolo i've noticed some performance differences between FUSE and NFS clients
00:47 msolo and i think that is workload-specific
00:47 msolo if you are doing many small reads, you might try NFS if you haven't
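[A sketch of the two client styles discussed above, in case it helps a reader following along. The volume name `www`, server name `gfs1`, and mount point are placeholders, not from the log; the NFS options are the usual ones for gluster's built-in NFSv3 server.]

```shell
# FUSE (native) client: I/O goes through the glusterfs client process,
# which talks to all replicas directly
mount -t glusterfs gfs1:/www /var/www

# gluster's built-in NFSv3 server: the kernel NFS client's attribute and
# page caching can help small-file read workloads like WordPress docroots
mount -t nfs -o vers=3,mountproto=tcp,nolock gfs1:/www /var/www
```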
00:48 aronwp at high cpu usage times glusterfs uses about 30-50% and so does glusterfsd
00:48 aronwp my server without gluster never used more than 30%
00:48 msolo right, well, your server is doing substantially more with gluster
00:49 msolo certain writes will cost much more
00:49 aronwp I haven't tried nfs and was going to but decided to move the gluster server off the web servers. Easier to add more nodes that way anyways
00:49 msolo and there is the general overhead of server network chatter
00:50 msolo i agree -- partitioning the service is a smart move regardless
00:50 aronwp ya, I was just trying to figure out an ideal setup and how many servers
00:51 aronwp 2 x 2gb or 4 x 1gb or 2x2gb & 2x1gb...
00:52 msolo gluster is easier to manage when the machines are homogenous
00:53 aronwp homogenous?
00:53 msolo all the same basic configuration
00:53 msolo 2 cpu, 2 gb ram, 2 disk, etc
00:54 aronwp ok so I can always add more bricks later
00:54 msolo i would say move to the 2 x 2gb
00:54 msolo once gluster is isolated, you can determine your next bottlenecks
00:55 msolo if you need more bricks or cpus, you add them
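[A sketch of what adding bricks to a replica-2 volume later looks like; the volume name `www` and the server/brick paths are hypothetical. Bricks must be added in multiples of the replica count.]

```shell
# add one new replica pair (bricks come in multiples of 2 for replica 2)
gluster volume add-brick www replica 2 \
    gfs3:/export/brick1 gfs4:/export/brick1

# spread existing files onto the new bricks
gluster volume rebalance www start
gluster volume rebalance www status
```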
00:55 aronwp ya, that's probably the best way to go
00:55 msolo isolation is a general recipe for success
00:57 aronwp gluster is the reason for the high cpu usage. without it the servers ran super fast
01:00 msolo joined #gluster
02:02 msolo joined #gluster
02:09 aronwp joined #gluster
03:41 aronwp joined #gluster
04:05 plarsen joined #gluster
04:40 JoeJulian joined #gluster
05:28 shubhendu_ joined #gluster
05:31 LebedevRI joined #gluster
05:42 sputnik13 joined #gluster
06:54 bala joined #gluster
07:07 rjoseph joined #gluster
07:19 bala joined #gluster
07:45 rajesh joined #gluster
08:01 Philambdo joined #gluster
08:03 elico joined #gluster
08:27 sputnik13 joined #gluster
09:01 sputnik13 joined #gluster
09:07 Pupeno joined #gluster
09:12 Bardack joined #gluster
09:34 Zordrak_ joined #gluster
09:51 calum_ joined #gluster
10:28 glusterbot New news from newglusterbugs: [Bug 1138992] gluster.org broken links <https://bugzilla.redhat.com/show_bug.cgi?id=1138992>
10:28 clyons joined #gluster
10:58 getup- joined #gluster
10:59 VerboEse joined #gluster
11:48 haomaiwa_ joined #gluster
12:00 haomaiw__ joined #gluster
12:47 bala joined #gluster
12:53 diegows joined #gluster
13:16 Pupeno joined #gluster
13:19 Pupeno_ joined #gluster
13:21 soumya joined #gluster
13:45 tdasilva joined #gluster
14:12 recidive joined #gluster
14:15 clyons|2 joined #gluster
14:53 clyons joined #gluster
15:42 bala joined #gluster
15:42 sputnik13 joined #gluster
15:44 edward1 joined #gluster
15:52 rotbeard joined #gluster
16:53 plarsen joined #gluster
17:36 Pupeno joined #gluster
17:42 Lilian joined #gluster
17:54 hagarth joined #gluster
18:05 Philambdo joined #gluster
18:11 daxatlas joined #gluster
18:13 bennyturns joined #gluster
18:16 MacWinner joined #gluster
18:31 daxatlas joined #gluster
18:49 recidive joined #gluster
19:06 xleo joined #gluster
19:08 daxatlas joined #gluster
19:17 Pupeno_ joined #gluster
19:37 daxatlas joined #gluster
19:45 mkzero joined #gluster
20:15 _Bryan_ joined #gluster
20:15 Pupeno joined #gluster
20:28 DV joined #gluster
21:20 tg2 how long does it take a rebalance to stop once you issue the stop command?
21:20 juhaj joined #gluster
21:21 tg2 i'm going on 1+ hours since I hit stop, the status shows "stopped" on rebalance status, but i can't remove-brick because it says a rebalance is running
21:28 LHinson joined #gluster
21:29 tg2 > gluster volume rebalance storage stop -> Stopped rebalance process on volume storage
21:29 tg2 gluster volume remove-brick storage storage:/storage1 start -> Rebalance is in progress. Please retry after completion
21:34 juhaj joined #gluster
21:39 tg2 JoeJulian, kkeithley1 - any way to force it out of rebalance mode?
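[Not answered in the log, but for reference: when rebalance status shows "stopped" yet remove-brick still refuses to start, a commonly suggested workaround is restarting glusterd on the node that coordinated the rebalance so it drops the stale task state. This is an assumption about tg2's situation, not advice from the channel; note that remove-brick runs its own data migration and needs a final commit.]

```shell
# confirm what gluster thinks the rebalance state is
gluster volume rebalance storage status

# restarting the management daemon (not the bricks) can clear a stale
# in-progress rebalance task; glusterfsd processes keep serving data
service glusterd restart

# remove-brick starts its own migration; commit only once it completes
gluster volume remove-brick storage storage:/storage1 start
gluster volume remove-brick storage storage:/storage1 status
gluster volume remove-brick storage storage:/storage1 commit
```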
21:48 Lilian joined #gluster
21:48 James joined #gluster
21:53 juhaj joined #gluster
22:04 sickness joined #gluster
22:05 stickyboy joined #gluster
22:13 m0zes joined #gluster
22:42 daxatlas joined #gluster
22:55 Pupeno_ joined #gluster
23:27 Pupeno joined #gluster
23:30 glusterbot New news from newglusterbugs: [Bug 1058300] VMs do not resume after paused state and storage connection to a gluster domain (they will also fail to be manually resumed) <https://bugzilla.redhat.com/show_bug.cgi?id=1058300>
23:37 LHinson joined #gluster
23:52 bala joined #gluster
23:57 daxatlas joined #gluster
