
IRC log for #gluster, 2015-09-27


All times shown according to UTC.

Time Nick Message
00:01 haomaiwa_ joined #gluster
00:27 mpietersen joined #gluster
01:01 haomaiwang joined #gluster
01:09 overclk joined #gluster
01:15 amye left #gluster
01:41 DV_ joined #gluster
01:46 nangthang joined #gluster
01:52 nishanth joined #gluster
02:00 dgbaley joined #gluster
02:01 amye- joined #gluster
02:01 haomaiwa_ joined #gluster
02:07 zhangjn joined #gluster
02:13 zhangjn joined #gluster
02:26 zhangjn_ joined #gluster
02:46 zhangjn joined #gluster
02:49 zhangjn joined #gluster
02:56 nangthang joined #gluster
03:01 haomaiwa_ joined #gluster
03:34 nangthang joined #gluster
03:35 nbalacha joined #gluster
03:47 [7] joined #gluster
04:01 haomaiwa_ joined #gluster
04:23 hagarth joined #gluster
04:43 RameshN joined #gluster
05:01 haomaiwa_ joined #gluster
05:15 GB21 joined #gluster
05:53 haomaiwang joined #gluster
06:01 haomaiwa_ joined #gluster
06:06 _fortis joined #gluster
06:11 LebedevRI joined #gluster
06:23 Akee joined #gluster
06:39 Peppard joined #gluster
07:01 haomaiwa_ joined #gluster
07:55 maveric_amitc_ joined #gluster
08:01 haomaiwa_ joined #gluster
09:01 haomaiwa_ joined #gluster
09:16 skoduri joined #gluster
09:22 prg3 joined #gluster
09:41 GB21 joined #gluster
09:46 nbalacha joined #gluster
10:14 neha_ joined #gluster
10:39 EinstCrazy joined #gluster
10:52 davidself joined #gluster
12:42 shubhendu__ joined #gluster
12:54 DV joined #gluster
12:55 skoduri joined #gluster
13:04 bluenemo joined #gluster
13:08 bluenemo hi guys (hi to JoeJulian who helped me the last days too :) ) So I've tweaked my glusterfs setup and am now using two file servers with replication and four web workers, mounting gluster via NFS. It works out quite nicely. I tweaked the NFS cache a bit and am also using cachefilesd with the mounts now. The speed isn't incredibly fast - but it's much better than the first setup with glusterfs's own mount. I can write 10MB with dd at 100MB/s.
13:08 bluenemo Judging by my benchmarks, this will work very well for my production site. I'm quite happy that I found my redundant NFS Server that just works! :)
13:09 bluenemo Also I wrote a saltstack formula for the setup. It can set up disks, bricks, volumes and settings for servers, plus mounts via nfs or glusterfs for "clients". I'll look into writing some docs for it and publishing that soon. It doesn't use the config files but the gluster volume set / get commands; imho that will do for now. Config files seemed a bit complicated to me atm.
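A minimal sketch of the kind of setup bluenemo describes above (two file servers in a 2-way replica, web workers mounting the volume over NFS); the hostnames fs1/fs2, the volume name webvol and the brick paths are illustrative placeholders, not taken from his setup:

    # on one of the file servers: peer the second node and create a replicated volume
    gluster peer probe fs2
    gluster volume create webvol replica 2 fs1:/data/brick1/webvol fs2:/data/brick1/webvol
    gluster volume start webvol

    # on each web worker: mount it over NFSv3 (Gluster 3.x ships a built-in NFSv3 server)
    mount -t nfs -o vers=3,tcp,noatime fs1:/webvol /var/www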
13:12 mhulsman joined #gluster
13:19 nbalacha joined #gluster
13:33 haomaiwa_ joined #gluster
13:44 nishanth joined #gluster
13:49 overclk joined #gluster
13:53 Mr_Psmith joined #gluster
13:54 mhulsman joined #gluster
14:01 haomaiwa_ joined #gluster
14:24 shyam joined #gluster
14:32 Philambdo joined #gluster
15:01 haomaiwa_ joined #gluster
15:11 RedW joined #gluster
15:28 overclk joined #gluster
15:33 plarsen joined #gluster
15:34 shaunm joined #gluster
15:56 ir8 Hey gang, can someone lend a helping hand and help me figure out what is going on with a brick of mine?
15:56 ir8 http://pastebin.com/GvCF12Tp (an output of my status)
15:56 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:57 ir8 http://fpaste.org/271981/44336943/
15:57 glusterbot Title: #271981 Fedora Project Pastebin (at fpaste.org)
15:57 ir8 the issue I am having is that my writes are taking an unholy amount of time.
16:01 overclk joined #gluster
16:01 haomaiwa_ joined #gluster
16:06 nangthang joined #gluster
16:17 overclk joined #gluster
16:19 overclk joined #gluster
16:34 overclk joined #gluster
16:36 mhulsman joined #gluster
16:36 cholcombe joined #gluster
16:42 haomaiwa_ joined #gluster
16:54 JoeJulian bluenemo: fwiw, using the set/get commands is better anyway. That's the only way for live clients to be notified of state changes.
16:57 bluenemo Ah ok, yeah, makes sense. I'm just parsing it from a py dict now, as in gluster volume set {{ key }} {{ value }}, and then I run gluster volume get all and just grep for key and value
16:57 bluenemo works :)
16:57 bluenemo but yeah, using NFS with some tweaks now really rocks it!
16:58 bluenemo (as far as I can tell - didn't run apache benchmark against it yet, but it feels very smooth so far)
16:58 bluenemo as in, from my feeling, not slower than the normal nfs server on a single node
16:58 bluenemo JoeJulian,
16:59 bluenemo not faster either, but I don't need faster. And now that I have a CDN in front of it, it will have to serve much less stuff anyway.
17:00 JoeJulian bluenemo: Wasn't there already state management in salt for gluster volume settings?
17:02 bluenemo there is a module, but no state module.. the module "does" stuff regardless of whether it has been "done before". a state module checks the state first. It's kinda salt logic..
17:02 bluenemo ;)
17:02 bluenemo But yes, I could actually use the module in a state and just check whether I need to call it, via the bash command gluster volume get | grep key value
17:03 bluenemo however the module is not in production repos for ubuntu yet, and I'm using those
17:03 bluenemo if it shows up, its only a change of a couple of lines
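A rough sketch of the guard bluenemo describes (only run "volume set" when "volume get | grep" shows the option is not already at the desired value); the volume name webvol and the option/value are illustrative, performance.cache-size being just one example of a settable tunable:

    # idempotent-ish option setting: skip the set if get already reports the value
    gluster volume get webvol performance.cache-size | grep -q '256MB' \
        || gluster volume set webvol performance.cache-size 256MB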
17:03 bluenemo did you write it JoeJulian ?
17:03 bluenemo (hm no, if you ask you probably didn't ;) )
17:03 JoeJulian I did not.
17:04 JoeJulian I'm writing state wrappers for the tls modules atm.
17:04 ir8 I am having an issue when extracting tarballs.
17:04 ir8 A tarball of 4MB is taking roughly 5 minutes.
17:05 ir8 that is excessive. How can I isolate the cause?
17:07 Rapture joined #gluster
17:09 bluenemo ir8, tell us sth about your setup :)
17:11 bluenemo ah btw. When you install the latest version of gluster via PPA, imho it should include the new NFS server as a recommendation.
17:11 ir8 bluenemo: right now its a three node cluster.
17:11 bluenemo currently it will disable the old nfs, then try to start the new nfs and then say it's not present
17:11 ir8 Its a replicated
17:11 ir8 Its a replicated configuration.
17:11 bluenemo what kind of stuff do you have in it? do you mount -t glusterfs ?
17:12 bluenemo (stuff: php webroot, movies, backup files, virt machine images)
17:12 ir8 localhost:vol0 on /mnt type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
17:12 ir8 its web cluster serving 3 wordpress sites.
17:12 bluenemo ah ok. I just did a similar setup. try using NFSv3 for mount. my mount options:
17:13 bluenemo omega:/gfs_fin_web /var/www nfs nosharecache,fsc=fincallorca_web,noatime,tcp,bg,nosuid,rsize=65536,wsize=65536,hard,proto=tcp 0 0
17:13 bluenemo also I configured cachefilesd..
17:13 bluenemo hm shouldn't have posted all of it M)
17:13 bluenemo well if you need a good holiday ;)
17:14 bluenemo i8, cachefilesd: https://paste.debian.net/hidden/f03e1bfa/
17:14 glusterbot Title: Debian Pastezone (at paste.debian.net)
17:15 bluenemo the line above starting with omega is the /etc/fstab options I used for NFS mounting - your opinion on it would be interesting JoeJulian :)
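For reference, a hedged sketch of what an fstab entry plus cachefilesd setup along those lines can look like; the export, mount point and cache tag are illustrative, and the culling thresholds are just the stock cachefilesd.conf defaults, since the pasted config itself isn't reproduced here:

    # /etc/fstab: NFSv3 mount with FS-Cache enabled via the "fsc" option
    #   fs1:/webvol  /var/www  nfs  vers=3,tcp,bg,hard,noatime,rsize=65536,wsize=65536,fsc  0 0

    # /etc/cachefilesd.conf: cache location, tag, and block-usage culling thresholds
    #   dir /var/cache/fscache
    #   tag webcache
    #   brun 10%
    #   bcull 7%
    #   bstop 3%

    # on Debian/Ubuntu the daemon is gated by /etc/default/cachefilesd
    sed -i 's/^#*RUN=.*/RUN=yes/' /etc/default/cachefilesd
    service cachefilesd start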
17:15 ir8 bluenemo so wait, you're just mounting a file?
17:16 bluenemo Using this, kind of everything feels very smooth now - but as stated above, I did not run apache benchmark on it yet, so idk how it behaves when putting random file lookups via php on it.
17:16 bluenemo ah yes, you can use a partition too
17:16 ir8 correct.
17:17 bluenemo might be cooler to do that, yes. I didn't want to click a small disk together on Amazon so I just used a file
17:17 bluenemo ext4 options might be tunable for the use case, didn't do that either
17:18 ir8 [root@pinac2]/home/elange# mount -t nfs -o tcp,nfsvers=3,nolock localhost:/vol0 /mnt2
17:18 ir8 mount.nfs: Connection timed out
17:18 ir8 hmm
17:18 ir8 interesitng.
17:18 bluenemo look in gluster volume status to see if nfs is on
17:19 bluenemo if not it's sth like gluster volume set my_vol nfs.disable off
17:19 bluenemo (to enable it M)
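A hedged sketch of the checks for ir8's timeout, using the vol0 and /mnt2 names from his own paste; the idea is simply that the built-in gluster NFS server has to be up on the node you mount from, and NFSv3 mounts also need rpcbind:

    gluster volume status vol0                 # the "NFS Server on localhost" line should show Y in the Online column
    gluster volume set vol0 nfs.disable off    # enable the built-in NFS server if it was disabled
    service rpcbind status || service rpcbind start
    mount -t nfs -o vers=3,tcp,nolock localhost:/vol0 /mnt2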
17:19 bluenemo also for php, imho you wont need atime
17:20 ir8 right.
17:20 bluenemo I read this http://www.netapp.com/us/media/tr-3183.pdf
17:20 bluenemo and this http://nfs.sourceforge.net/nfs-howto/ar01s05.html
17:20 glusterbot Title: 5. Optimizing NFS Performance (at nfs.sourceforge.net)
17:20 bluenemo imho my mount options are ok, you can sure further optimize stuff
17:20 bluenemo depends on what kinda load your wp gets
17:20 ir8 it's about 1.2 million page views a month.
17:21 ir8 its timing out still even with it being on.
17:21 bluenemo is it on the same server?
17:22 bluenemo as in are you mounting nfs on the server running gluster server?
17:22 bluenemo fw?
17:22 ir8 localhost:/vol0 on /mnt2 type nfs (rw,tcp,nfsvers=3,nolock,addr=127.0.0.1)
17:22 ir8 that work on server1 and not on server2
17:24 bluenemo hm
17:24 bluenemo can you mount on both servers via gluster? as in specifying both nodes (not at the sime time) ;)
17:24 bluenemo same
17:25 ir8 yeah host data3 mounts correctly.
17:25 bluenemo well write-intensive tasks are what gluster isn't exactly nifty for - you can however stripe the writes.
17:26 bluenemo I'm currently test-untarring a linux kernel. takes 1 second on my laptop. takes much more than a minute in gluster via nfs ;)
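A throwaway comparison in the spirit of that test, run from a client with the volume mounted (the mount path and tarball name are just examples); it tends to show that one big sequential write is bandwidth-bound while an untar pays a network round trip per file create:

    cd /mnt/gluster-or-nfs
    time dd if=/dev/zero of=bigfile bs=1M count=100 conv=fsync   # one large streaming write
    time tar xf ~/linux-4.2.tar.xz                               # thousands of small file creates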
17:26 ir8 okay.
17:26 ir8 So NFS is much faster it seems.
17:26 bluenemo so for reads via NFS it's kind of ok for me so far - as I said, no production load yet. However JoeJulian runs some php in production with gluster. He advises caching as close to the client as possible
17:27 bluenemo so yes, with gluster you definitely get much less speed, especially for writes but also for reads, compared to your "local disk"
17:27 bluenemo however you get the cluster fs fanciness for little setup and maintenance cost imho
17:27 ir8 correct.
17:27 bluenemo (what I've seen so far)
17:27 ir8 I need to figure out what is wrong with my server2
17:27 bluenemo and with php it's mostly reads..
17:27 bluenemo yeah even without cachefilesd its way faster. i have no real idea why
17:28 bluenemo but its also nice as I was looking to replace NFS and now I just can replace the backends with basically no downtime M0
17:28 bluenemo M)
17:28 bluenemo I also think (but not sure) that looking up 20 files like php does is more expensive than just getting some jpg
17:29 bluenemo but I'm not sure about that
17:30 ir8 yeah totally.
17:30 bluenemo the thing is that every write makes the "round trip" across the servers. that costs time
17:30 bluenemo but yeah for hosting stuff that's kinda static, write isn't that important.
17:30 armyriad joined #gluster
17:31 bluenemo or like a fs for streaming stuff
17:31 ir8 not for streaming at all.
17:31 ir8 Its for static files.
17:35 bluenemo then give it a try :)
17:47 ir8 yeah
17:50 ir8 okay mount as nfs worked much better
18:12 ir8 joined #gluster
18:12 ir8 Hey guys thanks for the advice.
18:12 ir8 I removed fuse from the mix and used NFS.
18:13 marlinc joined #gluster
18:14 sage joined #gluster
18:21 Philambdo joined #gluster
18:26 bluenemo yw :)
18:26 bluenemo did you try cachefilesd? it makes quite the difference for me
18:29 ir8 not yet
18:30 ir8 mount nfs and volume show different sizes
18:30 sage joined #gluster
18:30 ir8 almost like a split brain
18:35 Philambdo joined #gluster
19:07 overclk joined #gluster
19:14 blu_ joined #gluster
19:15 blu__ joined #gluster
19:15 Philambdo joined #gluster
19:30 Philambdo joined #gluster
19:49 marlinc joined #gluster
20:45 Mr_Psmith joined #gluster
21:15 plarsen joined #gluster
21:18 ir8 joined #gluster
21:18 ir8 Thanks again.
22:26 Danishman joined #gluster
23:34 gildub joined #gluster
23:36 zhangjn joined #gluster
