IRC log for #gluster, 2013-05-19

All times shown according to UTC.

Time Nick Message
00:05 StarBeast joined #gluster
00:24 nueces joined #gluster
00:25 vpshastry joined #gluster
00:35 awheeler joined #gluster
01:22 flrichar joined #gluster
01:38 majeff joined #gluster
01:45 jag3773 joined #gluster
03:34 jag3773 joined #gluster
03:37 mjrosenb joined #gluster
03:58 glusterbot New news from newglusterbugs: [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
03:58 m0zes joined #gluster
04:32 edong23 joined #gluster
06:03 92AAANQ55 joined #gluster
06:30 StarBeast joined #gluster
07:19 jiku joined #gluster
07:31 Chiku|dc joined #gluster
07:31 Chiku|dc joined #gluster
07:46 StarBeast joined #gluster
07:46 jiku joined #gluster
08:05 Guest79483 joined #gluster
08:56 StarBeast joined #gluster
08:58 bulde joined #gluster
08:59 glusterbot New news from newglusterbugs: [Bug 962226] 'prove' tests failures <http://goo.gl/J2qCz>
09:30 rotbeard joined #gluster
09:32 StarBeas_ joined #gluster
09:33 StarBeast joined #gluster
10:09 alexturner joined #gluster
10:09 alexturner Hey all, totally just deleted my glusterd.info file could someone pastebin one - love you long time
10:15 alexturner Ah
10:15 alexturner Stupid question
10:25 alexturner I've deleted the contents of /etc/glusterd/peers on all of the machines; all UUIDs are unique, yet whenever I do a probe I'm getting "faac-01 is already part of another cluster"
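[For anyone hitting the same "already part of another cluster" error: it usually means the node being probed still has stale peer state of its own, not the node doing the probing. A hedged sketch of the usual fix, using the 3.2-era /etc/glusterd path implied above (later releases moved state to /var/lib/glusterd):]

```
# On the node that refuses the probe, stop glusterd, clear its stale
# peer entries, and restart before probing again.
service glusterd stop
rm -f /etc/glusterd/peers/*
service glusterd start
```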
10:31 kobigurk joined #gluster
10:31 kobigurk hello everyone!
10:31 kobigurk I am considering testing GlusterFS for our uses and I've come to get advice
10:32 kobigurk we have tens of millions of files (and will have billions the next year, and growing) of images (500kb-5mb)
10:32 kobigurk we currently use Windows DFS, and it performs very poorly on replicating these files
10:33 kobigurk would Gluster perform better?
10:33 kobigurk we currently get about 1MB/s network traffic with DFS replication, while copying a large file performs at 90MB/s
10:41 H__ kobigurk: don't know if it will be faster, but traversing many objects on gluster performs badly IMO. Still it might help you. Do test.
10:45 92AAANQ55 hello! i have this interesting case :)
10:45 92AAANQ55 which makes me wonder
10:45 92AAANQ55 i have gluster - with share to windows with samba
10:46 92AAANQ55 and when i test writing speeds - the samba writes 50MB/s, but linux client writes only 20MB/s
10:46 92AAANQ55 :)
10:47 92AAANQ55 how is this even possible? because samba uses the fuse client to write to gluster, and the gluster client does too... where is the difference?
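[One way to sanity-check the samba-vs-fuse gap above is to run the same single-stream write against each path and compare the rates dd reports. A sketch; TARGET is a placeholder you would point first at the FUSE mount, then at the Samba mount (it defaults to a temp dir here). One common cause of such a gap is that one path may benefit from write caching; conv=fdatasync forces the data to be flushed before a rate is reported, which makes the comparison fairer.]

```shell
# Single-stream write test; conv=fdatasync makes dd wait for the data
# to reach stable storage before reporting a throughput figure.
TARGET=${TARGET:-$(mktemp -d)}
result=$(dd if=/dev/zero of="$TARGET/ddtest.bin" bs=1M count=64 conv=fdatasync 2>&1)
rm -f "$TARGET/ddtest.bin"
echo "$result"
```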
10:50 kobigurk H__: what do you mean by traversing many objects? I'm considering only replication performance
10:51 H__ think recursive ls, or rsync, or find.
10:54 kobigurk oh, I don't want that at all
10:54 kobigurk what I really need is backup
10:54 kobigurk across locations
10:54 kobigurk not locations, actually, but servers located near one another
10:57 92AAANQ55 kobigurk: I am doing the same thing :)
10:58 kobigurk I totally missed your messages
10:59 kobigurk do share!
10:59 kobigurk you have millions of files?
10:59 92AAANQ55 we run render farm
10:59 kobigurk and you persist those files for future use?
11:01 92AAANQ55 well, the files change
11:01 92AAANQ55 with users
11:01 92AAANQ55 but the storage capacity and number of files is large
11:01 92AAANQ55 and the sizes vary from 10kb to hundreds of megs or couple gigs
11:02 92AAANQ55 and for now theres like couple hundred thousands
11:03 kobigurk interesting
11:03 92AAANQ55 but we are growing rapidly and windows too is getting really slow with this setup
11:03 kobigurk what are you using in windows now?
11:03 kobigurk (actually, the question is relevant only if we're talking about replication)
11:03 kobigurk do you consider replication major as well?
11:03 92AAANQ55 yes
11:04 kobigurk awesome
11:04 92AAANQ55 replication is priority for us
11:04 kobigurk !
11:04 kobigurk so what are you using in windows? just to compare :)
11:04 92AAANQ55 no, theres no replication in windows right now
11:04 92AAANQ55 theres only raid
11:05 kobigurk oh, i see
11:05 kobigurk and you want to replace the raid?
11:07 92AAANQ55 we will use it,
11:07 92AAANQ55 but that's not the case
11:07 92AAANQ55 your question is about performance
11:08 92AAANQ55 and with lots of small files gluster is not that impressive when it comes to writing
11:08 kobigurk maybe I'm naive, but I'm not worried for now about writing
11:08 kobigurk even NTFS handles that for now
11:08 kobigurk the problem we have is the replication
11:09 92AAANQ55 gluster gives you ability to set each brick on different NIC
11:10 92AAANQ55 with that, and some good planning, you should be way better off than with windows
11:13 kobigurk alright
11:13 kobigurk thanks
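[For reference, a two-node replicated volume of the kind discussed above can be created along these lines. Hostnames, the brick path, and the volume name are placeholders; putting each brick's hostname on a different NIC/subnet is what gives the per-brick network separation mentioned above:]

```
# Probe the peer, then create and start a 2-way replicated volume.
# server1/server2 and /export/brick1 are illustrative.
gluster peer probe server2
gluster volume create myvol replica 2 server1:/export/brick1 server2:/export/brick1
gluster volume start myvol
```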
11:35 ChikuLinu__ joined #gluster
12:01 flrichar joined #gluster
12:03 Chiku|dc joined #gluster
12:48 andrei__ joined #gluster
13:26 jiku joined #gluster
13:26 jikz_ joined #gluster
13:30 glusterbot New news from newglusterbugs: [Bug 961856] [FEAT] Add Glupy, a python bindings meta xlator, to GlusterFS project <http://goo.gl/yCNTu>
13:42 chirino joined #gluster
14:11 jiku joined #gluster
14:28 StarBeas_ joined #gluster
14:38 maple joined #gluster
14:57 nightwalk joined #gluster
15:01 majeff joined #gluster
15:20 robos joined #gluster
15:53 jiku joined #gluster
16:05 ujjain joined #gluster
17:05 dustint joined #gluster
17:17 StarBeast joined #gluster
17:20 MrNaviPacho joined #gluster
17:33 jiku joined #gluster
17:33 bulde joined #gluster
17:34 m0zes joined #gluster
18:03 plarsen joined #gluster
18:09 piotrektt_ joined #gluster
18:14 jiku joined #gluster
18:31 andrei__ joined #gluster
18:43 andrei__ joined #gluster
18:47 robos joined #gluster
19:09 andrei__ joined #gluster
19:46 plarsen joined #gluster
20:05 lalatenduM joined #gluster
20:15 MrNaviPacho joined #gluster
20:21 Guest79483 joined #gluster
20:31 kris-- joined #gluster
20:38 Airbear_ joined #gluster
20:41 Kins joined #gluster
21:09 lbalbalba joined #gluster
21:10 ramkrsna joined #gluster
21:10 ramkrsna joined #gluster
21:57 lbalbalba hi anyone around for a little help with git ?
21:57 lbalbalba my 1st rfc.sh submit failed (something about the comment line being too long), and now rfc.sh gives me this: http://fpaste.org/13084/36900060/
21:57 glusterbot Title: #13084 Fedora Project Pastebin (at fpaste.org)
21:58 lbalbalba the (new) file is './tests/basic/nfs-fops.t'.
21:58 lbalbalba and i don't know enough about git to fix it :(
22:09 gorkhaan joined #gluster
22:10 gorkhaan Hello. I have 2 brick, with replica mode. Everything works great. Appservers mounted glusterfs volume with the native fuse client, meaning multiple routes are available to data transfer. BUT
22:10 gorkhaan when I remove one (simulate a brick fail), a huge network lag takes place, probably a timeout, and the appsrv just hangs, doing nothing.
22:11 gorkhaan How can I decrease this timeout, or what can i do to get this hang down to around 5-10 sec?
22:13 lbalbalba yeah. mv'ed the file out of the source tree, did a new git branch, and mv'ed the file back over in. its prolly not the way its supposed to go, but it works ;)
22:13 gorkhaan Is this the option what I should decrease?  network.ping-timeout
22:20 Nagilum_ that would be my first guess
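[For the record, the option being discussed is set per volume. A hedged sketch; the volume name and value are placeholders, and as noted further down in the log the docs advise against setting it too low:]

```
# Lower the time before a dead brick is declared gone (default 42 s).
# Too low a value makes reconnects after a flapping network expensive.
gluster volume set myvol network.ping-timeout 10
```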
22:20 fidevo joined #gluster
22:21 lbalbalba hrm. docs say reaching that timeout should be avoided: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#network.ping-timeout
22:21 glusterbot <http://goo.gl/NugxI> (at gluster.org)
22:24 lbalbalba also, docs say default is 42 sec, not the 5 you mention
22:25 gorkhaan lbalbalba,  yeah, so any solution for this? I would like to have a quicker brick failure
22:25 lbalbalba ... but maybe i shouldnt talk about things of which i know nothing about :P
22:26 gorkhaan xD
22:27 gorkhaan If an amazon ELB fails, I am more f_cked than when "only" the instance fails. At least that does a timeout for some time in the future.
22:34 gorkhaan I mean it's 2013, we have 10Gb connections, why don't "we" optimize TCP protocol, timeouts, etc.   :)
22:39 lbalbalba hrm. maybe its the appsrv that takes 5-10 sec to figure out the brick is gone ?
22:40 lbalbalba I mean it's 2013, why don't "we" optimize appsrvs ;)
22:43 lbalbalba what happens when you run the app from a local disk, and remove/unmount that from under it ?
22:45 gorkhaan damn I am good.
22:45 gorkhaan http://www.sekuda.com/overriding_the_default_linux_kernel_20_second_tcp_socket_connect_timeout
22:45 glusterbot <http://goo.gl/JF7MN> (at www.sekuda.com)
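[The ~20-second connect timeout that article describes falls out of the exponential backoff of SYN retransmissions; the retry count is what the kernel's tcp_syn_retries sysctl controls. A rough model of the arithmetic, assuming a 3 s initial retransmission timeout as on the older kernels the article targets (modern kernels start at 1 s):]

```python
def syn_connect_timeout(syn_retries, initial_rto=3.0):
    """Seconds before connect() gives up: the initial SYN plus
    syn_retries retransmissions, with the RTO doubling each time."""
    return sum(initial_rto * 2 ** k for k in range(syn_retries + 1))

print(syn_connect_timeout(2))  # 3 + 6 + 12 = 21.0, roughly the ~20 s default
print(syn_connect_timeout(5))  # 189.0, why lowering the retry count shortens the hang
```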
22:46 gorkhaan I have modified sysctl.conf on 2 appsrv and on the 2 bricks.
22:46 gorkhaan it worked. I wonder why gluster doc doesnt mention this.
22:47 gorkhaan plus borrowed a few options from here: http://www.enigma.id.au/linux_tuning.txt
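[The exact options gorkhaan set aren't shown in the log. The article above centers on the SYN retry count, so a minimal /etc/sysctl.conf fragment along those lines, with keepalive tuning of the sort the linux_tuning.txt page covers, might look like this (values are illustrative, not a recommendation; apply with sysctl -p):]

```
# Fewer SYN retransmissions => connect() fails faster (default is 5)
net.ipv4.tcp_syn_retries = 3
# Detect dead peers on established connections sooner via keepalives
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_intvl = 10
net.ipv4.tcp_keepalive_probes = 3
```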
22:48 gorkhaan my postgresql repmgr autofailover has the same timeout issue. :) let's fix that too
22:49 bstromski joined #gluster
22:49 lbalbalba :)
23:20 RicardoSSP joined #gluster
23:20 RicardoSSP joined #gluster
23:35 majeff joined #gluster