
IRC log for #gluster, 2013-10-07


All times shown according to UTC.

Time Nick Message
00:59 \_pol joined #gluster
01:18 \_pol joined #gluster
01:41 msciciel_ joined #gluster
02:13 harish_ joined #gluster
02:36 sprachgenerator joined #gluster
02:56 \_pol joined #gluster
02:58 \_pol joined #gluster
03:00 shylesh joined #gluster
03:00 saurabh joined #gluster
03:12 bharata-rao joined #gluster
03:14 shubhendu joined #gluster
03:15 davinder joined #gluster
03:16 kshlm joined #gluster
03:22 dusmant joined #gluster
03:22 ppai joined #gluster
03:41 shyam joined #gluster
03:45 raghu joined #gluster
03:46 itisravi joined #gluster
03:58 vpshastry joined #gluster
04:02 sgowda joined #gluster
04:08 raghu joined #gluster
04:13 RameshN joined #gluster
04:21 shruti joined #gluster
04:24 kanagaraj joined #gluster
04:32 ndarshan joined #gluster
04:43 anands joined #gluster
05:14 nshaikh joined #gluster
05:15 rjoseph joined #gluster
05:24 aravindavk joined #gluster
05:27 raghu joined #gluster
05:36 spandit joined #gluster
05:36 davinder joined #gluster
05:39 avati joined #gluster
05:42 satheesh1 joined #gluster
05:43 bala joined #gluster
06:00 satheesh1 joined #gluster
06:04 rc10 joined #gluster
06:15 shubhendu joined #gluster
06:15 rastar joined #gluster
06:20 psharma joined #gluster
06:23 rjoseph joined #gluster
06:23 sgowda joined #gluster
06:24 kPb_in joined #gluster
06:25 vimal joined #gluster
06:27 ngoswami joined #gluster
06:27 eseyman joined #gluster
06:29 jtux joined #gluster
06:30 RameshN joined #gluster
06:43 polfilm joined #gluster
06:48 ctria joined #gluster
06:51 bulde joined #gluster
06:52 dusmant joined #gluster
06:54 RameshN joined #gluster
07:01 yongtaof joined #gluster
07:02 davinder2 joined #gluster
07:04 tjikkun_work joined #gluster
07:08 ricky-ticky joined #gluster
07:15 keytab joined #gluster
07:25 rjoseph joined #gluster
07:25 sgowda joined #gluster
07:28 bulde joined #gluster
07:31 rc10 hi, gluster doesn't auto-detect brick failures
07:31 rc10 is it expected?
07:34 dusmant joined #gluster
07:35 RedShift joined #gluster
07:36 RedShift how can I delete files that have been split brained?
07:41 andreask joined #gluster
07:51 mohankumar joined #gluster
08:00 shruti joined #gluster
08:04 20WADA8VI joined #gluster
08:04 mgebbe joined #gluster
08:07 dusmant joined #gluster
08:14 KORG joined #gluster
08:19 KORG Guys, is there an updated Administration Guide available for glusterfs 3.4?
08:39 ndevos ~split brain | RedShift
08:39 glusterbot RedShift: I do not know about 'split brain', but I do know about these similar topics: 'split-brain', 'splitbrain'
08:39 ndevos dammit, ,,(split-brain)
08:39 glusterbot To heal split-brain in 3.3+, see http://goo.gl/FPFUX .
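The article behind that link amounts to deleting the replica copy you do not trust directly on its brick and letting self-heal recreate it from the good copy. A minimal sketch of that procedure, assuming a replicated volume named myvol with a brick at /export/brick1 and an affected file at dir/file.txt (all names hypothetical):

    # on the server holding the bad copy, note the file's gfid
    getfattr -n trusted.gfid -e hex /export/brick1/dir/file.txt
    # remove the bad copy and its hard link under .glusterfs
    # (the link sits at .glusterfs/<first two hex chars>/<next two>/<full gfid>)
    rm /export/brick1/dir/file.txt
    rm /export/brick1/.glusterfs/ab/cd/abcd1234-ef56-7890-abcd-ef1234567890   # path built from the gfid above
    # let the surviving replica repopulate the file
    gluster volume heal myvol full

Stat'ing the affected file from a client mount triggers the same heal for a single file.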
08:41 ndevos rc10: depends on the version of glusterfs you're on, 3.4 includes http://review.gluster.org/5176 - check with 'gluster volume set help' if you have a storage.health-check-interval option
08:41 glusterbot Title: Gerrit Code Review (at review.gluster.org)
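A quick way to check for and turn on the health checker ndevos mentions, assuming a 3.4 install and a volume called myvol (the volume name is hypothetical):

    # the option exists only if the build carries the change from review 5176
    gluster volume set help | grep -A 2 health-check-interval
    # have each brick process check its underlying filesystem every 30 seconds
    gluster volume set myvol storage.health-check-interval 30
    gluster volume info myvol    # shows the option once it is set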
08:41 65MAA2LB6 joined #gluster
08:42 ndevos KORG: not sure if there is one, but https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html-single/Administration_Guide/index.html should be suitable too
08:42 glusterbot <http://goo.gl/3pp2Ll> (at access.redhat.com)
08:46 raghu joined #gluster
08:51 harish joined #gluster
08:52 rc10 is 3.4 a stable version for debian squeeze?
08:53 shruti joined #gluster
08:53 rc10 looks like there are no alpha, beta, or stable versions - there is only one
08:56 vshankar joined #gluster
08:57 rastar joined #gluster
09:03 tryggvil joined #gluster
09:07 rjoseph joined #gluster
09:13 lalatenduM joined #gluster
09:13 ndevos rc10: maybe the ,,(ppa) works for you?
09:13 glusterbot rc10: The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy
09:16 satheesh joined #gluster
09:17 shruti joined #gluster
09:30 ngoswami joined #gluster
09:30 Guest7741 joined #gluster
09:32 glusterbot New news from newglusterbugs: [Bug 1016000] Implementation of object handle based gfapi extensions <http://goo.gl/y8Xo7P>
09:32 Debolaz joined #gluster
09:42 RedShift thansk
09:42 RedShift *thanks
09:50 dusmant joined #gluster
10:01 mohankumar joined #gluster
10:13 meghanam joined #gluster
10:21 rjoseph joined #gluster
10:22 vpshastry joined #gluster
10:38 shruti joined #gluster
10:40 Norky joined #gluster
10:44 dusmant joined #gluster
10:46 Norky joined #gluster
10:48 shubhendu joined #gluster
10:52 ndarshan joined #gluster
10:59 X3NQ joined #gluster
10:59 rc10 is there any performance tuning for small files? I have millions of small files
11:01 rc10 http://thr3ads.net/gluster-users/2011/01/478601-very-bad-performance-on-small-files
11:01 glusterbot <http://goo.gl/APmZC7> (at thr3ads.net)
11:01 rc10 does this hold true still ?
11:07 shyam left #gluster
11:11 shruti joined #gluster
11:13 ndarshan joined #gluster
11:15 chirino joined #gluster
11:19 ppai joined #gluster
11:21 RameshN_ joined #gluster
11:24 edward2 joined #gluster
11:26 pkoro joined #gluster
11:35 yinyin joined #gluster
11:38 Norky if you have many files in a given directory, yes, performance will be poor. Using Gluster's NFS support will improve matters.
11:40 Norky v3.4 does address the problem to some degree, but it still does not bring metadata access over gluster native protocol to the same level as NFS
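Side by side, the two access paths Norky is comparing, with server1 and myvol as hypothetical names:

    # native FUSE mount: full GlusterFS semantics, but metadata-heavy workloads pay for it
    mount -t glusterfs server1:/myvol /mnt/myvol
    # Gluster's built-in NFS server (NFSv3 over TCP): kernel attribute caching tends to
    # help directories holding many small files
    mount -t nfs -o vers=3,proto=tcp server1:/myvol /mnt/myvol-nfs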
11:42 eseyman joined #gluster
11:49 vpshastry joined #gluster
12:00 shruti joined #gluster
12:04 andreask joined #gluster
12:05 itisravi joined #gluster
12:07 dusmant joined #gluster
12:12 ndarshan joined #gluster
12:19 ctria joined #gluster
12:26 yinyin joined #gluster
12:32 glusterbot New news from newglusterbugs: [Bug 1012863] Gluster fuse client checks old firewall ports <http://goo.gl/3UsZxe>
12:34 glusterbot New news from resolvedglusterbugs: [Bug 1006269] tests/basic/rpm.t takes too long <http://goo.gl/Dmrjxz> || [Bug 1004756] Not all tests call 'cleanup' in the end, causing difficulties with single test runs <http://goo.gl/NSdcM7>
12:39 hagarth joined #gluster
12:40 RameshN_ joined #gluster
12:44 lalatenduM joined #gluster
12:49 KORG joined #gluster
12:58 jtux joined #gluster
13:10 yinyin joined #gluster
13:13 ctria joined #gluster
13:17 kPb_in joined #gluster
13:18 abradley joined #gluster
13:23 mooperd joined #gluster
13:24 rastar joined #gluster
13:25 bennyturns joined #gluster
13:34 rwheeler joined #gluster
13:39 Staples84 joined #gluster
13:40 dneary joined #gluster
13:44 failshell joined #gluster
13:45 failshell joined #gluster
13:47 shylesh joined #gluster
13:51 harish_ joined #gluster
13:51 dusmant joined #gluster
13:52 kaptk2 joined #gluster
13:55 dtyarnell joined #gluster
13:56 polfilm joined #gluster
14:03 kkeithley ports
14:03 kkeithley @!ports
14:03 kkeithley @ports
14:03 glusterbot kkeithley: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
14:05 ndevos ~ports | kkeithley
14:05 glusterbot kkeithley: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
14:06 ndevos kkeithley: or, you can also see ,,(ports) :P
14:06 glusterbot kkeithley: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
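Translated into firewall rules, the factoid above comes out roughly as follows for a 3.4 server; the brick range is a sketch and should be sized to one port per brick actually hosted:

    # management (24008 is only needed with rdma)
    iptables -A INPUT -p tcp -m multiport --dports 24007,24008 -j ACCEPT
    # bricks: 49152 and up on 3.4 (24009 and up on older releases)
    iptables -A INPUT -p tcp --dport 49152:49200 -j ACCEPT
    # built-in NFS, NLM and the portmapper
    iptables -A INPUT -p tcp -m multiport --dports 111,2049,38465:38468 -j ACCEPT
    iptables -A INPUT -p udp --dport 111 -j ACCEPT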
14:07 giannello joined #gluster
14:08 giannello heya all
14:08 kkeithley what I really need is fingers that hit the correct keys
14:08 giannello any idea on how to deal with folders with sticky bits?
14:08 giannello some of them were synced with rsync, and are now pretty much broken
14:08 giannello if I mount my volume using gluster native client, I get "d?????????  ? ?        ?         ?            ? development/"
14:09 giannello with nfs I get "d-wxr----t  3 root     root      3 Jul  9 08:26 development/" and in both cases I cannot do anything, not even rm the directory
14:11 dbruhn joined #gluster
14:12 dbruhn Does 3.3.1 have quorum?
14:16 wushudoin joined #gluster
14:17 dusmant joined #gluster
14:35 bugs_ joined #gluster
14:37 vpshastry joined #gluster
14:46 vpshastry left #gluster
14:51 Guest34797 joined #gluster
14:53 rwheeler joined #gluster
14:54 sjoeboo joined #gluster
15:03 glusterbot New news from newglusterbugs: [Bug 1015819] SELinux is preventing /usr/sbin/glusterfsd from name_bind access on the tcp_socket <http://goo.gl/vVQZdh>
15:04 zaitcev joined #gluster
15:06 B21956 joined #gluster
15:10 bennyturns joined #gluster
15:12 mooperd joined #gluster
15:13 Alpinist joined #gluster
15:18 vpshastry joined #gluster
15:22 bulde joined #gluster
15:28 vpshastry left #gluster
15:31 zerick joined #gluster
15:32 sprachgenerator joined #gluster
15:34 morse joined #gluster
15:34 Technicool joined #gluster
15:36 jbrooks joined #gluster
15:49 portante joined #gluster
15:54 andreask joined #gluster
15:55 ndk joined #gluster
16:13 abradley I'm having some trouble finding how to set up the native gluster client in ubuntu 12
16:14 abradley is there a guide or is it so simple that no guide is necessary?
16:14 giannello abradley: mount -t glusterfs gluster_ip:/volume_name /mount/point
16:15 abradley I have a new ubu12 server set up (vm). It sounds like I need to install gluster here too.
16:15 giannello you just need to install the client part
16:15 abradley previously, on the gluster servers, I installed gluster-server. Would I install gluster-client here?
16:15 giannello yes, glusterfs-client
16:16 abradley ah, thank you
16:16 \_pol joined #gluster
16:16 giannello (shameless tip: go for gluster 3.4, from the official gluster ppa)
16:16 giannello it's waaaaaaaaaaaaay better
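On the Ubuntu 12 client that works out to something like the following; the PPA name shown is an assumption, so take the actual repository from the ,,(ppa) links glusterbot gives:

    # hypothetical PPA name - verify against the official gluster ppa link
    add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4
    apt-get update
    apt-get install glusterfs-client
    mount -t glusterfs gluster_ip:/volume_name /mount/point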
16:18 LoudNoises joined #gluster
16:22 rwheeler joined #gluster
16:22 rwheeler_ joined #gluster
16:23 ctria joined #gluster
16:24 vpshastry joined #gluster
16:27 jclift joined #gluster
16:33 \_pol joined #gluster
16:36 hagarth joined #gluster
16:43 bulde joined #gluster
16:46 a1 joined #gluster
16:50 polfilm joined #gluster
16:58 bulde joined #gluster
17:01 mistich1 joined #gluster
17:01 mistich1 hello
17:01 glusterbot mistich1: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:03 mistich1 Does anyone know of a good doc for tuning gluster 3.4.1 for small files? I have searched google for hours and cannot seem to get the performance up
17:04 Mo__ joined #gluster
17:04 chirino joined #gluster
17:05 abradley am I correct in understanding that if I'm only using NFS shares, I don't need to install the gluster client anywhere and can just access one of the gluster servers directly?
17:06 l0uis abradley: right. you just mount using nfs, w/ the nfs host being one of the gluster servers exporting nfs
17:07 abradley excellent, thanks
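So for an NFS-only client nothing gluster-specific needs installing; a sketch with hypothetical server and volume names:

    # Gluster's NFS server speaks NFSv3 over TCP
    mount -t nfs -o vers=3,proto=tcp gluster1:/myvol /mnt/myvol
    # or persistently, in /etc/fstab:
    # gluster1:/myvol  /mnt/myvol  nfs  defaults,_netdev,vers=3,proto=tcp  0 0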
17:08 johnmark mistich1: what distro?
17:09 johnmark I'm starting to find that whenever anyone talks about small file performance, the first thing to refer them to is readdirplus
17:09 johnmark and the Fuse module
17:09 mistich1 redhat 6.6 x86_64
17:09 mistich1 I'm using the fuse module
17:10 mistich1 10 gig network with 9 gluster nodes
17:11 jporterfield left #gluster
17:12 mistich1 redhat 6.3 sorry
17:13 NeatBasis joined #gluster
17:15 NuxRo johnmark: is there a release announcement for 3.4.1?
17:17 jclift mistich1: Are you able to update to 6.4?
17:18 jclift mistich1: Asking because I kind of remember something about a fuse module change for readdirplus in the latest release...
17:19 jclift mistich1: I'm not sure if that would be something that GlusterFS 3.4.1 would make use of better... but if it's easy to try it might be worth doing so.
17:19 lpabon joined #gluster
17:22 ferringb joined #gluster
17:22 jclift mistich1: Hmmm... now that I think about it, it might actually be an even more recent kernel module than 6.4.  It might only be in (to be released) 6.5. Unsure. :(
17:23 ferringb so... replace-brick migration of a brick between two servers, w/ the source brick being dead.
17:23 * ferringb notes the dead brick is part of a replicate pair; so the data is there for restoration
17:24 ferringb however gluster pretty much refuses to do it; best I can tell is a remove-brick then adding a new brick is the option, which doesn't seem right.  This is for a 3.3 setup btw.  suggestions/comments welcome. ;)
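One commonly suggested way out of that situation, sketched with hypothetical names (volume myvol, dead brick on server1, replacement brick on server3) and worth testing on a scratch volume first, is to skip the migration phase entirely and let replication rebuild the new brick:

    # commit force swaps the brick definition without copying from the dead source
    gluster volume replace-brick myvol server1:/export/brick1 server3:/export/brick1 commit force
    # AFR then repopulates the new brick from the surviving replica
    gluster volume heal myvol full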
17:27 bulde joined #gluster
17:29 abradley how do you go about monitoring your live cluster? are there tools or must one manually check each node somehow regularly?
17:29 compbio as a quick check, the gtop tool is fairly useful
17:29 compbio https://forge.gluster.org/gtop
17:29 glusterbot Title: gtop - Gluster Community Forge (at forge.gluster.org)
17:29 abradley thanks compbio
17:33 mistich1 jclift is there a doc somewhere on tuning the system for small files that you know of
17:45 jclift mistich1: Not that I know of. :(
17:46 mistich1 ok I'll keep playing with it. any suggestions where to start?
17:47 jclift mistich1: At the moment no.  My focus on gluster usage isn't very Real World applicable atm, so I'm not going to be of much help to you atm.
17:47 jclift mistich1: You might want to ask on the gluster-users mailing list too, if you haven't already?
17:47 mistich1 thats the next stop
17:48 jclift mistich1: http://supercolony.gluster.org/mailman/listinfo/gluster-users
17:48 glusterbot <http://goo.gl/mW77qQ> (at supercolony.gluster.org)
17:48 mistich1 thanks for the help
17:52 mooperd joined #gluster
17:53 kanagaraj joined #gluster
17:55 B21956 joined #gluster
18:16 saltsa joined #gluster
18:34 ferringb left #gluster
18:49 johnmark jclift: yes, it was backported into 6.4
18:49 jclift Cool
18:49 johnmark mistich1: ^^
18:55 mistich1 johnmark so upgrading to rhel 6.4 should solve my problem
18:57 plarsen joined #gluster
18:57 johnmark mistich1: I'll put it this way: it should improve the performance you're seeing
18:58 mistich1 ok thanks I'll try it
18:59 rwheeler joined #gluster
19:07 Guest34797 joined #gluster
19:11 mistich1 johnmark upgrade the clients, servers, or both?
19:15 rwheeler joined #gluster
19:23 dbruhn joined #gluster
19:23 polfilm joined #gluster
19:25 JoeJulian mistich1: readdirplus is an enhancement to fuse so that improvement is only utilized by clients*.
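So the upgrade only matters on the client hosts; in practice that means something like the following per client, reusing the mount from earlier in the channel:

    uname -r                     # confirm the client runs a kernel whose FUSE module has readdirplus
    umount /mount/point
    mount -t glusterfs gluster_ip:/volume_name /mount/point   # remount so the newer FUSE features are negotiated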
19:28 dtyarnell joined #gluster
19:29 mistich1 thanks
19:30 mistich1 just reduced my work load from 50 to 15
19:30 tg2 clients can use 3.4 while server is on 3.3.2 right?
19:30 tg2 since the 3.3.x cross-compatibility work was done?
19:35 B21956 joined #gluster
19:35 B21956 left #gluster
19:36 B21956 joined #gluster
19:46 tg2 joined #gluster
19:47 abradley Why does my gluster nfs share show up as 1.8GB when it should be 78GB? http://i.imgur.com/znw5Dy0.png
19:51 XpineX abradley, it almost looks like you have shared your sda1 instead of sda2
19:51 abradley how would I go about confirming that?
19:52 semiosis ,,(pasteinfo)
19:52 glusterbot Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
19:52 semiosis get brick path from volume info then use 'mount' to see what device that path is on
19:53 abradley http://paste.ubuntu.com/6206444/
19:53 XpineX gluster volume info - will show you which directory is used as bricks in your gluster
19:53 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
19:53 XpineX and output of mount ?
19:54 JoeJulian tg2: Yes, that is intended to work, and my limited tests confirm that.
19:55 JoeJulian or: df /export/sda2/brick1
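Both checks together, reusing the brick path from JoeJulian's example (substitute whatever 'gluster volume info' reports):

    gluster volume info | grep -i brick     # lists the brick paths behind the volume
    mount | grep /export/sda2               # shows which device is mounted at the brick's parent
    df -h /export/sda2/brick1               # size of the filesystem actually backing the brick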
19:57 JoeJulian by the way, abradley, you should consider using ,,(hostnames) instead of ip addresses.
19:57 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
19:58 abradley Are hostnames preferable to ips?
19:58 abradley doesn't a hostname just point to an ip?
19:58 JoeJulian Yes, but what if that IP needs to change for some reason? With a hostname, you just change the reference. With an IP, you need to recreate the volume*.
19:59 abradley where do you change the reference?
20:00 JoeJulian dns, /etc/hosts, wherever you define your hostnames.
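Put into commands, with made-up names and addresses:

    # /etc/hosts (or DNS) on every server and client
    192.168.0.11  gluster1
    192.168.0.12  gluster2
    # re-probe an existing peer by name so the pool records the hostname instead of the IP
    gluster peer probe gluster2    # run on gluster1
    gluster peer probe gluster1    # run once on gluster2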
20:38 nueces joined #gluster
20:49 mooperd joined #gluster
21:05 dbruhn left #gluster
21:14 tryggvil joined #gluster
21:17 jclift left #gluster
21:19 dneary joined #gluster
21:27 StarBeas_ joined #gluster
21:29 marcoceppi_ joined #gluster
21:32 ninkotech joined #gluster
21:33 ninkotech_ joined #gluster
21:35 portante joined #gluster
21:36 TDJACR joined #gluster
21:36 a1 joined #gluster
21:36 Shdwdrgn joined #gluster
21:37 mistich1 joined #gluster
21:39 badone joined #gluster
21:41 B21956 left #gluster
21:50 chirino joined #gluster
21:51 Guest34797 joined #gluster
22:14 nasso joined #gluster
22:18 dtyarnell joined #gluster
22:19 plarsen joined #gluster
22:35 mooperd joined #gluster
22:41 chirino joined #gluster
22:46 StarBeast joined #gluster
22:55 ultrabizweb joined #gluster
23:17 dtyarnell joined #gluster
23:26 mooperd joined #gluster
23:29 a2_ joined #gluster
23:56 StarBeast joined #gluster
