
IRC log for #gluster, 2013-08-12


All times shown according to UTC.

Time Nick Message
00:13 jebba1 joined #gluster
00:30 minoritystorm so still no gluster dev to help?
00:44 chirino joined #gluster
00:59 minoritystorm joined #gluster
01:00 chirino joined #gluster
01:01 joshit_ darinschmidt, there is documentation for all the different gluster versions
01:01 joshit_ however gluster still lacks detail when it comes to reading and learning the ins and outs
01:07 minoritystorm joshit_, is gluster preferable on rhel5 or rhel6?
01:08 minoritystorm from a kernel perspective
01:14 asias joined #gluster
01:15 joshit_ i'm pretty sure glusterfs has extra features on newer kernels and it's recommended
01:17 chirino joined #gluster
01:28 chirino joined #gluster
01:33 inevity joined #gluster
01:43 chirino joined #gluster
01:48 harish joined #gluster
01:57 chirino joined #gluster
02:00 inevity2 joined #gluster
02:02 satheesh joined #gluster
02:10 chirino joined #gluster
02:23 chirino joined #gluster
02:31 chirino joined #gluster
02:32 harish joined #gluster
02:44 chirino joined #gluster
02:50 lalatenduM joined #gluster
02:56 vshankar joined #gluster
02:59 chirino joined #gluster
03:04 bharata joined #gluster
03:15 chirino joined #gluster
03:28 chirino joined #gluster
03:33 satheesh1 joined #gluster
03:39 itisravi joined #gluster
03:43 chirino joined #gluster
03:53 hagarth joined #gluster
03:55 sgowda joined #gluster
03:59 chirino joined #gluster
04:10 inevity joined #gluster
04:14 RameshN joined #gluster
04:14 RameshN_ joined #gluster
04:15 chirino joined #gluster
04:18 shylesh joined #gluster
04:26 chirino joined #gluster
04:27 ababu joined #gluster
04:30 mohankumar joined #gluster
04:33 dusmant joined #gluster
04:33 inevity2 joined #gluster
04:34 premera joined #gluster
04:34 chirino joined #gluster
04:36 _pol joined #gluster
04:47 chirino joined #gluster
04:50 mohankumar joined #gluster
04:56 chirino joined #gluster
04:57 psharma joined #gluster
05:03 neuroticimbecile left #gluster
05:10 chirino joined #gluster
05:10 RameshN joined #gluster
05:13 shruti joined #gluster
05:13 kshlm joined #gluster
05:20 chirino joined #gluster
05:30 chirino joined #gluster
05:32 meghanam joined #gluster
05:33 vijaykumar joined #gluster
05:41 chirino joined #gluster
05:43 bulde joined #gluster
05:49 deepakcs joined #gluster
05:50 ababu joined #gluster
05:51 chirino joined #gluster
05:54 rjoseph joined #gluster
05:55 ndarshan joined #gluster
06:02 chirino joined #gluster
06:06 rgustafs joined #gluster
06:10 inevity joined #gluster
06:13 chirino joined #gluster
06:17 jtux joined #gluster
06:17 bala joined #gluster
06:18 lalatenduM joined #gluster
06:23 chirino joined #gluster
06:26 vimal joined #gluster
06:27 bala joined #gluster
06:27 sgowda joined #gluster
06:36 lalatenduM joined #gluster
06:36 ricky-ticky joined #gluster
06:36 chirino joined #gluster
06:40 inevity joined #gluster
06:41 jtux joined #gluster
06:45 CheRi joined #gluster
06:47 guigui1 joined #gluster
06:52 chirino joined #gluster
06:52 RameshN joined #gluster
06:57 eseyman joined #gluster
07:00 ricky-ticky joined #gluster
07:01 chirino joined #gluster
07:02 deepakcs joined #gluster
07:03 hybrid512 joined #gluster
07:08 piotrektt joined #gluster
07:08 ngoswami joined #gluster
07:10 inevity joined #gluster
07:10 chirino joined #gluster
07:20 chirino joined #gluster
07:21 satheesh1 joined #gluster
07:26 ricky-ticky joined #gluster
07:30 chirino joined #gluster
07:31 satheesh joined #gluster
07:42 msvbhat joined #gluster
07:42 chirino joined #gluster
07:47 ricky-ticky joined #gluster
07:49 puebele joined #gluster
08:00 glusterbot New news from resolvedglusterbugs: [Bug 962619] glusterd crashes on volume-stop <http://goo.gl/XXzSY>
08:01 chirino joined #gluster
08:05 ricky-ticky joined #gluster
08:06 puebele joined #gluster
08:10 chirino joined #gluster
08:19 chirino joined #gluster
08:22 bharata-rao joined #gluster
08:27 chirino joined #gluster
08:27 psharma joined #gluster
08:34 hagarth rjoseph: https://bugzilla.redhat.com/show_bug.cgi?id=893778 - there's a request for inclusion in 3.4.1.
08:34 glusterbot <http://goo.gl/NLoE3> (at bugzilla.redhat.com)
08:34 glusterbot Bug 893778: unspecified, unspecified, ---, vagarwal, ASSIGNED , Gluster 3.3.1 NFS service died after writing bunch of data
08:35 bharata-rao joined #gluster
08:36 eseyman joined #gluster
08:38 X3NQ joined #gluster
08:40 inevity joined #gluster
08:41 ntt_ joined #gluster
08:42 chirino joined #gluster
08:44 mooperd joined #gluster
08:45 ujjain joined #gluster
08:49 ninkotech joined #gluster
08:49 ninkotech_ joined #gluster
08:53 chirino joined #gluster
08:56 mohankumar joined #gluster
08:58 harish joined #gluster
09:04 raghu joined #gluster
09:04 chirino joined #gluster
09:11 inevity joined #gluster
09:13 chirino joined #gluster
09:15 bharata-rao joined #gluster
09:20 dusmant joined #gluster
09:21 chirino joined #gluster
09:25 Norky joined #gluster
09:31 ngoswami joined #gluster
09:32 chirino joined #gluster
09:32 ppai joined #gluster
09:39 spider_fingers joined #gluster
09:44 chirino joined #gluster
09:51 chirino joined #gluster
09:59 bharata-rao joined #gluster
10:02 chirino joined #gluster
10:03 mohankumar joined #gluster
10:04 mbukatov joined #gluster
10:05 dusmant joined #gluster
10:05 cyberbootje joined #gluster
10:11 inevity joined #gluster
10:12 chirino joined #gluster
10:13 RameshN joined #gluster
10:19 sgowda joined #gluster
10:22 chirino joined #gluster
10:27 mdjunaid joined #gluster
10:29 mohankumar joined #gluster
10:29 chirino joined #gluster
10:30 ntt_ Hi. I have a replica = 2 glusterfs on 2 physical nodes. Each node has an array (raid) of disks (hot swap). I have to replace an hd, should I stop glusterfs on the physical node during the replacement (and rebuild) ??
10:31 glusterbot New news from resolvedglusterbugs: [Bug 797729] [glusterfs-3.3.0qa24]: if replace-brick fails (due to crash), then replace-brick cannot be aborted <http://goo.gl/C4kk0>
10:39 bharata-rao joined #gluster
10:39 chirino joined #gluster
10:39 rnts ntt_: not if you have a proper raidcard that supports hotswap
10:45 edward1 joined #gluster
10:45 Gilbs1 joined #gluster
10:45 inevity joined #gluster
10:46 ntt_ rnts: I'm doing tests. I have removed and reinserted a disk. Now my array is in rebuild state. If I try to write on glusterfs, replica doesn't work. I write only on the node where the raid is sane. Is this correct? Do I have to manually resync glusterfs when the rebuild ends?
10:49 rnts It depends, in our setup we have LSI raid cards so failing drives, rebuilds and scrubs do not get noticed by gluster
10:49 dusmant joined #gluster
10:49 rnts so we can continue to use the chassis with the rebuilding drives at lower performance since caching is shut off during rebuild
10:50 rnts in any case gluster should not need to worry about the drives if it's a proper raid in the backend
10:50 chirino joined #gluster
10:52 ntt_ rnts: I confirm. After a relatively long time, the node where the array is rebuilding is in sync (and the rebuild is still running)
10:52 rnts nice
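A minimal sketch of how replica health can be checked after a rebuild like the one above, assuming a replica volume named "myvol" (the volume name is illustrative):
    # list entries each brick still needs to self-heal
    gluster volume heal myvol info
    # optionally force a full self-heal crawl once the array is healthy again
    gluster volume heal myvol full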
10:53 bulde joined #gluster
10:54 sgowda joined #gluster
10:55 ngoswami joined #gluster
10:56 hagarth @channelstats
10:56 glusterbot hagarth: On #gluster there have been 167672 messages, containing 7107831 characters, 1187401 words, 4758 smileys, and 633 frowns; 1046 of those messages were ACTIONs. There have been 64128 joins, 2011 parts, 62109 quits, 21 kicks, 164 mode changes, and 7 topic changes. There are currently 217 users and the channel has peaked at 217 users.
11:03 shruti_ joined #gluster
11:04 RameshN joined #gluster
11:05 psharma joined #gluster
11:05 chirino joined #gluster
11:10 neofob joined #gluster
11:11 edward1 joined #gluster
11:11 bfoster joined #gluster
11:15 inevity joined #gluster
11:18 chirino joined #gluster
11:18 social anyone with gluster 3.4.0 seeing a deleted-file fd leak?
11:19 social I can clearly see one thread opening a file and another deleting it and the first thread does not close the file :/
11:19 dusmant joined #gluster
11:20 CheRi joined #gluster
11:21 social I honestly don't know what I should be looking for now :/
11:21 hagarth joined #gluster
11:22 ppai joined #gluster
11:29 kaushal_ joined #gluster
11:34 chirino joined #gluster
11:46 inevity joined #gluster
11:47 duerF joined #gluster
11:48 chirino joined #gluster
11:50 kkeithley joined #gluster
11:55 inevity joined #gluster
12:01 chirino joined #gluster
12:03 itisravi joined #gluster
12:07 bulde1 joined #gluster
12:08 bulde2 joined #gluster
12:11 chirino joined #gluster
12:11 ndarshan joined #gluster
12:13 kaushal_ joined #gluster
12:23 mdjunaid joined #gluster
12:25 chirino joined #gluster
12:26 CheRi joined #gluster
12:26 rcheleguini joined #gluster
12:32 kshlm joined #gluster
12:32 awheeler joined #gluster
12:36 kshlm joined #gluster
12:37 guigui1 joined #gluster
12:38 B21956 joined #gluster
12:40 chirino joined #gluster
12:53 aliguori joined #gluster
12:58 Peanut_ Hi folks - I have a working gluster setup (nice!) but, as shown in the examples, I created the backing store in xfs - would ext4 work as well? Be better? Any tuning hints for Ubuntu KVM hosts/guests in such a setup?
13:00 recidive joined #gluster
13:00 stickyboy Peanut_: Most people use XFS.
13:00 stickyboy Until verrrrry recently ext4 was actually problematic, due to some kernel behavior in many current LTS kernels.
13:01 stickyboy I'm still using 3.3.x with XFS on CentOS 6.x.
13:01 cicero 3.3.x with ext4 on ubuntu here
13:01 cicero http://lwn.net/Articles/544298/
13:01 glusterbot Title: A kernel change breaks GlusterFS [LWN.net] (at lwn.net)
13:02 cicero drama
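For context, a hedged sketch of preparing an XFS brick the way the gluster docs commonly suggest; the device and brick path below are assumptions for illustration:
    # 512-byte inodes leave room for gluster's extended attributes
    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /export/brick1
    mount -o inode64,noatime /dev/sdb1 /export/brick1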
13:02 chirino joined #gluster
13:02 mic1 joined #gluster
13:03 mic1 greetings all.. looking for a little help on a setup here...
13:04 mic1 I have an existing gluster setup, replicating across four nodes
13:04 aravindavk joined #gluster
13:04 mic1 those are all running 3.2
13:05 mic1 have a new server set up, running fedora 19 and gluster 3.4, and can't connect in to the others
13:05 bennyturns joined #gluster
13:05 mic1 peer probe from them to it simply times out
13:05 mic1 probe from it to them responds with "peer probe: failed: Error through RPC layer, retry again later"
13:07 cicero 1) GlusterFS 3.3.0 is not compatible with any earlier released versions. Please make sure that you schedule a downtime before you upgrade.
13:07 cicero from http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/
13:07 glusterbot <http://goo.gl/qOiO7> (at vbellur.wordpress.com)
13:07 cicero so i presume 3.4 isn't compatible with 3.2 either
13:07 mic1 probably should have known that one, shouldn't I
13:08 mic1 that would have been something to check
13:08 mic1 thanks for the response, at any rate :)
13:08 cicero gl
13:11 awheeler joined #gluster
13:11 saurabh joined #gluster
13:16 mdjunaid joined #gluster
13:19 chirino joined #gluster
13:27 nshaikh joined #gluster
13:30 puebele joined #gluster
13:30 Norky correct, 3.2 is not directly compatible with 3.3 OR 3.4
13:31 shylesh joined #gluster
13:31 Norky it is possible to do an 'online' upgrade from 3.3 to 3.4, but for any other versions, you will need downtime
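A quick way to confirm what each node runs before probing, since a 3.2 peer cannot join a 3.3/3.4 pool (standard CLI commands; output details vary by release):
    # on every node, check the installed release
    glusterfs --version
    # on an existing node, check the peer state of the pool
    gluster peer status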
13:34 bugs_ joined #gluster
13:35 awheele__ joined #gluster
13:37 chirino joined #gluster
13:42 failshell joined #gluster
13:42 jdarcy joined #gluster
13:47 andreask joined #gluster
13:50 plarsen joined #gluster
13:56 Peanut_ stickyboy: ok, thanks, I'll stick with XFS then.
14:01 hagarth joined #gluster
14:05 awheeler joined #gluster
14:10 bivak_ joined #gluster
14:13 chirino joined #gluster
14:22 chirino joined #gluster
14:28 jruggiero joined #gluster
14:35 sprachgenerator joined #gluster
14:47 kaptk2 joined #gluster
14:48 chirino joined #gluster
14:49 eseyman joined #gluster
14:50 cdsalmons joined #gluster
14:51 cdsalmons does anyone in here run gluster with apache?
14:56 jbrooks joined #gluster
15:04 Gilbs2 joined #gluster
15:15 Perihelion joined #gluster
15:23 jclift joined #gluster
15:43 jcsp joined #gluster
15:51 jag3773 joined #gluster
15:54 edmv_ joined #gluster
15:58 emoreno joined #gluster
16:23 chirino joined #gluster
16:48 bennyturns joined #gluster
16:51 hagarth @channelstats
16:51 glusterbot hagarth: On #gluster there have been 167781 messages, containing 7110007 characters, 1187766 words, 4759 smileys, and 633 frowns; 1046 of those messages were ACTIONs. There have been 64206 joins, 2011 parts, 62201 quits, 21 kicks, 164 mode changes, and 7 topic changes. There are currently 203 users and the channel has peaked at 218 users.
16:52 Mo__ joined #gluster
16:59 ngoswami joined #gluster
17:01 paratai joined #gluster
17:04 JoeJulian @yum repo
17:04 glusterbot JoeJulian: The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
17:05 _pol joined #gluster
17:07 _pol_ joined #gluster
17:07 rotbeard joined #gluster
17:07 jesse joined #gluster
17:22 _pol joined #gluster
17:24 awheeler joined #gluster
17:24 _pol joined #gluster
17:24 mohankumar joined #gluster
17:30 zaitcev joined #gluster
17:34 lpabon joined #gluster
17:36 hchiramm_ joined #gluster
17:39 LoudNoises joined #gluster
17:45 jag3773 joined #gluster
17:49 kshlm joined #gluster
17:58 Gilbs1 joined #gluster
18:05 dbruhn joined #gluster
18:14 awheele__ joined #gluster
19:06 Mo_ joined #gluster
19:07 Gilbs joined #gluster
19:13 hchiramm_ joined #gluster
19:15 bulde joined #gluster
19:20 mibby- joined #gluster
19:21 Gilbs2 joined #gluster
19:21 a2_ joined #gluster
19:26 Technicool joined #gluster
19:27 y4m4 joined #gluster
19:41 andreask joined #gluster
19:41 jesse joined #gluster
19:51 jag3773 joined #gluster
19:53 puebele joined #gluster
19:59 social JoeJulian: any idea how to trace down glusterd leaking fds in 3.4.0?
20:00 social JoeJulian: I have a ton of deleted files that in the statedump seem to be open on some clients. I guess that means that the client didn't close the fd, am I right?
20:00 JoeJulian sounds like that would be the case.
20:37 sprachgenerator joined #gluster
20:42 _pol where can I find docs on per-volume root-squash on 3.4?
20:52 badone joined #gluster
21:14 zerick joined #gluster
21:30 MugginsM joined #gluster
21:34 mooperd joined #gluster
21:36 recidive joined #gluster
21:37 _pol presumably "service stop glusterfsd" and "service stop glusterd" are the best way to stop those processes, but what's the best way to shut down "glusterfs" processes?
21:38 _pol By "best" I mean "safest"
21:38 _pol (CentOS .64)
21:38 tqrst- left #gluster
21:38 _pol (with the period shifted one step to the right)
21:41 _pol Man, this channel is like *crickets*... should I be using the mailing list?
21:43 JoeJulian umount
21:49 nueces joined #gluster
22:00 Gilbs3 joined #gluster
22:04 tjstansell joined #gluster
22:10 _pol JoeJulian: you mean that glusterfs processes are directly related to mounted volumes?
22:11 JoeJulian Mostly. The exceptions are self-heal and nfs services.
22:11 JoeJulian @processes
22:11 glusterbot JoeJulian: The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/F6jqx for more information.
22:12 stevenlokie joined #gluster
22:12 _pol JoeJulian: so, on my server, the nfs daemon would be run by glusterfs (since I am not running a client on this particular server, nor am I mounting any volumes)
22:13 _pol JoeJulian: In my case, is a "killall glusterfs" safe?
22:13 JoeJulian yes
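Putting the pieces above together, a hedged sketch of a safe shutdown order on a CentOS 6 server (service names as shipped in the community packages; the mount point is illustrative):
    # stop the management daemon and the brick daemons via their init scripts
    service glusterd stop
    service glusterfsd stop
    # unmounting a fuse mount ends its glusterfs client process
    umount /mnt/gluster
    # finally stop any remaining glusterfs helpers (nfs server, self-heal daemon)
    killall glusterfs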
22:14 JoeJulian btw... it took you 27 minutes to respond to my answer. For someone who thought that 4 minutes was "*crickets*" ... :P
22:15 _pol Sorry, I apologize.  I am anecdotally getting the feeling that I am in a different timezone.
22:15 JoeJulian heh
22:15 JoeJulian I have this window open while doing my day job. It does happen occasionally that I'm actually busy doing paid work.
22:15 _pol I think it is related to the fact that I am having a hard time finding docs.
22:16 _pol Specifically 3.4 docs.
22:18 _pol Maybe docs on glusterbot would be good enough. :)
22:18 JoeJulian There's a transition happening to change the old doc format to markdown. Once that happens, things should actually get updated and a current version will always be posted.
22:18 _pol Can I help, or is this a redhat-only thing?
22:19 JoeJulian No, it's not.... let me see if I can find the contact that's doing it... just a sec...
22:20 JoeJulian Here's the email thread: http://www.mail-archive.com/gluster-devel@nongnu.org/msg09834.html
22:20 glusterbot <http://goo.gl/3qyifz> (at www.mail-archive.com)
22:21 JoeJulian https://github.com/lkundrak/glusterfs/commits/lr-doc
22:21 glusterbot <http://goo.gl/QsLNvu> (at github.com)
22:22 _pol Also, the recommended method for doing a rolling update from 3.3 to 3.4 has a step of "If you have pending data for self-heal, run “gluster volume heal <volname>” and wait for self-heal to complete."
22:22 _pol Isn't that heal a background task?
22:22 _pol Is there a status or something that I can run?
22:22 _pol I take that back... I mean, is there a foreground one that I can run that will exit when it is finished.
22:23 _pol (and thanks for the doc info)
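The heal runs in the background; one way to wait on it from a script is to poll the heal info output until every brick reports zero entries, a sketch assuming a volume named "myvol" and the 3.3/3.4 output format:
    # trigger healing of anything pending
    gluster volume heal myvol
    # poll until no brick reports outstanding entries
    while gluster volume heal myvol info | grep -q '^Number of entries: [1-9]'; do
        sleep 30
    done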
22:23 JoeJulian ... and there's no documentation for root-squash... :/
22:24 JoeJulian Looks like it's just "gluster volume set $vol root-squash $boolean"
22:25 JoeJulian or "server.root-squash"
22:25 JoeJulian No, just "root-squash"
22:26 _pol so, hm.  that means that root-squashing can be done via nfs or is there some kind of native root-squash in gluster-client?
22:27 JoeJulian The code seems to suggest that it happens in the server translator. That would affect both the fuse client and the nfs client.
22:27 _pol *squeee* that is awesome news.
22:27 _pol I was thinking I was going to have to do nfs jiggery-pokery to secure a large shared volume.
22:28 _pol (with intermediate gclients resharing via nfs subdirs)
22:29 _pol Ok, I am going to get back to finishing this ansible gluster server rolling update playbook.
22:29 JoeJulian gah... I should have checked help first:
22:29 JoeJulian Option: server.root-squash
22:29 JoeJulian Default Value: off
22:29 JoeJulian Description: Map requests from uid/gid 0 to the anonymous uid/gid. Note that this does not apply to any other uids or gids that might be equally sensitive, such as user bin or group staff.
22:29 _pol JoeJulian: thanks, and I am sorry for being grouchy about volunteered time. :)
22:29 JoeJulian "gluster volume set help"
22:30 _pol Hm, I wonder what
22:30 _pol anonymous" is. uid 99?
22:33 JoeJulian 65534
22:34 JoeJulian git also
22:34 JoeJulian s/git/gid/
22:34 glusterbot What JoeJulian meant to say was: http://goo.gl/GWb3Gw
22:34 JoeJulian lol
22:34 JoeJulian no, glusterbot, that's not what I meant...
22:34 JoeJulian glusterbot: meh
22:34 glusterbot JoeJulian: I'm not happy about it either
22:36 JoeJulian @learn root-squash as Enable root squash in the server translator by setting root-squash for the volume. UID and GID 0 will be remapped to 65534. This works for both nfs and fuse clients.
22:36 glusterbot JoeJulian: The operation succeeded.
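A hedged example of turning the option on, based on the "gluster volume set help" output quoted above (the volume name is illustrative):
    # remap uid/gid 0 to 65534 for all clients of this volume
    gluster volume set myvol server.root-squash on
    # the setting shows up under "Options Reconfigured"
    gluster volume info myvol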
22:40 Gilbs1 joined #gluster
22:43 social what information is in statedumps of gluster?
22:44 social Let's say I dumped fds and I look at something like [conn.2.id.fdentry[8]] ; pid=8197 ; refcount=1 ; flags=0
22:45 social conn.2 means connection 2 which means some client, fdentry number is probably irrelevant? pid is pid of what? daemon/mount/real holder of fd on client?
22:45 mooperd joined #gluster
22:48 JoeJulian I've been asking for some documentation on that for years.
22:49 social it would be really helpful for me as I'm chasing an fd leak and as I don't have a reproducer I have to trace it indirectly on production :/
22:50 bstr joined #gluster
22:50 a2_ pid=8197 <-- pid of the process in the client system which has opened the file
22:51 social a2_: I can see the client clearly sending close, actually the process does not exist anymore
22:51 social but I still see the fd in the statedump :/
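For anyone following along, a sketch of how such a statedump can be produced and searched; the volume name is illustrative and the dump directory (/var/run/gluster by default on most builds) may differ:
    # ask the brick processes of a volume to dump their fd tables
    gluster volume statedump myvol fd
    # look for open fd entries, including ones whose files were since deleted
    grep -B2 -A4 'fdentry' /var/run/gluster/*.dump.*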
22:51 a2_ hmm, that may be an fd ref leak in the mount process
22:52 a2_ what version are you running?
22:52 social 3.4.0
22:54 a2_ this is likely the fd ref leak we fixed after 3.4.0
22:54 a2_ http://review.gluster.org/4745
22:54 glusterbot Title: Gerrit Code Review (at review.gluster.org)
22:55 social hmm mar
22:55 social should be in our build
22:55 a2_ maybe not
22:55 social sec, I'll have a look
22:57 a2_ it's not that leak
22:57 a2_ something else
22:59 social interesting, it's not there
22:59 social when did it land in git?
23:04 Gilbs joined #gluster
23:04 mooperd__ joined #gluster
23:06 a2_ it's in master
23:06 a2_ release-3.4 does not have the patch which introduced the bug either
23:06 a2_ so it's not that leak
23:08 social ah
23:08 social where should I look then? it seems to be on the client mount
23:31 dhsmith_ joined #gluster
23:33 social so strace says lots of open > close > unlink operations
23:33 social glusterd is holding a deleted fd so it'll have something to do with unlink
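One way to see the symptom directly on a server, a sketch that only walks /proc for the gluster processes (no gluster-specific tooling assumed):
    # list deleted files still held open by gluster client/brick processes
    for pid in $(pidof glusterfs glusterfsd); do
        echo "== pid $pid =="
        ls -l /proc/$pid/fd 2>/dev/null | grep '(deleted)'
    done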
23:40 recidive joined #gluster
23:42 a2_ social, are you accessing over nfs?
23:42 a2_ anything weird in the client logs?
23:45 social no, fuse mount
23:45 social nothing weird, we found out only that we have an issue with posix acl md cache
23:49 badone joined #gluster
23:55 dhsmith joined #gluster
23:56 dhsmith joined #gluster
23:59 duerF joined #gluster
