
IRC log for #gluster, 2013-08-28


All times shown according to UTC.

Time Nick Message
00:02 StarBeast joined #gluster
00:51 kevein joined #gluster
00:59 vpshastry joined #gluster
01:00 bennyturns joined #gluster
01:02 nueces joined #gluster
01:03 asias joined #gluster
01:16 jporterfield joined #gluster
01:22 jporterfield joined #gluster
01:25 bala joined #gluster
01:29 dbruhn joined #gluster
01:30 rcoup joined #gluster
01:36 harish joined #gluster
01:42 foexle_ joined #gluster
01:51 jporterfield joined #gluster
02:25 bharata-rao joined #gluster
02:25 asias joined #gluster
02:31 awheeler joined #gluster
02:36 awheeler joined #gluster
02:40 lalatenduM joined #gluster
02:51 saurabh joined #gluster
02:54 bulde joined #gluster
03:20 bala joined #gluster
03:25 shubhendu joined #gluster
03:26 jporterfield joined #gluster
03:31 hagarth joined #gluster
03:39 sprachgenerator joined #gluster
03:43 ndarshan joined #gluster
03:48 jporterfield joined #gluster
03:50 vshankar joined #gluster
03:54 DV joined #gluster
03:59 itisravi joined #gluster
04:02 awheeler joined #gluster
04:06 aravindavk joined #gluster
04:08 jporterfield joined #gluster
04:11 ppai joined #gluster
04:13 spandit joined #gluster
04:15 RameshN joined #gluster
04:15 sgowda joined #gluster
04:20 jporterfield joined #gluster
04:23 mohankumar joined #gluster
04:48 vpshastry joined #gluster
04:51 kanagaraj joined #gluster
04:55 rjoseph joined #gluster
04:58 nightwalk joined #gluster
05:09 SunilVA joined #gluster
05:10 rastar joined #gluster
05:15 davinder joined #gluster
05:16 SunilVA2 joined #gluster
05:16 SunilVA2 left #gluster
05:17 SunilVA joined #gluster
05:20 raghu joined #gluster
05:25 shylesh joined #gluster
05:27 mohankumar joined #gluster
05:32 satheesh joined #gluster
05:33 mohankumar__ joined #gluster
05:42 lalatenduM joined #gluster
05:43 jporterfield joined #gluster
05:46 ababu joined #gluster
05:48 lalatenduM joined #gluster
05:51 sahina joined #gluster
05:55 nshaikh joined #gluster
06:13 bulde joined #gluster
06:15 shruti joined #gluster
06:15 jtux joined #gluster
06:25 anands joined #gluster
06:27 dusmant joined #gluster
06:27 davinder joined #gluster
06:30 mohankumar__ joined #gluster
06:33 vimal joined #gluster
06:41 glusterbot New news from newglusterbugs: [Bug 996391] NFS: Memory leak when dbench is run <http://goo.gl/g5sJCo>
06:46 verdurin_ joined #gluster
06:48 jtux joined #gluster
06:56 jporterfield joined #gluster
07:01 eseyman joined #gluster
07:07 kanagaraj joined #gluster
07:07 ctria joined #gluster
07:15 glusterbot New news from resolvedglusterbugs: [Bug 996391] NFS: Memory leak when dbench is run <http://goo.gl/g5sJCo>
07:15 jtux joined #gluster
07:55 mgebbe_ joined #gluster
07:56 Elendrys v
07:56 kwevers joined #gluster
07:59 kwevers Has anybody experienced similar problems to https://bugzilla.redhat.com/show_bug.cgi?id=991035 ?
07:59 glusterbot <http://goo.gl/8nQMxt> (at bugzilla.redhat.com)
07:59 glusterbot Bug 991035: high, unspecified, ---, sgowda, NEW , ACL mask is calculated incorrectly
08:12 mmalesa joined #gluster
08:15 ntt_ joined #gluster
08:15 ntt_ Hi. I'm trying to use glusterfs with pacemaker. Can someone help me?
08:17 ndevos ntt_: I think you can find those details in http://www.hastexo.com/misc/static/presentations/lceu2012/glusterfs.html
08:17 glusterbot <http://goo.gl/STxh6i> (at www.hastexo.com)
08:18 ntt_ ndevos: thanks, I have already seen it. But I'm searching for an example/tutorial
08:19 ndevos ntt_: maybe the description of http://review.gluster.org/3043 ?
08:19 glusterbot Title: Gerrit Code Review (at review.gluster.org)
08:20 ndevos ntt_: it's something I still want to try out myself too, but have not found the time yet
08:22 ntt_ ndevos: I have 2 nodes that I would use with pacemaker and a separate infrastructure for glusterfs. Nodes can mount glusterfs, but I don't understand how to proceed....
08:24 ntt_ examples show how to use pacemaker with drbd. But in that case, physical disks are "local"
08:24 ndevos ntt_: I'm a complete pacemaker noob and need to figure it out myself, so I can not help you (yet, maybe in a couple of weeks)
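
A minimal sketch of the setup ntt_ describes, with illustrative names (hosts gfs1/gfs2, volume gv0, mount point /mnt/gv0): unlike the drbd examples, the gluster volume is already shared storage, so pacemaker only has to manage the client mount, for example with the stock ocf:heartbeat:Filesystem agent in crm shell syntax:

    # sketch only; all names are placeholders
    primitive p_fs_gv0 ocf:heartbeat:Filesystem \
        params device="gfs1:/gv0" directory="/mnt/gv0" fstype="glusterfs" \
        op monitor interval="20s"
    clone cl_fs_gv0 p_fs_gv0    # mount the volume on every cluster node
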
08:41 ctria joined #gluster
08:44 ProT-0-TypE joined #gluster
08:44 StarBeast joined #gluster
08:45 mohankumar__ joined #gluster
08:47 msvbhat joined #gluster
08:49 shubhendu joined #gluster
08:52 raghu left #gluster
08:55 dusmant joined #gluster
08:59 mmalesa joined #gluster
09:00 shylesh joined #gluster
09:04 edward1 joined #gluster
09:09 Guest67758 kwevers: ping
09:10 social kwevers: https://bugzilla.redhat.com/show_bug.cgi?id=998967
09:10 glusterbot <http://goo.gl/B2gFno> (at bugzilla.redhat.com)
09:10 glusterbot Bug 998967: unspecified, unspecified, 3.4.0, sgowda, NEW , gluster 3.4.0 ACL returning different results with entity-timeout=0 and without
09:10 social imho it's in posi
09:11 social * imho it's in posix_acl_inherit_mode
09:11 glusterbot New news from newglusterbugs: [Bug 991035] ACL mask is calculated incorrectly <http://goo.gl/8nQMxt>
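
For context, the mask these bug reports refer to is the standard POSIX ACL mask entry; a quick way to observe it on a mounted volume (path and user are illustrative):

    setfacl -m u:alice:rw /mnt/gv0/testfile
    getfacl /mnt/gv0/testfile    # inspect the mask:: entry; the bugs report it being computed incorrectly
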
09:12 spider_fingers joined #gluster
09:14 mmalesa_ joined #gluster
09:19 haidz joined #gluster
09:22 CheRi joined #gluster
09:28 mooperd_ joined #gluster
09:38 kwevers social: Thanks!
09:40 dbruhn joined #gluster
09:47 shubhendu joined #gluster
09:48 mmalesa__ joined #gluster
09:49 harish joined #gluster
09:53 rjoseph joined #gluster
09:56 ctria joined #gluster
09:56 wgao joined #gluster
09:57 dusmant joined #gluster
09:57 shylesh joined #gluster
10:04 mbukatov joined #gluster
10:06 eseyman joined #gluster
10:06 pkoro joined #gluster
10:10 JonathanD joined #gluster
10:12 glusterbot New news from newglusterbugs: [Bug 998967] gluster 3.4.0 ACL returning different results with entity-timeout=0 and without <http://goo.gl/B2gFno>
10:17 harish joined #gluster
10:19 manik joined #gluster
10:22 psharma joined #gluster
10:26 CheRi joined #gluster
10:30 shylesh joined #gluster
10:30 sgowda joined #gluster
10:38 rjoseph joined #gluster
10:45 jcsp joined #gluster
10:50 dseira joined #gluster
10:53 dseira left #gluster
10:53 Han It's still not possible to disable root-squash. Is there a workaround?
10:56 CheRi joined #gluster
11:04 mmalesa joined #gluster
11:09 ababu_ joined #gluster
11:11 ctria joined #gluster
11:14 Han left #gluster
11:16 glusterbot New news from resolvedglusterbugs: [Bug 830121] Nfs mount doesn't report "I/O Error" when there is GFID mismatch for a file <http://goo.gl/IJCGo> || [Bug 885802] NFS errors cause Citrix XenServer VM's to lose disks <http://goo.gl/xil6p> || [Bug 810944] glusterfs nfs mounts hang when second node is down <http://goo.gl/CwSxZ>
11:16 nullck joined #gluster
11:19 failshell joined #gluster
11:19 sgowda joined #gluster
11:19 hagarth joined #gluster
11:19 CheRi joined #gluster
11:20 bala joined #gluster
11:27 Han joined #gluster
11:27 Han gluster volume create cluster transport tcp woodstock-one:/export/sdb1
11:27 Han volume create: cluster: failed: /export/sdb1 or a prefix of it is already part of a volume
11:27 glusterbot Han: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
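
For reference, the commonly circulated fix behind that link clears the gluster markers from the brick path so it can be reused; a sketch using Han's brick path, to be run on the brick host:

    setfattr -x trusted.glusterfs.volume-id /export/sdb1
    setfattr -x trusted.gfid /export/sdb1
    rm -rf /export/sdb1/.glusterfs    # removes gluster metadata, not the data files
    service glusterd restart
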
11:31 kshlm joined #gluster
11:31 Han Will the described fix for switching off root-squashing be included in the next release?
11:32 Han Meaning this one: https://bugzilla.redhat.com/show_bug.cgi?id=927616
11:32 glusterbot <http://goo.gl/tZW0X> (at bugzilla.redhat.com)
11:32 glusterbot Bug 927616: unspecified, unspecified, ---, rabhat, POST , root-squash: root-squashing does not get disabled dynamically
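
For context, the toggle at issue is a volume option (assuming the server.root-squash option of the 3.4 series; volume name illustrative); the bug is that flipping it is not picked up dynamically:

    gluster volume set gv0 server.root-squash off
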
11:32 ProT-0-TypE joined #gluster
11:35 tryggvil joined #gluster
11:35 tryggvil joined #gluster
11:44 glusterbot New news from newglusterbugs: [Bug 997140] Gluster NFS server dies <http://goo.gl/7aXct0>
11:46 glusterbot New news from resolvedglusterbugs: [Bug 849526] High write operations over NFS causes client mount lockup <http://goo.gl/ZAcyz>
11:54 shruti joined #gluster
12:00 jporterfield joined #gluster
12:08 sprachgenerator joined #gluster
12:08 dusmant joined #gluster
12:12 mambru joined #gluster
12:13 mambru ,01
12:15 bennyturns joined #gluster
12:17 hagarth joined #gluster
12:27 duerF joined #gluster
12:29 tziOm joined #gluster
12:30 rgustafs joined #gluster
12:31 harish joined #gluster
12:36 awheeler joined #gluster
12:37 awheeler joined #gluster
12:37 m0zes bug 890502
12:37 glusterbot Bug http://goo.gl/5LWrp unspecified, medium, ---, kparthas, ASSIGNED , glusterd fails to identify peer while creating a new volume
12:38 SunilVA2 joined #gluster
12:39 DV joined #gluster
12:40 whx joined #gluster
12:41 CheRi joined #gluster
12:47 satheesh1 joined #gluster
12:47 ctria joined #gluster
12:50 B21956 joined #gluster
12:52 robo joined #gluster
12:55 manik joined #gluster
12:55 kaushal_ joined #gluster
13:00 hagarth joined #gluster
13:02 ricky-ticky joined #gluster
13:03 bulde1 joined #gluster
13:04 dusmant joined #gluster
13:09 ppai joined #gluster
13:14 rcheleguini joined #gluster
13:16 ababu_ joined #gluster
13:20 dewey joined #gluster
13:23 mooperd joined #gluster
13:23 StarBeast joined #gluster
13:24 harish joined #gluster
13:25 X3NQ joined #gluster
13:26 hybrid5122 joined #gluster
13:27 X3NQ Over the weekend I'm moving one of two gluster nodes to a new IP address (keeping the same domain used with peer probe). Will it automatically work after I bring the server back up, or will I need to change something?
13:28 deepakcs joined #gluster
13:36 gluster-meetb0t joined #gluster
13:42 aravindavk joined #gluster
13:42 ababu_ joined #gluster
13:47 gluster-meetb0t joined #gluster
13:49 vpshastry left #gluster
13:51 bugs_ joined #gluster
13:53 mooperd joined #gluster
13:55 manik joined #gluster
13:57 mooperd_ joined #gluster
13:58 jdarcy joined #gluster
13:58 jdarcy left #gluster
13:59 jdarcy joined #gluster
14:02 jclift joined #gluster
14:04 dusmant joined #gluster
14:04 failshell joined #gluster
14:06 davinder joined #gluster
14:06 failshell joined #gluster
14:07 kaptk2 joined #gluster
14:11 wushudoin joined #gluster
14:16 gmcwhistler joined #gluster
14:17 hagarth joined #gluster
14:18 gluster-meetb0t joined #gluster
14:48 aliguori joined #gluster
14:51 asias joined #gluster
14:52 jbrooks joined #gluster
14:55 vimal joined #gluster
14:55 sforsythe joined #gluster
14:56 sforsythe I had a question about replication and self-healing
14:58 sforsythe I created a new replicated volume that had content on the node in brick1 ... after an hour it appears the sync has completed, but a du --max-depth=1 -h of the brick on node2 shows 99G in .glusterfs and only 20M in the actual directory
14:59 jdarcy So this was pre-existing content from before you started using that directory for a brick?
14:59 sforsythe If I go into the content directory, and do du ... the 99G shows there
14:59 sforsythe yes
15:00 StarBeast joined #gluster
15:00 jdarcy sforsythe: Then we lack the information which would exist if the file had been added afterward, which is what we use to ensure correct replication.
15:01 manik joined #gluster
15:01 sforsythe so when a brick/volume is created .... it has to be blank and all data copied afterwards?
15:01 sforsythe What if a brick is used in one volume ... but then you want to recreate/start over that volume
15:01 jdarcy Sadly, yes.  IWBNI we had an import script, but so far no.
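
A commonly used workaround at the time, with no guarantee it covers every pre-existing-data case: walk the volume through a client mount so lookups trigger self-heal, or ask the self-heal daemon for a full pass (mount point and volume name illustrative):

    find /mnt/gv0 -noleaf -print0 | xargs --null stat >/dev/null
    # or, on 3.3+:
    gluster volume heal gv0 full
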
15:02 sforsythe so my case was two nodes, had two networks, 1gig and 10gig, mistakenly first created the peer on the 1gig network .... I wanted to remove it as a peer from the 1gig, and re-add it with the 10gig
15:03 sforsythe If you have an established volume, can you remove all other peers, leave it as a standalone node/brick ... then re-add the other nodes?
15:04 jdarcy Hm.  I'd say yes if it wasn't replicated.
15:04 plarsen joined #gluster
15:05 jdarcy With replication, I'd have to say you're stuck with a choice of adding a third replica temporarily (ick) or going non-replicated during the transition (more ick).
15:06 sforsythe so ok, I'm confused on peer status and volume and what 'defines' the storage completely ... what is the 'failover' scenario ... if I have node1 and node2, peered and a replicated volume created
15:07 sforsythe if node1 goes up in smoke ... now only have node2 ... how do I 'remove' node1 and add a new node3 ?
15:07 ppai joined #gluster
15:07 jdarcy sforsythe: That's a "replace" operation to us.
15:08 jdarcy I guess you could use that for the migration case too.
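
A sketch of that replace operation with the 3.4-era CLI, assuming illustrative host and brick names (node1 is the dead server, node3 its replacement):

    gluster volume replace-brick gv0 node1:/export/brick node3:/export/brick commit force
    gluster volume heal gv0 full    # resync the new brick from the surviving replica
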
15:08 zerick joined #gluster
15:11 sforsythe I see this http://gluster.org/community/documentation//index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server
15:11 glusterbot <http://goo.gl/IqaGN> (at gluster.org)
15:11 Guest53741 joined #gluster
15:12 sforsythe and 7.4 Migrating volumes ... but doesn't seem to have a lot of detail, is there any other place you suggest I can look?
15:12 kkeithley I thought someone like JoeJulian had a recipe for creating a volume using a brick with existing files on it?
15:12 jdarcy sforsythe: That's essentially the "non-replicated transition window" scenario.  If you're OK with that, fine, but I feel like I have to warn you.
15:12 kkeithley A brick that wasn't previously part of a gluster volume
15:14 sforsythe warn about what?
15:14 daMaestro joined #gluster
15:15 sforsythe any suggestions where this article went? http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/
15:15 glusterbot <http://goo.gl/4hWXJ> (at community.gluster.org)
15:16 sprachgenerator joined #gluster
15:22 JuanBre joined #gluster
15:23 X3NQ Sorry for reposting but.. Over the weekend I'm moving one of two gluster nodes to a new IP address (keeping the same domain used with peer probe). Will it automatically work after I bring the server back up, or will I need to change something?
15:24 kkeithley Do the peers know each other by name or by IP? If by IP, you should reprobe them by name first.
15:24 X3NQ kkeithley, by DNS of course :)
15:26 kkeithley And you don't have any names "hard coded" in /etc/hosts, right? Then you should be good.
15:26 X3NQ kkeithley, I just don't know how gluster handles names; does it resolve them on service start, or when the probe is first done?
15:26 X3NQ kkeithley, nope
15:28 kkeithley generally speaking gluster doesn't try to second guess you. If you probe by IP, it remembers the IP. If you probe by name, it remembers and uses the name. If you made a mistake and probed by IP, you can fix it by rerunning the probe with the name.
15:31 X3NQ No mistakes, it was originally done using the FQDN. But I'm going to be taking down one of the peers, pointing its domain name at its new IP, moving the server, and bringing it back up. I just want to know how broken everything will be when I start gluster back up on this node. Will I have to re-probe it, or just restart the service on the other peer as well?
15:32 kkeithley reprobe is not needed, the other peers know it by its FQDN, which isn't going to change.
15:32 X3NQ Okay brilliant, so it should just work and catch up with any data it missed
15:33 kkeithley It should.
15:34 kkeithley Maybe JoeJulian or semiosis have some more advice, but I think you're good.
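
A sketch of the probe-by-name pattern kkeithley describes, with illustrative hostnames; the second probe is the documented trick for replacing an IP entry with a name:

    # on server1
    gluster peer probe server2.example.com
    # on server2 (the first probe records server1 by IP, so probe back by name)
    gluster peer probe server1.example.com
    gluster peer status    # verify both peers are listed by hostname
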
15:38 mmalesa joined #gluster
15:39 bala joined #gluster
15:42 X3NQ kkeithley, great. Thanks for the advice :)
15:45 glusterbot New news from newglusterbugs: [Bug 1002207] remove unused parameter and correctly handle mem alloc failure <http://goo.gl/QvShSi>
15:47 JuanBre joined #gluster
15:52 hagarth joined #gluster
15:53 wcchandler joined #gluster
15:54 nshaikh left #gluster
15:59 lalatenduM joined #gluster
16:00 sforsythe left #gluster
16:01 MrNaviPacho joined #gluster
16:06 JoeJulian @later tell sforsythe The reason du showed 99G in .glusterfs and only 20M in the actual directory is because the .glusterfs tree is almost all hardlinks to the files. Hardlinks aren't counted twice by du.
16:06 glusterbot JoeJulian: The operation succeeded.
16:06 gluster-meetb0t JoeJulian: Error: "later" is not a valid command.
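
A small demonstration of the du behaviour JoeJulian describes: hardlinked paths share one inode, and a single du invocation charges it only to the first path it visits (paths are illustrative):

    mkdir -p /tmp/demo/.shadow
    dd if=/dev/zero of=/tmp/demo/file bs=1M count=100
    ln /tmp/demo/file /tmp/demo/.shadow/link
    du -sh /tmp/demo                          # ~100M: the inode is counted once
    du -sh /tmp/demo/.shadow /tmp/demo/file   # first argument gets the ~100M, second shows ~0
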
16:07 * JoeJulian needs coffee...
16:08 jclift @coffee
16:08 gluster-meetb0t jclift: Error: "coffee" is not a valid command.
16:08 jclift That seems like a command ripe for implementation
16:08 JoeJulian yeah
16:09 gluster-meetb0t was kicked by glusterbot: JoeJulian
16:09 gluster-meetb0t joined #gluster
16:10 gluster-meetb0t was kicked by glusterbot: JoeJulian
16:10 JoeJulian Whoever's bot that is needs to change the default trigger or at least silence it.
16:14 LoudNoises joined #gluster
16:20 basic- joined #gluster
16:21 kkeithley joined #gluster
16:21 basic` joined #gluster
16:26 kkeithley joined #gluster
16:36 kkeithley joined #gluster
16:37 kshlm joined #gluster
16:47 ctria joined #gluster
16:48 MrNaviPacho joined #gluster
17:04 Mo__ joined #gluster
17:05 zaitcev joined #gluster
17:06 MrNaviPacho joined #gluster
17:14 pono left #gluster
17:15 glusterbot New news from newglusterbugs: [Bug 1002220] Contains an rpath <http://goo.gl/6bDNzs>
17:27 Technicool joined #gluster
17:31 compbio at the San Francisco gluster meetup yesterday, there was talk about an essential FUSE kernel patch
17:32 compbio but I can't find any information about a FUSE kernel patch through google or the websites from the talk
17:32 compbio does anybody have info on that?
17:33 ctria joined #gluster
17:33 nonsenso dang, there was a sf gluster meetup yesterday and i missed it.
17:33 sforsythe joined #gluster
17:33 hagarth compbio: this thread has some details - http://fuse.996288.n3.nabble.com/PATCH-REPOST-fuse-drop-dentry-on-failed-revalidate-td11546.html
17:33 glusterbot <http://goo.gl/WE5NmK> (at fuse.996288.n3.nabble.com)
17:37 compbio hagarth: thanks!
17:37 compbio does this only affect filesystems that are, in essence, double mounted?
17:38 compbio btw, the SF meetup was great, there's a lot of helpful knowledge that's gained from those
17:39 hagarth compbio: both nfs and fuse seem to have a problem.
17:39 hagarth compbio: thanks, good to know that it was helpful.
17:45 nonsenso compbio: i'll definitely look forward to attending the next one.
17:46 MrNaviPacho joined #gluster
18:03 deepakcs joined #gluster
18:04 deepakcs left #gluster
18:07 dbruhn joined #gluster
18:16 redragon_ joined #gluster
18:17 redragon_ quick question, is there a way to set up restrictions so defined IPs can only mount glusterfs read-only?
18:18 bennyturns joined #gluster
18:29 jporterfield joined #gluster
18:45 JoeJulian redragon_: no
18:45 JoeJulian a2_, avati, hagarth: Could you tell me what additional information would be useful for bug 1001585? What string is hashed to produce the socket filename? I think it's a concatenation of "/var/run${server}${brick_path_with_slash_removed}"; could you confirm or correct?
18:45 glusterbot Bug http://goo.gl/FKY5ZN unspecified, unspecified, ---, kparthas, NEW , glusterd loses connection with the bricks
18:52 hagarth JoeJulian: seems to be the md5sum of "/var/lib/glusterd/vols/<volname>/run/$server-$brick_path"
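
That is easy to cross-check from a shell; a sketch assuming the hash is over the literal string with no trailing newline, with illustrative volume/host/brick names (the socket directory may vary by version):

    printf '%s' "/var/lib/glusterd/vols/gv0/run/server1-/export/sdb1" | md5sum
    ls /var/run/*.socket    # compare against the socket names glusterd created
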
18:56 mmalesa joined #gluster
18:57 compbio ceph vs gluster debate from a few months ago: http://www.youtube.com/watch?v=JfRqpdgoiRQ
18:57 glusterbot Title: [Linux.conf.au 2013] - grand distributed storage debate glusterfs and ceph going head head - YouTube (at www.youtube.com)
18:58 compbio (sorry to relive the past, just found it now!)
19:01 edward1 joined #gluster
19:02 redragon_ JoeJulian, thank you
19:02 redragon_ JoeJulian, is this something being considered at some stage?
19:03 lpabon joined #gluster
19:03 redragon_ meaning restrictions as a whole, and is it possible to enforce IP restrictions if it's an NFS mount?
19:03 JoeJulian redragon_: I haven't heard of it being discussed, but with some of the changes that are being proposed, eventually that should be possible. file a bug report asking for that enhancement.
19:03 glusterbot http://goo.gl/UUuCq
19:04 redragon_ coolio will do
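
For reference, the coarse per-IP controls that did exist at the time grant or deny access outright rather than read-only (volume name and addresses illustrative):

    gluster volume set gv0 auth.allow '192.168.1.*'          # native (FUSE) clients
    gluster volume set gv0 nfs.rpc-auth-allow '192.168.1.*'  # the built-in gluster NFS server
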
19:26 lpabon joined #gluster
19:27 Guest53741 joined #gluster
19:28 jcsp joined #gluster
19:37 B21956 joined #gluster
19:38 SunilVA joined #gluster
19:44 JoeJulian wtf... why do none of the socket hashes match up?
19:46 JoeJulian hmm, ok... maybe my test method is still wrong..
19:49 plarsen joined #gluster
20:05 ninkotech__ joined #gluster
20:17 JoeJulian Dammit, hagarth, where does 4efb008e4e433ff7735a5a76111461d1 come from? :P
20:18 JoeJulian hagarth: What else, besides bricks, should glusterd be trying to communicate with via socket?
20:20 DV joined #gluster
20:28 jporterfield joined #gluster
20:33 dblack joined #gluster
20:46 glusterbot New news from newglusterbugs: [Bug 1001585] glusterd loses connection with the bricks <http://goo.gl/FKY5ZN>
21:21 JoeJulian a2_: Nope, even rebooted each server after the upgrade.
21:23 redragon_ @bug
21:23 glusterbot redragon_: (bug <bug_id> [<bug_ids>]) -- Reports the details of the bugs with the listed ids to this channel. Accepts bug aliases as well as numeric ids. Your list can be separated by spaces, commas, and the word "and" if you want.
21:23 JoeJulian You looking for "file a bug"?
21:23 glusterbot http://goo.gl/UUuCq
21:23 redragon_ thanks
21:28 redragon_ JoeJulian, I submitted the feature request, finally found a few minutes in between tasks here
21:30 jporterfield joined #gluster
21:46 glusterbot New news from newglusterbugs: [Bug 1002313] request for ip based access control <http://goo.gl/6gmuhk>
21:51 JoeJulian a2_: Even if I had left the old glusterfsd running after upgrading, that hash that it's looking for should still match something.
21:56 daMaestro joined #gluster
22:12 jporterfield joined #gluster
22:16 glusterbot New news from newglusterbugs: [Bug 1002322] gluster volume rebalance displays wrong nodes <http://goo.gl/ZZroAT>
22:16 mmalesa_ joined #gluster
22:17 tryggvil joined #gluster
22:33 twx_ joined #gluster
22:33 hflai joined #gluster
22:35 Guest53741 joined #gluster
22:36 asias joined #gluster
22:36 _br_ joined #gluster
22:37 awheele__ joined #gluster
23:16 glusterbot New news from newglusterbugs: [Bug 986775] file snapshotting support <http://goo.gl/ozgmO>
23:26 aliguori joined #gluster
23:37 jporterfield joined #gluster
23:38 chirino_m joined #gluster
23:39 plarsen joined #gluster
23:53 StarBeast joined #gluster
