
IRC log for #gluster, 2013-09-06


All times shown according to UTC.

Time Nick Message
00:00 _pol joined #gluster
00:02 StarBeast joined #gluster
00:16 vpshastry joined #gluster
00:49 jporterfield joined #gluster
01:02 jporterfield joined #gluster
01:13 StarBeast joined #gluster
01:14 mmalesa joined #gluster
01:30 masterzen joined #gluster
01:34 kevein joined #gluster
01:35 awheeler joined #gluster
02:07 andrewklau joined #gluster
02:08 andrewklau What's the best way to do a self heal on a reinstalled gluster node? The cluster was 2 node replication, I tried gluster volume heal test-volume but that was unsuccessful
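For a node that was wiped and reinstalled, the plain heal command is usually not enough; a full heal, which crawls all files rather than only those flagged dirty, is typically what's needed. A sketch, assuming a 3.3+ CLI:

```shell
# Force a full self-heal crawl of the volume; run this from a good node.
gluster volume heal test-volume full

# Watch progress / remaining unhealed entries:
gluster volume heal test-volume info
```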
02:10 RameshN joined #gluster
02:20 johnbot11 joined #gluster
02:28 aravindavk joined #gluster
02:33 lalatenduM joined #gluster
02:34 ajha joined #gluster
02:36 andrewklau left #gluster
02:58 lpabon joined #gluster
03:00 saurabh joined #gluster
03:02 setuid joined #gluster
03:02 setuid Anyone know where the Gluster Virtual Appliance went? The link on this page leads to a 404 on Red Hat's site: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Downloading_and_Installing_Gluster_Virtual_Storage_Appliance_for_Xen
03:02 glusterbot <http://goo.gl/sABT4p> (at gluster.org)
03:08 kshlm joined #gluster
03:11 bulde joined #gluster
03:15 shubhendu joined #gluster
03:19 purpleidea setuid: afaik that has been gone for ages now...
03:19 purpleidea setuid: can i offer you easy deployment with a puppet module instead ?
03:30 setuid purpleidea, Anything that gets me a working Gluster cluster in my ESXi instance, yes.
03:40 bharata-rao joined #gluster
03:53 itisravi joined #gluster
03:55 purpleidea joined #gluster
03:56 hagarth joined #gluster
04:00 bulde joined #gluster
04:02 mohankumar joined #gluster
04:11 sgowda joined #gluster
04:14 davinder2 joined #gluster
04:17 johnmark joined #gluster
04:18 arusso joined #gluster
04:19 Humble joined #gluster
04:24 meghanam joined #gluster
04:24 meghanam_ joined #gluster
04:24 dusmant joined #gluster
04:28 ababu joined #gluster
04:30 harish joined #gluster
04:31 jporterfield joined #gluster
04:31 psharma joined #gluster
04:33 rjoseph joined #gluster
04:41 ppai joined #gluster
04:44 jporterfield joined #gluster
04:45 lala_ joined #gluster
04:45 awheeler joined #gluster
04:46 ndarshan joined #gluster
04:53 kanagaraj joined #gluster
04:55 vpshastry joined #gluster
04:55 badone joined #gluster
04:58 mohankumar sgowda: ping
05:00 anands joined #gluster
05:04 davinder2 Hi All
05:04 davinder2 I am unable to detach server
05:05 davinder2 getting below error "peer detach: failed: Brick(s) with the peer gfscluster1 exist in cluster"
05:06 davinder2 if I am trying to remove brick then also it is giving error
05:06 davinder2 gluster volume remove-brick gluster_mnt_hmc  gfsgluster1:/gluster_mnt_hmc commit
05:06 davinder2 Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
05:06 davinder2 volume remove-brick commit: failed: Removing bricks from replicate configuration is not allowed without red
05:07 davinder2 can anyone help?
05:12 sgowda joined #gluster
05:14 shylesh joined #gluster
05:16 tjstansell i think to remove a brick from a replicate volume you have to also change the replica count.
05:16 spandit joined #gluster
05:16 tjstansell volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK>
05:17 tjstansell if that's what you're really trying to do.
05:17 hagarth joined #gluster
05:18 tjstansell that's about all i can contribute, though, as i'm heading out.
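Putting tjstansell's advice together with davinder2's error messages, the shrink-then-detach sequence might look like the following; the exact syntax can differ slightly between gluster releases, so treat it as a sketch:

```shell
# Reduce the replica count from 2 to 1 while removing the failed
# server's brick; a plain remove-brick on a replicate volume fails with
# "Removing bricks from replicate configuration is not allowed ...".
gluster volume remove-brick gluster_mnt_hmc replica 1 \
    gfsgluster1:/gluster_mnt_hmc force

# Once no bricks remain on the peer, the detach should succeed.
gluster peer detach gfscluster1
```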
05:21 bala joined #gluster
05:41 bulde joined #gluster
05:48 davinder2 if one server failed what are steps to detach that server
05:48 davinder2 I am unable to detach
05:52 raghu joined #gluster
05:54 CheRi joined #gluster
05:58 anands joined #gluster
05:59 RameshN joined #gluster
06:07 mohankumar joined #gluster
06:13 rgustafs joined #gluster
06:20 jporterfield joined #gluster
06:21 tziOm joined #gluster
06:23 jtux joined #gluster
06:25 dusmant joined #gluster
06:33 sgowda joined #gluster
06:35 hagarth joined #gluster
06:38 davinder joined #gluster
06:39 bharata-rao how to get peer probe succeed w/o turning iptables completely off ? Is there a recommended rule documented that I have to set ?
06:39 samppah_ @ports
06:39 glusterbot samppah_: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
06:40 samppah_ bharata-rao: port 24007 i guess?
06:43 bharata-rao samppah_, I would assume so, but how to check it is indeed 24007 ?
06:43 bharata-rao samppah_, and more importantly how to get peer probe working w/o turning iptables completely off
06:46 glusterbot New news from resolvedglusterbugs: [Bug 921437] [QEMU/KVM-RHS] EXT4 filesystem corruption in application VMs, after rebalance process <http://goo.gl/ZkLbVj>
06:47 deepakcs joined #gluster
06:50 StarBeast joined #gluster
06:50 hagarth bharata-rao: for peer probe to succeed, you would need connections to 24007 to be allowed
06:53 bharata-rao hagarth, beginning to realize that, thanks
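Collecting glusterbot's @ports factoid into actual firewall rules, something like the following should let peer probe and brick traffic through on a 3.4 install without turning iptables off entirely. The brick port range here is an assumption sized for up to 100 bricks; use 24009 and up instead for pre-3.4:

```shell
# glusterd management (24007) and rdma management (24008)
iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
# brick (glusterfsd) ports: 49152 and up on 3.4
iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT
# gluster NFS (38465-38467) and NLM (38468)
iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT
# rpcbind/portmap and NFS
iptables -A INPUT -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -p udp --dport 111 -j ACCEPT
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
```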
07:08 jtux joined #gluster
07:09 ctria joined #gluster
07:14 vshankar joined #gluster
07:15 samppah_ @planning
07:15 haritsu joined #gluster
07:16 samppah_ glusterbot halp!
07:16 glusterbot samppah_: I do not know about 'halp!', but I do know about these similar topics: 'hack'
07:18 samppah_ @learn
07:18 glusterbot samppah_: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
07:18 mooperd_ joined #gluster
07:21 samppah_ learn planning as GlusterFS 3.4.1 backport candidates http://goo.gl/vYhSkT &  GlusterFS 3.5 planning http://goo.gl/8cr8iW
07:21 glusterbot Title: Release 341 backport candidates - GlusterDocumentation (at goo.gl)
07:22 samppah_ @planning
07:22 samppah_ @learn planning as GlusterFS 3.4.1 backport candidates http://goo.gl/vYhSkT &  GlusterFS 3.5 planning http://goo.gl/8cr8iW
07:22 glusterbot samppah_: The operation succeeded.
07:22 samppah_ of course
07:23 nueces joined #gluster
07:25 eseyman joined #gluster
07:30 ricky-ticky joined #gluster
07:33 glusterbot New news from newglusterbugs: [Bug 998352] [RHEV-RHS] vms goes into paused state after starting rebalance <http://goo.gl/mPCKdv>
07:36 DV__ joined #gluster
07:45 anands joined #gluster
07:45 vimal joined #gluster
07:49 dusmant joined #gluster
07:49 jcsp joined #gluster
07:53 hagarth joined #gluster
07:59 jporterfield joined #gluster
08:02 kPb_in joined #gluster
08:07 jcsp joined #gluster
08:17 jporterfield joined #gluster
08:33 meghanam joined #gluster
08:34 ababu joined #gluster
08:34 haritsu joined #gluster
08:35 bulde joined #gluster
08:37 rastar joined #gluster
08:38 bulde1 joined #gluster
08:41 satheesh1 joined #gluster
08:50 meghanam_ joined #gluster
08:52 sgowda joined #gluster
08:52 dusmant joined #gluster
08:56 Alex___ joined #gluster
08:57 Alex___ Hello @all!
08:58 Alex___ I need some information on making backup from a GlusterFS Volume. Can anyone help me with best practice or sth?
09:03 glusterbot New news from newglusterbugs: [Bug 986429] Backupvolfile server option should work internal to GlusterFS framework <http://goo.gl/xSA6n>
09:04 shruti joined #gluster
09:07 X3NQ joined #gluster
09:08 bala joined #gluster
09:11 manik joined #gluster
09:11 manik joined #gluster
09:20 Alpinist joined #gluster
09:24 mohankumar joined #gluster
09:27 mmalesa joined #gluster
09:28 msvbhat window 3
09:29 msvbhat Ignore that. forgot the / in the irssi command
09:30 jporterfield joined #gluster
09:32 jtux joined #gluster
09:48 edward1 joined #gluster
09:58 Norky joined #gluster
09:59 tryggvil joined #gluster
10:04 bulde joined #gluster
10:04 sgowda joined #gluster
10:07 StarBeast joined #gluster
10:24 jporterfield joined #gluster
10:26 satheesh joined #gluster
10:33 dusmant joined #gluster
10:41 haritsu joined #gluster
10:42 jporterfield joined #gluster
10:49 satheesh1 joined #gluster
10:52 andreask joined #gluster
10:53 ahomolya__ joined #gluster
10:57 davinder joined #gluster
10:58 bala joined #gluster
10:59 bala joined #gluster
11:01 rwheeler joined #gluster
11:03 glusterbot New news from newglusterbugs: [Bug 1005164] Add code for syncenv_destroy() and clean up syncenv_new() <http://goo.gl/HkJjya>
11:12 ppai joined #gluster
11:25 anands joined #gluster
11:29 haritsu joined #gluster
11:29 sgowda joined #gluster
11:34 glusterbot New news from newglusterbugs: [Bug 1002907] changelog binary parser not working <http://goo.gl/UB57mL> || [Bug 1002940] change in changelog-encoding <http://goo.gl/dmQAcW>
11:37 bala joined #gluster
11:44 mbukatov joined #gluster
11:48 sgowda joined #gluster
11:52 vpshastry joined #gluster
11:54 anands joined #gluster
12:01 kkeithley hagarth_: ping
12:03 andreask joined #gluster
12:05 anands joined #gluster
12:06 anands1 joined #gluster
12:10 setuid joined #gluster
12:10 setuid purpleidea, I missed any responses you might have had last night after my last comment, re: puppet module to deploy Gluster
12:13 haritsu joined #gluster
12:18 RedShift joined #gluster
12:21 kkeithley a2,avati: ping
12:23 vpshastry left #gluster
12:33 purpleidea setuid: i'm back
12:33 purpleidea setuid: if you're familiar with puppet, you can use my module to "completely" deploy a gluster setup for testing
12:33 purpleidea i'm actually hacking on the code right now to make this even easier!
12:33 purpleidea setuid: https://github.com/purpleidea/puppet-gluster
12:33 glusterbot Title: purpleidea/puppet-gluster · GitHub (at github.com)
12:48 setuid purpleidea, What are the base OS requirements for your module?
12:51 vpshastry joined #gluster
12:52 robo joined #gluster
12:53 vpshastry left #gluster
12:55 B21956 joined #gluster
12:57 awheeler joined #gluster
12:58 bulde joined #gluster
12:58 awheele__ joined #gluster
12:59 hagarth joined #gluster
13:00 awheeler joined #gluster
13:00 purpleidea setuid: centos 6+
13:01 jclift joined #gluster
13:02 awheeler joined #gluster
13:02 ProT-0-TypE joined #gluster
13:05 mmalesa joined #gluster
13:06 haritsu joined #gluster
13:06 DV__ joined #gluster
13:08 rcheleguini joined #gluster
13:18 lpabon joined #gluster
13:19 jcsp joined #gluster
13:24 stickyboy What's backwards compatibility like between client / server?  I have mixed downtime, and want to know if I can update one of my clients to 3.3.2 and leave the server at 3.3.1.
13:31 jdarcy joined #gluster
13:40 Norky joined #gluster
13:41 bulde joined #gluster
13:46 hagarth joined #gluster
13:48 bugs_ joined #gluster
13:55 kPb_in_ joined #gluster
13:55 kaptk2 joined #gluster
13:56 plarsen joined #gluster
14:01 _setuid joined #gluster
14:01 _setuid joined #gluster
14:01 ajha joined #gluster
14:03 mattf joined #gluster
14:05 anands joined #gluster
14:11 bennyturns joined #gluster
14:15 mattf joined #gluster
14:15 purpleidea i wrote a very small patch if anyone can merge it: https://bugzilla.redhat.com/show_bug.cgi?id=1005257
14:15 glusterbot <http://goo.gl/70zwPb> (at bugzilla.redhat.com)
14:16 glusterbot Bug 1005257: unspecified, unspecified, ---, kaushal, NEW , [PATCH] Small typo fixes
14:17 hagarth purpleidea: would it be possible for you to submit the patch on gerrit?
14:19 purpleidea hagarth: okay (purpleidea.accounts++)
14:20 purpleidea hagarth: actually, i don't have an openid provider that i want to use :P
14:20 hagarth purpleidea: not even gmail?
14:21 purpleidea openid == scary magic?
14:23 purpleidea hagarth: i'll add some more stuff to the patch and try to submit it another day. it's not important stuff, just string fixes atm.
14:23 hagarth purpleidea: sure, but will be good to have that in.
14:23 jcsp joined #gluster
14:32 robo joined #gluster
14:33 sprachgenerator joined #gluster
14:35 glusterbot New news from newglusterbugs: [Bug 1005257] [PATCH] Small typo fixes <http://goo.gl/70zwPb>
14:35 harish joined #gluster
14:42 bala joined #gluster
14:43 haritsu joined #gluster
14:43 jag3773 joined #gluster
14:54 zerick joined #gluster
15:04 je23 joined #gluster
15:07 bulde joined #gluster
15:11 robo joined #gluster
15:11 tryggvil joined #gluster
15:11 je23 Hi, Is there a known memory leak issue with gluster 3.3.1 ? I have tried the /proc/sys/vm/drop_caches technique but it made no difference
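The drop_caches technique je23 mentions is the kernel knob below. It only releases kernel page, dentry, and inode caches, so it is expected to make no difference when the memory is genuinely leaked inside a glusterfs process:

```shell
# Flush dirty pages first, then drop the page cache plus dentries/inodes.
sync
echo 3 > /proc/sys/vm/drop_caches
```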
15:19 harish joined #gluster
15:27 nightwalk joined #gluster
15:27 davinder joined #gluster
15:28 dneary joined #gluster
15:31 gkleiman joined #gluster
15:36 johnbot11 joined #gluster
15:36 glusterbot New news from newglusterbugs: [Bug 892808] [FEAT] Bring subdirectory mount option with native client <http://goo.gl/wpcU0>
15:38 jporterfield joined #gluster
15:39 kkeithley @repos
15:39 glusterbot kkeithley: See @yum, @ppa or @git repo
15:39 kkeithley @yum
15:39 glusterbot kkeithley: The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
15:49 haritsu joined #gluster
15:53 LoudNoises joined #gluster
15:55 mooperd_ left #gluster
16:00 haritsu joined #gluster
16:07 theekgb joined #gluster
16:07 theekgb hey hey
16:09 RobertLaptop joined #gluster
16:20 RobertLaptop joined #gluster
16:22 jporterfield joined #gluster
16:22 Mo_ joined #gluster
16:23 Mo__ joined #gluster
16:24 johnbot11 joined #gluster
16:24 mooperd_ joined #gluster
16:30 jporterfield joined #gluster
16:33 shylesh joined #gluster
16:37 bala joined #gluster
16:41 _pol joined #gluster
16:41 jcsp joined #gluster
16:48 zerick joined #gluster
16:48 jporterfield joined #gluster
16:52 sank joined #gluster
16:53 sank rdma.c:1079:gf_rdma_cm_event_handler
16:53 sank can anyone please help with the error  W [rdma.c:1079:gf_rdma_cm_event_handler] 0-gbits-client-0: cma event RDMA_CM_EVENT_ADDR_ERROR, error -110 (me: peer:)
16:54 \_pol joined #gluster
16:58 lalatenduM joined #gluster
16:58 Technicool joined #gluster
16:59 jcsp left #gluster
16:59 kaptk2 I am wanting to setup a glusterfs 3.4 storage pool within virtual machine manager 0.10.0. Is that possible?
17:06 aliguori joined #gluster
17:09 jcsp joined #gluster
17:10 mmalesa joined #gluster
17:32 ricky-ticky joined #gluster
17:36 diegows_ joined #gluster
17:37 glusterbot New news from newglusterbugs: [Bug 1005344] duplicate entries in volume property <http://goo.gl/Q53vF1>
17:39 nueces joined #gluster
17:40 rjoseph joined #gluster
17:41 bulde joined #gluster
17:43 theekgb curiosity...how difficult is setting up glusterd on a NAS unit?
17:43 theekgb debian based
17:43 jclift x86?
17:43 theekgb x64
17:43 jclift Shouldn't be hard.  There are Debian and Ubuntu packages for Gluster around.
17:43 theekgb lol wait yea that would be x86
17:44 jclift With luck, they'll "just work".  Else, you might just need to recompile them.
17:44 jclift semiosis: ^^^ Thoughts?
17:44 sank anyone has idea to run 3.4 over rdma ?
17:44 theekgb well i tried the dpkg method with the .deb files and i have the read only file system problem
17:44 sank I need working copy of server side and client side volume files.
17:44 theekgb which I dont think is a glusterd issue obviously
17:44 jclift sank: 3.4.0 isn't a good idea for RDMA
17:44 theekgb but I am not adept at working around read only file systems
17:45 jclift 3.4.1 *might* be better, but I haven't yet tried any of the pre-release stuff with RDMA
17:45 sank so it is still in the making ?
17:45 jclift sank: Yeah.
17:45 sank so only alternate way is to use tcp for IPoIB with 3.4 then ?
17:45 jclift theekgb: Might be better to ask on the gluster-users mailing list. :)
17:46 jclift sank: Yeah.  It's not completely terrible using IPoIB though.
17:46 sank thanks for quick response. I will go with that.
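For the IPoIB fallback jclift suggests, the volume is created with transport tcp but using hostnames that resolve to the servers' IPoIB interfaces. The hostnames and brick paths below are illustrative, not from the log; only the volume name gbits is taken from sank's error message:

```shell
# server1-ib / server2-ib are assumed to resolve to IPoIB addresses;
# transport tcp then carries gluster traffic over the InfiniBand fabric.
gluster volume create gbits replica 2 transport tcp \
    server1-ib:/bricks/gbits server2-ib:/bricks/gbits
gluster volume start gbits
```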
17:46 theekgb @jclift cool ty, i got one of those px6-300d's from iomega/emc/lenovo, and their nfs is killing me on ovirt
17:46 theekgb so i thought I would see if I could cheat with gluster ;)
17:47 theekgb pretty full os underneath, but pkg installation i havent figured out
17:47 jclift theekgb: Heh, well "good luck". :D
17:47 jclift theekgb: If/when you get it working, would you be up for writing an instruction guide onto the wiki?
17:48 theekgb loll def
17:48 jclift theekgb: Seriously, that kind of thing helps us out.
17:48 theekgb im serious too, i rarely ask for help, i do this crap all the time
17:49 theekgb and having a reason to do it to help others is an easy answer for me
17:49 jclift :)
17:49 theekgb its getting answers from a lot of difficult geeks like me thats hard lol
17:49 jclift Heh
17:49 sank jclift : I noticed that infiniband portion of gluster was not stable for long time. Please let me know if you guys need testing. I have resources to test that.
17:49 theekgb we are a difficult bunch we are
17:50 jclift sank: We do need testing, and bug filing for the RDMA stuff.
17:50 theekgb i got resources out the ying yang, its time I lack...i swear im the only one within miles that cares about testing lol
17:50 jclift sank: If you've got ****non-production**** stuff you can try RDMA out on, then file bugs whenever it screws up (it probably will), *please* do so. :D
17:52 sank I don't find good documentation on using/configuring gluster with the latest version - resulting in wasted time on old docs and digging through code to resolve issues with obsolete tricks/configurations.
17:53 sank devs should keep docs updated for testers. I look forward to exploring the test framework. I hope it is not outdated.
17:54 y4m4 left #gluster
17:54 JoeJulian sank: what tricks?
17:55 cicero those are for kids
17:55 jclift sank: The test framework is kind of interesting.  We have one that's kept up to date in the code base, but it's not truly "multi-server" in the modern sense of it.
17:56 jclift sank: And we have a multi-node one on the list of features for this coming release
18:01 rcheleguini joined #gluster
18:04 sank There are no updated, detailed docs for gluster. If they exist, it is difficult to verify they are the latest working versions. Then users end up using trial and error to configure a real life setup.
18:05 sank for rdma support, there is no doc about configuring server and client. I came to know that 3.4.0 is not stable for rdma. Thanks to IRC !
18:06 sank I am eager to wait for the day when rdma is stable in gluster
18:06 sank I am eager waiting for the day when rdma is stable in gluster
18:06 jclift Same here
18:09 robo joined #gluster
18:11 mmalesa joined #gluster
18:15 dbruhn jclift, I have my RDMA test stuff setup, did anyone ever figure out what needed to be tested
18:16 dbruhn and sank 3.3.1 is functional for RDMA
18:16 jclift dbruhn: Not yet.  Kind of turned into a subproject: http://www.gluster.org/community/documentation/index.php/Multi-Node_Test_Suite
18:16 glusterbot <http://goo.gl/qI035I> (at www.gluster.org)
18:19 dbruhn neato
18:22 avati kkeithley: pong?
18:24 zaitcev joined #gluster
18:30 dbruhn left #gluster
18:35 JoeJulian sank: We had someone that volunteered to update the documentation to markdown so it could easily be collaborated on. He worked on it for about a week and that seems to be dead now. Want to take that over?
18:36 sank for sure.
18:36 JoeJulian Let me find you the link to his github fork...
18:37 robo joined #gluster
18:37 JoeJulian There's the email (and thread) that contain the link: http://www.mail-archive.com/gluster-devel@nongnu.org/msg09870.html
18:37 glusterbot <http://goo.gl/LjXO4z> (at www.mail-archive.com)
18:48 stickyboy joined #gluster
18:54 mbukatov joined #gluster
18:57 mbukatov joined #gluster
19:00 daMaestro joined #gluster
19:28 jskinner_ joined #gluster
19:40 \_pol joined #gluster
19:50 daMaestro joined #gluster
20:00 niximor joined #gluster
20:04 rcheleguini joined #gluster
20:06 andreask joined #gluster
20:06 masterzen joined #gluster
20:15 haritsu joined #gluster
20:18 haritsu_ joined #gluster
20:23 haritsu joined #gluster
20:28 nightwalk joined #gluster
20:39 daMaestro joined #gluster
21:04 tryggvil joined #gluster
21:12 It_Burns joined #gluster
21:13 jporterfield joined #gluster
21:44 ricky-ticky joined #gluster
21:59 jporterfield joined #gluster
22:20 robo joined #gluster
22:23 theron joined #gluster
22:32 nightwalk joined #gluster
22:40 _setuid joined #gluster
22:46 nueces joined #gluster
22:51 jporterfield joined #gluster
23:43 jporterfield joined #gluster
