IRC log for #gluster, 2013-01-15


All times shown according to UTC.

Time Nick Message
00:01 JoeJulian johnmark: Well that's weird. I was working on converting the admin guide to asciidoc when I discovered that the documentation is copyright Red Hat with no documentation license offered. I suppose I can keep editing this, but it looks like that unless I just write my own I can't re-publish this.
00:07 gprs1234 semiosis & haidz: thanks for all the tips!
00:07 smellis joined #gluster
00:17 duffrecords semiosis: thanks for the reassurance about self-healing earlier.  the volumes appear to be rebuilding with no problems
00:37 semiosis duffrecords: awesome
00:37 semiosis gprs1234: yw
00:37 * semiosis afk
00:41 plarsen joined #gluster
01:00 sjoeboo_ joined #gluster
01:02 rwheeler_ joined #gluster
01:02 mjrosenb_ joined #gluster
01:04 jmara_ joined #gluster
01:04 JoeJulian joined #gluster
01:07 JoeJulian joined #gluster
01:09 stopbit joined #gluster
01:12 plarsen joined #gluster
01:17 kevein joined #gluster
01:31 yinyin joined #gluster
01:31 nik_ joined #gluster
02:17 H__ joined #gluster
02:27 hagarth joined #gluster
02:45 FyreFoX semiosis: are you around?
02:45 semiosis sorta
02:45 semiosis whats up?
02:46 FyreFoX hey quick one what changes are in the precise ppa precise5 ?
02:46 FyreFoX is there a changelog I can review?
02:46 semiosis added a replaces relationship on the old gluster.org debian packages
02:47 FyreFoX dont spose you put the 887098 patch in ? :)
02:47 semiosis so if you have the old gluster.org package installed mine will gracefully replace it instead of installing alongside it
02:47 FyreFoX I built my own debs using your instructions, but I named mine 3.3.1-ubuntu1~precise5 .. sadly now that will be replaced with your new one :P
02:47 semiosis provided you stop/kill all the procs first as noted in the ppa description (UPGRADING...)
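For anyone curious what such a "replaces relationship" looks like in practice, here is a minimal debian/control sketch; the old package name and the version bound are assumptions for illustration, not copied from semiosis's actual packaging:
    # debian/control excerpt -- illustrative only, names and versions assumed
    Package: glusterfs-server
    Replaces: glusterfs (<< 3.3.1)
    Conflicts: glusterfs (<< 3.3.1)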
02:48 semiosis aw
02:49 semiosis well you can hold the package version using apt/aptitude
02:49 dustint joined #gluster
02:50 FyreFoX yea I was looking at that just now, not sure it will work given they have the same name
02:50 semiosis oh hm, same name?  i'd think it would leave things as is then
02:50 semiosis is it trying to reinstall?
02:50 FyreFoX na wants to upgrade/reinstall using the one from repo
02:50 semiosis weird
02:51 semiosis go to /etc/apt/sources.list (or maybe a file in sources.list.d) find the line for my ppa, comment it out, run apt-get update
02:51 semiosis that will make your system unaware of my ppa
02:51 semiosis easy fix for now
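A minimal sketch of the two workarounds semiosis suggests, assuming the package is named glusterfs-server and the PPA entry lives in a file under /etc/apt/sources.list.d/ (the filename below is a placeholder):
    # hold the locally built package so apt keeps the installed version
    echo "glusterfs-server hold" | sudo dpkg --set-selections
    # ...or make the system unaware of the PPA: comment out its deb line and refresh
    sudo sed -i 's/^deb /#deb /' /etc/apt/sources.list.d/PPA-FILE.list
    sudo apt-get update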
02:52 dustint joined #gluster
02:52 FyreFoX you could just include the tiny patch  its simply 'if (group_ce)' on the posix_acl.c
02:52 FyreFoX :)
02:57 JoeJulian bug 887098
02:57 glusterbot Bug http://goo.gl/QjeMP urgent, high, ---, vshastry, POST , gluster mount crashes
02:58 raven-np joined #gluster
03:00 JoeJulian FyreFoX: That bug wasn't backported. You might want to clone that bug and ask for a backport to 3.3.
03:01 JoeJulian Crap... now I'm wondering what other bugs aren't being backported.
03:03 FyreFoX JoeJulian: how do I do that?
03:03 semiosis JoeJulian: think there's going to be a 3.3.2?
03:03 FyreFoX do I need to clone it? I created the bug report to start with..
03:04 JoeJulian semiosis: I sure hope so. 3.4's got a lot of untested new features.
03:08 yinyin joined #gluster
03:14 JoeJulian FyreFoX: I don't know. That's the only way I know that you can get the status out of POST.
03:16 FyreFoX oic
03:16 FyreFoX will try. thanks
03:32 duffrecords left #gluster
03:47 semiosis @factoids rank
03:47 glusterbot semiosis: #1 options (223), #2 pasteinfo (173), #3 glossary (146), #4 extended attributes (131), #5 extended attributes (120), #6 repair (114), #7 hostnames (104), #8 node (101), #9 Joe's blog (81), #10 meh (73), #11 yum repo (57), #12 replace (56), #13 ext4 (51), #14 stripe (48), #15 targeted self heal (44), #16 ports (43), #17 semiosis tutorial (40), #18 volunteer (39), #19 split-brain (38),
03:47 glusterbot semiosis: #20 php (37)
04:07 overclk joined #gluster
04:13 dhsmith joined #gluster
04:13 Humble joined #gluster
04:22 bala1 joined #gluster
04:32 glusterbot New news from newglusterbugs: [Bug 872601] split-brain caused by %preun% script if server rpm is upgraded during self-heal <http://goo.gl/sZgPw>
04:38 sripathi joined #gluster
04:55 deepakcs joined #gluster
05:01 shylesh joined #gluster
05:03 lala joined #gluster
05:06 hagarth joined #gluster
05:09 vpshastry joined #gluster
05:12 bulde joined #gluster
05:21 rastar joined #gluster
05:29 yinyin joined #gluster
05:30 raghu joined #gluster
05:37 smellis ok gentlemen, I am getting really great performance on my replicate volume after turning on jumbo frames
05:37 smellis it's actually useable with multiple windows vms on kvm
05:37 smellis anyone see the same results (massively improved performance with jumbo frames) ?
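For anyone wanting to try the same thing, a minimal sketch of enabling jumbo frames on Linux; the interface name is a placeholder, the change does not persist across reboots, and every NIC and switch in the path must also support an MTU of 9000:
    ip link set dev eth0 mtu 9000             # raise the MTU on each gluster node
    ping -M do -s 8972 other-gluster-node     # verify end to end: 9000 minus 28 bytes of IP/ICMP headers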
05:44 sripathi joined #gluster
05:48 sripathi joined #gluster
05:53 yinyin joined #gluster
05:53 hagarth joined #gluster
05:56 greylurk joined #gluster
06:00 rastar1 joined #gluster
06:18 lala joined #gluster
06:23 emrah_ joined #gluster
06:30 sripathi joined #gluster
06:44 ramkrsna joined #gluster
06:44 ramkrsna joined #gluster
06:49 mnaser joined #gluster
06:59 dhsmith joined #gluster
06:59 vimal joined #gluster
07:00 cyr_ joined #gluster
07:06 mtanner joined #gluster
07:19 rastar joined #gluster
07:19 rgustafs joined #gluster
07:19 jtux joined #gluster
07:41 bauruine joined #gluster
07:44 Nevan joined #gluster
07:45 guigui1 joined #gluster
07:47 ngoswami joined #gluster
07:56 jtux joined #gluster
07:57 Azrael808 joined #gluster
07:59 rgustafs joined #gluster
08:09 andreask joined #gluster
08:18 puebele joined #gluster
08:30 sripathi joined #gluster
08:32 jtux joined #gluster
08:33 glusterbot New news from newglusterbugs: [Bug 878652] Enchancement: Replication Information in gluster volume info <http://goo.gl/dWQnM>
08:33 sripathi joined #gluster
08:34 deepakcs joined #gluster
08:35 sripathi1 joined #gluster
08:46 duerF joined #gluster
08:47 hagarth joined #gluster
08:48 tryggvil joined #gluster
09:00 bulde1 joined #gluster
09:04 tryggvil joined #gluster
09:10 dobber joined #gluster
09:28 m0zes joined #gluster
09:34 tjikkun_work joined #gluster
09:35 gbrand_ joined #gluster
09:38 gbrand_ joined #gluster
09:44 DaveS_ joined #gluster
09:58 Norky joined #gluster
10:01 alphacc joined #gluster
10:02 sripathi joined #gluster
10:03 sripathi1 joined #gluster
10:06 Azrael808 joined #gluster
10:09 hagarth joined #gluster
10:12 ramkrsna joined #gluster
10:15 jjnash joined #gluster
10:15 nightwalk joined #gluster
10:31 glusterbot New news from resolvedglusterbugs: [Bug 764888] Avoid logging Socket read failures in glusterd <http://goo.gl/7Qyrl>
10:43 shireesh joined #gluster
10:44 x4rlos joined #gluster
10:50 bulde joined #gluster
11:02 sripathi joined #gluster
11:04 shireesh joined #gluster
11:08 dbruhn joined #gluster
11:24 sripathi joined #gluster
11:27 bala joined #gluster
11:27 tjikkun joined #gluster
11:27 tjikkun joined #gluster
11:28 andreask joined #gluster
11:50 shireesh joined #gluster
11:54 smellis joined #gluster
11:56 ewilson32 joined #gluster
11:57 raghu joined #gluster
11:58 manik joined #gluster
11:59 ewilson32 joined #gluster
12:00 ewilson32 left #gluster
12:09 Norky joined #gluster
12:15 cyr_ joined #gluster
12:15 raghu joined #gluster
12:17 rastar1 joined #gluster
12:29 sripathi joined #gluster
12:33 raghu joined #gluster
12:39 kkeithley1 joined #gluster
12:45 kkeithley1 left #gluster
12:46 kkeithley1 joined #gluster
12:49 dustint joined #gluster
12:52 raven-np joined #gluster
12:57 bauruine joined #gluster
12:57 andrei__ joined #gluster
12:58 andrei__ hello! I was wondering if someone could help me with setting up the glusterfs storage. I need to start with a single glusterfs server and a few months later add a second replica server to mirror all the data
12:59 andrei__ i've tried doing a small PoC with a virtual machines
12:59 andrei__ and I can't figure out how to do the first glusterfs server
13:00 andrei__ when I do gluster create test-name server1:/brick i get a Distributed glusterfs volume
13:00 andrei__ i can't seems to add another server brick to it and make it replicate
13:01 andrei__ when I add another brick it is still a Distributed volume and not a replicated one which I want
13:01 andrei__ is there a way to convert a Distributed volume to a replicated one?
13:01 glusterbot New news from resolvedglusterbugs: [Bug 819444] for few directories, ls command is giving 'Invalid argument' when one of the server(brick, distributed volume) is down <http://goo.gl/ws6KR>
13:03 kkeithley1 try creating the first one with '... replica 1...' i.e. `gluster volume create test-name replica 1 server1:/brick`
13:06 shireesh joined #gluster
13:06 andrei__ kkeithley1: that doesn't work. I've tried it. It tells me: "replica count should be greater than 1"
13:06 twx_ gluster> volume add-brick
13:06 twx_ Usage: volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK> ...
13:07 twx_ possible to define there maybe?
13:08 kkeithley1 yes, that was my next suggestion
08:08 x4rlos I think kkeithley1: is bang on here.
13:09 kkeithley1 `gluster volume add-brick test-name replica 2 server2:/brick`
13:10 x4rlos i used: gluster volume remove-brick database-archive replica 1 client2:/mnt/database-archive
13:10 x4rlos recently.
13:10 twx_ x4rlos: for reducing the replica count ?
13:11 x4rlos yeah, so just s/remove-brick/add-brick
13:11 twx_ cool , nice to know that works
13:12 x4rlos andrei__: What version are you using btw?
13:12 andrei__ I am using 3.3.1
13:12 andrei__ i will try the suggested now
13:12 andrei__ thanks
13:14 x4rlos twx_: Yeah, was dropping a brick from another machine, and then re-adding for testing.
13:15 x4rlos andrei__: As your not actually creating a replication - have you tried omitting the replica 1 from the command?
13:16 alex88 joined #gluster
13:16 x4rlos Not sure the side effects of this. I assume you could then add bricks later by specifying replica 2
13:16 ninkotech_ joined #gluster
13:16 alex88 hi guys, I'm switching from 3.3 deb to ppa repository debs
13:17 alex88 and It said to stop the gluster service
13:17 alex88 and install then the ppa and glusterfs-server
13:17 andrei__ x4rlos: when I create a single server volume it automatically creates the volume as Distributed
13:17 alex88 I did that but it tries to overwrite /etc/glusterfs/glusterd.vol
13:18 andrei__ so, hopefully I can change it to a Replicated volume when I add the second server and specify replica 2
13:18 alex88 is it safe to uninstall the deb and reinstall with apt-get? or i'll lose the volumes?
13:22 x4rlos andrei__: Hmmm. Well, first things first. "gluster create test-name server1:/brick" which you said is missing the volume argument.
13:23 andrei__ yes, just done that and i can successfully mount the volume
13:23 andrei__ i will now try adding a new server
13:23 x4rlos ah, cool.
13:27 alex88 3.3 clients are compatible with 3.2.5 server?
13:28 andrei__ thanks guys, that worked
13:28 andrei__ the new volume is shown as Type: Replicate
13:28 andrei__ and the number of bricks: 1x2 = 2
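For the record, the sequence that worked here (volume, server and brick names as used in the discussion above) was roughly:
    gluster volume create test-name server1:/brick                  # starts out as a plain Distribute volume
    gluster volume start test-name
    gluster volume add-brick test-name replica 2 server2:/brick     # raises the replica count to 2
    gluster volume info test-name                                   # now shows Type: Replicate, Number of Bricks: 1 x 2 = 2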
13:29 spn joined #gluster
13:29 andrei__ does that mean that I've got a Replicated volume or Distributed + Replicated one?
13:30 andrei__ let's say if I would like to add 2 more servers later on to increase the volume size
13:30 andrei__ how would I be able to do that?
13:30 bulde joined #gluster
13:30 andrei__ the same way:  gluster volume add-brick test-volume replica 2 server3:/gluster server4:/gluster ?
13:30 andrei__ would this work?
13:31 NashTrash joined #gluster
13:31 H__ alex88:  sadly no
13:31 alex88 H__, oh.. damn… clients has ubuntu 12.10 and server has 12.04… just seen ppa installed different versions!
13:31 alex88 need to fix fast!
13:31 alex88 :)
13:32 H__ that's why i build form source
13:32 x4rlos alex88: I think they will pretend to work on the face of it, but then just error.
13:33 * x4rlos wonders whether a version check should be part of the peer connect options of gluster.
13:33 H__ i never tried that. if only i had time for these things ;-)
13:34 glusterbot New news from newglusterbugs: [Bug 895528] 3.4 Alpha Tracker <http://goo.gl/hZmy9>
13:34 alex88 damn, where are the old debs? I need them asap… :/
13:34 x4rlos alex88: Which ones?
13:34 alex88 x4rlos, 3.3.0 for ubuntu 12.04
13:34 alex88 now there is only the readme
13:35 H__ i thought your server was 3.2.5 ?
13:35 alex88 H__, because I've switched from debs to ppa, which report 3.3 in the name but installs 3.2.5 on ubuntu 12.04
13:35 dustint joined #gluster
13:35 x4rlos alex88: gluster website is best option i think.
13:35 H__ aarrgh. the pain. which version to trust ?
13:35 alex88 x4rlos, nothing there
13:35 x4rlos It has debs.
13:35 x4rlos really?
13:36 alex88 http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.0/Ubuntu/
13:36 glusterbot <http://goo.gl/YvGjH> (at download.gluster.org)
13:36 alex88 just the readme
13:36 alex88 they've removed the debs
13:36 x4rlos ah. im a debian man. There's a readme :-)
13:37 alex88 mmhh..wait..now installing 3.3.1
13:37 alex88 what happened
13:38 x4rlos wizardry?
13:39 alex88 yeah.. I've added the repo before and with apt-get install glusterfs-server it was installing 3.2.5
13:39 x4rlos did you update?
13:40 alex88 repos?
13:40 alex88 I think it did with the add-apt-repository
13:40 alex88 btw then I've removed with add-apt-repository --remove
13:40 alex88 added again just pasting deb deb-src lines, apt-get update && install
13:40 alex88 and it was installing 3.3.1
13:41 alex88 luckily it recognized my old volumes configuration
13:42 chirino joined #gluster
13:48 balunasj joined #gluster
13:56 bulde1 joined #gluster
13:58 theron joined #gluster
14:04 rastar joined #gluster
14:17 hagarth joined #gluster
14:20 plarsen joined #gluster
14:24 rwheeler joined #gluster
14:32 andrei__ does anyone know if there is a way to provide real time replication of data between the replica servers?
14:33 andrei__ i am planning to use glusterfs for vm images and I need to know if the replication would work properly between two glusterfs servers
14:33 andrei__ so that if there is a change to the vm image file, the change would automatically replicate to the second glusterfs server
14:37 x4rlos You want sync replication?
14:38 bennyturns joined #gluster
14:38 andrei__ yes. I don't think async replication would work with vm images.
14:40 x4rlos hmm. 3.4 is geared towards vm - not sure though tbh how well it works. I have thought about using before. I know a few people have accomplished vm's over gluster - but not sure the ins and outs.
14:41 x4rlos Anyone know why the gluster 3.3.1-1 deb package for debian doesnt appear to have the man docs? Or is it just me?
14:44 sripathi joined #gluster
14:46 DataBeaver JoeJulian: A month ago I reported a strange symlink, and you said you could look into it later, but I had already deleted the link.  Now it happened again.  Would you still be interested and willing to figure out the cause?
14:48 semiosis x4rlos: iirc the man pages were not updated for 3.3 when 3.3.0 and 3.3.1 were released
14:48 semiosis so they were left out
14:49 DataBeaver JoeJulian: What I did was move a directory away on the server and then run a command on the client which accesses the directory.  I'm not sure what kind of access it was though, since the command was a complex make build.
14:49 x4rlos semiosis: So these arent complete debs?
14:50 semiosis x4rlos: idk what complete debs means
14:50 semiosis they are what they are
14:50 bashtoni joined #gluster
14:51 semiosis glusterfs didnt have up to date man pages when 3.3.1 was released
14:51 semiosis so how could debs have them if the source did not?
14:53 bashtoni I have a split brain problem with glusterfs 3.3.0, got an entry 'background  entry self-heal failed' in the logs, is there anyway I can do an online fix?
14:53 bashtoni I don't care about the file in question, it can be re-generated
14:53 semiosis @repair
14:53 glusterbot semiosis: http://goo.gl/uA812
14:53 semiosis hmm, not that
14:54 semiosis @3.3 split brain
14:54 semiosis @split-brain
14:54 glusterbot semiosis: (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://goo.gl/FPFUX .
14:54 stopbit joined #gluster
14:54 semiosis bashtoni: see #2
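The fix behind link #2 (healing split-brain in 3.3) amounts, roughly, to discarding the copy you do not trust directly on one brick and letting self-heal re-create it from the good replica. The sketch below is a from-memory outline with volume name, brick path and file path as placeholders; read the linked article before running anything like it:
    gluster volume heal myvol info split-brain                    # list affected files
    # on the server holding the copy you want to discard:
    getfattr -n trusted.gfid -e hex /export/brick1/path/to/file   # note the gfid
    rm /export/brick1/path/to/file
    rm /export/brick1/.glusterfs/xx/yy/GFID                       # the hard link named after the gfid (xx/yy = its first two byte pairs)
    gluster volume heal myvol                                     # trigger self-heal to copy the good replica back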
14:56 DataBeaver JoeJulian: I seem to have figured out a way to create these broken directory symlinks at will.  This may or may not be related to a situation where the client has lost the contents of some directories.
14:56 x4rlos semiosis: Ah, okay, apologies.
14:56 x4rlos semiosis: I'll quite happily add them and re-package if someone would like?
14:56 semiosis x4rlos: no worries :)
14:56 semiosis x4rlos: where did you get the packages?  from debian or from gluster.org?
14:57 x4rlos gluster.org.
14:57 x4rlos debian used to have 3.3 in experimental (with the mans) but then changed to 3.4
14:57 x4rlos main repo still has 3.2
14:57 x4rlos So i thought i would get from horses mouth :-)
14:57 semiosis hm ok
14:57 x4rlos finding missing man pages worried me.
14:58 semiosis kkeithley: ping
14:58 bashtoni semiosis: Thanks, worked fine :)
14:58 semiosis bashtoni: yw glad to hear it
14:58 x4rlos i assume then there is a 3.3.1-2 floating round the debian world.
14:58 semiosis doubt it
15:00 semiosis x4rlos: thx for pointing it out, you're right there were some man page updates in time for 3.3.1
15:00 semiosis https://bugzilla.redhat.com/show_bug.cgi?id=825906
15:00 glusterbot <http://goo.gl/NjeeC> (at bugzilla.redhat.com)
15:00 glusterbot Bug 825906: urgent, medium, ---, kaushal, ASSIGNED , man pages are not up to date with 3.3.0 features/options.
15:00 semiosis i'll chat with kkeithley about updating the deb packages on gluster.org when he's around today
15:02 DataBeaver JoeJulian: In case you don't remember the issue, I'm getting symlinks like this: lrwxrwxrwx  2 root  root          53 tammi 15 16:51 foo -> ../../79/d2/79d2db1c-6c20-4695-bd59-deb0f0aa10f5/_foo
15:03 semiosis afk
15:03 x4rlos no problem. As i say, im happy to add+repackage if you want me to.
15:03 x4rlos I was just worried there were other things missing :-_)
15:05 Azrael808 joined #gluster
15:09 jim` joined #gluster
15:13 obryan joined #gluster
15:15 bugs_ joined #gluster
15:15 sjoeboo_ so, i've got a 5TB quota on a dir, with basically nothing in it yet...BUT, quota list shows 16384PB of data there!
15:18 obryan left #gluster
15:22 hagarth joined #gluster
15:24 shireesh joined #gluster
15:26 jbrooks joined #gluster
15:27 x4rlos sjoeboo_: I want one :-)
15:27 sjoeboo_ yeah...the whole volume is only 137TB....
15:28 m0zes 16 Exabytes. hrm. How much would that cost.
15:29 m0zes ~$1,216,000,000
15:30 x4rlos semiosis: Which package should it be in? The common or server or client?
15:30 wushudoin joined #gluster
15:33 lala joined #gluster
15:35 andrei__ x4rlos: okay, but do you know if sync replication is possible with glusterfs?
15:35 andrei__ x4rlos: okay, but do you know if sync replication is possible with glusterfs?
15:37 * x4rlos ducks hoping someone will be able to confirm the ability of sync replication.
15:38 x4rlos http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Replicated_Volumes_vs_Geo-replication
15:38 glusterbot <http://goo.gl/NRFiA> (at www.gluster.org)
15:38 bala joined #gluster
15:38 x4rlos So it suggests it is synchronous in this link.
15:39 andrei__ thanks
15:39 x4rlos and i hope 3.3 will be building upon it. But i am not sure how this can be controlled. ie: What if one of the bricks is unavailable? Will it still write to the other x bricks?
15:39 andrei__ you are right, it seems to be it should be sync replication
15:40 andrei__ however, it's strange as it doesn't seems to work for me.
15:41 andrei__ oh, I think I know what the problem is. i'll check first before posting
15:41 andrei__ sorry
15:45 x4rlos It's good. Im still learning too :-)
15:48 dbriggs54 joined #gluster
15:49 dbriggs54 I am having a problem with a server install, I uninstalled my gluster,
15:49 dbriggs54 tried to reinstall, now my service will not start, any ideas
16:00 x4rlos dbriggs54: What version from and to? errors in the log somewhere? How did you uninstall? :-)
16:01 spn joined #gluster
16:03 jtux joined #gluster
16:03 dbriggs54 i removed the rpm, prob not the right way,
16:04 gauravp joined #gluster
16:05 dbriggs54 version is 3.2.7, os is centos 6.3
16:07 spn joined #gluster
16:07 dbriggs54 when i run service glusterd start it says it started
16:08 dbriggs54 however it crashes, leaving a lock and pid file in place
16:13 guigui1 left #gluster
16:18 copec JoeJulian, I finally got 3.3.0 built on Slowaris 11.1... so I have three physical servers with six bricks with a replicated+distributed setup, and an Ubuntu client of which I built the same 3.3.0 on, I get an occasional error though:
16:19 copec server 10.70.129.219:24010 has not responded in the last 42 seconds, disconnecting.
16:19 copec It does that for all of the servers
16:19 copec How about some input?
16:23 copec hrmm, I might have some of the ubuntu packaged gluster stuff hanging around still
16:28 x4rlos dbriggs54: hmmm. Sounds a bit fruity. Nothing in the error logs worth reading?
16:32 x4rlos Would you credit that. I can create a replication volume, and then geo-replicate from one of them to another site.
16:33 shireesh joined #gluster
16:39 aliguori joined #gluster
16:46 erik49 joined #gluster
16:54 ruissalo joined #gluster
16:57 ruissalo hi there! is it possible to use some kind of server-client encryption? like SSL?
17:09 x4rlos ruissalo: http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
17:09 glusterbot <http://goo.gl/bzF5B> (at www.gluster.org)
17:09 x4rlos Section 8.2.5 -- not promising anything, but maybe a good start.
17:15 dbriggs54 here is what my messages log shows, Jan 15 08:50:57 linyv2 abrtd: Directory 'ccpp-2013-01-15-08:50:57-7034' creation detected
17:15 dbriggs54 Jan 15 08:50:57 linyv2 abrt[7077]: Saved core dump of pid 7034 (/usr/sbin/glusterfsd) to /var/spool/abrt/ccpp-2013-01-15-08:50:57-7034 (40009728 bytes)
17:15 dbriggs54 Jan 15 08:50:57 linyv2 abrtd: Package 'glusterfs' isn't signed with proper key
17:15 dbriggs54 Jan 15 08:50:57 linyv2 abrtd: 'post-create' on '/var/spool/abrt/ccpp-2013-01-15-08:50:57-7034' exited with 1
17:15 dbriggs54 Jan 15 08:50:57 linyv2 abrtd: Corrupted or bad directory /var/spool/abrt/ccpp-2013-01-15-08:50:57-7034, deleting
17:16 ndevos dbriggs54: you can disable that deletion in /etc/abrt/abrt-action-save-package-data.conf
17:16 ndevos set "OpenGPGCheck = no"
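In other words, the relevant line in that file should end up looking like this:
    # /etc/abrt/abrt-action-save-package-data.conf
    OpenGPGCheck = no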
17:17 m0zes joined #gluster
17:22 hagarth joined #gluster
17:23 x4rlos How can i troubleshoot a faulty geo-replication volume?
17:24 x4rlos I just set one from client1 to dev machine and then cleaned out the old files from the dev gluster mount. Then i rsync'd everything across to the dev mount. And still says faulty.
17:24 x4rlos Can i ask gluster what's wrong somehow?
17:25 ndevos x4rlos: you'd want to check the logs under /var/log/gluster/geo-replication/ on both the master and slave
17:25 x4rlos thanks.
17:26 ndevos /var/log/glusterfs that is of course
17:29 x4rlos :-) I think i made a schoolboy error.
17:30 x4rlos [2013-01-15 17:29:27.470715] E [resource:194:logerr] Popen: ssh> bash: /usr/local/libexec/glusterfs/gsyncd: No such file or directory
17:31 * x4rlos looks to glusterbot, as he usually gives me a link to show me why i messed up.
17:31 ndevos looks like there is no /usr/local/libexec/glusterfs/gsyncd on your slave
17:33 ndevos /usr/local suggests you built glusterfs yourself, you should probably install the same binaries in the same directories on your slave
17:34 x4rlos ndevos: It isnt on the slave or the masters.
17:34 x4rlos 3.3.1
17:34 x4rlos debian repo
17:34 x4rlos (except slave which is from gluster.org)
17:34 * ndevos doesnt know about debian
17:34 x4rlos I will check packages now for it.
17:35 ndevos you may be able to set the correct path to gsyncd in /var/lib/glusterd/geo-rep/gsyncd.conf (or something like that)
17:36 x4rlos yeah -> /usr/lib/glusterfs/glusterfs/gsyncd
17:36 x4rlos looks like its looking in the wrong place :-/
17:36 ndevos right, the libexec directory is something fedora/rhel uses
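A hedged sketch of ndevos's suggestion: rather than editing gsyncd.conf by hand, the same setting can normally be written through the geo-replication config subcommand. Master volume and slave below are placeholders, and the path is the one x4rlos found on Debian; check the options your installed version accepts before relying on it:
    gluster volume geo-replication MASTERVOL SLAVEHOST::SLAVEVOL config remote-gsyncd /usr/lib/glusterfs/glusterfs/gsyncd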
17:38 ndevos sounds like you found a debian bug, the packager might be interested in fixing that
17:38 x4rlos I'll submit it.
17:38 semiosis o_O
17:38 Mo__ joined #gluster
17:39 x4rlos I'll happily re-submit the package. Spelling in man page yesterday, missing man page in gluster.org package this morning, another one now.
17:39 semiosis where are you submitting these packages?
17:39 x4rlos i meant submit the bug :-)
17:40 semiosis oh
17:40 x4rlos I sent the patch for the man page yesterday though :-)
17:40 semiosis to where?
17:42 x4rlos semiosis: https://bugzilla.redhat.com/show_bug.cgi?id=894355
17:42 glusterbot <http://goo.gl/soh8S> (at bugzilla.redhat.com)
17:42 glusterbot Bug 894355: low, unspecified, ---, vbellur, NEW , spelling mistake?
17:42 semiosis x4rlos: thanks!  when debian project builds packages they run some utility that spellchecks the whole source tree
17:43 semiosis i forget the name
17:43 x4rlos looks like one got missed :-)
17:43 semiosis heh
17:43 semiosis unlikely
17:46 x4rlos you blaming it on user error?
17:51 erik49 is there performance to be gained by first RAID0ing bricks?
17:56 johnmark sjoeboo_: ping
17:56 semiosis x4rlos: ok so looks like debian's linter (or whatever) that checks spelling ignored that man page... possibly because that man page was not included in the build, or possibly because it doesnt lint man pages, i dont know
17:56 semiosis but in any case, i double checked and afaict only the glusterd man page was updated for 3.3, and only after 3.3.0 but before 3.3.1
17:57 semiosis other man pages were actually excluded from the packages because they were not updated for 3.3
17:57 nueces joined #gluster
17:58 x4rlos yeah, bit mad-crazy :-)
17:59 semiosis x4rlos: man pages need a lot more than a spellcheck, if you're interested in helping out :)
18:00 semiosis looks like they've been updated in the last few months in git master
18:00 semiosis https://github.com/gluster/glusterfs/tree/master/doc
18:00 glusterbot <http://goo.gl/UjCAx> (at github.com)
18:00 semiosis 1-3 months ago for most
18:01 semiosis oh noes, missing git tag for 3.3.1 release :(
18:04 manik joined #gluster
18:05 x4rlos https://bugzilla.redhat.com/show_bug.cgi?id=895656
18:05 glusterbot <http://goo.gl/ZNs3J> (at bugzilla.redhat.com)
18:05 glusterbot Bug 895656: unspecified, unspecified, ---, csaba, NEW , geo-replication problem (debian) [resource:194:logerr] Popen: ssh> bash: /usr/local/libexec/glusterfs/gsyncd: No such file or directory
18:06 x4rlos I'm happy to help out :-)
18:06 x4rlos But right now, im going to go home before they lock me in work :-)
18:06 x4rlos Speak to ya tomorrow!
18:06 x4rlos :-D
18:07 x4rlos Hope that bug submit was okay and made sense, if not i will update tomorrow.
18:07 x4rlos bye.
18:07 semiosis x4rlos: later
18:15 jrossi joined #gluster
18:16 greylurk joined #gluster
18:19 jrossi I have been running into Input/Output Error and (xtime) failed on peer.  Full cut-n-paste: https://gist.github.com/4540675.  I have not had any luck fixing this.  I have stopped and restarted replication.  I have turned geo-replication.indexing on and off.
18:19 jrossi does anyone know how to correct this issue?
18:28 sjoeboo_ johnmark: pong!
18:35 glusterbot New news from newglusterbugs: [Bug 895656] geo-replication problem (debian) [resource:194:logerr] Popen: ssh> bash: /usr/local/libexec/glusterfs/gsyncd: No such file or directory <http://goo.gl/ZNs3J>
18:35 portante joined #gluster
18:37 schmidmt1 left #gluster
18:40 squizzi joined #gluster
18:41 tc00per joined #gluster
18:41 tc00per left #gluster
18:44 dbriggs54 I am still fighting with getting my service to start, anybody got any ideas
19:01 andreask joined #gluster
19:01 kkeithley1 joined #gluster
19:07 dbriggs54 does anyone know how to do a clean uninstall of gluster
19:12 wN joined #gluster
19:14 kkeithley1 joined #gluster
19:24 andrei__ joined #gluster
19:25 greylurk joined #gluster
19:28 gbrand__ joined #gluster
19:29 chouchins joined #gluster
19:29 wN joined #gluster
19:31 bauruine joined #gluster
19:33 andrei__ joined #gluster
19:42 wN joined #gluster
19:48 hattenator joined #gluster
19:52 tryggvil joined #gluster
20:11 khushildep joined #gluster
20:15 gbrand_ joined #gluster
20:23 wN joined #gluster
20:26 greylurk joined #gluster
20:28 KrisAbsinthe42 joined #gluster
20:29 KrisAbsinthe42 Is there a way to specify which port a brick will use?
20:30 semiosis no
20:30 KrisAbsinthe42 I didnt think so. Thanks for the answer
20:34 semiosis yw
20:42 dbruhn joined #gluster
20:43 y4m4 joined #gluster
20:58 wN joined #gluster
21:00 andreask joined #gluster
21:03 squizzi joined #gluster
21:07 wdilly if I have 8 physical disks, 1 tb each, in which i would like to setup a 2x4 distributed replicated setup, does it make sense to break those 1tb disks into multiple bricks, or should i simply make each brick as large as is physically allowable?
21:08 chouchins personally we break them up into smaller bricks.  That way if you need to rsync due to a corrupt FS or any reason the brick size is smaller and faster to sync.
21:08 chouchins but there's no real reason to do that
21:12 elyograg wdilly: my plan is to go with brick sizes that will work with all physical disk sizes that I am likely to encounter.  With 6-drive raid5 volumes, two per server, that's 5TB.  works with 1TB, 2TB, 3TB, and 4TB disks.  Will likely work with any newer sizes that hit the market, too.
21:21 daMaestro joined #gluster
21:23 wdilly elyograg: thanks, thats a great point
21:25 jjnash joined #gluster
21:25 nightwalk joined #gluster
21:27 wdilly in a distributed replication setup, with two bricks on one physical disk, will gluster be smart enough not to setup the mirroring replication of one brick with the other brick on the same disk
21:27 wdilly or will this need to be specified in my command when creating the volume?
21:30 elyograg wdilly: for replica N, every N brick entries, in order, on the commandline will constitute a replica set.  you just have to specify them in the right order.
21:30 kkeithley2 joined #gluster
21:33 wdilly elyograg: can you help me out: would this work: "gluster volume create testvol replica 2 transport tcp svr1:/brick1 svr2:/brick1 svr1:/brick2 svr2:/brick2"
21:34 wdilly 2 bricks per svr, with the replicating pair with a brick per server?
21:35 elyograg that looks right to me.  one additional note: I would make the brick directory a subdirectory of the filesystem, not the root of the filesystem.  that does two things - it lets you put more than one volume on a set of bricks, but also protects your root filesystem from mistakes when a brick doesn't get mounted.  filesystem not mounted means the subdirectory doesn't exist and gluster will not fill up your root filesystem with data healed from the other replica
21:36 wdilly so svr1:/export/brick1 etc?
21:36 elyograg svr1:/export/brick1/testvol ... if the mount point of the brick is /export/brick1
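Putting elyograg's advice together with wdilly's earlier command, a minimal sketch; the device names and mount points are placeholders, and the mkdir/mount steps have to be repeated on both servers:
    mkdir -p /export/brick1 /export/brick2
    mount /dev/sdb1 /export/brick1                          # one filesystem per brick
    mount /dev/sdc1 /export/brick2
    mkdir /export/brick1/testvol /export/brick2/testvol     # brick dirs one level below the mount points
    gluster volume create testvol replica 2 transport tcp \
        svr1:/export/brick1/testvol svr2:/export/brick1/testvol \
        svr1:/export/brick2/testvol svr2:/export/brick2/testvol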
21:48 greylurk joined #gluster
21:50 gauravp elyograg: i'm just following along, and so i'm clear, you're saying that even if a brick is mounted at /export/brick1, it is a good idea to add-brick /export/brick1/testvol where /export/brick1/testvol is a subdir created on the mounted storage
21:50 elyograg gauravp: that's correct.
21:51 kkeithley2 left #gluster
21:53 gauravp hmm, interesting .. hadn't thought of that .. so far i've been thinking of using lvm volumes to carve up my bricks for multiple gluster volumes
21:56 semiosis elyograg: did you test that?  sure gluster doesn't just make the dirs it wants to see?
21:57 elyograg semiosis: no, i haven't tested it.  one more thing on the list of stuff i want to do but can never find time.  if it did create the dirs, I would call it a bug.
21:58 elyograg I know that it *will* create the dirs that don't exist when creating the volume, and probably when doing add-brick as well.
22:00 semiosis interesting, it won't create more than one level when creating a volume
22:00 semiosis if /var/tmp/foo doesn't exist the volume create foo 12.34.56.7:/var/tmp/foo/0 fails
22:01 elyograg I think I filed a bug on 'create volume' "helping" by making the dir for you.
22:02 semiosis i vaguely remember that
22:05 semiosis elyograg: [2013-01-15 22:04:56.397640] E [posix.c:4061:init] 0-foo-posix: Directory '/var/tmp/bar/0' doesn't exist, exiting.
22:05 semiosis looks like you're right!
22:06 semiosis if the brick's parent dir doesnt exist the brick process dies when it tries to start
22:06 semiosis bar/0 was the brick dir, i stopped the volume & deleted its parent, bar, and when i started up the vol again the bar/0 brick process didnt survive
22:07 semiosis that's a good idea
22:07 semiosis testing this on 3.3.1 btw
22:17 ctria joined #gluster
22:18 dbruhn When running a rebalance, what does a force provide that the normal rebalance doesn't?
22:28 Azrael808 joined #gluster
22:40 wN joined #gluster
22:55 badone joined #gluster
23:16 joeto1 joined #gluster
23:54 tryggvil joined #gluster
23:55 nueces_ joined #gluster
