IRC log for #gluster, 2013-01-11

All times shown according to UTC.

Time Nick Message
00:05 elyograg homeward bound. will continue to idle. i'll look in later.
00:08 haidz elyograg, personally i would dump network manager
00:08 haidz also you should use arp instead of mii mon
00:08 haidz mii mon will monitor link status, however arp will monitor the upstream gateway and will handle switch failure better
00:09 haidz which is probably what you're looking for
00:09 haidz also, i would specify the mac instead of a uuid
00:09 haidz not sure why UUID is being added.. might be a new feature
00:09 elyograg it's added by the fedora/centos installer.
00:10 haidz ah
00:10 elyograg i just do a quick uuidgen from the commandline when i need to make a new one for the bond0 :)
00:10 haidz ive never used those tools.. i generally configure them by hand (or with my scripts)
00:10 haidz try without the uuids
00:11 haidz line 2 is odd to me also
00:11 haidz try "alias bond0 bonding"
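For readers following along, a rough sketch of the kind of config haidz is describing on a Fedora/CentOS system (interface names, addresses, and MACs below are placeholders, not taken from the log):

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- ARP monitoring instead of miimon
    DEVICE=bond0
    TYPE=Bond
    ONBOOT=yes
    BOOTPROTO=none
    NM_CONTROLLED=no
    IPADDR=192.168.1.10
    PREFIX=24
    BONDING_OPTS="mode=active-backup arp_interval=1000 arp_ip_target=192.168.1.1"

    # /etc/sysconfig/network-scripts/ifcfg-em1 -- one file per slave; the MAC pins the NIC, no UUID
    DEVICE=em1
    HWADDR=00:11:22:33:44:55
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none
    NM_CONTROLLED=no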
00:15 elyograg I did try that, because I saw a guide for F17 that said the same.  didn't help.
00:16 elyograg I'll be working on it tomorrow.  Feel free to continue to offer suggestions.
00:18 haidz hm
00:18 haidz see what the status is in /proc/net/bonding/bond0
00:18 haidz did you modprobe bonding? just in case it didnt load? also reboot?
00:20 elyograg I have rebooted a lot.
00:24 elyograg the bonding module does not get loaded.
00:24 elyograg just verifying things.
00:24 elyograg if I modprobe it and then do ifup bond0, I get an error
00:25 elyograg Connection activation failed: Device not managed by network manager or not available.
00:25 elyograg ifconfig -a does show bond0 exists after the modprobe.
00:25 elyograg em1 and em2 are not getting the SLAVE tag.
00:29 elyograg another reboot.  confirmed that 'modprobe bonding' is all that's required to create bond0.
00:31 elyograg ok, i got the module to load on boot.  created bonding.conf in /etc/modules-load.d and put 'bonding' in it.
00:31 elyograg still can't get it to work, but that's a little progress.
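The module-loading step elyograg describes roughly amounts to the following (a sketch for systemd-based Fedora):

    # load the bonding module at every boot
    echo bonding > /etc/modules-load.d/bonding.conf
    # load it right now without rebooting
    modprobe bonding
    # confirm the bond device exists and check its state
    cat /proc/net/bonding/bond0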
00:34 hchiramm_ joined #gluster
00:36 haidz yeah.. turn off network manager
00:37 haidz if its a server it shouldnt even have a gui
00:37 haidz elyograg, check "cat /proc/net/bonding/bond0"
00:44 raven-np joined #gluster
00:56 gomix joined #gluster
00:57 gomix o/ glusterfs people...
00:57 gomix i just switched to fedora 18 in the client side
00:57 gomix glusterfs server is on fedora 17
00:58 gomix its a very simple setup
00:58 gomix but now i cant make it work
00:58 gomix what should i post for getting your gentle help?
00:59 gomix http://fpaste.org/86Sd/ (client side errors)
00:59 glusterbot Title: Viewing errors glusterfs by Gomix (at fpaste.org)
00:59 gomix server has not changed its config
01:00 gomix # mount -t glusterfs spider.fedora-ve.org:test-volume /cloudfs/
01:00 gomix client side command line
01:02 gomix http://fpaste.org/ihR3/ (server side gluster info)
01:02 glusterbot Title: Viewing gluster server side by Gomix (at fpaste.org)
01:41 VSpike joined #gluster
01:53 dhsmith joined #gluster
01:54 jbrooks joined #gluster
02:00 dustint_ joined #gluster
02:00 tryggvil joined #gluster
02:06 hagarth joined #gluster
02:23 kevein joined #gluster
02:38 bharata joined #gluster
02:48 wN joined #gluster
02:49 dhsmith_ joined #gluster
03:08 overclk joined #gluster
03:20 shylesh joined #gluster
03:28 bharata_ joined #gluster
03:47 sripathi joined #gluster
04:00 mohankumar joined #gluster
04:06 sgowda joined #gluster
04:25 mohankumar joined #gluster
04:40 wN joined #gluster
04:44 lala joined #gluster
04:47 theron joined #gluster
04:58 hagarth joined #gluster
05:01 JoeJulian gomix -> gomix-afk -> gomix -> gomix-afk: Maybe a clue in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on the server? I thought port blocking would throw a more detailed error than that so I don't think it's iptables, but check your ,,(ports)... Maybe check selinux as well.
05:01 glusterbot gomix-afk: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
05:01 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
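If a firewall is in the way, one way to open the ports glusterbot lists would be something like the following (the brick upper bound of 24029 is an arbitrary example; size it to your brick count):

    iptables -A INPUT -p tcp --dport 111 -j ACCEPT            # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT    # glusterd management (24008 only for rdma)
    iptables -A INPUT -p tcp --dport 24009:24029 -j ACCEPT    # bricks (glusterfsd), counting up from 24009
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT    # gluster NFS and NLM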
05:19 vpshastry joined #gluster
05:22 sgowda joined #gluster
05:22 deepakcs joined #gluster
05:26 bulde joined #gluster
05:41 glusterbot New news from resolvedglusterbugs: [Bug 802025] User complaining of memory errors during geo-rep <http://goo.gl/QDHlK>
05:43 vpshastry joined #gluster
05:46 bala joined #gluster
05:46 hagarth1 joined #gluster
05:53 rastar joined #gluster
05:55 bulde joined #gluster
05:59 shireesh joined #gluster
06:11 rastar1 joined #gluster
06:12 vpshastry joined #gluster
06:13 sgowda joined #gluster
06:26 bulde joined #gluster
06:27 hateya joined #gluster
06:27 rastar joined #gluster
06:35 layer3switch joined #gluster
06:37 raghu joined #gluster
06:43 sgowda joined #gluster
06:44 deepakcs joined #gluster
06:44 vimal joined #gluster
06:47 deepakcs joined #gluster
06:47 ramkrsna joined #gluster
06:50 vpshastry joined #gluster
06:53 zwu joined #gluster
06:57 ngoswami joined #gluster
06:59 cyr_ joined #gluster
07:02 dhsmith joined #gluster
07:07 jtux joined #gluster
07:09 puebele1 joined #gluster
07:13 Nevan joined #gluster
07:17 shireesh joined #gluster
07:21 pranithk joined #gluster
07:21 shylesh joined #gluster
07:27 puebele joined #gluster
07:28 vpshastry joined #gluster
07:29 rastar joined #gluster
07:39 hateya joined #gluster
07:41 glusterbot New news from resolvedglusterbugs: [Bug 852869] volume heal info shows mostly gfid not filenames <http://goo.gl/MjsoL>
07:43 theron joined #gluster
07:44 guigui1 joined #gluster
07:45 hateya joined #gluster
07:57 greylurk joined #gluster
08:04 bauruine joined #gluster
08:07 ctria joined #gluster
08:11 glusterbot New news from resolvedglusterbugs: [Bug 856070] mounting failed due to deadlock <http://goo.gl/BC6kT> || [Bug 823836] s3ql based brick added but no access to it <http://goo.gl/T2dnj>
08:15 masterzen joined #gluster
08:29 17WAAYSFI joined #gluster
08:29 tjikkun_work joined #gluster
08:32 gbrand_ joined #gluster
08:39 andreask joined #gluster
08:41 shireesh joined #gluster
08:42 rags__ joined #gluster
08:44 hagarth joined #gluster
09:11 sripathi joined #gluster
09:26 tjikkun_work joined #gluster
09:40 ekuric joined #gluster
09:50 smellis joined #gluster
09:53 mohankumar joined #gluster
09:55 tryggvil joined #gluster
10:14 smellis joined #gluster
10:21 sgowda joined #gluster
10:34 duerF joined #gluster
10:42 mohankumar joined #gluster
10:44 smellis joined #gluster
10:58 bharata__ joined #gluster
11:01 dobber joined #gluster
11:11 shireesh joined #gluster
11:19 vpshastry joined #gluster
11:21 rwheeler joined #gluster
11:32 shireesh joined #gluster
11:39 vimal joined #gluster
11:43 hagarth joined #gluster
12:03 puebele joined #gluster
12:14 cyr_ joined #gluster
12:17 mohankumar joined #gluster
12:19 vpshastry joined #gluster
12:28 edward1 joined #gluster
12:36 vpshastry joined #gluster
12:37 shireesh joined #gluster
12:37 ramkrsna joined #gluster
12:37 vimal joined #gluster
12:37 ngoswami joined #gluster
12:38 hchiramm_ joined #gluster
12:38 rastar joined #gluster
12:38 bala joined #gluster
12:39 hagarth joined #gluster
12:44 enseven joined #gluster
12:47 enseven Hi all! I've got a question concerning GlusterFS architecture: If I have a replicated GlusterFS volume, replicated on 2 servers, and I want to connect with a GlusterFS client only system. Do I have to connect to both servers? Or is it enough to connect to either one of them? How does replication work if my client can only see one of the servers on the network, because it has only network access to one of them? Thanx :)
12:51 ngoswami_ joined #gluster
12:51 vikumar joined #gluster
12:51 hchiramm__ joined #gluster
12:51 ramkrsna_ joined #gluster
12:51 shireesh_ joined #gluster
12:52 bala1 joined #gluster
12:53 hagarth1 joined #gluster
12:58 lorderr joined #gluster
13:00 dbruhn Enseven: I believe when using the gluster native fuse client all peers/bricks/servers need to be network accessible by the client. The gluster client handles multiple requests for data access and writes. If using the NFS client, I believe you can maintain a single point of access, and a private replication network. You might be better off utilizing something like Geo Replication if you are just really using the second copy as a backup.
13:00 dbruhn Geo Replication would facilitate a replicant copy of the data on the second server, and not need to be accessible by the clients.
13:04 manik joined #gluster
13:06 kkeithley Enseven: When you mount the volume you need only give one of the servers, it will notify the client of the other servers in the cluster and the client will use both/all of them.
13:06 tryggvil joined #gluster
13:07 enseven Yes, I used gluster the native client and tried it out. That's it! :-D Thanks a lot!
13:07 kkeithley With replication, if a server in the cluster fails, the client will continue to write to the remaining server(s). When the server comes back the client will resume writing to it. Automatic self heal (in 3.3.x) will sync the files that were written while it was down.
13:08 kkeithley In 3.2.x and earlier you have to trigger a self heal.
13:08 kkeithley trigger a self heal manually.
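In command form, kkeithley's answer looks roughly like this (server and volume names are placeholders):

    # mount through any one server; the client learns about the rest of the cluster from it
    mount -t glusterfs server1:/myvolume /mnt/myvolume
    # 3.3.x: healing is automatic, but it can be triggered or inspected by hand
    gluster volume heal myvolume
    gluster volume heal myvolume info
    # 3.2.x and earlier: trigger a heal by walking the mounted volume, e.g.
    find /mnt/myvolume -noleaf -print0 | xargs --null stat > /dev/null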
13:13 andreask joined #gluster
13:19 hagarth joined #gluster
13:21 tryggvil joined #gluster
13:26 plarsen joined #gluster
13:34 deepakcs joined #gluster
13:36 x4rlos can someone point me in the right direction? I have client1 and client2 and i have a volume going across both nodes (mirror). I want to drop client2 volume (keeping on client1). then i want to clean out client2 folders, and then re-add
13:36 x4rlos the brick on client2.
13:37 x4rlos So i have stopped the volume (ideally i wouldnt want to do this) - and what steps do i need to do to accomplish this?
13:40 kkeithley I haven't tried it myself, per se, but `gluster volume remove-brick $volname replica 1 $client2-hostname` ought to be the first step. You shouldn't need to stop the volume to do this.
13:41 kkeithley clean out the brick, then add it back with `bluster volume add-brick $volname replica 2 $client2-hostname`
13:42 kkeithley er, bluster, that's funny.
13:42 kkeithley s/bluster/gluster/
13:42 glusterbot What kkeithley meant to say was: er, gluster, that's funny.
13:42 kkeithley no glusterbot
13:42 kkeithley dwim
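Put together, the sequence kkeithley suggests would look something like this (volume and brick names are placeholders; bricks are given as host:/path):

    # drop the replica down to a single copy
    gluster volume remove-brick myvol replica 1 server2:/bricks/myvol
    # wipe the old brick contents on server2, then re-add it
    gluster volume add-brick myvol replica 2 server2:/bricks/myvol
    # kick off a full self-heal so the re-added brick gets repopulated
    gluster volume heal myvol full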
13:44 x4rlos hehe. I'll have a go now,
13:48 x4rlos here's how it went: gluster volume remove-brick database-archive replica 1 client2:/mnt/database-archive (success)
13:48 x4rlos and now gonna clean and re-add.
13:51 x4rlos got the dreaded: /mnt/database-archive or a prefix of it is already part of a volume
13:51 glusterbot x4rlos: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
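The linked instructions boil down to removing the leftover gluster metadata from the old brick before re-adding it, roughly:

    setfattr -x trusted.glusterfs.volume-id /mnt/database-archive
    setfattr -x trusted.gfid /mnt/database-archive
    rm -rf /mnt/database-archive/.glusterfs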
13:53 x4rlos okay, its added.
13:55 x4rlos how long before a self heal?
13:57 ctria joined #gluster
14:01 jtux joined #gluster
14:04 aliguori joined #gluster
14:05 raven-np joined #gluster
14:09 x4rlos spelling mistake? :: "Perform a statedump of the brick or nfs-server processos of the volume"
14:09 x4rlos (in volume statedump <VOLNAME> [nfs] [all|mem|iobuf|callpool|priv|fd|inode|history]) :: man pages
14:09 ndevos x4rlos: your chance to file a bug an patch :)
14:09 glusterbot http://goo.gl/UUuCq
14:19 x4rlos ndevos: No probs :-)
14:20 dustint joined #gluster
14:27 x4rlos Done :-) >> Bug 894355 - spelling mistake?
14:27 glusterbot Bug http://goo.gl/soh8S low, unspecified, ---, vbellur, NEW , spelling mistake?
14:31 emrahnzm joined #gluster
14:32 ndevos x4rlos: nice! but you can go one step further and provide a patch -> http://www.gluster.org/community/documentation/index.php/Development_Work_Flow
14:32 glusterbot <http://goo.gl/ynw7f> (at www.gluster.org)
14:36 guigui3 joined #gluster
14:39 semiosis @hack
14:39 glusterbot semiosis: The Development Work Flow is at http://goo.gl/ynw7f
14:47 glusterbot New news from newglusterbugs: [Bug 894355] spelling mistake? <http://goo.gl/soh8S>
15:00 stopbit joined #gluster
15:00 x4rlos ndevos: Booting up me dev box.
15:00 ndevos x4rlos: very cool :)
15:03 ctria joined #gluster
15:03 jvyas joined #gluster
15:06 vimal joined #gluster
15:06 bugs_ joined #gluster
15:11 theron joined #gluster
15:16 nueces joined #gluster
15:31 rodlabs joined #gluster
15:36 lorderr joined #gluster
15:38 gomix joined #gluster
15:39 chirino joined #gluster
15:39 hagarth joined #gluster
15:46 jbrooks joined #gluster
15:56 hagarth joined #gluster
16:02 lh joined #gluster
16:02 lh joined #gluster
16:02 x4rlos ndevos: I cannot see the man file in question any more. :-(
16:03 x4rlos Also, for the man pages i went through - it seems the heal option is now somewhere else?
16:03 ndevos x4rlos: hmm, I'd go for the quick+ulgy "git grep processos"
16:04 x4rlos i tried already :-)
16:04 x4rlos got from here: git clone ssh://xarlos@git.gluster.org/glusterfs.git glusterfs
16:05 ndevos x4rlos: that looks right to me
16:06 x4rlos then i cannot see it. I am looking in the /doc directory for it. I see the man pages, but no mention of heal etc.
16:06 daMaestro joined #gluster
16:07 cheftony joined #gluster
16:07 andreask joined #gluster
16:09 x4rlos interesting on the VM front.
16:10 ndevos x4rlos: found it in the release-3.3 branch: git grep processos origin/release-3.3
16:12 andreask joined #gluster
16:15 x4rlos ah. Im not very familiar with git. How can i grab that branch?
16:15 cheftony Hi! I'm trying to setup gluster fs on debian squeeze w/ kernel 3.2.35, the server is running and port listening but i'm unable to do anything with command. I got rdma module error when I start the service, here is the logs: http://dpaste.org/aoOrJ/
16:15 glusterbot Title: dpaste.de: Snippet #216463 (at dpaste.org)
16:15 x4rlos ndevos: (your wishing i am not doing this now, hehehe)
16:17 cheftony And rdma modules seems to be loaded
16:20 x4rlos blone -b
16:20 x4rlos s/blone/clone.
16:24 ndevos x4rlos: you can do something like 'git checkout -t -b bug-894355 origin/release-3.3'
16:24 x4rlos really? I just did a -b so lets see what i actually got :-D
16:25 ndevos not sure what 'clone -b' does...
16:25 x4rlos yeah - that didnt work what i wrote. haaha
16:25 x4rlos -b = branch.
16:25 * x4rlos thinks....
16:26 x4rlos aaah, i see now.
16:26 ndevos ah, I think that will clone the repository again, and checkout $branch
16:26 x4rlos yeah. Okay, so i have switched to that branch in the original clone.
16:27 ndevos git contains all the changes locally already, a checkout does not contact the server again
16:27 ndevos clone would contact the server, I guess
16:27 x4rlos exactly that.
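The branch setup being discussed, in one place (the bug number matches x4rlos's bug; the username is a placeholder):

    git clone ssh://username@git.gluster.org/glusterfs.git glusterfs
    cd glusterfs
    # create a local branch that tracks the 3.3 release branch; no second clone needed
    git checkout -t -b bug-894355 origin/release-3.3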
16:27 elyograg haidz: Just got back to work.  what info do you want from /proc/net/bonding/bond0?
16:28 x4rlos Okay, so i have changed the file, and how can i then push that back on that branch? (sorry - free git lesson for me here :-))
16:28 ndevos so, when you have your repository, make sure to 'git config user.name "My Name"' and 'git config user.email me@example.com'
16:28 x4rlos k - 1 sec.
16:29 gprs1234 joined #gluster
16:29 ndevos sure :)
16:30 ndevos x4rlos: you have a $EDITOR set in your environment? if not, you may want to 'export EDITOR=vim'
16:31 x4rlos ahh, i see a commit coming :-) Do i not need to do a git add forst? (exporting var now as i wait for answer)
16:32 x4rlos s/forst/first
16:32 elyograg haidz: it says the mode is round-robin, the status is down, and all the numbers are 0.  that's very different from the centos systems where it's working.  those say fault-tolerance, and have slave info. the F18 system has none of that.
16:33 ndevos x4rlos: no, "git add" is not needed
16:33 x4rlos ah, cool.
16:33 ndevos x4rlos: "git commit -s $changedfiles" should do it
16:33 x4rlos ahh, cool, here we go.
16:33 ndevos x4rlos: the first line should be a summary, like "man: fix typo in self-heal something  stuff"
16:34 x4rlos okay.
16:34 ndevos x4rlos: after that, an empty line to separate the message itself, add a little description of some kind
16:35 ndevos x4rlos: then follows Signed-off-by, with that you accept that the project can have the change
16:35 ndevos x4rlos: save that and exit the $EDITOR, with 'git log -1 -p' you can see your message+patch
16:37 ndevos x4rlos: if you are happy with the result, and your name and email is correct in that output, you are ready to file it for review
16:37 ndevos all you've done now, is local only, nothing has been sent to the server yet
16:38 ndevos you can change your message with 'git commit --amend', add more patches and squash them into one with 'git rebase' and all sorts of fun
16:39 ndevos when you really are satisfied with the result, execute the ./rfc.sh script in the root of the repository and answer the questions there
16:39 elyograg haidz: I disabled NetworkManager, enabled network, took off the zeros from the address, prefix, and netmask entries, and presto, it's all working now.
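On Fedora 18 that roughly corresponds to the following (a sketch, not elyograg's exact commands):

    systemctl stop NetworkManager.service
    systemctl disable NetworkManager.service
    chkconfig network on          # the classic network service is still SysV-style
    systemctl start network.service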
16:40 ndevos x4rlos: after that, you should have a review.gluster.org link with your patch, add that in a comment to the bug you filed and move it to POST-status
16:41 x4rlos cool. Well, i have put the commit text in, and done the git log. So now i guess it's a rfc.sh command to come :-)
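The steps ndevos just walked through, collected into one sketch (the file path and identity are illustrative):

    # once per clone: set the identity used in the commit and Signed-off-by line
    git config user.name  "Your Name"
    git config user.email you@example.com
    # commit the fix; -s appends the Signed-off-by line
    git commit -s doc/gluster.8
    # review the message and patch, then submit to Gerrit for review
    git log -1 -p
    ./rfc.sh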
16:43 jdarcy Yay, new submitters FTW.  :)
16:43 x4rlos :-D
16:43 x4rlos Its been sent.
16:44 ndevos looks like its http://review.gluster.com/4379
16:44 glusterbot Title: Gerrit Code Review (at review.gluster.com)
16:44 x4rlos certainly is.
16:44 * x4rlos puffs out chest.
16:44 x4rlos wanna do a bunch more now. lol.
16:45 ndevos x4rlos: there is one more thing, currently that is a rfc review, but you really want to file that against the bug
16:45 x4rlos okay..
16:45 ndevos x4rlos: in order to change that, you will need to modify the description
16:45 ndevos x4rlos: you can do that with "git commit --amend"
16:46 ndevos x4rlos: just before the "Signed-off-by" line, add a line "BUG: 894355"
16:46 jdarcy Actually, rfc.sh should ask for a bug ID and insert the line itself if necessary.
16:47 ndevos yeah, I expected that, but something went wrong there
16:47 jdarcy Weird.
16:49 x4rlos :-o
16:49 x4rlos rfc did ask the bug id.
16:49 x4rlos shall i amend and add the line?
16:50 neofob left #gluster
16:52 x4rlos how can a person tell if the rfc script worked for the bug entry?
16:52 tqrst left #gluster
16:54 ndevos the review should not mention 'rfc', but 'bug-$N', and the commit message includes a line 'BUG: $N'
16:55 ndevos x4rlos: when your commit message has the BUG-line in it, you can execute ./rfc.sh again and it should update the review (don't change the Change-Id)
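So the fix-up amounts to something like this (the message text is illustrative):

    git commit --amend
    # in the editor, add the BUG line just above Signed-off-by and keep the Change-Id untouched:
    #
    #   man: fix typo in volume statedump description
    #
    #   BUG: 894355
    #   Signed-off-by: Your Name <you@example.com>
    #   Change-Id: I...
    ./rfc.sh    # re-run to update the existing review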
16:58 theron joined #gluster
17:00 x4rlos sorry - one sec. People ringing me. Work always gets in the way.
17:01 andreask joined #gluster
17:01 bala joined #gluster
17:16 DaveS_ joined #gluster
17:21 DaveS___ joined #gluster
17:22 x4rlos okay, back.
17:24 zaitcev joined #gluster
17:24 jdarcy 22 minutes from submission to merging.  Not quite a record, but might be #2.
17:24 x4rlos Would you like me to re-submit?
17:25 x4rlos Or you done and dusted it? :-)
17:25 jdarcy It's done.
17:25 x4rlos ah, cool.
17:25 ndevos x4rlos: looks like you took too long and it was already merged, CONGRATS :D
17:25 jdarcy Congratulations.  Here's your rockstar/ninja pin.
17:25 * x4rlos beems with pride upon the reqard bestowed :-D
17:26 x4rlos s/reqard/reward
17:26 jdarcy Looks like just a rockstar pin.  Oh, wait.
17:26 ndevos thats a nice way to start your weekend ;)
17:27 hagarth :O
17:27 x4rlos yay me :-)
17:27 ndevos x4rlos: just remember to do the 'git checkout ... ' for each patch you want to work on, that makes things easiest (well, thats my opinion)
17:28 x4rlos rather than just cloning the individual branches? No problem :-)
17:29 jdarcy Definitely agree.  Patches are like fragile glasses.  They shouldn't be stacked unless they were designed to do so.
17:30 x4rlos i shall commit -a that to memory.
17:31 Mo__ joined #gluster
17:32 longsleep joined #gluster
17:33 theron joined #gluster
17:34 ctria joined #gluster
17:36 longsleep Hi guys, i am trying to mount a gluster 3.3 node from an old Ubuntu 8.04 64bit install and the client is segfaulting always on mount in mem_get - gdb trace at http://pastebin.com/s3BvUc4a - anyone can help?
17:36 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
17:43 JoeJulian well I don't see anything obvious... jdarcy? ^^
17:56 hagarth joined #gluster
18:10 tryggvil joined #gluster
18:15 gbrand_ joined #gluster
18:17 bauruine joined #gluster
18:24 neofob joined #gluster
18:26 johnmark longsleep: ubuntu 8.04? I'm going to assume that the fuse module is way too old
18:26 johnmark try mounting with NFS and see if that's better
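In practice that would be an NFSv3 mount against gluster's built-in NFS server, e.g. (names are placeholders):

    mount -t nfs -o vers=3,tcp server:/myvolume /mnt/myvolume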
18:27 semiosis doesnt matter how hard anyone beats the LTS drum, i'm not going to make a package for 8.04 :/
18:27 * semiosis nips that one in the bud
18:28 kkeithley but, but, but.... It's just linux.
18:28 kkeithley If you know one, you know them all
18:28 kkeithley ;-)
18:46 lh joined #gluster
18:59 johnmark semiosis: lulz... dear lord, I wouldn't wish that on anyone
19:02 sjoeboo_ joined #gluster
19:03 JoeJulian hehe
19:04 jvyas joined #gluster
19:20 gomix JoeJulian: never mind... i upgraded to f18 and glusterfs pkg was missing (dunno why it was pulled during fedup)
19:20 gomix ;)
19:20 gomix (it was not pulled)
19:24 theron joined #gluster
19:40 64MACJZMQ joined #gluster
19:48 eightyeight joined #gluster
19:49 jjnash joined #gluster
19:50 nightwalk joined #gluster
19:59 gprs1234 left #gluster
20:13 andreask joined #gluster
20:49 sjoeboo so, on my bricks, which are all xfs bricks mounted rw,noatime,inode64, i'm getting lots of "operation not supported" in the brick logs for xattrs
20:49 sjoeboo and a smattering of Extended attributes not supported (try remounting brick with 'user_xattr' flag)
20:50 sjoeboo but....user_xattr IS on in xfs, and a test remount of one brick with that specifically in the mount options has 0 effect...
20:58 semiosis selinux?
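A quick way to check whether the brick filesystem actually accepts the trusted.* xattrs gluster needs (brick path is a placeholder; run as root on the brick server):

    setfattr -n trusted.test -v works /bricks/brick1 && getfattr -n trusted.test /bricks/brick1
    setfattr -x trusted.test /bricks/brick1
    getenforce    # also worth checking whether selinux is enforcing, per semiosis's question above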
21:06 sjoeboo_ joined #gluster
21:39 andreask joined #gluster
22:14 m0zes joined #gluster
22:22 dustint joined #gluster
22:29 emrahnzm left #gluster
22:40 raven-np joined #gluster
22:45 rags_ joined #gluster
23:25 raven-np joined #gluster
23:48 hattenator joined #gluster
