
IRC log for #gluster, 2012-11-04


All times shown according to UTC.

Time Nick Message
00:12 lh joined #gluster
00:12 lh joined #gluster
01:16 gbrand_ joined #gluster
01:30 palmtown joined #gluster
01:31 palmtown Trying to remove a datastore from gluster
01:31 palmtown it keeps failing and hanging
01:31 palmtown is there a way to forcefully remove it?
01:43 blendedbychris joined #gluster
01:43 blendedbychris joined #gluster
01:45 palmtown figured it out
02:07 bharata joined #gluster
02:54 blendedbychris semiosis: you around? I keep getting "invoke-rc.d: initscript glusterfs-server, action "start" failed."
02:55 blendedbychris http://pastie.textmate.org/private/lowgmsnhzgejb3h2ys5q
02:55 glusterbot <http://goo.gl/lUIQW> (at pastie.textmate.org)
03:06 blendedbychris crap
03:28 trigger joined #gluster
03:30 trigger hello, I'm getting an error trying to mount a glusterfs as follows:  failed to fetch volume file (key:/storage)
04:16 zzyybb joined #gluster
04:16 blendedbychris trigger: what command are you using to mount?
04:17 zzyybb for a replicate volume, I got this status: Number of Bricks: 0 x 3 = 2
04:17 zzyybb is this normal?
04:17 zzyybb 3.3
04:17 zzyybb gluster 3.3
04:20 blendedbychris zzyybb: pretty sound math to me
04:20 blendedbychris :)
04:20 blendedbychris zzyybb: can you pastie your full output?
04:22 zzyybb http://pastebin.com/bAw8gMKN
04:22 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
04:23 zzyybb http://fpaste.org/0AkC/
04:23 glusterbot Title: Viewing Paste #249211 (at fpaste.org)
04:23 ika2810 joined #gluster
04:24 trigger Using the following command to mount: 192.168.100.10:/storage /storage           glusterfs defaults,_netdev 0 0
04:24 trigger using that in fstab
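[For reference: the fstab line trigger quotes corresponds to a manual mount like the following. This is a sketch from the log's own addresses; it requires the glusterfs FUSE client on the machine.]

```shell
# Manual equivalent of the quoted fstab entry (server/paths from the log)
mount -t glusterfs 192.168.100.10:/storage /storage

# The fstab entry itself:
# 192.168.100.10:/storage  /storage  glusterfs  defaults,_netdev  0 0
```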
04:25 trigger one thing to note is that on the client side I am running gluster 3.3.1
04:26 trigger on the server side, it is running 3.2.7
04:26 trigger thing is, on servers that were mounted previously
04:26 trigger they will mount fine
04:26 JoeJulian ding, ding, ding!
04:26 trigger but for new servers running 3.3.1 it will not mount
04:26 JoeJulian 3.3 and 3.2 are not rpc compatible.
04:27 trigger what's the easiest way to upgrade to 3.3.1 without breaking anything?
04:27 trigger the 3.2. is in production
04:28 JoeJulian It's going to require downtime. The smartest way is to unmount all your clients, stop your volume(s) and upgrade everything.
04:28 trigger is there a doc online with the process?
04:28 trigger like do I simply download the packages from http://bits.gluster.com/gluster/glusterfs/3.3.1/x86_64/ and use rpm -i to install?
04:28 glusterbot <http://goo.gl/eC1pE> (at bits.gluster.com)
04:29 JoeJulian What I'm expecting has happened is that your 3.2 clients connected to the 3.2 servers but erred on the 3.3 servers.
04:29 JoeJulian No, bits is built by jenkins. Use the ,,(yum repo)
04:29 glusterbot kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
04:30 trigger I've upgraded most of my servers the hard way
04:30 trigger by basically uninstalling and re-installing
04:30 trigger I'm down to my last farm
04:30 trigger and the servers are running 3.2.7
04:30 JoeJulian My fear is that you had a mix of 3.2 and 3.3 servers. If you did you may have created some split-brain.
04:30 trigger no...
04:30 JoeJulian Whew
04:30 trigger I have 3 different farms, 2 of them I've upgraded to 3.3.1
04:30 trigger by uninstalling and re-installing
04:30 trigger the last farm
04:31 trigger I need to connect a few clients to it
04:31 trigger the clients are 3.3
04:31 trigger but the last farm servers is 3.2.7
04:31 JoeJulian The yum upgrade should be sufficient. It'll take care of moving /etc/glusterd and rebuilding .vol files correctly.
04:31 trigger ok.. so to recap the upgrade process:
04:32 trigger 1. Dismount the clients, 2. stop the volume, 3. stop glusterd service, 4. run yum update gluster*, 5. start and re-mount clients?
04:33 JoeJulian 1,2,4,5
04:33 JoeJulian skip 3
04:33 trigger ok... let me give that a try
04:33 trigger hopefully it will go smooth, otherwise, I'm in the dog house :)
04:34 trigger thanks joejulian for your help
04:34 JoeJulian You're welcome
04:35 trigger hmm, one quick question, know this may be low grade
04:36 trigger can I download:  http://repos.fedorapeople.org/repos/kkeithle/glusterfs/epel-glusterfs.repo and it would have the updates I need to update to 3.3.1 ?
04:36 glusterbot <http://goo.gl/Yuv7R> (at repos.fedorapeople.org)
04:38 JoeJulian Assuming you're using an EL based distro, that goes in /etc/yum.repos.d and then yes, you can just 'yum upgrade gluster\*'
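[Pulling JoeJulian's advice together, the 3.2 → 3.3 upgrade he describes would look roughly like this on an EL box. The volume name and mount point are placeholders; this is a sketch of the steps discussed, not an official procedure.]

```shell
# Step 1: on each client, unmount the volume
umount /storage

# Step 2: on one server, stop the volume ("myvol" is a placeholder name)
gluster volume stop myvol

# Step 4: on every server, install kkeithley's repo file and upgrade.
# Per JoeJulian, there is no need to stop glusterd by hand (his "skip 3") --
# the package upgrade takes care of moving /etc/glusterd and the .vol files.
cd /etc/yum.repos.d
wget http://repos.fedorapeople.org/repos/kkeithle/glusterfs/epel-glusterfs.repo
yum upgrade 'gluster*'

# Step 5: restart the volume and remount the clients
gluster volume start myvol
mount /storage
```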
04:39 JoeJulian Good luck. I'm off to spend some family time.
04:40 trigger thanks again
04:57 bfoster_ joined #gluster
04:58 jdarcy_ joined #gluster
04:59 kkeithley1 joined #gluster
05:18 blendedbychris weird
05:18 bfoster joined #gluster
05:19 kkeithley joined #gluster
05:19 jdarcy joined #gluster
05:32 badone joined #gluster
05:35 lh joined #gluster
05:35 lh joined #gluster
05:53 zzyybb it seems gluster would hang when doing a volume rebalance, this is 3.3.1, it does not seem to be stable
06:00 Humble joined #gluster
06:00 ika2810 left #gluster
06:44 erik49_ joined #gluster
07:06 ika2810 joined #gluster
07:35 blendedbychris http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ << I tried this and still get that it was part of a brick before
07:35 glusterbot <http://goo.gl/YUzrh> (at joejulian.name)
07:35 blendedbychris it's a brand new dir though
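[The fix from that blog post boils down to clearing the volume-id extended attributes that GlusterFS leaves on a brick directory. A hedged sketch — the brick path is a placeholder, this must run on the brick path itself (not the client mount), and since the error names "a prefix of it", the parent directories may carry the same xattrs even when the brick dir is brand new:]

```shell
BRICK=/data/brick1   # placeholder brick path

# Remove the markers GlusterFS uses to recognize a directory as a brick
setfattr -x trusted.glusterfs.volume-id "$BRICK"
setfattr -x trusted.gfid "$BRICK"
rm -rf "$BRICK/.glusterfs"

# If it still complains on a fresh dir, check each parent for the same xattrs:
# getfattr -m . -d -e hex /data

service glusterd restart
```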
07:49 kshlm1 joined #gluster
08:51 blendedbychris I'm confused… i have two bricks that had mildly different data in them when they were added to this volume
08:51 blendedbychris replication...
08:51 blendedbychris gluster seemed to know how to self heal… but when i do info it's stuck
08:51 blendedbychris (on one directory)
08:52 blendedbychris at least i think it is…. it lists just that one file when i do heal vol info
08:52 blendedbychris I wish the documentation on what's going on during a heal was a bit more clear
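[For context, the heal-inspection commands being discussed are the 3.3 `gluster volume heal` subcommands; "myvol" is a placeholder volume name:]

```shell
gluster volume heal myvol info              # entries currently needing heal
gluster volume heal myvol info healed       # recently healed entries
gluster volume heal myvol info heal-failed  # entries the self-heal daemon gave up on
gluster volume heal myvol info split-brain  # entries in split-brain
gluster volume heal myvol full              # force a full self-heal crawl
```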
09:07 blendedbychris what's this mean… nfs must have at least one child subvolume < ?
09:24 mdarade joined #gluster
09:44 mdarade left #gluster
10:19 badone joined #gluster
10:57 manik joined #gluster
11:17 kkeithley1 joined #gluster
11:22 faizan joined #gluster
11:32 TSM2 joined #gluster
12:31 lh joined #gluster
12:31 lh joined #gluster
12:43 hagarth joined #gluster
14:10 wushudoin joined #gluster
14:21 lh joined #gluster
14:21 deepakcs joined #gluster
14:21 sunus joined #gluster
14:27 lh joined #gluster
14:27 lh joined #gluster
15:11 TSM2 joined #gluster
15:15 Nr18 joined #gluster
15:39 stefanha ndevos: I found a way to reproduce the wireshark GlusterFS dissector crash
15:39 stefanha ndevos: Is there a bug tracker you'd like me to update the details into?
15:39 stefanha Otherwise I can just share a link to the pcap file and show you how to trigger the crash
16:19 usrlocalsbin joined #gluster
16:19 usrlocalsbin left #gluster
16:20 blendedbychris joined #gluster
16:20 blendedbychris joined #gluster
16:29 TSM2 joined #gluster
16:31 trigger trying to get the latest version of gluster
16:31 trigger an anyone assist me in where I could find the repo?
16:33 trigger nm, got it
16:37 hagarth joined #gluster
16:50 Nr18 joined #gluster
16:58 lh joined #gluster
16:58 lh joined #gluster
17:02 DMooring joined #gluster
17:11 elyograg joined #gluster
17:12 deepakcs joined #gluster
17:15 sirius joined #gluster
17:15 Nr18 joined #gluster
17:23 sirius left #gluster
17:49 stefanha ndevos: Here is the bug report with pcap file to reproduce the segfault: https://github.com/nixpanic/gluster-wireshark-1.4/issues/21
17:49 glusterbot <http://goo.gl/ZZcpI> (at github.com)
17:51 Nr18 joined #gluster
18:18 joeto joined #gluster
18:50 gbrand__ joined #gluster
18:53 gbrand_ joined #gluster
19:12 nightwalk joined #gluster
19:15 erik49_ joined #gluster
20:21 mtanner_ joined #gluster
21:01 vincent_vdk joined #gluster
21:23 cyberbootje hi, is there already a good way to know if the gluster replica is in sync?
21:38 badone joined #gluster
21:46 H__ cyberbootje: the only way I know is an ls of all files and watching the logs during that
21:48 cyberbootje so if you miss that entry...
22:23 jiffe1 that's really not a good way either though, at least not a very good point-in-time way
22:23 jiffe1 because if there are a lot of files, there's a long window in which something can get out of sync
22:26 H__ i know, do you know a better way for us ?
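[H__'s "ls of all files" approach can be scripted: stat every file through a client mount, which makes the replicate translator check (and queue heals for) out-of-sync copies, then inspect the heal backlog. Mount point and volume name are placeholders; the final gluster call assumes 3.3+:]

```shell
MOUNT=${MOUNT:-/mnt/storage}   # placeholder client mount point

# stat() every file through the FUSE mount to trigger self-heal checks
find "$MOUNT" -noleaf -print0 | xargs -0 --no-run-if-empty stat >/dev/null

# then see what still needs healing (requires glusterfs 3.3+):
# gluster volume heal myvol info
```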
22:37 TSM2 maybe there should be a 'watchdog' service that can be hooked off the clients/servers to do the monitoring
23:42 TSM2 during a rebalance after adding new bricks, is the time to do this proportional to the size of the array?
