
IRC log for #gluster, 2014-11-11


All times shown according to UTC.

Time Nick Message
00:04 Maitre Yeah, rc.local isn't the "best" solution, it's just the easiest.  Note in fstab is a clever idea, I'll have to do that as well.  :P
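(The fstab approach mentioned here usually means a line like the following, with _netdev so the mount is only attempted once networking is up; the volume and mount point names are placeholders:

    localhost:/myvol  /mnt/gluster  glusterfs  defaults,_netdev  0  0
)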
00:11 jobewan joined #gluster
00:14 javi404 joined #gluster
00:33 aulait joined #gluster
00:35 theron joined #gluster
00:56 topshare joined #gluster
00:56 firemanxbr joined #gluster
01:03 vxitch joined #gluster
01:03 vxitch hallo
01:03 vxitch is it possible to upgrade gluster versions across a cluster without bringing it down?
01:21 coredump joined #gluster
01:25 David_H_Smith joined #gluster
01:30 _Bryan_ joined #gluster
01:38 vxitch how do i figure out why a peer is being rejected from the rest of the cluster? and/or how can i resolve this?
01:56 johndescs_ joined #gluster
01:56 javi4041 joined #gluster
01:56 uebera|| joined #gluster
01:56 uebera|| joined #gluster
01:57 Bardack joined #gluster
01:58 VeggieMeat joined #gluster
01:59 mator_ joined #gluster
02:01 haomaiwa_ joined #gluster
02:02 uebera|| joined #gluster
02:02 uebera|| joined #gluster
02:06 doekia joined #gluster
02:08 uebera|| joined #gluster
02:08 capri joined #gluster
02:08 javi404 joined #gluster
02:08 uebera|| joined #gluster
02:08 uebera|| joined #gluster
02:08 rastar_afk joined #gluster
02:09 radez_g0` joined #gluster
02:10 radez_g0` joined #gluster
02:10 marbu joined #gluster
02:11 churnd- joined #gluster
02:14 hagarth joined #gluster
02:14 churnd joined #gluster
02:21 JoeJulian Durzo: The documentation still says it's supported: https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_geo-replication.md
02:21 glusterbot Title: glusterfs/admin_geo-replication.md at master · gluster/glusterfs · GitHub (at github.com)
02:22 Durzo JoeJulian, Im getting a strange error after following docs, 'One or more nodes do not support the required op version' all my back & front ends are 3.5.2.. any idea?
02:23 JoeJulian My guess would be that something is stull running the old version - like a brick didn't get restarted or a client hasn't remounted.
02:23 JoeJulian s/stull/still/
02:23 glusterbot What JoeJulian meant to say was: My guess would be that something is still running the old version - like a brick didn't get restarted or a client hasn't remounted.
02:24 Durzo JoeJulian, both servers have been rebooted, frontends are EC2 autoscales that have since been replaced, 100% nothing old left
02:24 vxitch is it possible to upgrade gluster versions across a cluster without bringing it down?
02:24 Durzo vxitch, i did it lastnight from 3.4 to 3.5
02:24 JoeJulian vxitch: If it's replicated, yes.
02:25 vxitch our replica count matches the nr of bricks, one brick per host
02:25 Durzo JoeJulian, congrats on position btw :)
02:25 JoeJulian Thanks
02:26 JoeJulian vxitch: how many bricks?!?!
02:26 vxitch 6
02:26 vxitch not my design..
02:26 JoeJulian holy cow! replica 6?
02:26 vxitch yeeep
02:26 Durzo holy cow indeed
02:27 vxitch haha
02:27 JoeJulian yuck. bbiab, gotta go set the table for dinner.
02:27 Durzo vxitch, i followed the upgrade guide for 'rolling upgrade' and it worked in my 2 brick replica
02:27 Durzo that is, one server at a time, then frontends
02:27 vxitch oh, where is that guide? i couldnt find it
02:28 Durzo http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5
02:28 vxitch thanks
02:28 vxitch we're going from 3.5 to 3.6
02:28 vxitch also, that returns 404
02:28 Durzo works for me
02:29 vxitch hrm...
02:29 vxitch all of gluster.org/community 404s for me
02:29 vxitch which makes finding docs useless via google
02:29 David_H_Smith joined #gluster
02:29 Durzo special dns?
02:29 Durzo gluster.org has address 198.61.169.132
02:29 vxitch what. the fuck. works in safari, not in chrome
02:29 vxitch er
02:29 vxitch whatever
02:30 russoisraeli joined #gluster
02:31 vxitch so, would it be possible to avoid downtime with a brick count of 6 and replica 6?
02:32 Durzo if its one brick per host, sure
02:33 vxitch stop glusterd on a server, upgrade, restart it? do that one by one till all servers are complete?
02:33 Durzo perform a heal after each server upgrade and wait for completion, as per doc
02:34 Durzo servers first, then clients
02:34 DV joined #gluster
02:35 vxitch thanks
02:37 David_H_Smith joined #gluster
02:38 vxitch we're getting some really strange behavior. one of the peers (the 'head' node) shows all peers in cluster and connected
02:39 vxitch one of the peers, call it 05, shows only the head node as in cluster and connected, the others are shown rejected and connected
02:39 vxitch is there any visibility into why those are rejected? or what that means? time for a heal?
02:39 Durzo yeah thats a bit weird
02:40 Durzo have you upgraded any of them yet?
02:40 vxitch yeah, they've all been upgraded i see
02:40 sjohnsen joined #gluster
02:40 Durzo did you kill the glusterd and glusterfsd's after stopping the service?
02:40 vxitch i didnt do the upgrade, but i dont think so
02:40 Durzo atleast on ubuntu, the init script doesnt fully shut everything down
02:40 vxitch oh, thats great to know :(
02:41 vxitch no, i doubt that was done
02:41 Durzo thats why doc says to kill them
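(The rolling upgrade Durzo describes boils down to roughly this per-server sequence; a sketch only, since service and package names vary by distribution and <VOL> is a placeholder:

    # one server at a time, replicated volumes only
    service glusterfs-server stop        # stop glusterd via your init system
    pkill glusterfs; pkill glusterfsd    # the init script may leave brick/NFS/self-heal processes running
    apt-get install glusterfs-server     # or: yum update glusterfs\*
    service glusterfs-server start
    gluster volume heal <VOL> info       # wait for heals to finish before moving to the next server
)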
02:45 David_H_Smith joined #gluster
02:45 pradeepto joined #gluster
02:45 vxitch stopped glusterd, killed all glusterfs procs, started glusterd, peer is now rejected from all other peers
02:45 vxitch and head node rejects that one peer only, out of all the others
02:45 vxitch and yeah, the rpm versions all match
02:48 Durzo did you do the restart on all nodes?
02:49 bala joined #gluster
02:52 vxitch no, just 05
02:52 vxitch should i go around and boot them all?
02:52 vxitch they were restarted after they were upgraded
02:52 vxitch in case that was the question
02:54 David_H_Smith joined #gluster
03:25 David_H_Smith joined #gluster
03:26 mojibake joined #gluster
03:30 DV joined #gluster
03:30 David_H_Smith joined #gluster
03:34 rejy joined #gluster
03:36 Durzo hmm
03:36 plarsen joined #gluster
03:37 Durzo tracking down my problem of 'One or more nodes do not support the required op version' - i found this doc: http://www.gluster.org/community/documentation/index.php/Features/Opversion and according to /var/lib/glusterd/glusterd.info my gluster 3.5.2 server has operating-version of 1, which is NOT supposed to happen...
03:37 Durzo does anyone know if i can simply bump the version in glusterd.info ?? performing a peer probe does not bump
03:40 vxitch glusterd.info on my head node says opver is 1 as well
03:40 vxitch despite being peered and connected with 4 others
03:40 Durzo do you have functioning geo-replication ?
03:41 vxitch oh, i don't have geo-replication on this cluster
03:42 vxitch well im out for the night, thanks for your help. gl with your problem. i'll check the scrollback tomorrow, maybe something helpful will appear :)
03:43 daMaestro joined #gluster
03:43 Durzo http://www.gluster.org/community/documentation/index.php/OperatingVersions heh.. missing 3.5.2
03:43 kanagaraj joined #gluster
03:44 vxitch gluster is an awesome project...documentation leaves a lot to be desired though :)
03:44 Durzo yeah.. im guessing its 30502
03:44 vxitch that would make sense
03:44 vxitch it's interesting how i'm also at opver 1
03:44 vxitch meh
03:44 RameshN joined #gluster
03:45 vxitch well if you do change the values by hand report back, i'm interested in knowing what happened
03:45 Durzo prod server, not going to
03:45 Durzo trying a volume set instead
03:53 hagarth joined #gluster
03:57 Durzo success. running 'volume set <VOL> network.compression off' set my operating-version to 3
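(In other words, the cluster op-version recorded in glusterd.info can be inspected directly, and issuing a volume set appears to make glusterd renegotiate it; this is an observation from this exchange rather than a documented procedure, and <VOL> is a placeholder:

    grep operating-version /var/lib/glusterd/glusterd.info
    gluster volume set <VOL> network.compression off   # side effect observed here: operating-version bumped to 3
)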
03:57 shubhendu joined #gluster
04:00 Durzo sigh, even after bumping operating-version, trying to enable file:// based geo-repl gives 'Invalid slave name' in the logs
04:01 Durzo "Staging failed on localhost" via CLI
04:02 itisravi joined #gluster
04:03 haomai___ joined #gluster
04:04 nbalachandran joined #gluster
04:11 bharata-rao joined #gluster
04:14 kumar joined #gluster
04:14 shylesh__ joined #gluster
04:17 dusmant joined #gluster
04:19 ndarshan joined #gluster
04:21 ppai joined #gluster
04:24 dusmant joined #gluster
04:27 nishanth joined #gluster
04:34 atinmu joined #gluster
04:34 rafi joined #gluster
04:38 daMaestro joined #gluster
04:39 anoopcs joined #gluster
04:41 SOLDIERz joined #gluster
04:45 rjoseph joined #gluster
04:47 jiffin joined #gluster
04:49 Durzo ok JoeJulian FYI according to https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md geo-replication in 3.5 can no longer be a file:// and mut be a gluster volume.
04:49 glusterbot Title: glusterfs/admin_distributed_geo_rep.md at master · gluster/glusterfs · GitHub (at github.com)
04:50 Durzo this sucks big time, our gluster volume is 500GB and we had a 1TB SSD attached to each brick server just for geo-repl.. now it seems we have to create a whole new server and accomodate syncing half a TB over the wire now
04:51 Durzo sounds like a pretty giant step backward for gluster
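(For a gluster-volume slave, the only layout that distributed geo-replication guide describes, the setup looks roughly like the following; host and volume names are placeholders and the exact flags should be checked against the linked doc:

    gluster system:: execute gsec_create
    gluster volume geo-replication <MASTERVOL> <slavehost>::<slavevol> create push-pem
    gluster volume geo-replication <MASTERVOL> <slavehost>::<slavevol> start
)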
04:59 soumya__ joined #gluster
05:01 calisto joined #gluster
05:02 pradeepto joined #gluster
05:03 spoxaka joined #gluster
05:09 David_H_Smith joined #gluster
05:10 David_H_Smith joined #gluster
05:12 pp joined #gluster
05:16 spandit joined #gluster
05:25 calisto joined #gluster
05:27 _1_Moe joined #gluster
05:31 calisto joined #gluster
05:31 aravindavk joined #gluster
05:35 _Bryan_ joined #gluster
05:35 DV joined #gluster
05:38 ababu joined #gluster
05:38 x-only joined #gluster
05:39 bala joined #gluster
05:40 x-only Hello. Ive setup glusterfs and it works. The only issue I have when I reboot both gluster nodes at the same time. None of them gets mounted back after boot. It seems one node needs to be online in order for the second one to mount the superdisk after boot. Is this as designed, can it be change or did I make an error?
05:41 x-only (Im using replicated volumes on two servers)
05:43 Durzo i dont think clients should auto remount
05:43 overclk joined #gluster
05:44 sahina joined #gluster
05:45 kshlm joined #gluster
05:45 atalur joined #gluster
05:46 lalatenduM joined #gluster
05:47 kdhananjay joined #gluster
05:50 x-only Durzo: well, Im mounting localhost:/superdisk0 on each gluster node (the point of gluster here is to create shared storage between two servers), but unless the second node is also up, it wont get mounted automatically
05:52 ramteid joined #gluster
05:53 Durzo x-only, it should mount with only 1 server up, bot not with both down
05:53 bala joined #gluster
05:53 x-only so if one is down (permanently) and one gets rebooted, it wont mount it automatically after boot
05:53 Durzo can you be more specific when you say "one"
05:54 Durzo and "it"
05:55 x-only mmm. Not sure how. Ive got two servers, each with 1TB partition. Ive created shared glusterfs storage between them (replicated), and mounted it to some path, on both servers
05:55 x-only so there is no third client, just these two nodes with replicated storage
05:56 Durzo and when you mount the gluster volume, are you using localhost:VOL ?
05:56 x-only yes
05:56 Durzo what version of gluster?
05:56 x-only 3.5.2-1
05:57 x-only with Centos 64bit
05:57 kdhananjay left #gluster
05:57 x-only (centos 6.5)
05:58 Durzo yeah thats a bit iffy, iv never had the servers mounting themselves.. in theory it should work but the problem is the timing of glusterd starting and the mount being called
05:58 Durzo if both of your glusterd's are down, then you boot a server up and it tries to call mount before starting glusterd, then it will fail
05:59 x-only well there two servers are identical in every aspect, so the chances are they are booting also simultaneously
05:59 Durzo likewise, if you try calling mount too fast after glusterd has started, it will also fail
05:59 meghanam joined #gluster
05:59 meghanam_ joined #gluster
05:59 x-only I see
05:59 Durzo you could try moving your mount command out of fstab and into rc.local, put some sleep before it to give glusterd time to start up
05:59 Durzo the other thing is to check the logs to see what gluster is doing when it rejects the mount
05:59 x-only fair point, I can try that
06:00 Durzo it may reject due to the volume being bonked
06:00 Durzo cant really say without logs
06:08 anoopcs joined #gluster
06:09 ricky-ticky joined #gluster
06:18 saurabh joined #gluster
06:21 shubhendu joined #gluster
06:25 ppai joined #gluster
06:31 haomaiwang joined #gluster
06:32 x-only Durzo: putting delay in rc.local did indeed help :) Thanks!
06:32 Durzo np
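(What worked here was simply giving glusterd time to come up before the mount is attempted; a minimal rc.local sketch, using the volume name from this log and a placeholder mount point:

    # /etc/rc.local
    sleep 10                          # crude, but gives glusterd time to start and read the volfiles
    mount -t glusterfs localhost:/superdisk0 /mnt/superdisk0
)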
06:37 SOLDIERz joined #gluster
06:45 soumya_ joined #gluster
06:55 badone joined #gluster
06:56 ctria joined #gluster
07:03 dusmantkp_ joined #gluster
07:12 haomaiw__ joined #gluster
07:15 ricky-ticky joined #gluster
07:16 elico joined #gluster
07:17 aravindavk joined #gluster
07:17 RameshN joined #gluster
07:18 guntha_ joined #gluster
07:18 atinmu joined #gluster
07:28 glusterbot New news from newglusterbugs: [Bug 1162479] replace-brick doesn't work fine . <https://bugzilla.redhat.com/show_bug.cgi?id=1162479>
07:29 stigchri1tian joined #gluster
07:29 foster_ joined #gluster
07:29 davemc joined #gluster
07:35 soumya_ joined #gluster
07:38 Philambdo joined #gluster
07:38 atinmu joined #gluster
07:50 haomaiwang joined #gluster
07:56 uebera|| joined #gluster
07:58 Andreas-IPO joined #gluster
08:00 capri joined #gluster
08:05 ppai joined #gluster
08:18 sjohnsen joined #gluster
08:21 harish joined #gluster
08:24 Arrfab joined #gluster
08:24 [Enrico] joined #gluster
08:26 fsimonce joined #gluster
08:28 karnan joined #gluster
08:41 glusterbot New news from resolvedglusterbugs: [Bug 1058204] dht: state dump does not print the configuration stats for all subvolumes <https://bugzilla.redhat.com/show_bug.cgi?id=1058204> || [Bug 1061685] [RFE] Need support for taking snapshot of live (online) Gluster Volume <https://bugzilla.redhat.com/show_bug.cgi?id=1061685> || [Bug 1065654] nfs-utils should be installed as dependency while installing glusterfs-server <https://bug
08:42 deepakcs joined #gluster
08:45 vikumar joined #gluster
08:52 kdhananjay joined #gluster
08:54 elico joined #gluster
08:58 ppai joined #gluster
09:05 liquidat joined #gluster
09:07 haomaiwang joined #gluster
09:12 [Enrico] joined #gluster
09:12 glusterbot New news from resolvedglusterbugs: [Bug 1159253] GlusterFS 3.6.1 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1159253>
09:13 dusmant joined #gluster
09:19 RameshN joined #gluster
09:23 kdhananjay left #gluster
09:31 atalur joined #gluster
09:38 dusmant joined #gluster
09:41 haomaiw__ joined #gluster
09:47 leochill joined #gluster
09:48 kdhananjay joined #gluster
09:49 ppai joined #gluster
09:58 kdhananjay left #gluster
09:59 glusterbot New news from newglusterbugs: [Bug 1095179] Gluster volume inaccessible on all bricks after a glusterfsd segfault on one brick <https://bugzilla.redhat.com/show_bug.cgi?id=1095179> || [Bug 917901] Mismatch in calculation for quota directory <https://bugzilla.redhat.com/show_bug.cgi?id=917901>
09:59 haomaiwa_ joined #gluster
10:02 [Enrico] joined #gluster
10:18 ppai joined #gluster
10:20 T0aD joined #gluster
10:26 haomai___ joined #gluster
10:32 dusmant joined #gluster
10:58 xandrea joined #gluster
10:58 xandrea hi everyone
10:58 xandrea I was trying gluster with two centos servers
11:00 xandrea I wanna use KVM as Virtualization cluster, and I ask you what are the best way to set up correctly gluster for kvm
11:01 Humble http://www.gluster.org/documentation/use_cases/Virt-store-usecase/ xandrea
11:01 glusterbot Title: Virt-store-usecase Gluster (at www.gluster.org)
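(The virt-store use case doc linked above largely amounts to applying the predefined 'virt' option group and setting brick ownership for the hypervisor user; a rough sketch, where the uid/gid shown is the oVirt/vdsm convention rather than anything universal:

    gluster volume set <VOL> group virt            # applies /var/lib/glusterd/groups/virt (eager-lock, remote-dio, etc.)
    gluster volume set <VOL> storage.owner-uid 36  # 36 = vdsm on oVirt; plain libvirt/KVM setups may use the qemu uid instead
    gluster volume set <VOL> storage.owner-gid 36
)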
11:03 mucahit joined #gluster
11:04 xandrea thanks guys
11:06 lalatenduM joined #gluster
11:06 jiffin1 joined #gluster
11:09 spoxaka left #gluster
11:11 soumya_ joined #gluster
11:11 rafi1 joined #gluster
11:21 calum_ joined #gluster
11:25 dusmant joined #gluster
11:28 anti[Enrico] joined #gluster
11:33 RameshN joined #gluster
11:36 diegows joined #gluster
11:37 calisto joined #gluster
11:41 delhage joined #gluster
11:42 kkeithley1 joined #gluster
11:48 jvandewege hi guys, anyone who does know how to get ovirt and gluster to use the glusterfs libgf way to connect the disks to qemu? Followed http://www.ovirt.org/Features/GlusterFS_Storage_Domain but starting a VM and using virsh dumpxml still shows its using file mode
11:48 glusterbot Title: Features/GlusterFS Storage Domain (at www.ovirt.org)
11:50 ppai joined #gluster
11:55 ndevos jvandewege: you are looking for this: http://www.ovirt.org/Features/GlusterFS_Storage_Domain
11:55 glusterbot Title: Features/GlusterFS Storage Domain (at www.ovirt.org)
11:55 ndevos oh, thats the same link...
11:55 calisto joined #gluster
11:56 ndevos jvandewege: do you have a storage domain with vfsType glusterfs? like in the user-interface starting at http://www.ovirt.org/Features/GlusterFS_Storage_Domain#User_interface
11:56 glusterbot Title: Features/GlusterFS Storage Domain (at www.ovirt.org)
11:57 ndevos REMINDER: Gluster Community Bug triage starting in 2 minutes in #gluster-meeting
11:59 rafi joined #gluster
12:00 haomaiwang joined #gluster
12:02 kshlm joined #gluster
12:03 bennyturns joined #gluster
12:04 jvandewege hi niels, tried that yes at two different installs both are not working. Current install is F20 with ovirt-3.5 and glusterfs-3.5.2 and qemui-1.6.x, libvirt-1.2.9
12:04 jvandewege ndevos: yes storage type is glusterfs.
12:24 itisravi joined #gluster
12:30 sjohnsen joined #gluster
12:33 jiffin joined #gluster
12:38 ndevos jvandewege: hmm, that looks good for all I can tell, you may want to check with the ovirt guys
12:39 jvandewege ndevos: thanks will do.
12:40 ndevos jvandewege: we'd like to hear about any progress (or problems) you hit with that, could you send an email to gluster-users@gluster.org when you can share something?
12:41 ndevos or, blog about it, and send an email to the list with the URL, the admins can then add it to blog.gluster.org too
12:43 sahina joined #gluster
12:44 jvandewege ndevos: will summarize if found
12:45 rafi joined #gluster
12:45 soumya_ joined #gluster
12:45 dusmant joined #gluster
12:46 smohan joined #gluster
12:48 haomaiwa_ joined #gluster
12:49 RameshN joined #gluster
12:52 edward1 joined #gluster
12:53 ababu joined #gluster
12:54 lpabon joined #gluster
12:54 rafi joined #gluster
12:56 partner oh, the meeting went already, i even have calendar entry for it.. thought i wonder if it was done during the daylight saving..
13:00 glusterbot New news from newglusterbugs: [Bug 1158622] SELinux denial when mounting glusterfs nfs volume when using base-port option <https://bugzilla.redhat.com/show_bug.cgi?id=1158622> || [Bug 1158654] [FEAT] New Style Replication (NSR) <https://bugzilla.redhat.com/show_bug.cgi?id=1158654>
13:04 rafi1 joined #gluster
13:07 Philambdo joined #gluster
13:08 xandrea joined #gluster
13:08 kkeithley_ sometimes we forget to announce the meetings here. Sorry about that. Don't forget tomorrow's Gluster Community Meeting @ 12h00 UTC in #gluster-meeting.
13:09 * kkeithley_ thinks it's 12h00 UTC. better check
13:09 kkeithley_ yes, it's 12h00 UTC
13:10 kkeithley_ And next week's bug triage meeting is also at 12h00 UTC.  I think daylight savings has ended everywhere that does it.
13:13 LebedevRI joined #gluster
13:17 RameshN joined #gluster
13:19 kshlm joined #gluster
13:20 kshlm joined #gluster
13:20 bala joined #gluster
13:28 bene2 joined #gluster
13:34 Maitre Is 12h00 a reasonable hour, anywhere in the civilized world?  :P
13:41 smohan joined #gluster
13:43 hagarth joined #gluster
13:46 harish joined #gluster
13:48 Pupeno joined #gluster
13:48 Pupeno joined #gluster
13:49 haomaiw__ joined #gluster
13:51 joakim_24 joined #gluster
13:52 the-me lalatenduM: yes
13:53 coredump joined #gluster
13:54 pp joined #gluster
13:56 lalatenduM the-me, cool :)
14:01 morse_ joined #gluster
14:02 B21956 joined #gluster
14:03 stickyboy joined #gluster
14:03 al joined #gluster
14:10 partner i'll tune my reminder, correct time would be 2pm, now it runs on 3pm :o
14:10 jobewan joined #gluster
14:11 partner oh, not sure if its even bi-weekly, i'll just throw in enough reminders :D
14:11 Philambdo1 joined #gluster
14:14 dusmant joined #gluster
14:16 glusterbot New news from resolvedglusterbugs: [Bug 1161034] rename operation doesn't work <https://bugzilla.redhat.com/show_bug.cgi?id=1161034>
14:23 nbalachandran joined #gluster
14:26 virusuy joined #gluster
14:34 liquidat joined #gluster
14:38 xandrea joined #gluster
14:41 joakim_24 Hi, I am having trouble with a production glusterfs and I hope it is ok to ask about it in this channel. We are running KVM hosts on glusterfs 3.5.2 and we had a minor netowrk disturance on one node and that resultet in a number ov VM's loosing connection to their disks.
14:42 joakim_24 Some files are just IO error and when to you try to access other files the process just hangs and you can't even kill it.
14:42 _dist joined #gluster
14:44 joakim_24 Most VM's are running fine there are about 10 servers that faulted. Is there anyway to clean this mess up without restarting the volume and having to take all other VM's offline?
14:45 joakim_24 We have a number of "backgroung data self heal  failed" in the logs
14:47 pranithk joined #gluster
14:48 plarsen joined #gluster
14:49 julim joined #gluster
14:54 hagarth joakim_24: can you provide logs of the client that yields an IO error?
15:00 joakim_24 All clients have more or less error, we see a lot of "[client-rpc-fops.c:984:client3_3_fsync_cbk] 0-gv_nova-client-1: remote operation failed: Bad file descriptor" and "W [afr-inode-read.c:1927:afr_readv] 2-gv_nova-replicate-0: Failed on f55b437e-b023-412f-b499-1f071d0435f8 as split-brain is seen. Returning EIO."
15:02 pranithk joakim_24: Could you get 'getfattr -d -m. -e hex <brickpath>/.glusterfs/f5/5b/f55b437e-b023-412f-b499-1f071d0435f8' on both the bricks?
15:03 jdarcy joined #gluster
15:04 joakim_24 root@opsskucmp001:/var/log/glusterfs# getfattr -d -m. -e hex /gv_nova/brick/.glusterfs/f5/5b/f55b437e-b023-412f-b499-1f071d0435f8
15:04 joakim_24 getfattr: Removing leading '/' from absolute path names
15:04 joakim_24 # file: gv_nova/brick/.glusterfs/f5/5b/f55b437e-b023-412f-b499-1f071d0435f8
15:04 joakim_24 trusted.afr.gv_nova-client-0=0x000000000000000000000000
15:04 joakim_24 trusted.afr.gv_nova-client-1=0x000000000000000000000000
15:04 joakim_24 trusted.gfid=0xf55b437eb023412fb4991f071d0435f8
15:04 joakim_24 root@opsskucmp002:/var/log/glusterfs# getfattr -d -m. -e hex /gv_nova/brick/.glusterfs/f5/5b/f55b437e-b023-412f-b499-1f071d0435f8
15:04 joakim_24 getfattr: Removing leading '/' from absolute path names
15:04 joakim_24 # file: gv_nova/brick/.glusterfs/f5/5b/f55b437e-b023-412f-b499-1f071d0435f8
15:04 joakim_24 trusted.afr.gv_nova-client-0=0x000000000000000000000000
15:04 joakim_24 trusted.afr.gv_nova-client-1=0x000000000000000000000000
15:04 joakim_24 trusted.gfid=0xf55b437eb023412fb4991f071d0435f8
15:04 pranithk joakim_24: interesting, the file is not in split-brain :-/
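(The trusted.afr.<volume>-client-N values are AFR's pending-operation counters; all zeroes on both bricks means neither copy blames the other, which is why the xattrs above do not indicate split-brain even though the client log reports EIO. The client-side view can be cross-checked with the volume name from this log:

    gluster volume heal gv_nova info split-brain
    gluster volume heal gv_nova info
)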
15:05 pranithk joakim_24: what is the version of gluster you are using?
15:05 joakim_24 3.5.2
15:06 julim joined #gluster
15:06 SOLDIERz joined #gluster
15:06 pranithk joakim_24: what are the operations done before this issue happened?
15:06 bennyturns joined #gluster
15:08 pranithk joakim_24: will it be possible to give me the logs to analyze? most probably a remount of the volume will fix this, but I am not sure how it went into this state to begin with.
15:09 pranithk joakim_24: Lets decide about remount once we analyze logs?
15:09 joakim_24 Yes, I will send you the logs
15:10 pranithk JustinClift: Any server where joakim_24 can copy the logs?
15:12 lmickh joined #gluster
15:14 redbeard joined #gluster
15:15 jobewan joined #gluster
15:16 wushudoin joined #gluster
15:16 pranithk hagarth: ^^?
15:17 hagarth pranithk: what is the size of the tarball?
15:17 pranithk hagarth: joakim_24? ^^
15:20 ricky-ticky joined #gluster
15:21 joakim_24 53MB
15:21 pranithk hagarth: ^^
15:21 hagarth joakim_24: can you upload that to dropbox or something similar and provide a link to pranithk?
15:22 joakim_24 Yes, I will fix something
15:27 fsimonce joined #gluster
15:34 sjohnsen joined #gluster
15:36 daMaestro joined #gluster
15:37 tdasilva joined #gluster
15:45 fsimonce joined #gluster
15:49 mdavidson joined #gluster
15:52 markd_ joined #gluster
15:53 mdavidson joined #gluster
15:53 fsimonce joined #gluster
16:00 jobewan joined #gluster
16:03 jobewan joined #gluster
16:06 nshaikh joined #gluster
16:07 bene joined #gluster
16:07 Antitribu joined #gluster
16:13 anoopcs joined #gluster
16:18 jobewan joined #gluster
16:18 fsimonce joined #gluster
16:37 neofob joined #gluster
16:50 fsimonce joined #gluster
16:55 RameshN joined #gluster
16:55 meghanam joined #gluster
16:56 meghanam_ joined #gluster
17:00 RameshN_ joined #gluster
17:00 _Bryan_ joined #gluster
17:01 glusterbot New news from newglusterbugs: [Bug 1162767] DHT: Rebalance- Rebalance process crash after remove-brick <https://bugzilla.redhat.com/show_bug.cgi?id=1162767>
17:04 fsimonce joined #gluster
17:11 calisto joined #gluster
17:23 anoopcs joined #gluster
17:25 nshaikh joined #gluster
17:28 fsimonce joined #gluster
17:36 lalatenduM joined #gluster
17:39 * semiosis back
17:40 semiosis heh, no messages from glusterbot
17:43 zerick joined #gluster
17:49 zerick joined #gluster
17:54 sjohnsen joined #gluster
18:01 glusterbot New news from newglusterbugs: [Bug 1162805] A disperse 2 x (2 + 1) = 6 volume, kill two glusterfsd program, ls mountpoint abnormal. <https://bugzilla.redhat.com/show_bug.cgi?id=1162805>
18:02 calisto joined #gluster
18:09 rwheeler joined #gluster
18:38 glusterbot semiosis: I missed you.
18:40 rafi joined #gluster
18:46 sage_ joined #gluster
18:57 semiosis glusterbot: thx
18:57 glusterbot semiosis: you're welcome
18:59 _Bryan_ joined #gluster
19:05 failshell joined #gluster
19:29 zerick joined #gluster
19:31 neofob joined #gluster
19:49 coredump joined #gluster
19:51 Philambdo joined #gluster
19:53 zerick joined #gluster
19:53 lalatenduM joined #gluster
19:55 johndescs joined #gluster
20:29 Philambdo joined #gluster
20:33 Pupeno joined #gluster
20:45 the-me semiosis: ping
20:45 glusterbot the-me: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
20:45 semiosis pong
20:45 Philambdo joined #gluster
20:46 the-me semiosis: I am working on 3.6.1 in trunk currently. are there open changes by yourself?
20:46 semiosis i will submit them tonight
20:46 the-me .. but it could not enter debian jessie, it is targeted for experimental!
20:46 semiosis i just got back from a camping trip so haven't had a chance to work on 3.6.1, but i will later today
20:47 the-me ok could you check the debian/patches/ dir and submit both patches to upstream git? just spelling error fixes and manpage faults, yet
20:47 Pupeno joined #gluster
20:47 redbeard joined #gluster
20:48 firemanxbr joined #gluster
20:49 semiosis yes I will submit them for master & release branches where they apply
20:55 the-me ah and I forgot.. someone made a joke and included contrib/argp-standalone/config.status and contrib/argp-standalone/config.log in the release tarball
20:56 the-me AND contrib/argp-standalone/autom4te.cache/ !
20:59 the-me both patches: http://nopaste.linux-dev.org/?318917  http://nopaste.linux-dev.org/?318918
20:59 glusterbot Title: Perl Nopaste (at nopaste.linux-dev.org)
21:07 firemanxbr joined #gluster
21:14 andreask joined #gluster
21:15 andreask joined #gluster
21:37 _Bryan_ joined #gluster
21:39 anastymous joined #gluster
21:39 anastymous hi everyone
21:39 DV joined #gluster
21:41 anastymous For a mail storage volume handling lots of small files, Am I better to use distribute across 4 volumes for performance, or would that be somewhat detrimental because of the overhead?
21:42 anastymous I've tested using distribute+replica 2x2 .. now testing out my 2nd iteration of glusterfs
21:51 mator joined #gluster
21:56 badone joined #gluster
22:12 MugginsM joined #gluster
22:15 daMaestro joined #gluster
22:18 longshot902 joined #gluster
22:22 the-me semiosis: just uploaded and tagged 3.6.1-1, but I did not test it (target experimental), so if you encounter bugs or if you miss changes, go ahead please :)
22:22 semiosis ok thank you
22:22 the-me hope it is working xD I have to mess up with python foo
22:22 anastymous is it available as an apt-get package now ?
22:23 the-me just uploaded, in the next hours, yes
22:23 anastymous I built 3.6.1 from source last night
22:23 the-me https://packages.qa.debian.org/g/glusterfs.html
22:23 glusterbot Title: Debian Package Tracking System - glusterfs (at packages.qa.debian.org)
22:26 * MugginsM wonders if he can be bothered trying to build 3.6.1 for lucid
22:27 the-me ubuntu °_°
22:29 anastymous please ubuntu
22:32 the-me I hate it :D
22:49 ws2k3 is that for debian wheezy?
22:52 harish_ joined #gluster
22:55 anastymous is distribution over 4 nodes going to give me reasonably good performance with maildirs?
22:59 ws2k3 anastymous realy depends howmany times that that mail is accessed and howmany mailboxes you have ? 4 or 4000 also a small difference
22:59 ws2k3 the-me is that for debian wheezy ?
23:02 anastymous 79 mailboxes, totalling 37GB, i know its a piece of string. Just trying to work out best scenario
23:04 the-me ws2k3: build for debian jessie a ka testing, unstable a ka sid and experimental
23:06 ws2k3 the-me what is the current stable for debian wheezy(stable) ?
23:07 unwastable joined #gluster
23:07 the-me 3.2.7-3+deb7u1, which will not change, since it is a *release* ;)
23:07 unwastable hello, i wonder if someone can help me
23:07 the-me .. but I may create backportds
23:07 the-me currently based on 3.5.2-1
23:07 ws2k3 unwastable dont ask to ask just ask
23:08 ws2k3 the-me thanks
23:08 the-me *may*, nobody presses my flattr donation button xD
23:08 unwastable I have a replication server1 + server 2, and one of the 32bit NFS that mounted to server1 is showing "NFS: Buggy server - nlink == 0!"
23:08 unwastable anyone?
23:08 the-me just fun.. I would welcome testers for the backported packages before I may upload them :)
23:09 ws2k3 unwastable patiant if anyone knows an answer they will say it here
23:10 unwastable ok
23:10 ws2k3 the-me question if i have 2 servers with both 3 disks(1 os 2 data) would you recommand software raid(linux) and make it 2 bricks gluster or leave out the software raid and just make it 4 bricks in gluster ?
23:11 the-me unwastable: so if you are a Wheezy user and interested in testing backported packages for it, you would be welcome :)
23:13 unwastable ws2k3: i would do a server1: OS + 2 bricks  server2: OS + 2 bricks replication
23:13 ws2k3 unwastable so no raid? just 4 bricks?
23:13 the-me oO the only system I personaly know without any *hardware* RAID 1 (or raid >= 10) is my private notebook, but just because it can not fetch two harddisks. HW RAID >= 1 is mandatory in my business
23:14 ws2k3 the-me yeah but in my first use senario i dont have raid controllers in the machines i wanne start using glusterfs
23:15 the-me .. also my personal workstation has got a RAID 10 with BBWC from HP :D
23:15 ws2k3 i can do software raid
23:15 ws2k3 but dont know what is the best to go software raid or just 4 bricks
23:16 unwastable ws2k3: depending on what you trying to achieve, avoid SPOF, data scalability ot data redundancy
23:16 the-me IMO RAID is mandatory
23:16 unwastable the 4 bricks is RAID-0 equavalent
23:17 unwastable and you wont have SPOF
23:17 ws2k3 SPOF ? what does that mean
23:17 ws2k3 i wanted 4 bricks in replication so a 4 mirror
23:18 unwastable if you do software RAID, means it will be one brick at all time, you can't add one more block device on the same machine in the future
23:18 unwastable Single point of failure
23:19 unwastable that means server1:/brick1 = server2:/brick2, server1:/brick3 = server2:/brick4
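(The layout unwastable is describing is an ordinary 2x2 distribute-replicate volume; on the command line, bricks are listed so that each consecutive pair forms a replica set. Server and path names here are illustrative only:

    gluster volume create datavol replica 2 \
        server1:/export/brick1 server2:/export/brick2 \
        server1:/export/brick3 server2:/export/brick4
)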
23:20 ws2k3 how i would have a single point of failure if have 2 machines doing a software raid(mirror) and then create a 2 brick glusterfs volume) mirror
23:20 ws2k3 eatch server one brick
23:21 unwastable if you sure it wont be any addition of harddrive on the same machine, then you can software/hardware RAID and then glusterfs
23:21 ws2k3 then raid does local mirroring and glusterfs does not have to do that
23:21 ws2k3 you mean i am sure i dont wanne add disks to a machine later one ?
23:22 unwastable software/hardware RAID is still a SPOF, once its crash its crashes the whole volume, gluster replication giving you a RAID-0 equivalent to avoid SPOF
23:24 unwastable correction, hw/sw RAID does not crashes the volume, it will give you a assurance of data integraty
23:24 ws2k3 thats exacly what i wanne reach full data redundacy
23:25 unwastable the difference is glusterfs allow you to RAID-0 or RAID-1 across network on multiple machine with true adavantage of SPOF prevention, while sw/hw RAID is localized assurance
23:26 unwastable back to square one: if you wanted to have sw/hw RAID on each server, you got to make sure it wont be any more harddrive added later on
23:27 ws2k3 yeah i understand
23:27 unwastable what was your question?
23:27 anastymous heres another one :)    ZFS or XFS     for a single drive VPS
23:27 ws2k3 but my most inportant question is a local raid + 2 bricks better/faster then just a 4 brick mirror ?
23:27 ws2k3 anastymous for glusterfs brick ? XFS
23:28 anastymous yeh cool thats what I've done so far
23:28 unwastable whats your setup? tcp, 10gE, infiniband?
23:28 ws2k3 tcp 1 gbps
23:28 _Bryan_ joined #gluster
23:28 David_H__ joined #gluster
23:28 calisto joined #gluster
23:29 unwastable small files access will be slow with 4 bricks
23:29 ws2k3 cause it has to check all the bricks right ?
23:29 unwastable adding local RAID will be even slower
23:30 ws2k3 hmm that does not sounds logic to me
23:30 unwastable the writing will be slower than reading
23:30 ws2k3 i would say adding local raid would make it faster
23:30 PeterA joined #gluster
23:30 ws2k3 i think i have 90 % read and 10 % write
23:31 PeterA with a replica 2 volume, should we see the gfid under .glusterfs to have 2 hard links?
23:31 unwastable that make sense, your local RAID will be 1 brick
23:31 ws2k3 exacly
23:31 ws2k3 so i was thinking for my setup local raid would be a bad idea
23:31 ws2k3 would *not* be a bad idea
23:32 PeterA i still keep seeing 1024 heal-failed on my replica 2 gfs
23:32 unwastable what kind of system you have
23:32 unwastable PeterA have you checked the afr changelog?
23:32 ws2k3 16 gb ram quadcore 3 ssd's eatch machine
23:33 ws2k3 latency between machines will be very low they are plugged in the same switch
23:33 unwastable xeon with ECC or i-core?
23:33 ws2k3 xeon with ECC
23:33 PeterA Yes I did and only one line of on one of the node
23:33 PeterA glfsheal-sas02.log.1:[2014-11-08 01:31:41.878418] W [client-rpc-fops.c:2774:client3_3_lookup_cbk] 0-sas02-client-0: remote operation failed: No such file or directory. Path: 30abef10-f92c-47e1-a8b6-e9f273251714 (30abef10-f92c-47e1-a8b6-e9f273251714)
23:34 PeterA for a particular gfid entry
23:34 unwastable on the phone.. hang on
23:34 PeterA sure hanging
23:41 unwastable please hang on.. still on the line
23:42 skippy what is the recommended pattern for upgrading Gluster RPMs?  Should I `systemctl stop glusterd`; then forcibly kill any remaining Gluster processes; and then `yum update glusterfs-*` ?
23:43 skippy i'd like to upgrade from 3.5.2 to 3.6.1
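(A sketch of the per-server sequence skippy outlines, with the systemd unit and package glob as found on CentOS 7 and <VOL> as a placeholder; any 3.6-specific steps should be checked against the release's upgrade notes:

    systemctl stop glusterd
    pkill glusterfs; pkill glusterfsd   # stop any brick, self-heal or gluster NFS processes left behind
    yum update glusterfs\*
    systemctl start glusterd
    gluster volume heal <VOL> info      # let heals complete before touching the next server
)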
23:45 unwastable quick question: NFS: Buggy server - nlink == 0! on 32bit client why?
23:46 unwastable anyone?
23:49 anastymous a quick search on google suggests that FSCK may fix your issue
23:50 anastymous I'm no expert, but perhaps look at that?
23:51 anastymous http://www.backupcentral.com/phpBB2/two-way-mirrors-of-external-mailing-lists-3/backuppc-21/kernel-nfs-buggy-server-nlink-0-68852/
23:53 Durzo joined #gluster
23:54 Durzo anyone know the best way to file a bug for semiosis' ppa debs? the repo itself says not to contact him via email or launchpad :/
23:54 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
23:59 skippy glusterbot: help
23:59 glusterbot skippy: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin. You may also want to use the 'list' command to list all available plugins and commands.
