IRC log for #gluster, 2014-02-19

All times shown according to UTC.

Time Nick Message
00:07 diegows joined #gluster
00:25 vpshastry joined #gluster
00:30 sarkis joined #gluster
00:57 daMaestro|isBack joined #gluster
01:07 tokik joined #gluster
01:09 bala joined #gluster
01:16 elyograg joined #gluster
01:16 elyograg good evening, gluster folks.
01:17 elyograg gluster's distributed hash thing ... does it hash on the entire pathname or just the basename?
01:18 JoeJulian basename
01:19 elyograg thanks.  so I can move directories around and hashes won't require link files.
01:19 JoeJulian right
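
For context, DHT hashes only the basename: moving a file between directories keeps it on the same brick, while renaming the basename can make it hash to a different brick, in which case the brick the new name hashes to gets a zero-byte, sticky-bit pointer while the data stays put. A minimal way to spot such pointers on a brick (paths below are hypothetical):

    # a pointer entry shows up as 0 bytes with mode ---------T on the newly-hashed brick
    ls -l /export/brick1/docs/report-2014.txt
    # and names the subvolume that actually holds the data
    getfattr -n trusted.glusterfs.dht.linkto -e text /export/brick1/docs/report-2014.txt
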
01:20 glusterbot New news from newglusterbugs: [Bug 1023191] glusterfs consuming a large amount of system memory <https://bugzilla.redhat.com/show_bug.cgi?id=1023191>
01:35 vpshastry joined #gluster
01:48 cp0k joined #gluster
01:50 diegows joined #gluster
01:59 semiosis :O
02:00 JoeJulian O:
02:00 semiosis built a file change monitor over the weekend.  now time to write the unit tests.  it was too big & gnarly to do TDD for this part
02:00 semiosis hey joe
02:00 harish joined #gluster
02:01 JoeJulian Someday I'll do TDD...
02:02 semiosis right
02:02 semiosis i made a deal with myself that if i was going to do this ,,(java) thing i had to do it TDD all the way, so even if the project failed I would gain a skill
02:02 glusterbot https://github.com/semiosis/libgfapi-jni & https://github.com/semiosis/glusterfs-java-filesystem
02:02 semiosis not that it was ever really going to fail of course
02:03 semiosis but you know
02:03 JoeJulian of course
02:03 semiosis it was hard at first, but now i'm writing tests without even thinking
02:03 semiosis so winning
02:04 semiosis but yeah, tdd is an uphill climb
02:07 sroy joined #gluster
02:10 cp0k joined #gluster
02:14 srsc re: my problem of gluster mounts over ipoib not mounting on boot in debian, i've resorted to an LSBInitScript which polls /sys/class/net/ib0/operstate until the ib interface comes up, which is working but is pretty ugly: http://fpaste.org/78471/39277597/
02:14 glusterbot Title: #78471 Fedora Project Pastebin (at fpaste.org)
02:15 srsc i realize this is now a debian/infiniband problem and not a gluster problem, but if anyone's dealt with this before i'm open to suggestions
02:16 srsc Matthaeus: thanks for the help earlier, unfortunately executing "mount -a -O _netdev" in /etc/rc.local didn't work as ib wasn't available yet. it also didn't work when specified in /etc/network/interfaces, although it seems like it should have...
02:18 semiosis srsc: mount failures at boot were a common problem with gluster but have been solved in most distros, for most networking... it still comes up once in a while with odd equipment, usually a NIC slow to init
02:18 semiosis srsc: quick & dirty fix is to just add a 'sleep 10' or similar to the top of /sbin/mount.glusterfs
02:19 semiosis sometimes that works
02:19 srsc semiosis: heh, yeah, that posted LSBInitScript which starts on $remote_fs waits about 70 seconds for ib0 to come up...
02:20 semiosis well if it works that sounds a little bit better than editing /sbin/mount.glusterfs
02:20 semiosis hopefully your systems are not booting up often
02:20 srsc yeah, this might be my best bet. thanks for looking :)
02:20 semiosis yw
02:21 cp0k joined #gluster
02:21 semiosis might want to ping dbruhn or jclift_ about it, iirc they've used infiniband
02:21 semiosis @seen dbruhn
02:21 glusterbot semiosis: dbruhn was last seen in #gluster 4 hours, 17 minutes, and 20 seconds ago: <dbruhn> You got it, I have some bricks that are 100% full, and some that are almost not full and I had a rebalance go wonky, so I restarted with a force. Kind of surprised on how little data moved around so just wanted to make sure it wasn't due to the layout being wonky.
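
A trimmed-down sketch of the same approach as srsc's init script, for anyone hitting this on other setups: poll the InfiniBand interface's operstate and only then mount the _netdev filesystems. The interface name and timeout are assumptions; semiosis's 'sleep 10' near the top of /sbin/mount.glusterfs is the even quicker hack.

    #!/bin/sh
    # wait up to 60 seconds for ib0 to come up, then mount network filesystems
    for i in $(seq 1 60); do
        [ "$(cat /sys/class/net/ib0/operstate 2>/dev/null)" = "up" ] && break
        sleep 1
    done
    mount -a -O _netdev
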
02:42 RameshN joined #gluster
03:01 nightwalk joined #gluster
03:12 cp0k joined #gluster
03:14 MugginsM joined #gluster
03:19 bharata-rao joined #gluster
03:20 rfortier joined #gluster
03:33 rwheeler joined #gluster
03:37 cp0k joined #gluster
03:45 psyl0n joined #gluster
03:46 kanagaraj joined #gluster
03:47 shubhendu joined #gluster
03:52 cp0k joined #gluster
03:53 daMaestro joined #gluster
04:18 shylesh joined #gluster
04:22 ppai joined #gluster
04:28 satheesh joined #gluster
04:32 cp0k joined #gluster
04:38 hagarth joined #gluster
04:40 zwevans joined #gluster
04:45 prasanth joined #gluster
04:48 aquagreen greetings all, could someone answer this question? Do you know of anyone who successfully runs samba/CTDB on top of a 2-node gluster cluster (replica 2) which runs on VMs, each of which runs on a different physical host? I have everything working except ... ctdb fails to start on the second node, complaining that it can't lock the needed split-brain lock file.
04:49 aquagreen i.e., I just want windows VM client to continue (with a slight pause) to be able to access a volume if its partner dies.
04:49 aquagreen Doesn't seem like much to ask ;-)
04:49 aquagreen but it's caused me 5-6 days of fits to get there
04:49 aquagreen including trying to use cman/pacemaker/drbd (i.e., linux ha)
04:50 aquagreen even though gluster's performance for small files seems pretty uninspiring, I could deal with it if one of the nodes could just handle the loss of the other.
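
For reference, the usual CTDB-on-GlusterFS setup points the recovery lock at a file on a gluster mount shared by both nodes, and the error described here is typically CTDB failing to take that lock. A hedged sketch of the relevant settings (paths are assumptions and vary by distro):

    # /etc/sysconfig/ctdb (or /etc/default/ctdb on Debian), identical on both nodes
    CTDB_RECOVERY_LOCK=/mnt/ctdb-lock/.recovery_lock    # must live on the shared gluster mount
    CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
    CTDB_MANAGES_SAMBA=yes
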
04:51 ajha joined #gluster
04:53 davinder joined #gluster
04:58 kdhananjay joined #gluster
05:01 ndarshan joined #gluster
05:06 lalatenduM joined #gluster
05:10 bala joined #gluster
05:15 spandit joined #gluster
05:18 saurabh joined #gluster
05:18 bala joined #gluster
05:18 raghug joined #gluster
05:21 bharata-rao joined #gluster
05:22 cp0k joined #gluster
05:26 RameshN joined #gluster
05:27 nshaikh joined #gluster
05:27 raghu joined #gluster
05:40 sas joined #gluster
05:40 cp0k joined #gluster
05:45 rjoseph joined #gluster
05:49 zwevans hi folks.  Running gluster 3.3.2 and having trouble getting glustershd running.  Healing commands fail as a result.  Each of the 2 nodes has been rebooted and the glusterfs/d services restarted.  Is this a familiar problem to anyone?  Based on glustershd.log, it hasn't been running for quite some time now.
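
A common first step for a missing self-heal daemon (assuming 3.3+; the volume name below is hypothetical) is a forced start, which respawns missing auxiliary processes without touching running bricks:

    gluster volume status myvol          # check whether the Self-heal Daemon shows as online
    gluster volume start myvol force     # respawns missing shd/NFS/brick processes in place
    gluster volume heal myvol            # then retry the heal
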
05:51 vpshastry joined #gluster
05:52 glusterbot New news from newglusterbugs: [Bug 1066778] Make AFR changelog attributes persistent and independent of brick position <https://bugzilla.redhat.com/show_bug.cgi?id=1066778>
05:56 rastar joined #gluster
05:56 mohankumar joined #gluster
06:01 dusmant joined #gluster
06:04 haomaiwa_ joined #gluster
06:08 ktosiek joined #gluster
06:09 pk joined #gluster
06:12 rossi_ joined #gluster
06:18 hagarth rjoseph: ping
06:19 itisravi joined #gluster
06:20 benjamin_____ joined #gluster
06:22 nshaikh joined #gluster
06:22 cp0k joined #gluster
06:27 bharata-rao joined #gluster
06:36 meghanam joined #gluster
06:36 meghanam_ joined #gluster
06:37 vimal joined #gluster
06:41 Philambdo joined #gluster
06:48 cp0k joined #gluster
06:50 elyograg left #gluster
07:12 haomai___ joined #gluster
07:14 dusmant joined #gluster
07:15 davinder joined #gluster
07:22 shyam joined #gluster
07:22 jtux joined #gluster
07:22 rossi_ joined #gluster
07:23 glusterbot New news from newglusterbugs: [Bug 1066798] Hard link migration fails in remove-brick start operation <https://bugzilla.redhat.com/show_bug.cgi?id=1066798>
07:26 cp0k joined #gluster
07:28 mohankumar joined #gluster
07:32 micu joined #gluster
07:36 cp0k joined #gluster
07:39 nightwalk joined #gluster
07:40 glusterbot` joined #gluster
07:40 mohankumar joined #gluster
07:42 JoeJulian joined #gluster
07:44 ctria joined #gluster
07:46 mohankumar joined #gluster
08:01 rgustafs joined #gluster
08:01 jtux joined #gluster
08:03 eseyman joined #gluster
08:09 dusmant joined #gluster
08:10 nightwalk joined #gluster
08:18 kevein joined #gluster
08:19 spiekey joined #gluster
08:22 shapemaker joined #gluster
08:23 cjanbanan joined #gluster
08:23 franc joined #gluster
08:23 franc joined #gluster
08:24 glusterbot New news from newglusterbugs: [Bug 1066837] Create of file fails on cifs mount of a gluster volume <https://bugzilla.redhat.com/show_bug.cgi?id=1066837>
08:36 ProT-0-TypE joined #gluster
08:40 VeggieMeat joined #gluster
08:41 markuman_ left #gluster
08:46 nightwalk joined #gluster
08:47 fsimonce joined #gluster
08:52 RameshN joined #gluster
08:52 shubhendu joined #gluster
08:54 glusterbot New news from newglusterbugs: [Bug 1066837] File creation on cifs mount of a gluster volume fails <https://bugzilla.redhat.com/show_bug.cgi?id=1066837>
08:56 dusmant joined #gluster
08:56 kanagaraj joined #gluster
08:56 ndarshan joined #gluster
08:59 aravindavk joined #gluster
09:00 satheesh joined #gluster
09:07 andreask joined #gluster
09:13 liquidat joined #gluster
09:25 nightwalk joined #gluster
09:26 raghug joined #gluster
09:30 satheesh joined #gluster
09:39 hchiramm__ joined #gluster
09:40 ccha2 hello, heal info takes too long. What can I do?
09:42 Slash__ joined #gluster
09:43 ccha2 what happens when I add a new replica brick?
09:45 andreask joined #gluster
09:45 andreask joined #gluster
09:54 raghug joined #gluster
09:54 glusterbot New news from newglusterbugs: [Bug 1066860] GlusterFS version - client and server <https://bugzilla.redhat.com/show_bug.cgi?id=1066860>
09:55 olisch joined #gluster
09:55 NuxRo ndevos: hi, so you have a working gluster patch for cloudstack 4.3?
09:57 ndevos NuxRo: yeah, a "should be working" version is on https://forge.gluster.org/cloudstack-gluster/cloudstack/commits/wip/4.3/gluster
09:57 glusterbot Title: Commits in cloudstack-gluster/cloudstack:wip/4.3/gluster - Gluster Community Forge (at forge.gluster.org)
09:58 NuxRo ndevos: cool, I'll try to apply it to latest 4.3 RC and see how that goes
09:59 ndevos NuxRo: feedback is very much welcome!
10:00 NuxRo ack
10:00 calum_ joined #gluster
10:00 NuxRo btw, from your pov, would there be any advantage in using gluster this way vs the "shared mount point" option?
10:01 ndevos NuxRo: if you use it this way, qemu uses libgfapi and does not go through the fuse-layer, way better for performance
10:02 NuxRo ndevos: hmm, even on EL6?
10:02 NuxRo i thought EL6 qemu is not libgfapi-capable
10:03 ndevos NuxRo: qemu-kvm in 6.5 has libgfapi support, I think
10:03 NuxRo that'd rock
10:04 ndevos NuxRo: you can check which dependencies the rpm has: rpm -q --requires qemu-kvm | grep gfapi
10:05 ProT-0-TypE joined #gluster
10:05 NuxRo rpm -q --requires qemu-kvm | grep gfapi
10:05 NuxRo libgfapi.so.0()(64bit)
10:06 NuxRo awesome
10:06 NuxRo I'm building the patched cloudstack 4.3 now
10:07 NuxRo i thought libvirt would use this via fuse, otherwise i could have given this more importance
10:07 NuxRo we could have pushed for it to be part of 4.3 :)
10:08 ndevos well, libvirt (creation of images) is done through fuse, but the disk attaching is done over the gluster:// urls
10:08 NuxRo that's really great news
10:09 ndevos so, on a cloudstack server, you will see a glusterfs mountpoint (automatically done by libvirt when needed), but qemu does not use the mountpoint
10:09 NuxRo wait, what's the minimum server version that i need to make this work?
10:09 ndevos I guess rhel-6.5?
10:09 nightwalk joined #gluster
10:09 NuxRo glusterfs-server that is
10:10 ndevos 3.4 iirc
10:10 qdk joined #gluster
10:11 NuxRo cool, I'm on glusterfs-server-3.4.0-8.el6.x86_64
10:11 NuxRo should plan an upgrade these days though
10:11 ndevos well, test first :)
10:12 NuxRo obviously :)
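
A quick way to confirm the libgfapi path end to end, independent of CloudStack: qemu-img can address images by gluster:// URL directly, assuming the qemu-img build also links libgfapi (server and volume names below are hypothetical):

    qemu-img create -f qcow2 gluster://storage1/primary/test-disk.qcow2 1G
    qemu-img info gluster://storage1/primary/test-disk.qcow2
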
10:12 bharata_ joined #gluster
10:13 satheesh4 joined #gluster
10:14 davinder joined #gluster
10:18 shyam joined #gluster
10:20 NuxRo http://tmp.nux.ro/cloudsnap430_4440/ rpms here should anyone else feel adventurous
10:20 glusterbot Title: Index of /cloudsnap430_4440 (at tmp.nux.ro)
10:31 harish joined #gluster
10:35 kanagaraj joined #gluster
10:35 shubhendu joined #gluster
10:35 NuxRo ndevos: do I need to add anything into the DB or smth? I can't seem to be able to add gluster primary storage http://fpaste.org/78538/28060541/
10:35 glusterbot Title: #78538 Fedora Project Pastebin (at fpaste.org)
10:36 rgustafs joined #gluster
10:36 ndevos NuxRo: uh... that looks broken
10:36 ndarshan joined #gluster
10:37 ndevos NuxRo: I've tested almost the same patches on 4.4, and did not hit this error :-/
10:37 dusmant joined #gluster
10:38 NuxRo how usable is 4.4 at this point?
10:39 ndevos well, works for me, mostly - but I'm no real CloudStack user
10:39 ndevos I'll try 4.3 now and see what's up, maybe I've pushed a wrong patch to the forge or something...
10:40 NuxRo if you can get it to work with 4.3 that'd be great
10:40 NuxRo ping me and I'll rebuild/reinstall the rpms
10:40 ndevos I'll have a look and see what I can do
10:41 ndevos I'll rebase the 4.3 branch to the current upstream 4.3 too, maybe that helps
10:42 NuxRo cool
10:42 NuxRo I'm on rev 4440 now (RC6)
10:45 ndevos NuxRo: how does that map to a git commit?
10:57 NuxRo ndevos: no idea, maybe this email has the info you seek: http://www.mail-archive.com/dev@cloudstack.apache.org/msg23330.html
10:57 glusterbot Title: [VOTE] Apache CloudStack 4.3.0 (sixth round) (at www.mail-archive.com)
10:58 nightwalk joined #gluster
11:03 RameshN joined #gluster
11:04 kanagaraj joined #gluster
11:07 jmarley joined #gluster
11:07 jmarley joined #gluster
11:09 ngoswami joined #gluster
11:21 raghug joined #gluster
11:23 mohankumar joined #gluster
11:26 jag3773 joined #gluster
11:28 mohankumar joined #gluster
11:28 edward2 joined #gluster
11:34 ndevos NuxRo: hmm, I've rebased my patches for 4.3 and I can add a glusterfs primary pool without problem
11:35 aravindavk joined #gluster
11:36 RameshN joined #gluster
11:43 edward3 joined #gluster
11:45 ndevos NuxRo: I'm uploading the rpms I've built to http://devos.fedorapeople.org/cloudstack-gluster/, they should be there in a few minutes
11:48 meghanam joined #gluster
11:48 spandit joined #gluster
11:48 meghanam_ joined #gluster
11:51 ndarshan joined #gluster
11:51 ppai joined #gluster
11:51 nightwalk joined #gluster
11:56 pk left #gluster
11:59 nixpanic joined #gluster
11:59 nixpanic joined #gluster
11:59 burn420 joined #gluster
12:01 vpshastry1 joined #gluster
12:07 ira joined #gluster
12:07 itisravi joined #gluster
12:19 khushildep joined #gluster
12:24 X3NQ joined #gluster
12:27 verdurin joined #gluster
12:30 hagarth joined #gluster
12:40 olisch joined #gluster
12:41 psyl0n joined #gluster
12:41 psyl0n joined #gluster
12:46 diegows joined #gluster
12:48 vkoppad joined #gluster
12:50 nightwalk joined #gluster
12:53 NuxRo ndevos: awesome, is that rev 4440?
12:55 spandit joined #gluster
12:56 RameshN joined #gluster
12:56 ndarshan joined #gluster
13:04 aravindavk joined #gluster
13:23 X3NQ joined #gluster
13:26 rwheeler joined #gluster
13:27 glusterbot New news from newglusterbugs: [Bug 1066996] Using sanlock on a gluster mount with replica 3 (quorum-type auto) leads to a split-brain <https://bugzilla.redhat.com/show_bug.cgi?id=1066996>
13:30 sroy joined #gluster
13:42 mojorison joined #gluster
13:45 zwevans joined #gluster
13:47 ccha2 my replicated volume got some files with same md5 but different trusted.gfid
13:48 ccha2 on 1 server, trusted.gfid got the hard link.
13:49 RameshN joined #gluster
13:50 benjamin_____ joined #gluster
13:53 nightwalk joined #gluster
13:57 glusterbot New news from newglusterbugs: [Bug 1066997] geo-rep package should be dependent on libxml2-devel package <https://bugzilla.redhat.com/show_bug.cgi?id=1066997>
14:04 hagarth @fileabug
14:04 glusterbot hagarth: Please file a bug at http://goo.gl/UUuCq
14:05 aquagreen joined #gluster
14:07 jag3773 joined #gluster
14:08 bennyturns joined #gluster
14:11 johnmilton joined #gluster
14:14 ndevos NuxRo: that is the 4.3 branch from git... there dont seem to be any tags in it, so I can't really say...
14:26 ira joined #gluster
14:27 jag3773 joined #gluster
14:27 ira joined #gluster
14:27 glusterbot New news from newglusterbugs: [Bug 1067011] Logging improvements <https://bugzilla.redhat.com/show_bug.cgi?id=1067011>
14:33 theron joined #gluster
14:34 psyl0n joined #gluster
14:37 plarsen joined #gluster
14:39 B21956 joined #gluster
14:40 primechuck joined #gluster
14:40 dbruhn joined #gluster
14:42 lalatenduM joined #gluster
14:44 rpowell joined #gluster
14:47 benjamin_____ joined #gluster
14:48 japuzzo joined #gluster
14:57 glusterbot New news from resolvedglusterbugs: [Bug 1065383] rpm: glusterfs-server pulls in glusterfs-geo-replication, merge them into one <https://bugzilla.redhat.com/show_bug.cgi?id=1065383>
14:59 vpshastry joined #gluster
15:00 lpabon joined #gluster
15:00 kkeithley gluster community meeting in #gluster-meeting right now
15:00 kanagaraj joined #gluster
15:01 zaitcev joined #gluster
15:06 psyl0n joined #gluster
15:10 lmickh joined #gluster
15:11 danny__ joined #gluster
15:13 ndk joined #gluster
15:18 danny__ anybody here from Minneapolis area?
15:19 bugs_ joined #gluster
15:20 nightwalk joined #gluster
15:22 kaptk2 joined #gluster
15:26 danny__ If anyone here is from the Minneapolis area and would like to work on a Gluster installation for a few days send me a msg
15:28 wushudoin joined #gluster
15:30 jobewan joined #gluster
15:32 haomaiwang joined #gluster
15:37 haomai___ joined #gluster
15:38 nightwalk joined #gluster
15:41 glusterbot joined #gluster
15:41 ktosiek joined #gluster
15:44 P0w3r3d joined #gluster
15:44 mohankumar joined #gluster
15:49 bennyturns joined #gluster
15:53 sarkis joined #gluster
15:55 daMaestro joined #gluster
15:56 glusterbot New news from resolvedglusterbugs: [Bug 1065383] rpm: glusterfs-server pulls in glusterfs-geo-replication, merge them into one <https://bugzilla.redhat.com/show_bug.cgi?id=1065383>
16:02 nightwalk joined #gluster
16:06 lpabon joined #gluster
16:07 ndk` joined #gluster
16:09 quique joined #gluster
16:10 sputnik13 joined #gluster
16:10 glusterbot New news from newglusterbugs: [Bug 1023191] glusterfs consuming a large amount of system memory <https://bugzilla.redhat.com/show_bug.cgi?id=1023191> || [Bug 1060259] 3.4.3 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1060259> || [Bug 1065551] Unable to add bricks to replicated volume <https://bugzilla.redhat.com/show_bug.cgi?id=1065551> || [Bug 1066837] File creation on cifs mount of a gluster volume fails <http
16:14 psyl0n joined #gluster
16:15 jmarley joined #gluster
16:31 mkzero joined #gluster
16:31 xymox joined #gluster
16:33 shyam joined #gluster
16:33 vpshastry joined #gluster
16:38 zerick joined #gluster
16:38 vpshastry left #gluster
16:39 jag3773 joined #gluster
16:42 rotbeard joined #gluster
16:42 glusterbot New news from newglusterbugs: [Bug 1065639] Crash in nfs with encryption enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1065639>
16:53 davinder joined #gluster
16:55 spiekey left #gluster
16:57 psyl0n joined #gluster
17:01 LoudNoises joined #gluster
17:04 NuxRo ndevos: what patch did you use for the 4.3 rpms? I'd like to try it against their RC6
17:05 ndevos NuxRo: 1. https://forge.gluster.org/cloudstack-gluster/cloudstack/commit/5a932be042afb7ca3af67a4f31a3818aee533866?format=patch
17:05 ndevos NuxRo: 2. https://forge.gluster.org/cloudstack-gluster/cloudstack/commit/e42c91dd5b491720d065b446403dedbb1cf73ef0?format=patch
17:05 glusterbot ndevos: You've given me 5 invalid commands within the last minute; I'm now ignoring you for 10 minutes.
17:06 ndevos glusterbot: huh?!
17:06 NuxRo lol
17:07 NuxRo is there a right order to apply them or it doesnt matter?
17:07 NuxRo I'll just follow the order you gave me :)
17:08 cp0k_ joined #gluster
17:09 sputnik13 joined #gluster
17:10 KyleG joined #gluster
17:10 KyleG joined #gluster
17:11 ndevos NuxRo: the order does not really matter, it's just that 2 does not make any sense without 1
17:12 cp0k joined #gluster
17:14 khushildep joined #gluster
17:15 NuxRo ndevos: ack
17:16 tziOm joined #gluster
17:18 Philambdo joined #gluster
17:22 khushildep joined #gluster
17:24 srsc left #gluster
17:33 cp0k JoeJulian: Hey, so I got down to the bottom of why I was seeing that exit code 196, it turned out to be one of the clients having itself probed at one point...just as was the case here: http://www.gluster.org/pipermail/gluster-users/2013-June/036186.html
17:33 glusterbot Title: [Gluster-users] held cluster lock blocking volume operations (at www.gluster.org)
17:33 natgeorg joined #gluster
17:38 rwheeler joined #gluster
17:39 keytab joined #gluster
17:39 steved_ joined #gluster
17:40 steved_ hello, I'm experiencing a heal-failed state but it looks like the files exist on both nodes, and time stamps and file sizes are the same
17:41 steved_ gluster volume heal rep1 info heal-failed Gathering Heal info on volume rep1 has been successful  Brick 10.0.10.2:/mnt/storage/lv-storage-domain/rep1 Number of entries: 6 at                    path on brick ----------------------------------- 2014-02-14 17:25:50 <gfid:2d619e4d-6823-4fb3-ad9d-f33459833f85> 2014-02-14 17:25:50 <gfid:0708b030-3445-4015-a6dd-c28446e81e67> 2014-02-14 17:25:50 <gfid:ce29e947-bebd-4b6a-a619-010931b5f9a8>
17:41 steved_ yikes..
17:41 steved_ [root@ovirt001 rep1]# gluster volume heal rep1 info heal-failed
17:41 steved_ Brick 10.0.10.2:/mnt/storage/lv-storage-domain/rep1
17:42 steved_ Number of entries: 6
17:42 Mo_ joined #gluster
17:42 steved_ Brick 10.0.10.3:/mnt/storage/lv-storage-domain/rep1
17:42 steved_ Number of entries: 0
17:42 ndevos ~split-brain | steved_
17:42 glusterbot steved_: To heal split-brain in 3.3+, see http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ .
17:43 steved_ split brain shows 0 / 0
17:44 ndevos well, still looks like a split-brain...
17:44 steved_ ya, nodes are in quorum auto
17:44 steved_ so not sure how it split
17:44 cp0k you think that is a scary split-brain? take a look at where I currently am: http://fpaste.org/78673/92831849/
17:44 glusterbot Title: #78673 Fedora Project Pastebin (at fpaste.org)
17:44 ndevos can you check 'getfattr -ehex -m. -d /path/to/file' and see if the gfid is the same, and there are no afr xattrs (or are set to 0)
17:45 elyograg joined #gluster
17:45 ndevos do the getfattr for both copies if the file, on each brick :)
17:46 steved_ k
17:46 ndevos if it does not list the filename, but only a gfid, the /path/to/file is like /path/to/brick/.glusterfs/<part-gfid>/<2nd-part-gfid>/<full-gfid>
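
Spelled out for steved_'s replica 2 layout (the file path itself is hypothetical): run the same getfattr on each brick's copy and compare.

    # on 10.0.10.2
    getfattr -ehex -m. -d /mnt/storage/lv-storage-domain/rep1/path/to/file
    # on 10.0.10.3
    getfattr -ehex -m. -d /mnt/storage/lv-storage-domain/rep1/path/to/file
    # healthy: trusted.gfid identical on both copies, and every trusted.afr.<vol>-client-N
    # attribute either absent or all zeroes (0x000000000000000000000000)
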
17:47 elyograg I'm having trouble with a gluser NFS mount.  There's a bunch of info to relay.
17:47 neurodrone__ joined #gluster
17:47 steved_ jebas, k
17:48 elyograg eight servers in the cluster.  two of them are used as network access points, they do not have any bricks.  Four of them have bricks for our main volume.  two of them were recently added and have bricks for a new volume.
17:48 cp0k ndevos: I just checked on a file that comes up in the split-brain result and 2 of 4 of my storage nodes show a 'afr' value
17:49 cp0k ndevos: is it good or bad to have a afr value?
17:49 elyograg Everything is fine with the main volume.    the two servers for the newer volume have a skyrocketing load average.
17:50 ndevos cp0k: if those files are not accessed at the moment (a split-brain prevents that), then teh afr values can be used to identify what kind of split-brain it is (data, meta-data or entry)
17:50 elyograg This happened yesterday.  I was able to get it working again by completely stopping all gluster processes on those two machines.  Now it's happening again, and I can't see any reason for it.  Nothing's been happening on the volume since yesterday.
17:50 JoeJulian cp0k: And it stops counting at 1023... :( When I've had a volume that bad before, I just chose a brick to call healthy and wiped the other.
17:51 steved_ @ndevos same gfid, afr.<vol>-client-0 / 1 = 0x000...
17:51 ndevos elyograg: not using ,,(ext4), right?
17:51 glusterbot elyograg: The ext4 bug has been fixed in 3.3.2 and 3.4.0. Read about the ext4 problem at http://goo.gl/Jytba or follow the bug report here http://goo.gl/CO1VZ
17:51 elyograg xfs.
17:51 cp0k JoeJulian:  how do you determine which brick was healthy?
17:51 elyograg centos 6.
17:51 elyograg gluster 3.3.1 from kkeithley's repo.
17:52 ndevos steved_: hmm, then there should not be a reason to heal them... you have noticed that the log dates are from a couple of days ago?
17:52 JoeJulian cp0k: depends.. in my case I had a spare drive so I just punted and hoped for the best and archived the drive.
17:52 cp0k 2 of my 4 storage nodes have been under very heavy load since the upgrade to 3.4.2, I also found that I have an old '.glusterfs.broken' dir in the root of my volume...is it possible that Gluster is picking up on this dir as well?
17:53 elyograg upgrading to 3.4.2 is planned, but we've learned that we can't do it without taking things offline -- the rolling upgrade instructions cause things to become inaccessible via the NFS mount.
17:53 elyograg we did the upgrade process on a testbed.
17:53 ndevos steved_: iirc, the 'heal-failed' command shows a log, maybe heal was successful afterwards...
17:53 steved_ @ndevos yep its from a few days ago, does this mean its healed then -- ah ok
17:54 cp0k JoeJulian: gotcha, I am hoping that the old .glusterfs.broken dir is confusing gluster into thinking there is far more split-brain happening than there really is
17:54 steved_ @ndevos so if I run heal info and it comes back clean then I should be ok?
17:54 JoeJulian cp0k: could be. Can you just mv it out of the brick dir?
17:54 cp0k elyograg: I did the 3.4.2 upgrade in a staging env as well, then did it live via the "downtime" route
17:54 ndevos steved_: if heal keeps on failing, you would not be able to access the files that are in split-brain, and you would have recent log entries in /var/log/glusterfs/<mount-point>.log
17:55 elyograg to fix this immediate problem, I can probably do the same process stop/restart ... but I'd like to know why it's happening, and whether upgrading to 3.4.2 will fix things.
17:55 cp0k elyograg: Luckily in my case this is a CDN origin, and the cache efficiency is currently >90%, which allowed me the luxury of taking the mount offline for the upgrade
17:55 daMaestro joined #gluster
17:55 daMaestro joined #gluster
17:55 elyograg we never had any problems until we added the new volume.  I actually had two volumes before - a test one and the main one.  Since we started having problems I completely eliminated the test one.
17:56 elyograg the test volume and main volume were using different subdiretories on the same brick filesystems.
17:57 steved_ ndevos: are you familiar with hosting ovirt on glusterfs? I'm getting live migration failures on certain vm's, and wondered if the heal-failed issues were related
17:57 cp0k elyograg: ah, I like to keep my testbed completely separate on a xen instance
17:58 elyograg gluster has proven problematic  for us.  Bugs in the rebalance code caused a small amount of data loss.  Tests on 3.3.1 duplicated the rebalance problem, and after upgrading to 3.4.2, that problem disappeared.
17:58 cp0k JoeJulian: are your instructions the only way to fix a split-brain? or is there a way I can attempt it natively via Gluster inself?
17:58 cp0k JoeJulian: re: http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
17:58 glusterbot Title: Fixing split-brain with GlusterFS 3.3 (at joejulian.name)
17:59 ndevos steved_: not very familiar, but a split-brain could cause that, or you need to configure a base-port like http://review.gluster.org/6147 , bug 987555
17:59 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=987555 medium, urgent, 3.4.2, ndevos, VERIFIED , Glusterfs ports conflict with qemu live migration
17:59 JoeJulian cp0k: I haven't written that blog article yet. Until then, there is only that way.
18:00 steved_ ndevos: intresting, ok thanks for your help
18:01 ndevos steved_: good luck!
18:01 cp0k JoeJulian: are you hinting at the fact that there is another way to do it which you have not documented yet on your site?
18:01 * ndevos leaves for the day
18:02 cp0k JoeJulian:  I am surprised Gluster does not have a built in way of cleaning things up
18:02 JoeJulian cp0k: There is, but it involves building distribute vol files, one for each replica and mounting those.
18:03 JoeJulian cp0k: Yeah, that's been a bone of contention between me and the devs for a looong time. :D
18:03 cp0k JoeJulian: sounds very involved, like opening up the engine of a car
18:03 cp0k JoeJulian:  heh
18:03 jobewan joined #gluster
18:04 JoeJulian I'll be making it easy, just need to get this last bit of puppet-neutron fixed so my in-house openstack deploy is perfect.
18:05 cp0k JoeJulian: awesome, looking forward to it! and of course much appreciate your dedication in helping the community
18:05 cp0k I need to step my Openstack + puppet game up
18:12 cp0k JoeJulian: Do you see any harm in me adding my new bricks while having split brain in place?
18:12 elyograg here's another problem, doesn't seem to be directly related to the high load average problem.  http://fpaste.org/78690/28335541/
18:12 glusterbot Title: #78690 Fedora Project Pastebin (at fpaste.org)
18:12 JoeJulian I don't know. My preference is always to ensure the volume is error free before making any changes.
18:23 RedShift joined #gluster
18:29 cp0k JoeJulian: I apologize in advance if this question has been asked a ton before but - what causes a split brain to happen?
18:30 cp0k JoeJulian: 95% of the split-brain files listed in my case are coming up as 'gfid' and the remaining 5% are actual paths to files being listed
18:31 cp0k JoeJulian: any idea what the difference would be in gluster listing gfid's and actual paths to files?
18:33 NuxRo ndevos: it almost worked, the rpms built fine, i could add the gluster volume as primary stor, the volume got mounted on the hosts, but could not deploy vm: http://fpaste.org/78694/83477813/
18:33 glusterbot Title: #78694 Fedora Project Pastebin (at fpaste.org)
18:34 nightwalk joined #gluster
18:34 mrfsl joined #gluster
18:34 mrfsl For an environment with a tremendous amount of small files, will Geo-Replicate work better than replicate?
18:37 NuxRo ndevos: and this is what i see on the agent/hypervisor http://fpaste.org/78698/92834955/
18:37 glusterbot Title: #78698 Fedora Project Pastebin (at fpaste.org)
18:42 neurodrone__ joined #gluster
18:43 JonnyNomad joined #gluster
18:47 calum_ joined #gluster
18:50 rossi_ joined #gluster
18:51 Matthaeus joined #gluster
18:53 mrfsl I am doing a self-heal and my 8 Cores are pegged. Is there a tunable setting that can help alleviate this?
18:58 NuxRo mrfsl: try this http://www.andrewklau.com//controlling-glusterfsd-cpu-outbreaks-with-cgroups/
18:58 glusterbot Title: Controlling glusterfsd CPU outbreaks with cgroups | (at www.andrewklau.com)
18:58 mrfsl thank you
18:58 mrfsl the cpu usage isn't as big an issue as how long the process is taking
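
If cgroups are the route taken, a minimal sketch with the libcgroup tools looks like this (group name and share value are arbitrary, and this is not necessarily identical to the linked article):

    cgcreate -g cpu:/glusterfs
    cgset -r cpu.shares=256 glusterfs        # default is 1024; lower = less CPU under contention
    for pid in $(pgrep glusterfsd); do cgclassify -g cpu:/glusterfs "$pid"; done
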
18:59 samppah do you have any recommendations for backup solution to use with glusterfs?
18:59 dbruhn samppah, what are your requirements? Number of generations? Frequency? etc..
19:01 KaZeR_ joined #gluster
19:03 nightwalk joined #gluster
19:03 samppah dbruhn: daily backups at least and keep them for 30 days and monthly backup should be available for 3-6 months
19:04 dbruhn How much data?
19:04 jbrooks left #gluster
19:04 B21956 joined #gluster
19:05 dbruhn and what kind of data?
19:06 samppah amount of data is growing :) i think that amount of backups we are storing is somewhere between 10-20 terabytes
19:07 samppah we are using Idera ServerBackup (http://www.idera.com/productssolutions/serverbackup) on other systems but it lacks support for XFS :(
19:07 glusterbot Title: Enterprise Linux and Windows Server Backup Software (at www.idera.com)
19:09 dbruhn Sorry for all the questions, is your data broken out into easily digestible blocks?
19:13 edward2 joined #gluster
19:14 samppah dbruhn: we are using glusterfs to store VM images.. we would also like to use it to serve files for webserver.. ideal backup solution would take snapshot of underlying storage and store it in safe place and it would be possible to restore individual files but also the whole storage system in case of total disaster
19:15 samppah dbruhn: but currently it would be enough to just backup the files server for webservers
19:15 Matthaeus samppah, here's what I did:
19:15 Matthaeus 1) Bricks are all LVM volumes on VGs that have 10%ish empty space.
19:16 Matthaeus 2) When backup is triggered, use LVM to snapshot all bricks.
19:16 Matthaeus 3) Assemble the snapshotted bricks in another gluster volume
19:16 Matthaeus 4) Mount the gluster volume and back up what I want to back up
19:16 Matthaeus 5) Unmount gluster volume, destroy it, and release the lvm snapshots
19:17 samppah Matthaeus: cool, i have planned something like that too for disaster recovery
19:17 Matthaeus Once you have the data off of gluster, it's a standard backup problem and you can use standard methods.
19:18 Matthaeus Nota bene:  You will -murder- your performance while this is going on.  If you're only looking to back up the web stuff and you have a different method or don't care about the vm images, I'd recommend keeping them in separate gluster volumes.
19:18 Matthaeus And test it -hard- before you promote it to production.
19:19 samppah yeah, data is on separate volumes
19:22 samppah Matthaeus: are you happy with your solution and have you run into any problems? :)
19:23 Matthaeus samppah: I don't work there anymore, and we had problems where we were trying to back up the entire data set.  The standard methods I was doing once I did the snapshot didn't scale well, but the snapshot stuff worked great.
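
A hedged sketch along the lines of Matthaeus's steps 1-5, shown for one brick from each replica pair (one copy is enough for a backup). Every name below -- VG, LV, brick paths, volume name -- is hypothetical, and the snapshot bricks may need their inherited volume-id xattr cleared before the throwaway volume can be created:

    # 2) snapshot each chosen brick LV (repeat per brick host)
    lvcreate --snapshot --size 50G --name brick1-snap /dev/vg_bricks/brick1
    mount -o nouuid /dev/vg_bricks/brick1-snap /export/brick1-snap   # nouuid needed for XFS snapshots
    # if volume create complains the path is already part of a volume:
    # setfattr -x trusted.glusterfs.volume-id /export/brick1-snap
    # 3) assemble the snapshots into a temporary volume, 4) mount it and back it up
    gluster volume create backupvol server1:/export/brick1-snap server2:/export/brick2-snap force
    gluster volume start backupvol
    mount -t glusterfs localhost:/backupvol /mnt/backupvol
    # ... run the normal backup tooling against /mnt/backupvol ...
    # 5) tear down and release the snapshots
    umount /mnt/backupvol
    gluster volume stop backupvol && gluster volume delete backupvol
    umount /export/brick1-snap && lvremove -f /dev/vg_bricks/brick1-snap
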
19:27 Oneiroi joined #gluster
19:31 cp0k Does Gluster offer paid support?
19:32 JoeJulian @commercial
19:32 glusterbot JoeJulian: Commercial support of GlusterFS is done as Red Hat Storage, part of Red Hat Enterprise Linux Server: see https://www.redhat.com/wapps/store/catalog.html for pricing also see http://www.redhat.com/products/storage/ .
19:36 abyss^ If I have a 2x2 Distributed-Replicated volume and all bricks are 500GB (so a 1TB volume), can I add another pair of 200GB bricks or do they have to be 500GB?
19:37 dbruhn abyss^, it's suggested you add bricks of uniform size
19:37 jbrooks joined #gluster
19:37 dbruhn you can add bricks of smaller size, but it's not recommended, and may lead to issues down the road
19:39 abyss^ yes, kkeithley explained that to me but I can't get it. If the new bricks are just a replicated pair, why would they influence the whole volume? As far as I get it, when I do replica 2 server1:/brick server2:/brick server3:/brick server4:/brick then server1 and server2 are a replica pair, and the same for server3 and 4...
19:39 dbruhn abyss^, the reason being that gluster distributes the files between the bricks based on the hash of the file name, these hashes are distributed based on the number of members of the distributed portion of the volume
19:40 wushudoin left #gluster
19:40 abyss^ so server1 brick and server2 brick should be the same size but why server2 and server3?:)
19:40 dbruhn So in theory, files hashed at 0-4 get put on one member of the distributed volume, and files hashed at 5-9 get put on the other pair.
19:41 dbruhn Once the smaller pair fills up, it will start writing back to the pair that isn't as full
19:41 dbruhn and from there it will have to link from the bricks it expects to find the files, which will slow down lookups and writes
19:41 dbruhn Glusters distribution of files is not based on the size of the data on the bricks
19:43 dbruhn So like I said it can be done, it's just not ideal, and can create other issues down the road.
19:44 abyss^ dbruhn: OK. Thank you for your explanation. Now I have to read this a couple of times. I'm not an English man ;)
19:45 dbruhn abyss^, no worries, ask as many questions as you need. I explained it horribly.
19:46 aixsyd joined #gluster
19:46 aixsyd hey guys - are there any halfway decent Nagios glusterfs plugins out there?
19:46 abyss^ As far as I understand, it can happen that space runs out and gluster saves a link in the .glusterfs folder, but the file is written to another brick that is not in the pair?
19:46 dbruhn aixsyd, watch out for anything that port checks, that can cause weird connection issues.
19:47 dbruhn abyss^, lets pull replication out of the picture for a second, in the conversation. It's non consequential.
19:47 abyss^ ok
19:48 dbruhn With a distributed volume, when you add bricks to the distribution they are basically dividing up a table of hashes. So Gluster knows where to do its default lookups.
19:49 dbruhn so Brick A will have all files 0 - 2.3333333, Brick B will have 2.3333334 - 5.66666663 and Brick C will have 5.6666664 - 9.
19:50 dbruhn Gluster gets this number by creating a hash of the name of the file
19:50 dbruhn it places the file based on this table
19:50 dbruhn it's not based on size, or anything else.
19:50 abyss^ aixsyd: did you check http://exchange.nagios.org? ;)
19:50 glusterbot Title: Nagios Exchange (at exchange.nagios.org)
19:51 dbruhn When gluster encounters a full brick, it then finds space on another brick, and it creates a pointer to that other brick.
19:51 aixsyd abyss^: I did - but they all seem to blow and/or not work
19:51 dbruhn so the client connects to the brick it expects to find the file on, and then is directed to connect to the other brick.
19:51 dbruhn this causes your look ups to double
19:52 dbruhn and causes gluster to have to find space to write the files if there isn't space on the brick it wants to write to
19:52 abyss^ dbruhn: Oh that's clear! Thank you very much.
19:52 dbruhn This will slow down your reads and writes from the system
19:52 dbruhn NP, glad I could help
19:53 dbruhn Obviously with a replication set, you are targeting a pair of servers instead of a single server. Same concept though.
19:53 abyss^ yeah, now it's clear even with my english :)
19:54 abyss^ thank you
19:54 dbruhn np
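
dbruhn's 0-9 table is a simplification of the real 32-bit hash ring; the slice each brick owns is stored as an xattr on every directory, which is one way to see that placement ignores brick size (brick paths below are hypothetical):

    # start/end of the hash range this brick owns for that directory
    getfattr -n trusted.glusterfs.dht -e hex /export/brick1/shared
    # compare with plain df to see how unevenly mixed-size bricks can fill
    df -h /export/brick1 /export/brick2
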
19:56 KyleG left #gluster
19:56 abyss^ aixsyd: so fix this and put your new plugin on exchange.nagios.org :) New GlusterFS has a lot of tools; you can check a lot of things, even split-brains, with one command
19:57 aixsyd abyss^:  i'm not a coder :(
19:57 abyss^ aixsyd: but you know bash?
19:58 aixsyd i should =\
19:58 aixsyd all of these plugins return UNKNOWN because my volume is in TB and they're all programmed for GB
19:58 dbruhn aixsyd, you can script nagios to do pretty much anything with simple bash scripts.
19:58 dbruhn What kind of things do you want to monitor for?
19:59 aixsyd ha - i got it. i just deleted a line.
19:59 aixsyd just wanna see that its ok, how many bricks, and how much free
19:59 aixsyd and I got it ^_^
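
For anyone else hunting for a check, the skeleton of such a plugin is small. An untested sketch following the usual Nagios exit-code convention (volume name passed as an argument; thresholds and perfdata left out):

    #!/bin/bash
    # usage: check_gluster.sh VOLNAME
    VOL=${1:?usage: $0 VOLNAME}
    if ! gluster volume info "$VOL" 2>/dev/null | grep -q '^Status: Started'; then
        echo "CRITICAL: volume $VOL is not started"
        exit 2
    fi
    DISCONNECTED=$(gluster peer status 2>/dev/null | grep -c 'Disconnected')
    if [ "$DISCONNECTED" -gt 0 ]; then
        echo "WARNING: $DISCONNECTED peer(s) disconnected"
        exit 1
    fi
    echo "OK: volume $VOL started, all peers connected"
    exit 0
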
20:00 dbruhn Are you using cacti with nagios? or just nagios
20:00 Danny__ joined #gluster
20:00 aixsyd Opsview
20:00 dbruhn Not familiar with Opsview, does it let you graph system utilization?
20:01 aixsyd Yep
20:02 dbruhn What I've done in the past with Cacti, and maybe you can do the same in Opsview, is to create aggregate graphs of the bricks' file systems.
20:02 dbruhn For total volume utilization
20:03 dbruhn I really need to step up my monitoring game these days, I've had to disassemble a bunch of it because of some other issues.
20:03 dbruhn And I've been looking at zabbix, which has slowed any changes
20:04 abyss^ ;)
20:09 sroy joined #gluster
20:10 psyl0n joined #gluster
20:13 glusterbot New news from newglusterbugs: [Bug 1063832] add-brick command seriously breaks permissions on volume root <https://bugzilla.redhat.com/show_bug.cgi?id=1063832>
20:27 cp0k is it safe to copy /var/lib/glusterd/peers from one client to another?
20:27 dbruhn cp0k, you'll have an extra peer, and be missing a peer
20:27 cp0k dbruhn: yes, if I were to manually fix that
20:28 dbruhn then yes
20:28 sulky joined #gluster
20:28 cp0k dbruhn: thanks!
20:28 dbruhn You'll want to do it with glusterd stopped
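
Roughly how that manual copy goes; the exact peer-file editing is the part to double-check against your own /var/lib/glusterd/glusterd.info (hostnames below are placeholders):

    service glusterd stop
    rsync -a goodnode:/var/lib/glusterd/peers/ /var/lib/glusterd/peers/
    # the copied directory now contains a file for this host itself (its name matches the
    # UUID in this host's glusterd.info -- remove it) and is missing one for goodnode,
    # which has to be added with goodnode's uuid= and hostname1= lines
    service glusterd start
    gluster peer status
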
20:28 cjanbanan joined #gluster
20:45 pdrakeweb joined #gluster
20:46 diegows joined #gluster
20:51 REdOG I tried to do a replace-brick on a replica 2 and the command says replace commit successful but there are no files and glusterd log shows remote operation failed no such file or directory gfid errors
20:56 REdOG is there a way to get more info from volume heal info?
20:56 REdOG Its just 1 file :/
20:58 abyss^ REdOG: what version of gluster?
21:01 REdOG 3.4.2
21:04 REdOG hmm that's odd as soon as I tried to touch the file the heal seems to have started
21:04 REdOG is it delayed?
21:05 nightwalk joined #gluster
21:06 semiosis REdOG: before gluster 3.3.0 that was the *only* way to heal a file
21:06 REdOG ok
21:06 semiosis REdOG: now there's a self heal daemon which proactively heals files, but touching (or stating) files still triggers an immediate heal like it always has
21:07 REdOG interesting, does it buffer the file before putting it in place?
21:07 REdOG like a tmp file in .glusterfs or something?
21:08 REdOG was trying to watch the transfer speed
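
Ways to nudge healing along on 3.4 (volume name and mount point below are hypothetical). The find/stat walk is the old pre-3.3 method semiosis refers to, and it still works because a lookup on a file triggers its heal:

    gluster volume heal myvol           # ask the self-heal daemon to process pending entries
    gluster volume heal myvol full      # crawl the whole volume
    find /mnt/myvol -noleaf -print0 | xargs --null stat >/dev/null   # lookup-triggered heal
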
21:09 mrfsl Is it possible to reduce replica from 2 to none?
21:09 Matthaeus joined #gluster
21:09 REdOG I don't think so
21:10 REdOG could be wrong
21:10 mrfsl thank you. Anyone else?
21:10 jag3773 joined #gluster
21:13 semiosis mrfsl: i think you can do that with remove-brick replica 1, but whyyyyyy?
21:13 mrfsl because I have a bizzilion files
21:13 mrfsl and everything I do crushes my environment
21:13 REdOG hmm maybe you can
21:14 REdOG seemed to work for me
21:14 mrfsl and self-heal is broken
21:14 mrfsl and rebalance is broken
21:14 mrfsl and I need to add capacity like yesterday
21:15 diegows joined #gluster
21:15 LoudNoises we just did this and are still working through it, but we basically removed one set of replicated bricks from the volume, removed the volume and made the volume again with just one set then added back the removed bricks after reformating
21:15 LoudNoises it is kinda working, but the rebalance is a bit wonky at the moment
21:16 mrfsl ouch. Sounds scary removing the volume and readding it
21:16 LoudNoises it's somewhat supported, JJ has a post about it
21:16 REdOG I just did it with a test replica 2 and it seemed to work
21:17 LoudNoises http://www.gluster.org/pipermail/gluster-users/2013-February/035576.html
21:17 glusterbot Title: [Gluster-users] Peer Probe (at www.gluster.org)
21:17 LoudNoises is what we cribbed off of
21:17 andreask joined #gluster
21:18 LoudNoises it's a bit simpler than that though because if you format before you re-add, you don't have to remove the extended attribs
21:19 mrfsl so is remove-brick replica 1 not supported then?
21:19 REdOG that's what I did
21:19 LoudNoises what happens to all the extra files underneath in that case?
21:20 LoudNoises or i guess that just removes the bricks entirely then you can re-add them?
21:20 mattappe_ joined #gluster
21:20 mrfsl so if I have 12 bricks replica 2 how would that work. I thought you could not have a mix of replica 0 and replica 2 bricks?
21:20 ira joined #gluster
21:21 REdOG I only tried with 2 bricks
21:21 mrfsl I see.
21:22 mrfsl @LoudNoises - so this volume remove and re-add procedure --- you were able to retain your original data?
21:22 LoudNoises heh it appears to be all there
21:22 elyograg left #gluster
21:23 LoudNoises so far so good, although we did have to remove the attrs on the bricks that stayed around because the volume was remade
21:23 LoudNoises we just did that to be safe, not sure if that's necessary or not
21:23 LoudNoises all of our data was re-obtainable though :)
21:23 mrfsl I guess I will test with the brick remove replica 1
21:24 mrfsl @LoudNoises - yeah mine isn't. Due to the size and number of files I have - I can't back the data up - I have tried hitting up here and searched for answers. All I have left is migrating off of gluster. I do have to increase capacity prior to that though.
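
For the record, the remove-brick route semiosis mentions would look roughly like this on a 2x2 volume, dropping one brick from each replica pair (names are hypothetical; 'force' is enough here because the surviving replicas already hold all the data -- test on scratch hardware first):

    gluster volume remove-brick datavol replica 1 server2:/export/brick1 server4:/export/brick2 force
    # before re-using the removed bricks, wipe their old identity or gluster will refuse them:
    setfattr -x trusted.glusterfs.volume-id /export/brick1
    setfattr -x trusted.gfid /export/brick1
    rm -rf /export/brick1/.glusterfs
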
21:24 semiosis when i need to add capacity i just take a brick offline (by killing its process) and expand the underlying storage
21:25 semiosis works ok if you can add more block storage to your existing servers
21:30 mrfsl @semiosis - Hum.... let me think about that one....
21:30 * semiosis will allow it
21:31 mrfsl well I have replica pairs so I can take a node completely offline. - expand - bring up - heal - take other side offline - expand....thoughts?
21:31 LoudNoises that's what i would do if you have the ability to expand the volumes with more disk
21:32 mrfsl well... I would have to add another shelf - I.E. another RAID 6 array.
21:32 mrfsl I don't feel happy about spanning LVM across but it's better than nothing right?
21:33 LoudNoises yea i'd think that's better than breaking your replica
21:33 LoudNoises but i guess it depends on what your performance requirements are
21:33 mrfsl well the replica might help me migrate data off (speed up by reading from multiple nodes)
21:42 semiosis mrfsl: i do that every few months
21:43 semiosis do one brick at a time, just in case
21:43 semiosis though it's gone fine for me so far
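
One way mrfsl's expand-in-place plan could look in commands, one replica side at a time (an untested sketch; LV names and sizes are made up, and the brick PID comes from volume status):

    gluster volume status datavol               # note the Pid of the brick being grown
    kill <pid-of-that-brick>                    # take just that one brick offline
    lvextend -L +1T /dev/vg_bricks/brick1
    xfs_growfs /export/brick1
    gluster volume start datavol force          # respawns the killed brick process
    gluster volume heal datavol                 # catch up on writes made while it was down
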
21:47 Danny__ joined #gluster
21:47 Danny4378 .
22:00 XpineX_ joined #gluster
22:01 james__ joined #gluster
22:04 KaZeR joined #gluster
22:05 james__ hi do glusterfs clients and servers require the same package versions?  i'm having trouble mounting, and the only thing I'm getting in the logs is "W" warnings
22:05 james__ "W [socket.c:514:__socket_rwv] 0-gfs1-client-1: readv failed (No data available)"
22:05 dbruhn They should be the same version, what version are you running james
22:05 james__ 3.4.1 on the server and 3.4.2 on the client
22:06 dbruhn 3.4.x is supposed to introduce backwards compatibility, but only if new features aren't used.
22:06 dbruhn Ahh yeah, I've seen a lot of reports of that combo not working.
22:06 james__ oh...
22:07 social huh worked fine here to have different versions :/
22:07 REdOG replacing a brick on a replica causes my vms to die
22:08 REdOG the odd part is they're using libgfapi locally and im removing the remote brick
22:09 REdOG and its not their root drive
22:09 REdOG D:
22:09 dbruhn james__, are any clients able to connect? or is the client able to connect to itself locally?
22:09 james__ yes there are several clients connected
22:09 james__ just the one im currently working on is not connecting :/
22:10 dbruhn Is IP tables blocking anything from the server side?
22:10 james__ i dont have iptables or selinux enabled on any of our servers
22:11 james__ even did the telnet to the port thing
22:11 james__ Brick gfs1:/br149184
22:12 dbruhn can you downgrade the client?
22:12 james__ i guess i can give that a try, but does this warning mean anything:
22:12 james__ "W [glusterfsd.c:1002:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x39c6ee8b6d] (-->/lib64/libpthread.so.0() [0x37844079d1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x40533d]))) 0-: received signum (15), shutting down"
22:12 mrfsl left #gluster
22:13 JoeJulian sig 15 means something killed it.
22:15 social run the mount process manually and pass it --debug ?
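
Two ways to get a verbose client-side view of the failing mount (server, volume and mount point follow james__'s 'gfs1' naming but are otherwise assumptions):

    # run the client in the foreground with debug output
    glusterfs --debug --volfile-server=gfs1 --volfile-id=gfs1 /mnt/gfs1
    # or keep the mount helper but raise the log level
    mount -t glusterfs -o log-level=DEBUG,log-file=/var/log/glusterfs/gfs1-debug.log gfs1:/gfs1 /mnt/gfs1
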
22:22 ktosiek joined #gluster
22:26 cjanbanan joined #gluster
22:37 jobewan joined #gluster
22:39 nightwalk joined #gluster
22:45 ktosiek joined #gluster
22:50 mattappe_ joined #gluster
22:53 mattappe_ joined #gluster
22:55 dbruhn left #gluster
22:55 mattapperson joined #gluster
23:00 mattappe_ joined #gluster
23:02 cjanbanan joined #gluster
23:22 primechuck joined #gluster
23:24 primechuck joined #gluster
23:28 diegows joined #gluster
23:31 mattappe_ joined #gluster
23:34 rwheeler joined #gluster
23:43 mattapperson joined #gluster
23:49 james__ wow this is crazy i can mount the glusterfs volume on a vm running CentOS 6.5 but not my physical servers running Oracle Linux 6.5
23:51 james__ does anyone know what causes this warning?  W [socket.c:514:__socket_rwv] 0-gfs1-client-12: readv failed (No data available)
23:52 semiosis james__: do does the oracle linux have fuse loaded?
23:52 semiosis iptables, selinux, and name resolution are other things to check
23:52 semiosis sorry not going to be around to help more, on my way out
23:52 semiosis good luck!
23:52 james__ i do have fuse loaded
23:53 semiosis s/do does/does/
23:53 glusterbot What semiosis meant to say was: james__: does the oracle linux have fuse loaded?
23:53 semiosis gotta run
23:53 james__ np ty though
23:54 james__ i dont have iptables or selinux running if anyone else wants to jump in
23:54 james__ i dont think name resolution is an issue either...
23:55 james__ just getting those odd warnings but no errors or criticals
23:55 james__ and like i said multiple machines have the volume mounted, just not the OL 6.5 servers
23:56 mattappe_ joined #gluster
23:59 mattapperson joined #gluster
