
IRC log for #gluster, 2012-12-06


All times shown according to UTC.

Time Nick Message
00:05 greylurk joined #gluster
00:52 JoeJulian Lol! Who wan't a hard drive: http://fpaste.org/300y/
00:52 glusterbot Title: Viewing Paste #257890 (at fpaste.org)
00:52 JoeJulian s/wan't/wants/
00:52 glusterbot What JoeJulian meant to say was: Lol! Who wants a hard drive: http://fpaste.org/300y/
01:05 chirino joined #gluster
01:08 yinyin joined #gluster
01:12 arusso that's probably the coolest bot feature i have seen.
01:12 arusso s/i/I/
01:12 glusterbot What arusso meant to say was: that's probably the coolest bot feature I have seen.
01:13 noob2 joined #gluster
01:16 tryggvil joined #gluster
01:17 nightwalk joined #gluster
01:25 JoeJulian :)
01:29 kevein joined #gluster
01:32 nightwalk joined #gluster
02:17 noob21 joined #gluster
02:19 hchiramm_ joined #gluster
02:39 sgowda joined #gluster
02:43 bulde joined #gluster
02:48 saz joined #gluster
02:49 sunus joined #gluster
03:00 bulde1 joined #gluster
03:03 avati joined #gluster
03:12 bharata joined #gluster
03:18 sgowda joined #gluster
03:18 bulde1 JoeJulian: thanks for the bug update :-) that helps me get it tested by QE
03:27 shylesh joined #gluster
03:37 berend Just recompiled gluster 3.3.1, trying to mount again I get: unknown option noauto (ignored)
03:37 berend Not trying to be pedantic, but noauto isn't a particularly unknown option :-)
03:39 bambi2 joined #gluster
03:40 sripathi joined #gluster
03:42 JoeJulian bulde1: Sorry, I got started on that a long time ago but I never got a response when I asked what version of office they were using.
03:43 berend Should a gluster 3.3.1 client be able to mount a gluster 3.2.5 server?
03:44 berend oops, no say the docs
03:44 JoeJulian berend: It's ignored by mount.glusterfs. that script is especially noisy. noauto (and _netdev) are actually init options and are always ignored by mount.
03:45 JoeJulian And no. 3.3 and 3.2 are not rpc compatible.
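For reference, the kind of fstab entry being discussed looks roughly like this (server and volume names are placeholders, not from the log); noauto and _netdev are consumed by mount(8) and the init scripts, so mount.glusterfs only warns about them:
    # illustrative /etc/fstab line
    server1.example.com:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,noauto  0 0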
03:56 bulde joined #gluster
04:45 sgowda joined #gluster
04:49 hagarth joined #gluster
04:51 rastar joined #gluster
04:51 rastar left #gluster
04:51 rastar joined #gluster
04:54 bala joined #gluster
05:06 manik joined #gluster
05:08 mohankumar joined #gluster
05:25 vpshastry joined #gluster
05:27 manik1 joined #gluster
05:35 glusterbot New news from resolvedglusterbugs: [Bug 765473] [glusterfs-3.2.5qa1] glusterfs client process crashed <http://goo.gl/4fZUW>
05:55 glusterbot New news from newglusterbugs: [Bug 884381] Implement observer feature to make quorum useful for replica 2 volumes <http://goo.gl/rsyR6>
05:55 sripathi joined #gluster
05:59 harshpb joined #gluster
06:02 Hymie joined #gluster
06:07 ankit9 joined #gluster
06:10 raghu joined #gluster
06:11 elyograg finally got that bug files.
06:11 elyograg s/files/filed/
06:11 glusterbot What elyograg meant to say was: finally got that bug filed.
06:34 mooperd joined #gluster
06:36 vijaykumar joined #gluster
06:40 ramkrsna joined #gluster
06:40 ramkrsna joined #gluster
06:54 bharata joined #gluster
06:58 rgustafs joined #gluster
07:12 sripathi joined #gluster
07:28 mooperd joined #gluster
07:34 rastar left #gluster
07:43 ngoswami joined #gluster
07:44 ekuric joined #gluster
07:45 mooperd joined #gluster
07:47 vikumar joined #gluster
07:48 ankit9 joined #gluster
07:51 mooperd_ joined #gluster
07:55 dobber joined #gluster
08:02 guigui1 joined #gluster
08:12 bauruine joined #gluster
08:13 unlocksmith joined #gluster
08:18 jmara joined #gluster
08:19 VisionNL joined #gluster
08:19 johnmark joined #gluster
08:19 xavih joined #gluster
08:19 nhm joined #gluster
08:19 genewitch joined #gluster
08:19 jiffe1 joined #gluster
08:20 _Bryan_ joined #gluster
08:20 mnaser joined #gluster
08:35 ctria joined #gluster
08:38 sripathi joined #gluster
08:41 rastar joined #gluster
08:45 sripathi1 joined #gluster
08:46 tryggvil joined #gluster
08:49 duerF joined #gluster
08:54 mdarade1 joined #gluster
08:55 mdarade1 left #gluster
08:59 mooperd_ joined #gluster
09:01 Nr18 joined #gluster
09:03 bharata joined #gluster
09:19 lkoranda joined #gluster
09:20 Shdwdrgn joined #gluster
09:20 khushildep joined #gluster
09:23 shireesh joined #gluster
09:25 Kmos joined #gluster
09:25 vijaykumar joined #gluster
09:32 vijaykumar left #gluster
09:34 joeto joined #gluster
09:56 rastar joined #gluster
09:58 17WAASTXZ joined #gluster
10:07 sripathi joined #gluster
10:07 Nr18_ joined #gluster
10:12 mooperd joined #gluster
10:15 webwurst joined #gluster
10:21 hagarth joined #gluster
10:27 Staples84 joined #gluster
10:34 clag_ joined #gluster
10:48 Nr18 joined #gluster
10:53 ctria joined #gluster
10:57 puebele1 joined #gluster
11:01 sripathi joined #gluster
11:03 Oneiroi possible to modify a REPLICATE 2 volume to add a third node such that it becomes a REPLICA 2 dist 1? or is there a better way?
11:05 bauruine joined #gluster
11:06 mooperd Anyone put KVM machines on Gluster?
11:06 Oneiroi mooperd: working on that atm, using openstack
11:06 mooperd I hear that its supported in the new RHEV?
11:07 mooperd http://www.theregister.co.uk/2012/12/05/redhat_rhev_3_1_cloudforms_bundles/
11:07 glusterbot <http://goo.gl/Nm8Co> (at www.theregister.co.uk)
11:08 mooperd Which is about the most exciting news that I have heard since my grandmother got her tits stuck in the mangle
11:08 samppah_ :O
11:09 mooperd It was a common problem before the advent of the washing machine
11:09 mooperd Oneiroi: What is the performance like?
11:10 samppah_ i have used glusterfs a little with ovirt 3.1.. unfortunately performance wasn't good enough for our needs
11:10 Oneiroi mooperd: I'm using a glusterfs mount point _could_ be better imo, have my concerns on the overhead, but facilitates easy live migration so it's a trade off imo
11:10 samppah_ waiting for qemu to get glusterfs support so it could bypass fuse
11:11 Oneiroi that would be awesome imo
11:11 mooperd Oneiroi: Can I have some specifics about your setup? Network? Brick size?
11:11 mooperd how many replications?
11:11 Oneiroi mooperd: dual nic gigabit bonded tlb, atm 2 nor replica trying to add a third node right now working through that this morning
11:11 Oneiroi s/nor/node/
11:11 glusterbot What Oneiroi meant to say was: mooperd: dual nic gigabit bonded tlb, atm 2 node replica trying to add a third node right now working through that this morning
11:12 mooperd Oneiroi: hmm, I might give that a go with my infiniband stuff.....
11:12 Oneiroi mooperd: indeed, something I want to do also :)
11:12 Oneiroi but don't have the kit at my present $work
11:13 * mooperd waits for gluster on ZFS
11:14 Oneiroi not saying it's going to stay atm, fusefs is adding a lot of overhead :-/
11:15 mooperd Oneiroi: It should be in kernel sometime soon I think
11:15 mooperd Oneiroi: Cant you mount it as NFS?
11:17 Oneiroi mooperd: yes _but_ NFS <3.3 does not distribute locks, shouldn't be an issue with KVM as should  not be doing multinode writes I suppose … also I have not tested 3.3 but it has been reported to me that distributed locking for NFS exists
11:17 joet joined #gluster
11:17 Oneiroi s/multinode writes/multinode writes to the same file/
11:17 glusterbot What Oneiroi meant to say was: mooperd: yes _but_ NFS <3.3 does not distribute locks, shouldn't be an issue with KVM as should  not be doing multinode writes to the same file I suppose … also I have not tested 3.3 but it has been reported to me that distributed locking for NFS exists
11:18 mooperd Oneiroi: I dont think that would be a problem so much
11:19 Oneiroi mooperd: indeed as in theory only one node at any given time should be writing the guest image :)
11:20 mooperd Oneiroi: Weeeeel
11:21 Oneiroi _unless_ you opt for attached volumes and these are addressed as shared storage
11:21 mooperd Actually, Ur
11:21 Oneiroi then you're asking for trouble
11:21 mooperd There should be locking for that
11:21 Oneiroi not if the underlying FS is NFS it would appear (Again unless running gluster 3.3 which I've not tested myself but _should_ distribute locks)
11:22 mooperd Underlying filesystem?
11:22 Oneiroi i.e. I've had webapps use nfs in 3.2 and end up split braining all over the place in testing, so 3.2 installs mount using glusterfs
11:22 mooperd You mean the filesystem that NFS is exporting?
11:22 Oneiroi the filesystem from which kvm is loading the guest volume
11:23 Oneiroi i.e. 127.0.0.1:/NOVA on /var/lib/nova/instances type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
11:26 Oneiroi anyway, back to figuring out how to move replica 2 -> replica 2 distribute 1 ;D
11:27 venkat joined #gluster
11:29 mooperd Oneiroi: Im confused...
11:31 Oneiroi mooperd: yeh I can be confusing ;-) what's up ?
11:31 mooperd Oneiroi: Just NFS locking
11:32 mooperd I need to read up on it
11:32 mooperd in a KVM context
11:33 Oneiroi mooperd: ah, ok say you have 3 programs that all update one file, said program will try to get a lock on the file it wants to write to, if nfs does not tell all the nodes about this lock then all 3 nodes can write the same file at the same time using different data, this leads to a split brain as the file changes can not replicate across the cluster.
11:33 Oneiroi mooperd: however this should not be an issue with gluster 3.3 onward as I understand
11:34 Oneiroi mooperd: in a KVM context this should not matter; as the guest should only be running on one node at a time, as such writes to the volume should always occur from one node
11:34 mooperd *should*
11:34 Oneiroi mooperd: indeed
11:34 mooperd I have KVM working really well on NFS now
11:34 Oneiroi there are of course exceptions, such as a shared volume
11:35 mooperd but I didnt try and break anything yet
11:35 Oneiroi mooperd: using gluster?
11:35 harshpb joined #gluster
11:35 mooperd Oneiroi: no, Oracle ZFS appliance
11:36 mooperd Its the fastest NFS in the west
11:37 venkat joined #gluster
11:38 Oneiroi hehe
11:39 ndevos Oneiroi: note that nfs (or in fact any posix) locking is advisory, meaning that all parties need to check for locks and obey the rules
11:39 duerF joined #gluster
11:40 khushildep joined #gluster
11:40 ndevos Oneiroi: mandatory locks (like on MS platforms) behave different and can not just be ignored
11:42 ndevos but well, if you have a program designed to be running on multiple nodes, the locking should be implemented correctly
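A small shell illustration of ndevos's point about advisory locking, using flock(1) on a hypothetical mount point: only processes that ask for the lock are serialized, and a writer that skips the lock call is not blocked at all.
    # process A takes an exclusive advisory lock and holds it for a while
    flock -x /mnt/gv0/app.lock -c 'echo "A holds the lock"; sleep 30' &
    # process B cooperates: it waits (up to 5s here) until A releases the lock
    flock -x -w 5 /mnt/gv0/app.lock -c 'echo "B got the lock after A"'
    # an uncooperative writer simply ignores the advisory lock and succeeds anyway
    echo "rude write" >> /mnt/gv0/app.lock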
11:45 shireesh joined #gluster
11:46 hagarth joined #gluster
11:46 Oneiroi ndevos: true and as I undertsnad it this is the case in gluster 3.3
11:46 Oneiroi s/undertsnad/understand/
11:46 glusterbot What Oneiroi meant to say was: ndevos: true and as I understand it this is the case in gluster 3.3
11:46 ndevos Oneiroi: yeah, the NLM protocol was added for that
11:46 * ndevos will have lunch now, back later
11:59 zerthimon joined #gluster
12:00 zerthimon left #gluster
12:06 guigui1 joined #gluster
12:22 gbr joined #gluster
12:28 kkeithley1 joined #gluster
12:30 andreask joined #gluster
12:43 tjikkun joined #gluster
12:43 tjikkun joined #gluster
12:44 noob2 joined #gluster
12:46 noob21 joined #gluster
12:47 rgustafs joined #gluster
12:49 aliguori joined #gluster
12:54 FredSki joined #gluster
12:56 khushildep joined #gluster
12:59 FredSki Hello, where (in what file) to put the option 'assert-no-child-down=yes' to make sure the volume goes offline when a sub-volume is down?
13:02 lkoranda_ joined #gluster
13:05 lkoranda joined #gluster
13:07 puebele1 joined #gluster
13:07 khushildep joined #gluster
13:13 shireesh joined #gluster
13:14 lkoranda joined #gluster
13:26 mohankumar joined #gluster
13:40 khushildep joined #gluster
13:46 shireesh joined #gluster
13:47 hagarth joined #gluster
13:56 jdarcy FYI, the lack of locking across multiple NFS "heads" does not lead to split brain (as we use the term).  The replicas will all hold the same data; it will just be inconsistent data containing the result from writes that should have been serialized but weren't.
13:57 jdarcy I wonder if we could have the NFS forward lock requests for one file to a designated master for that file, e.g. via hashing across the current membership.  That would at least avoid the inconsistency in the normal case, though failures etc. would still be ugly.
14:02 Oneiroi jdarcy: so in 3.3 what would happen if say 3 nodes all wrote the same file at the same time, could all three perhaps obtain an exclusive lock at the same time, or has that been addressed?
14:04 jdarcy AFAIK there's no plan for a distributed NLM, so "exclusive" lock would indeed be a misnomer.
14:04 Oneiroi indeed, which leads to issues with say large php webapps using nfs mounted replicas
14:05 jdarcy That's why I'm thinking about the forwarding/hashing/etc. trick.  It's much easier than a full-out distributed NLM, and I know at least some large proprietary NASes get away with it just fine.
14:07 jiffe1 there's no exclusive locking at all with gluster?
14:10 jdarcy There is exclusive locking for everyone connected to the same NFS server/proxy.
14:10 jdarcy However, if you want *both* strong locking and scalability across multiple NFS servers then that becomes rather problematic.
14:10 jiffe1 what about through the gluster client?
14:14 jdarcy jiffe1: We have POSIX locks on the underlying file, if that's what you mean.
14:15 Oneiroi glusterfs mounts via fuse I can confiurm in my testing exclusive locks did block other replica nodes from obtaining the same lock at the same time
14:15 Oneiroi s/confiurm/confirm/
14:15 glusterbot What Oneiroi meant to say was: glusterfs mounts via fuse I can confirm in my testing exclusive locks did block other replica nodes from obtaining the same lock at the same time
14:18 * jdarcy has to go.  Plane to catch.
14:21 Oneiroi glusterfsd[1322]: [2012-12-06 13:45:43.962318] C [glusterfsd.c:1220:parse_cmdline] 0-glusterfs: ERROR: parsing the volfile failed (No such file or directory) this is post 3.2 -> 3.3.1 rpm update :-/
14:25 mohankumar joined #gluster
14:26 gbrand_ joined #gluster
14:26 noob21 i know the answer to this is no, but does anyone know of a possible way to do this.  I was wondering if we could restrict server ip address to certain folders on a volume in gluster? So i have 1 big volume to be efficient with space, is there a way to restrict certain ip's from access certain folders.  Kinda like posix acl's
14:27 Oneiroi not afaik … but I'd also be interested in seeing this
14:29 johnmark jdarcy: enjoy :)
14:30 noob21 yeah that would be really useful to us here
14:35 gbrand_ joined #gluster
14:37 badone joined #gluster
14:38 gbrand__ joined #gluster
14:38 nightwalk joined #gluster
14:44 wN joined #gluster
14:56 stopbit joined #gluster
14:57 hagarth joined #gluster
14:58 chouchins joined #gluster
15:00 chouchins we had a disk array fail entirely that was the second half of a mirrored glusterfs volume.  Is there a way to remove the mirroring until we can get that hardware replaced and just run off one node?  I can't remove the bricks, must be missing a step.
15:03 obryan joined #gluster
15:03 obryan left #gluster
15:04 khushildep joined #gluster
15:07 bambi2 joined #gluster
15:18 __Bryan__ joined #gluster
15:22 wushudoin joined #gluster
15:30 gbr joined #gluster
15:32 nightwalk joined #gluster
15:36 andreask joined #gluster
15:37 puebele1 joined #gluster
15:38 Tekni joined #gluster
15:45 Oneiroi what would you suggest the best configuration for a 3 node gluster install to be ?
15:46 Oneiroi in terms of volume configuration
15:50 tqrst I just had a peek through /var/lib/glusterd/vols/myvol and noticed that there are a lot of Vol.old files (notice the upper case V, too). Is it safe to remove these?
15:51 kkeithley1 chouchins: Tell us more about your setup? Was the array on a separate node from the first half? Are you using native (fuse) mounts or NFS? In the common scenarios your clients would just continue on as if nothing happened. When you replace the array and bring the node up, then trigger a self heal (3.2.x) or let auto self heal repopulate the second half.
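The self-heal triggers mentioned there amount to the following (volume name and mount point are placeholders; the find/stat walk is the 3.2.x method, the heal command exists from 3.3 onward):
    # 3.2.x: stat every file through a client mount to trigger self-heal
    find /mnt/myvol -noleaf -print0 | xargs --null stat >/dev/null
    # 3.3.x: ask the self-heal daemon for a full crawl
    gluster volume heal myvol full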
15:52 kkeithley1 Oneiroi: what do you have for disk storage on the three nodes?
15:52 tqrst likewise, there's a bunch of old pid files in the run folder too
15:53 chouchins we have two disk arrays in raid5 split into 5 bricks an array.  one array with 5 bricks going through one glusterfs server node and one through another.  So the glusterfs volume has 10 bricks 5 x 2
15:53 Oneiroi kkeithley1: raid10 on each, ~500GB available for the purposes I want to use it for :)
15:53 kkeithley1 tqrst: I'd say it's safe to delete the old pid files. Those Vol.old files can't be very big. What's it hurt to leave them? But it's probably safe to delete them.
15:54 tqrst kkeithley1: btw is it normal that I'm seeing pid files for different machines? e.g. node ml54 has ml51-mnt-localb.pid and ml54-[...].pid
15:54 tqrst I just like having things be as clean as they can be
15:55 kkeithley1 tqrst: I can't begin to fathom why there'd be pid files from another machine. That's very strange.
15:55 chouchins @kkeithley1 also we're using the glusterfs fuse mount
15:56 tqrst kkeithley1: that only seems to be the case on the server I happened to be on - the others all have only their own
15:56 tqrst kkeithley1: ah! I think I know why. A while ago, while upgrading from 3.2.6 to 3.3.1, I ended up having to rsync the glusterd folder from another machine.
15:57 kkeithley1 chouchins: okay, so a pretty typical setup. The clients will have stopped using the failed server. They'll carry on until you replace the failed array and bring the node up. I'd say there's no particular reason to remove the replication (mirroring).
15:57 kkeithley1 tqrst: makes sense
15:57 chouchins we unmounted the mount and remounted and for some reason it thinks the 13TB mount is 9G  still trying to figure that one out
15:58 chouchins hmm maybe I should point the mount command to the glusterfs node that is actually working :)
15:59 kkeithley1 chouchins: yep
15:59 kkeithley1 can't hurt
15:59 semiosis :O
15:59 chouchins yep that did it
15:59 tqrst :0
16:00 chouchins it was trying to use the local root filesystem as the bricks on the failed node since its iscsi mounts to the array were down
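In other words, the immediate fix was just to point the mount at a server whose bricks are actually up, e.g. (names are placeholders):
    umount /mnt/myvol
    mount -t glusterfs workingserver:/myvol /mnt/myvol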
16:00 Nr18 joined #gluster
16:00 chouchins we're trying to replace the iscsi setup with local storage in lots of Dell R520s soon.  Will be much better.
16:11 chouchins joined #gluster
16:13 keaw joined #gluster
16:18 Alpinist joined #gluster
16:23 _Bryan_ joined #gluster
16:28 nueces joined #gluster
16:33 keaw left #gluster
16:35 FredSki left #gluster
16:36 keaw joined #gluster
16:36 andreask left #gluster
16:37 keaw left #gluster
16:44 purpleidea joined #gluster
16:44 GomoX joined #gluster
16:44 GomoX Hey
16:50 chirino joined #gluster
16:51 layer3switch joined #gluster
16:53 zaitcev joined #gluster
16:55 nullsign joined #gluster
16:56 _Bryan_ joined #gluster
16:57 aliguori joined #gluster
16:57 bdperkin joined #gluster
16:58 nightwalk joined #gluster
17:01 nick5 joined #gluster
17:02 __Bryan__ joined #gluster
17:02 bfoster joined #gluster
17:05 neofob joined #gluster
17:09 khushildep joined #gluster
17:11 mooperd left #gluster
17:12 mooperd_ joined #gluster
17:20 AK6L joined #gluster
17:21 AK6L hey folks, couple quick questions
17:21 AK6L of the packages: glusterfs glusterfs-fuse glusterfs-geo-replication glusterfs-server
17:21 AK6L which are required for a machine to mount a glusterfs filesystem?
17:21 AK6L i.e. a client machine that just wants to use the storage.
17:22 AK6L and the 2nd question: if i have a gluster fs composed of 3 nodes, replicated, and i mount against, say, the first one; if that first server goes down, does the filesystem become inaccessibel?
17:22 AK6L inaccessible*
17:22 AK6L or does the filesystem driver have the smarts to use other servers insteaad?  i suspect the answer is yes.
17:22 nueces joined #gluster
17:23 Gualicho joined #gluster
17:23 Gualicho hey guys
17:23 Gualicho qq
17:23 Gualicho is there any difference in setting transport-type to tcp or tcp/client?
17:24 GomoX AK6L: if you mount using the native client then yes
17:24 GomoX AK6L: over nfs, won't work
17:24 AK6L i've been doing 'mount -t glusterfs ...'
17:24 Gualicho we have one brick set to "tcp" and the other to "tcp/client" by accident, and we don't know if we should set both the same
17:24 GomoX Should work
17:24 AK6L cool
17:24 GomoX That's the whole point :)
17:24 GomoX Well, not the whole point, but a large part of it
17:24 AK6L yeah i kinda realized that as i asked the question ;)
17:25 GLHMarmot AK6L: On Ubuntu 12.04, if you want to use the native mounting (sounds like you do) then you need:
17:25 GLHMarmot glusterfs-client and glusterfs-common
17:25 AK6L ok
17:25 AK6L i'm on Amazon's custom AMI
17:26 GLHMarmot not sure how the Ubuntu packages map across, but my guess would be glusterfs and glusterfs-fuse
17:26 AK6L yeah
17:26 GLHMarmot You definitely won't need the server or the geo-replication piece
17:27 GomoX Hey GLHMarmot - i'm still struggling with migrating OpenVZ containers - http://forum.proxmox.com/threads/12149-Online-migrations-fails-for-CT-on-GlusterFS (in case you can help)
17:27 glusterbot <http://goo.gl/Qtc9Z> (at forum.proxmox.com)
17:28 13WAACU1H joined #gluster
17:28 GLHMarmot GomoX: I saw that. :-) I upgraded to pvetest and the migration "works" for me. It just takes about 15 minutes because of the gluster backend.
17:28 GLHMarmot Running in to the issue of "stat" calls being expensive on gluster.
17:29 GLHMarmot It seems that the initialization of the quota (vzquota) calls "stat" on every file in the container.
17:30 GomoX GLHMarmot: so you are not running stable?
17:30 GLHMarmot Not anymore.
17:30 GomoX Hmm
17:30 GLHMarmot I mostly wanted some new features that are supposed to be in QEMU 1.3.
17:30 GomoX At some point we should get glusterfs support as a native backend for qemu right?
17:30 GLHMarmot Unfortunately, they aren't there yet. At least I can't find any evidence of it.
17:31 GLHMarmot That is the theory.
17:31 GomoX How's pvetest for you, is it stable?
17:31 GomoX I'm running production servers here, not for customers but for internal use and they are the kind of thing where "server down" equals 200 eyeballs on you staring
17:31 GLHMarmot Very stable. I only have a 2 node cluster, not enterprise level at all.
17:31 GLHMarmot So YMMV
17:32 GLHMarmot We should get native gluster
17:32 GomoX OK i'll give it some thought
17:32 Mo_ joined #gluster
17:32 GomoX Either way offline migration is good for fault tolerance which is what I really want out of Gluster
17:32 GLHMarmot at some point
17:32 GLHMarmot It is there in qemu 1.3
17:32 Gualicho is there any difference in setting transport-type to "tcp" or "tcp/client"? I know both are valid, but the docs are not clear whether the behaviour is the same or not
17:32 Gualicho I mean on the client side
17:32 GomoX I guess you can't have fast, cheap and highly available :)
17:32 johnmark GLHMarmot: er, doesn't QEMU 1.3 have that?
17:33 johnmark and you're using the master branch of glusterfs?
17:33 GLHMarmot johnmark: It does, if it is compiled in. I am using proxmox as a VM platform and it doesn't have it yet.
17:33 johnmark oh, ok. got it
17:34 johnmark GLHMarmot: note also that mohankumar and a couple of the libvirt and qemu devs hang out in gluster-dev from time to time
17:35 GLHMarmot johnmark: cool, I have been hanging out in #qemu and they tend to be VERY technical over there. Definitely above my coding grade. :-)
17:36 johnmark heh, yeah
17:37 schmidmt1 left #gluster
17:39 GLHMarmot AK6L: One other thing I forgot to mention. By default, when mounting a gluster volume on the client it connects to the node you specify to get the volume information then actually mounts the volume from a variety of servers. Even if the machine you connect to initially goes away, the volume will remain available.
17:39 AK6L cool
17:40 GLHMarmot This does leave a single point of failure when initially mounting the drive, but if you use a line like the following in /etc/fstab you can specify a secondary server.
17:40 GLHMarmot leif.slal.net:/data1 /usr2 glusterfs defaults,auto,exec,_netdev,backupvolfile-server=erik.slal.net,fetch-attempts=10 0 0
17:40 samppah_ oh, didn't know that qemu 1.3 has been released
17:40 GLHMarmot Note the "backupvolfile-server" section in the options.
17:40 AK6L is it madness to, say, set up a round-robin in DNS so multiple clients end up hitting different gluster servers for the initial volume info?
17:41 johnmark GLHMarmot: nice
17:41 johnmark samppah_: yes, earlier this week
17:41 GLHMarmot AK6L: madness? Not by my book because that is cool geekyness.
17:41 GLHMarmot but I doubt it is necessary unless you have a TONNE of mounting going on.
17:42 AK6L nod, ok
17:55 Gualicho Anyone? is there any difference in setting transport-type to "tcp" or "tcp/client"? I know both are valid, but the docs are not clear whether the behaviour is the same or not
17:56 GLHMarmot Gualicho: You aren't being ignored, I just have no clue, sorry.
17:57 Gualicho thanks :)
18:06 Jippi joined #gluster
18:20 arusso left #gluster
18:25 chirino joined #gluster
18:25 nick5 joined #gluster
18:55 daMaestro joined #gluster
18:57 JoeJulian Gualicho: Which docs are you referring to?
18:58 genewitch joined #gluster
18:58 genewitch left #gluster
18:59 genewitch joined #gluster
19:06 Gualicho JoeJulian, let me look
19:07 Gualicho http://www.gluster.org/community/documentation/index.php/Translators_options#Transports
19:07 glusterbot <http://goo.gl/4ozPm> (at www.gluster.org)
19:07 Gualicho transport-type STRING - - Yes tcp|ib-verbs|unix|ib-sdp|tcp/client|ib-verbs/client Transport type must be the same as that defined on the server for corresponding sub-volumes.
19:07 Gualicho it says both tcp and tcp/client are valid
19:08 Gualicho but couldn't find anything about the behaviour
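For context, that option comes from the old hand-written client volfile format (roughly the 3.0 era). A minimal sketch with placeholder names is below; whether "tcp" and "tcp/client" behave any differently there is exactly the question left open above.
    volume remote1
      type protocol/client
      option transport-type tcp            # the other brick's file apparently says "tcp/client" here
      option remote-host server1.example.com
      option remote-subvolume brick1
    end-volume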
19:09 JoeJulian Ah, 2009...
19:09 JoeJulian That's probably right around 3.0 iirc.
19:10 JoeJulian Are you trying to write .vol files by hand?
19:12 nightwalk joined #gluster
19:14 Gualicho we have 3.0.2
19:14 Gualicho yeah, actually it's a working conf, and we found that one brick is not consistent in that parameter
19:15 Gualicho one has "tcp" and the other "tcp/client"
19:15 Gualicho we didn't write the config, we inherited these servers
19:16 JoeJulian 3.0.2 nearly caused us to abandon gluster. I'm surprised you're still using it.
19:17 daMaestro +1
19:17 JoeJulian I would recommend recreating your volumes in 3.3.1
19:17 JoeJulian Hey there daMaestro :)
19:17 andreask joined #gluster
19:18 daMaestro JoeJulian, yo
19:18 daMaestro 3.0.2 was a scary release, i agree
19:18 daMaestro 3.0 as a whole, was scary
19:18 Gualicho ok, I think I'll recommend the upgrade then
19:19 khushildep joined #gluster
19:19 Gualicho that's what I wanted to try anyway
19:19 Gualicho thanks guys!
19:19 daMaestro 3.0 was a transition release and things really didn't get sorted until 3.2
19:20 daMaestro (this is all IMHO)
19:22 mooperd joined #gluster
19:24 jn1 joined #gluster
19:24 Gilbs1 joined #gluster
19:37 bauruine joined #gluster
19:42 andreask joined #gluster
19:46 y4m4 joined #gluster
19:48 puebele joined #gluster
19:53 purpleidea joined #gluster
19:53 purpleidea joined #gluster
20:05 bitsweat left #gluster
20:20 Nr18 joined #gluster
20:21 duffrecords joined #gluster
20:26 duffrecords After a long spell of smooth sailing, I started running into issues with my Gluster boxes.  first this bug http://review.gluster.org/#change,4201 and now VMware won't start VMs that are stored on NFS volumes.  What can I do to find out if these problems are related to some common underlying condition?
20:26 glusterbot Title: Gerrit Code Review (at review.gluster.org)
20:34 mooperd joined #gluster
20:35 saz_ joined #gluster
21:00 y4m4 joined #gluster
21:02 gbrand_ joined #gluster
21:02 theron joined #gluster
21:10 badone joined #gluster
21:13 tryggvil joined #gluster
21:27 jn1 left #gluster
21:30 13WAACWMJ joined #gluster
21:39 stuarta_ joined #gluster
21:40 stuarta_ I just did two fresh installs of CentOS and gluster. When I attempt to do 'gluster peer probe hostname1' I get: Probe unsuccessful
21:40 13WAACWMJ is the service running on both machines
21:40 stuarta_ Yeah.
21:40 13WAACWMJ and can you communicate via network between them
21:41 stuarta_ Yep
21:41 13WAACWMJ turn off the firewall
21:41 stuarta_ Did that
21:41 stuarta_ I have a crossover between the two nodes and have entries in /etc/hosts. Could that effect anything?
21:42 13WAACWMJ should be fine, that's how I run mine
21:42 13WAACWMJ there is one more centos service that sandboxes apps, I can't think of it right off
21:42 stuarta_ I also get this error: Probe returned with unknown errno 107
21:43 stuarta_ I did a bit of light Googling and didn't see anything immediately
21:43 jmara joined #gluster
21:43 13WAACWMJ sec looking
21:46 stuarta_ Thanks.
21:46 13WAACWMJ how about SELinux
21:46 stuarta_ No idea. Never used it. Does CentOS have it by default?
21:46 13WAACWMJ yep, disable it
21:47 stuarta_ service selinux stop or something?
21:47 dbruhn or /etc/init.d/selinux stop
21:47 dbruhn and then you will want to disable selinux and iptables from starting up using chkconfig as well
21:47 dbruhn so when you reboot they don't come back up
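The iptables half of that advice, spelled out (CentOS 6-style commands); SELinux, as the next messages discover, has no init script of its own and is handled with setenforce and /etc/selinux/config instead:
    service iptables stop       # drop the firewall now
    chkconfig iptables off      # and keep it from coming back at boot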
21:48 stuarta_ I don't have the selinux service under /etc/init.d
21:48 stuarta_ Not seeing it in the process list either
21:48 dbruhn Try this, http://www.revsys.com/writings/quicktips/turn-off-selinux.html
21:49 glusterbot <http://goo.gl/G9xp6> (at www.revsys.com)
21:50 stuarta_ Ok so it turns out I had SELinux running, but I turned off as stated in that link. But, same issue
21:51 stuarta_ Weird thing is, if I try to probe from the other node, it just hangs. Then says that the other node is a member of its cluster, but is disconnected.
21:52 dbruhn how did you install it?
21:53 stuarta_ The quickstart guide: http://www.gluster.org/community/documentation/index.php/QuickStart
21:53 glusterbot <http://goo.gl/OEzZn> (at www.gluster.org)
21:54 dbruhn and you can properly address the dns names you used from each machine?
21:54 stuarta_ yeah. I tried IP address too and that didn't work. I also tried not using the crossover link
21:55 stuarta_ Is there a way to increase verbosity?
21:56 dbruhn yep page 126 of the 3.3.0 administration guide lays out the logging
21:56 stuarta_ ok I will check that out
21:56 dbruhn http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
21:56 glusterbot <http://goo.gl/bzF5B> (at www.gluster.org)
21:57 stuarta_ I see a warning in cli.log: 'option transport-type'. defaulting to "socket"
21:57 stuarta_ Is that ok?
21:57 semiosis yeah its ok
21:57 semiosis where did you install gluster from?
21:58 stuarta_ I did this: wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo  Then I just did yum install glusterfs{-fuse,-server}
21:58 glusterbot <http://goo.gl/5beCt> (at download.gluster.org)
21:59 stuarta_ Not sure if that answers your question
21:59 semiosis yep, thx, should be good
21:59 stuarta_ Are there different repos?
21:59 semiosis yeah but that's the latest so you're good
22:00 semiosis there's (much) older versions hanging around in other repos
22:02 m0zes @yum repo
22:02 glusterbot m0zes: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
22:02 m0zes [B[B[A[B[B[A[A[B[B[B[B[B[B[B[B[B[B[B[B
22:02 m0zes ssh client locked up, but still managed to send that :/
22:04 stuarta_ I am missing this file according to glusterd: /usr/lib64/glusterfs/3.2.7/rpc-transport/rdma.so Is that something I need?
22:05 stuarta_ If so, any idea which package would have it?
22:05 semiosis stuarta_: no no this should be really easy
22:05 semiosis you dont need that
22:05 stuarta_ Could this be a hardware issue?
22:05 semiosis something simple is going wrong... hosts unable to communicate
22:05 semiosis or something like that
22:05 semiosis possibly
22:05 stuarta_ I can ssh to them and ping them
22:06 semiosis you say you have glusterd running on both servers
22:06 stuarta_ Should I scan select ports to verify?
22:06 m0zes @ports
22:06 glusterbot m0zes: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
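Turned into firewall rules, glusterbot's port list looks roughly like this; the brick range is an assumption (one port per brick, counting up from 24009):
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (24008 only with rdma)
    iptables -A INPUT -p tcp --dport 24009:24020 -j ACCEPT   # brick ports, one per brick from 24009 up
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS + NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT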
22:06 semiosis can you telnet to port 24007 from one server to the other
22:06 stuarta_ Yep. I just did service glusterd start on both
22:06 semiosis oh just did that, well if that wasnt running you would not have been able to probe
22:06 semiosis any reason why you're starting out with centos?
22:07 semiosis are you particularly fond of it?
22:08 stuarta_ Yes I can telnet to that port from eachother
22:08 semiosis do your probes work now that you've started glusterd?
22:08 stuarta_ no. I thought gluster would run well on it
22:08 stuarta_ I prefer Gentoo
22:08 semiosis ah
22:08 stuarta_ No the probes aren't working still
22:09 semiosis could you pastie.org the /var/log/glusterfs/etc-glusterfs-glusterd.log file please
22:09 stuarta_ Sure
22:10 stuarta_ Wait, this one?: /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
22:10 semiosis yeah
22:12 stuarta_ Hope you don't mind the address. It is just easier for me: https://web.cs.sunyit.edu/~stuarta/etc-glusterfs-glusterd.vol.log
22:12 glusterbot <http://goo.gl/LBwaw> (at web.cs.sunyit.edu)
22:13 semiosis fine with me
22:13 stuarta_ I see this error too: error through RPC layer, retry again later
22:13 * m0zes runs gluster on gentoo :)
22:14 stuarta_ @m0zes: I'm jealous
22:14 semiosis Unable to find hostname: ubu
22:14 semiosis looks like thats your problem
22:14 semiosis and why i am so opposed to using /etc/hosts for this stuff
22:14 semiosis but meh
22:14 semiosis FQDN all the things \o/
22:14 stuarta_ Ok, so try full FQDN?
22:14 stuarta_ yeah ok
22:15 semiosis by the way... ,,(hostnames) vvv
22:15 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
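Spelled out with placeholder names, glusterbot's recipe is:
    # from server1, probe the others by name
    gluster peer probe server2.example.com
    gluster peer probe server3.example.com
    # then, from any one of the others, probe server1 by name so its entry is stored as a hostname too
    gluster peer probe server1.example.com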
22:16 stuarta_ Still getting that error though if I use the FQDN: Probe returned with unknown errno 107. I will check the logs
22:16 semiosis wha?
22:18 stuarta_ Yeah. I even removed it from /etc/hosts, so its pulling from the DNS server.
22:19 dbruhn what host names did you use between your nodes when configuring gluster?
22:19 stuarta_ blackforest and ubu
22:19 dbruhn each node needs to be able to identify the other nodes by their hosts name, for every node in the cluster
22:19 stuarta_ If I ping the FQDN or even just the hostnames that works, but gluster doesn't seem to like either
22:20 dbruhn did you set up the volume as RDMA or TCP
22:20 semiosis dbruhn: node can be ambiguous, and just to clarify your point, all servers and clients must be able to resolve the names of the glusterfs servers
22:21 dbruhn for the transport type
22:21 semiosis dbruhn: has not even got to creating a volume yet
22:21 dbruhn duh
22:21 dbruhn sorry
22:21 semiosis hehe
22:21 stuarta_ Yeah not yet.
22:22 semiosis stuarta_: could you pastie that same log from the other server please
22:22 semiosis s/pastie/whatever/
22:22 glusterbot What semiosis meant to say was: stuarta_: could you whatever that same log from the other server please
22:22 TSM2 joined #gluster
22:22 stuarta_ I know my regex!
22:22 semiosis glusterbot: meh
22:22 glusterbot semiosis: I'm not happy about it either
22:23 dbruhn when you used the gluster peer probe command I am assuming you used the same host names supplied to the hosts file?
22:23 stuarta_ Here it is: https://web.cs.sunyit.edu/~stuarta/etc-glusterfs-glusterd.vol.log.2
22:23 glusterbot <http://goo.gl/27SZs> (at web.cs.sunyit.edu)
22:24 stuarta_ I removed everything from /etc/hosts
22:24 stuarta_ But yeah
22:24 dbruhn maybe try detaching the peers and reconnecting them, since you have made changes now?
22:24 semiosis Unable to find hostname: blackforest.cs.sunyit.edu
22:25 semiosis whats up with your name resolution
22:25 stuarta_ Not sure...
22:25 stuarta_ dbruhn: ok
22:25 semiosis i would go one further... stop glusterd on the servers, delete /var/lib/glusterd, start glusterd again
22:25 semiosis on both
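That reset amounts to the following, and is only safe here because no volumes have been created yet:
    service glusterd stop
    rm -rf /var/lib/glusterd
    service glusterd start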
22:27 stuarta_ Same results...
22:27 stuarta_ Want to see logs?
22:28 dbruhn when you run the probe command what exactly are you typing?
22:28 stuarta_ I'll capture it
22:30 stuarta_ Here: https://web.cs.sunyit.edu/~stuarta/ubu and https://web.cs.sunyit.edu/~stuarta/blackforest
22:30 stuarta_ Notice that just hangs, so I kill it
22:31 stuarta_ *notice that one just*
22:31 dbruhn and when you run a ping ubu.cs.sunyit.edu and to the other fqdn they are communicating without issue on both servers?
22:31 stuarta_ yeah, ill demo that too. Can't hurt
22:32 dbruhn just trying to make sure you don't have a weird address getting returned or something
22:34 stuarta_ Same links: https://web.cs.sunyit.edu/~stuarta/ubu and https://web.cs.sunyit.edu/~stuarta/blackforest
22:35 stuarta_ I am not using the crossover link
22:35 dbruhn have you tried the peer probes using ip addresses just to see if they work?
22:35 dbruhn I see that
22:36 stuarta_ I'll try it
22:37 stuarta_ update: https://web.cs.sunyit.edu/~stuarta/ubu and https://web.cs.sunyit.edu/~stuarta/blackforest
22:38 dbruhn yet you can telnet on the correct port between machines?
22:38 stuarta_ no I can telnet to whatever port I was told before
22:38 dbruhn yeah 24007
22:38 stuarta_ 24007
22:39 JoeJulian I assume someone's always asked about iptables and selinux?
22:39 stuarta_ Want to see a service list or something?
22:39 stuarta_ Yeah
22:39 dbruhn did you restart after the selinux stuff
22:39 stuarta_ full reboot of the system?
22:39 dbruhn the article I sent suggested it
22:39 stuarta_ ooo
22:39 stuarta_ no I can do that but it will take about 10 minutes
22:39 dbruhn or you need to create a temporary rule
22:39 dbruhn kk
22:39 dbruhn that might be the issue then
22:39 dbruhn SELinux sandboxes applications in linux
22:40 dbruhn and can raise a lot of hell
22:40 dbruhn it might be stopping gluster from actually being able to get out and talk to the other machine, by blocking its access to the network, etc.
22:40 stuarta_ I did this: echo 0 > /selinux/enforce
22:40 stuarta_ The article says that it will turn it off until a reboot
22:41 semiosis JoeJulian: someone did, and supposedly telnet works
22:41 JoeJulian sestatus can confirm whether or not it's enforcing.
22:41 stuarta_ It looks like its still on?
22:42 JoeJulian setenforce 0
22:42 semiosis JoeJulian: glusterd logs show hostname resolution problems... could that be caused by selinux?
22:42 stuarta_ So this isn't working: echo 0 > /selinux/enforce
22:42 JoeJulian I've never used the /selinux tree for that. I just use the cli.
22:43 dbruhn Looks like it might not be, I always just make sure it's not installed if possible
22:43 stuarta_ I didn't realize it was even there.
22:43 semiosis they told me i needed to disable selinux, so i installed ubuntu
22:43 semiosis :P
22:43 JoeJulian hehe
22:43 Gilbs1 left #gluster
22:44 stuarta_ I'll pass on Ubuntu
22:45 stuarta_ Ok, so I edited /etc/sysconfig/selinux and turned off SELinux. I will try a reboot and see if that fixes it.
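For reference, the usual two-step way to do that on CentOS (where /etc/sysconfig/selinux is a symlink to /etc/selinux/config):
    setenforce 0    # immediate, lasts until reboot
    sestatus        # should now report permissive mode
    # and in /etc/selinux/config, so it sticks across reboots:
    #   SELINUX=disabled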
22:45 JoeJulian You can make glusterd work with an enforcing selinux, you just have to know how selinux is configured. Due to the painstaking hours of complex and detailed documentation of selinux from an administrator's standpoint, nobody knows how to do that.
22:46 stuarta_ I don't really want SELinux. These machines will be firewalled up real tight. I don't want the extra hassle.
22:46 JoeJulian Though I'm thinking about making an attempt...
22:51 stuarta_ Ok, well now the one system won't boot. It crashes when booting up CentOS. I will have to give this another shot tomorrow. Its dinner time for me.
22:51 stuarta_ Thanks for the help everyone! Hopefully I won't have to pester you in the future.
22:51 semiosis and we're sorry you're having so much trouble... it really shouldn't be this difficult to get started with glusterfs
22:51 nightwalk joined #gluster
22:51 semiosis do come back & let us know how its going
22:52 stuarta_ I got a demo up and running before and it worked well, so I am not sure why I am failing so bad.
22:52 stuarta_ Now that I finally got a few TB to throw at it.
22:58 kkeithley1 FWIW, I have selinux on my dev machines; gluster runs just fine. I did disable firewall/iptables, but only because I got tired of opening the ports every time I set up a new box.
22:58 semiosis kkeithley1: good to know
23:05 nick5 joined #gluster
23:06 nick5 joined #gluster
23:24 mjrosenb morning all.
23:25 mjrosenb I have two bricks hooked up with the distributed module (or is it called dst these days?)
23:25 mjrosenb and in some directories, I have files 1-10 on both bricks
23:25 semiosis good evneing
23:25 mjrosenb with complimentary sets of files being empty
23:26 mjrosenb e.g. 1-5 are empty on brick A, and 6-10 are empty on B
23:26 mjrosenb and it looks like the machine that is mounting them only sees the complete set of files from A
23:26 mjrosenb is it safe for me to just remove the empty files?
23:27 mjrosenb I tried poking at them with xattr, but I don't see anything there.
23:32 JoeJulian Probably, but they'll likely get recreated.
23:34 JoeJulian My assumption is that they're pointers for files that hash out to the wrong brick. If I'm right, on the brick they'll be mode 1000, 0 size, and have an xattr trusted.dht.linkto
23:44 mjrosenb JoeJulian: should attr -l foo list the attributes that are defined, or do I want a different command?
23:46 cyberbootje joined #gluster
23:47 nightwalk joined #gluster
23:50 JoeJulian getfattr -m . -d $filename
23:50 JoeJulian normally I also do a "-e hex" but in this case text makes more sense.
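Put together, checking a suspected link file directly on the brick (paths here are made up) looks like this; the full xattr name is normally trusted.glusterfs.dht.linkto:
    getfattr -m . -d /bricks/brickA/somedir/file6
    ls -l /bricks/brickA/somedir/file6    # a dht link file is size 0 with mode 1000 (---------T)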
23:58 rob__ joined #gluster
23:59 y4m4 joined #gluster
23:59 mjrosenb JoeJulian: prints nothing
