
IRC log for #gluster, 2013-06-18


All times shown according to UTC.

Time Nick Message
00:01 forest joined #gluster
00:11 Deformati joined #gluster
00:28 MacRM joined #gluster
00:38 bulde joined #gluster
00:57 forest joined #gluster
01:05 MrNaviPa_ joined #gluster
01:11 nueces joined #gluster
01:21 bala joined #gluster
01:27 forest joined #gluster
01:48 kevein joined #gluster
02:09 forest joined #gluster
02:12 bulde joined #gluster
02:15 Deformati semiosis, I got your packages working.
02:15 Deformati I will test them in the morning.
02:19 bulde johnmark: some useful tool @ http://aravindavk.in/blog/glusterfs-tools/
02:19 glusterbot Title: GlusterFS Tools - Aravinda VK (at aravindavk.in)
02:32 mohankumar__ joined #gluster
02:43 vshankar joined #gluster
02:43 joelwallis joined #gluster
03:07 hagarth joined #gluster
03:08 bennyturns joined #gluster
03:11 bharata joined #gluster
03:12 yinyin joined #gluster
03:14 thisisdave joined #gluster
03:20 thisisdave I'm having a tough time getting the fuse client to respect my '-o transport=rdma' option, based on the client's log.
03:21 thisisdave the server does have a [volname].rdma-fuse.vol file though...
03:21 aravindavk joined #gluster
03:21 thisisdave but I think I've come here at the wrong hour. ;-)
03:21 mohankumar joined #gluster
03:25 thisisdave oh hey, I can specify the volfile-id in the options too. pays to rtfm -.-
03:51 sgowda joined #gluster
04:12 shylesh joined #gluster
04:35 hagarth joined #gluster
04:49 saurabh joined #gluster
04:50 ngoswami joined #gluster
04:58 vpshastry joined #gluster
05:12 CheRi joined #gluster
05:22 lalatenduM joined #gluster
05:44 bulde joined #gluster
05:47 ricky-ticky joined #gluster
05:50 rcoup joined #gluster
05:51 rcoup evening all :) Q of the day... ---------T implies a DHT link to another brick (other than where a file 'naturally' lives). So... why would I see it in my client?
05:51 rcoup I think it's because the file has gone awol.
05:51 rcoup from all bricks
05:52 rcoup OR it's broken, since there doesn't appear to be a 'go-look-here-instead' xattr on the ---T file either (looking at the brick)
05:53 rcoup nothing seems to appear in the client logs when I stat() the file
06:03 rcoup back later, will check the logs if anyone has any insight :)
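A quick way to check rcoup's theory on a brick: a ---------T entry is normally a DHT link file whose trusted.glusterfs.dht.linkto xattr names the subvolume that actually holds the data. A minimal sketch, run against the brick's local path (path and volume name below are illustrative, not taken from the log):

    # dump all gluster xattrs on the suspect file, directly on the brick
    getfattr -d -m . -e hex /export/brick1/path/to/file
    # or just the DHT pointer, as readable text:
    getfattr -n trusted.glusterfs.dht.linkto -e text /export/brick1/path/to/file
    # if the linkto xattr is missing and no brick holds the real file, the link file is stale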
06:15 jim` joined #gluster
06:18 rastar joined #gluster
06:19 hagarth joined #gluster
06:19 StarBeast joined #gluster
06:23 jtux joined #gluster
06:28 bala1 joined #gluster
06:30 vimal joined #gluster
06:34 ollivera joined #gluster
06:36 raghu joined #gluster
06:39 ctria joined #gluster
06:44 rastar joined #gluster
06:52 ekuric joined #gluster
06:56 rb2k joined #gluster
07:01 StarBeast joined #gluster
07:06 andreask joined #gluster
07:13 Koma joined #gluster
07:18 andreask joined #gluster
07:21 jamesbravo left #gluster
07:30 tziOm joined #gluster
07:49 glusterbot New news from newglusterbugs: [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
07:58 nixpanic joined #gluster
07:58 nixpanic joined #gluster
08:16 badone joined #gluster
08:17 spider_fingers joined #gluster
08:18 mynameisbruce joined #gluster
08:21 CheRi joined #gluster
08:26 Norky joined #gluster
08:31 turf212 joined #gluster
08:32 StarBeast joined #gluster
08:37 atrius joined #gluster
08:40 badone joined #gluster
08:57 ramkrsna joined #gluster
08:57 ramkrsna joined #gluster
08:57 CheRi joined #gluster
09:21 shylesh joined #gluster
09:39 aravindavk joined #gluster
09:43 manik joined #gluster
10:08 shylesh joined #gluster
10:31 realdannys1 joined #gluster
10:32 manik joined #gluster
10:35 jkroon joined #gluster
10:35 cjh_ joined #gluster
10:35 jkroon hi guys, i'm seeing bash report Input/output error when trying to open a (specific) file for writing and the file is on glusterfs.
10:36 jkroon -bash: /path/to/file: Input/output error
10:36 jkroon how do I find the root cause of this?
10:36 AndrewX192 joined #gluster
10:36 shylesh joined #gluster
10:38 isomorphic joined #gluster
10:40 nightwalk joined #gluster
10:44 mohankumar joined #gluster
10:48 jkroon Unable to self-heal contents of '/uls-srvconf/.lock' (possible split-brain). Please delete the file from all but the preferred subvolume.
10:49 jkroon ok, so since this is supposed to be a 1x4 dist/replicate, and the file is always empty and serves as a lock file ...
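The self-heal message names the usual manual fix for this kind of split-brain: keep one copy and remove the others so the next heal can resync. A rough sketch for an empty lock file like this one, with brick paths illustrative rather than quoted from the log:

    # list files gluster itself flags as split-brain (3.3+)
    gluster volume heal <volname> info split-brain
    # on each server holding a copy to discard, work on the brick path, not the client mount:
    getfattr -d -m trusted.afr -e hex /export/brick1/uls-srvconf/.lock   # inspect the afr changelogs first
    rm /export/brick1/uls-srvconf/.lock
    # on 3.3+ the brick also keeps a gfid hardlink under <brick>/.glusterfs/ which
    # generally has to be removed as well before re-triggering self-heal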
10:57 chirino joined #gluster
11:02 kkeithley1 joined #gluster
11:03 mohankumar joined #gluster
11:09 jbrooks joined #gluster
11:13 andreask joined #gluster
11:21 [MDL]Matt joined #gluster
11:22 rastar joined #gluster
11:25 syoyo joined #gluster
11:25 syoyo_ joined #gluster
11:29 purpleidea hey does anyone have any idea when/if gluster might get kerberos auth/encryption support? in particular for the native glusterfs mounts, i don't care about the built in nfs server as much. i saw this on the 3.4 roadmap as a maybe...
11:30 kkeithley_ maybe in 3.5
11:32 p1ke_ joined #gluster
11:32 p1ke_ @latest
11:32 glusterbot p1ke_: The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
11:32 kkeithley_ And NFS auth only, not encryption. Hmm, does our roadmap say k5 encryption?  If you want encryption on native mounts, it's in 3.3, just enable it.
11:36 kkeithley_ s/not encryption/probably not encryption/
11:36 glusterbot What kkeithley_ meant to say was: And NFS auth only, probably not encryption. Hmm, does our roadmap say k5 encryption?  If you want encryption on native mounts, it's in 3.3, just enable it.
11:40 StarBeast joined #gluster
11:44 Max_imilian joined #gluster
11:50 glusterbot New news from newglusterbugs: [Bug 955753] NFS SETATTR call with a truncate and chmod 440 fails <http://goo.gl/fzF6r> || [Bug 847619] [FEAT] NFSv3 pre/post attribute cache (performance, caching attributes pre- and post fop) <http://goo.gl/qbDjE> || [Bug 847626] [FEAT] nfsv3 cluster aware rpc.statd for NLM failover <http://goo.gl/QBwN9>
11:58 mohankumar joined #gluster
12:04 jclift joined #gluster
12:09 johnmark bulde: thanks!
12:12 johnmark bulde: any way to take in only Gluster-related content from aravinda's blog?
12:12 johnmark looking to syndicate on gluster.org
12:15 bulde johnmark: will ask aravinda to tag gluster specific blogs, so we can filter based on that tag... sounds ok?
12:15 bulde aravindavk: or is there any other way you can give it to us?
12:17 Max_imilian Hi there
12:18 Max_imilian is it necessary to use XFS for gluster FS?
12:18 Max_imilian I am using RHELS 6 and XFS/xfsprogs are not supported in RHEL anymore
12:24 hagarth joined #gluster
12:25 kkeithley_ XFS and xfsprogs are supported in RHEL 6. ISTR that xfsprogs are in a separate EUS channel on RHN, perhaps because they were a late addition or something. I'm reasonably certain they'll be included standard in 6.5.
12:30 Max_imilian hmm i guess this Extended Update Support cost extended money...?!
12:32 edward1 joined #gluster
12:32 kkeithley_ I'm a developer, I don't know about such things. It should be free. XFS support is built into the RHEL kernel. And barring all else, you could get the SRPM from ftp://ftp.redhat.com/pub/redhat/linux/enterprise/6Server/en/os/SRPMS/xfsprogs-3.1.1-7.el6.src.rpm and compile it yourself.
12:33 Max_imilian hey thx!
12:33 Norky XFS support in RHEL is an optional (chargeable) extra. It's sold as "Scalable File System"
12:34 Norky if you buy Red Hat Storage server, you get a cut-down RHEL, with XFS support
12:35 the-me joined #gluster
12:35 Norky https://www.redhat.com/products/enterprise-linux-add-ons/file-systems/
12:35 glusterbot <http://goo.gl/J9fVf> (at www.redhat.com)
12:36 kkeithley_ I stand corrected.
12:36 Max_imilian Okay.. so the "regular" Red Hat Enterprise Linux Server 6 doesn't support xfs.. :-(
12:36 Norky hey, no worries, you do the code, you don't need to know or care about how the sales/marketing people chose to hawk it :)
12:37 Norky Max_imilian, not without paying extra. It's not a great deal extra on top of the standard RHEL license.
12:38 Norky I think stock RHEL might be able to mount existing XFS, but you will have not tools to make an XFS or manage it.
12:38 Norky s/not/no/
12:38 glusterbot What Norky meant to say was: I think stock RHEL might be able to mount existing XFS, but you will have no tools to make an XFS or manage it.
12:39 Norky the 'supported' path for RHEL as a gluster server is to run RHS instead of RHEL
12:39 Max_imilian no it is not possible to mount existing XFS volumes without xfsprogs
12:40 Norky you're free, of course, to download and run community Gluster, and compile xfsprogs from source as kkeithley said - but then you get no help from RH support if those bits break
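If the SRPM route kkeithley mentioned is taken, the rebuild on RHEL 6 is roughly the following; the build dependency list is approximate and worth checking against the spec file:

    yum install rpm-build gcc make libtool libuuid-devel
    rpmbuild --rebuild xfsprogs-3.1.1-7.el6.src.rpm
    rpm -ivh ~/rpmbuild/RPMS/x86_64/xfsprogs-3.1.1-7.el6.x86_64.rpm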
12:41 Max_imilian that's the point... the only reason why they use RHELS is to get the support.
12:41 kkeithley_ Yes, what Norky said. +1 (ugh, now I feel dirty for using +1)
12:42 * Norky has been successfully incremented.
12:42 kkeithley_ Norky++
12:43 Max_imilian is that a server feat. on irc.gnu.org or just custom user/bot scripts?
12:43 Max_imilian ;-)
12:44 * Max_imilian test
12:44 Max_imilian okay.. :-)
12:45 Norky your choice, Max_imilian - most supported is RHS, then RHEL+ScalableFS with community Gluster, to the other end of the spectrum: support-it-yourself with CentOS (or whatever) and GlusterFS
12:46 Norky I believe there are people who have run RHEL with ext3/4 and glusterfs, but beware the ext structure change if you choose that route
12:46 spider_fingers joined #gluster
12:47 kkeithley_ And if I'm not mistaken, 3.4 has a fix for the ext4 bug
12:52 Norky oo? really? I might try that later
12:53 Norky mostly we run RHS so it's not an issue there, but I've been testing with CentOS and Fedora
12:54 bennyturns joined #gluster
12:54 aravindavk bulde, johnmark thanks. I will create syndicate for glusterfs and ping you the link
12:59 kkeithley_ But we still recommend XFS over ext4. You can look at the Red Hat FS team's (Ric Wheeler's) recent presentations at things like LinuxCon Japan and Red Hat Summit where he basically says all the development dollars are going into XFS and btrfs. ext4 is pretty much just in maintenance mode.
13:05 JusHal joined #gluster
13:07 bulde1 joined #gluster
13:07 JusHal replacing a brick with Glusterfs-3.3 has been working fine, with the 3.4 beta version I get: Extended attribute trusted.glusterfs.volume-id is absent, Initialization of volume 'strg11-posix' failed, review your volfile again. Any idea?
13:08 johnmark bulde: perfect - thanks
13:08 johnmark bulde: but why isn't he in the channel? :(
13:08 hagarth joined #gluster
13:10 rwheeler joined #gluster
13:17 neofob joined #gluster
13:20 dewey joined #gluster
13:25 aravindavk joined #gluster
13:42 hagarth joined #gluster
13:52 jamesbravo joined #gluster
13:54 plarsen joined #gluster
13:59 jamesbravo Any chance if someone can look at the following issue please? http://padfly.com/gluster-testing
13:59 glusterbot Title: gluster-testing PADFLY - Free Online Web Scratchpad/Notepad/Clipboard. No Login. (at padfly.com)
14:10 manik joined #gluster
14:12 portante joined #gluster
14:12 MrNaviPa_ joined #gluster
14:19 spider_fingers left #gluster
14:28 aliguori joined #gluster
14:31 bugs_ joined #gluster
14:32 portante joined #gluster
14:36 joelwallis joined #gluster
14:38 portante joined #gluster
14:40 portante joined #gluster
14:42 forest joined #gluster
14:44 JusHal ok, found a workaround. Seems to be a known issue https://bugzilla.redhat.com/show_bug.cgi?id=958076
14:44 glusterbot <http://goo.gl/aawaA> (at bugzilla.redhat.com)
14:44 glusterbot Bug 958076: medium, high, ---, kdhananj, VERIFIED , unable to start the volume once the brick is removed and created with the same name.
14:45 portante joined #gluster
14:48 portante joined #gluster
14:50 glusterbot New news from newglusterbugs: [Bug 975476] "--mode=script" option not shown in help output <http://goo.gl/F59Hi>
14:55 Deformati joined #gluster
15:02 bsaggy joined #gluster
15:02 nueces joined #gluster
15:11 vpshastry left #gluster
15:20 dooder123 joined #gluster
15:20 daMaestro joined #gluster
15:20 pkoro joined #gluster
15:24 Deformati Hi.
15:24 glusterbot Deformati: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:24 Deformati Do the machines that house the gluster bricks need any storage elsewhere?
15:24 Deformati Like in /etc or something?
15:25 Deformati Because my machines flash a new image on reboot.
15:25 Deformati And I don't know if I need to make something persistent.
15:30 jclift Deformati: Well, on each of my Gluster nodes, the gluster state information is kept in /var/lib/gluster/*
15:30 jclift Deformati: If it's not kept between reboots, that'd probably cause issues.
15:30 Deformati Hmm.
15:31 Deformati Yeah, I suppose I should mount a disk there then.
15:31 jclift Deformati: I'm using RHEL/CentOS btw, different distros might have different path names to do the same
15:32 jclift Deformati: Oops, it's "/var/lib/glusterd/" on my boxes.  Just checked. :D
15:33 zaitcev joined #gluster
15:35 larsks_ joined #gluster
15:38 Deformati jclift, Thanks.
15:38 jclift :)
15:41 portante left #gluster
15:46 derick_ joined #gluster
15:46 larsks joined #gluster
15:55 realdannys1 joined #gluster
15:56 bala1 joined #gluster
16:01 semiosis jclift, Deformati: it *should* be /var/lib/glusterd on any/all distros.  if not, that's probably a packaging bug.  afaik all the redhat & debian derived distros use /var/lib/glusterd
16:01 Deformati semiosis, I use your package.
16:01 Deformati It is much more stable than the one in ubuntu's repo.
16:02 semiosis iirc the package installer doesn't actually touch /var/lib/glusterd, it's created & owned by the glusterd binary, the path being set at compile time
16:02 semiosis glad to hear it
16:02 Deformati semiosis, Do I need that stuff to be persistent on any machine other than the master node?
16:03 semiosis there is no master
16:03 Deformati Right.
16:03 Deformati My mistake.
16:03 semiosis glusterfs is fully distributed.  every glusterfs server (peer) needs its own /var/lib/glusterd
16:03 Deformati And it needs to be persistent?  It can't re-create at boot?
16:03 semiosis correct
16:03 Deformati Ok, thanks.
16:04 semiosis the file /var/lib/glusterd/glusterd.info has a UUID in it, which identifies the server to the rest of the cluster
16:04 semiosis that UUID is mapped to an IP/hostname in /var/lib/glusterd/peers/$uuid on all other servers
16:04 semiosis so it's very important for a server to keep its UUID, or other servers won't recognize it
16:05 Deformati I see.
16:05 Deformati Perhaps that is what was messing everything up then.
16:05 Deformati I will just add this to my fstab: /dev/sda6 /var/lib/glusterd/ ext3 defaults 0 0
16:05 semiosis the rest of the stuff in /var/lib/glusterd could be resynced from other peers but you should not have to do that regularly
16:06 jclift semiosis: Good to know. :)
16:06 semiosis only when things go wrong
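For reference, semiosis's point can be checked directly on any server; these are the default paths he describes:

    cat /var/lib/glusterd/glusterd.info    # this server's UUID, which peers use to identify it
    ls /var/lib/glusterd/peers/            # one file per known peer, named by that peer's UUID
    gluster peer status                    # the live view of the same information

If /var/lib/glusterd is moved onto its own persistent device, as Deformati's fstab line below suggests, it has to be mounted before glusterd starts on boot.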
16:08 Deformati Stopping volume home has been unsuccessful
16:08 Deformati So frustrating.
16:11 Deformati I think I need to find all the nodes that mounted the disk and umount.
16:11 Deformati Is there a way to get that list?
16:12 semiosis ,,(glossary)
16:12 glusterbot A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
16:12 semiosis do you mean clients that have mounted the volume?  i think 'gluster volume status' provides that
16:13 jclift semiosis: gluster volume status definitely isn't showing mounted clients here (just tried)
16:13 jclift That's with latest upstream git master
16:14 semiosis oh ok
16:14 Mo_ joined #gluster
16:14 jclift Ugh.  Looks like some problems still in the RDMA code.  dd: writing `/foo/testfile': Bad file descriptor
16:14 Deformati I am just not sure what I need to do to delete this volume.
16:14 Deformati I wonder if just deleting /var/lib/glusterd will do the job.
16:15 semiosis Deformati: ,,(pasteinfo)
16:15 glusterbot Deformati: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
16:15 Deformati http://www.fpaste.org/19408/13715721/
16:16 glusterbot Title: #19408 Fedora Project Pastebin (at www.fpaste.org)
16:16 semiosis is /gluster/brick a locally mounted block device?  ext4/xfs?
16:17 Deformati ext4
16:32 aravindavk joined #gluster
16:35 thisisdave joined #gluster
16:44 vpshastry joined #gluster
16:54 ultrabizweb joined #gluster
16:57 jag3773 joined #gluster
16:59 GLHMarmot joined #gluster
17:01 chouchins joined #gluster
17:02 forest joined #gluster
17:05 thisisdave Hi folks. I'm having no luck getting glusterfs.mount to respect my '-o transport=rdma' option, based on the client's log. I've searched around but have had no luck as yet. Running 3.3.1.
17:06 thisisdave volume info: https://dpaste.de/RFmdu/
17:06 glusterbot Title: dpaste.de: Snippet #231857 (at dpaste.de)
17:11 thisisdave yet the outcome of a mount specifying rdma shows up in logs as: https://dpaste.de/jWE5A/
17:11 glusterbot Title: dpaste.de: Snippet #231858 (at dpaste.de)
17:21 lpabon joined #gluster
17:21 dooder123 joined #gluster
17:33 Deformative joined #gluster
17:38 plarsen joined #gluster
17:45 andreask joined #gluster
18:04 nobodydontknow joined #gluster
18:04 jhofferRHT joined #gluster
18:14 jclift thisisdave: That's interesting, I haven't tried setting the transport option via mount parameter yet.  I kind of thought that the transport option for a volume was part of the volume set information.  eg: When a volume is created using rdma transport, "that's how it is" from then on.  Is that not the case?
18:14 brosner joined #gluster
18:15 thisisdave jclift: well, transport on this volume is tcp,rdma and the clients seem to be connecting via tcp only
18:15 jclift Interesting.  So far I've only been creating volumes with one transport type (rdma) instead of tcp,rdma.
18:15 thisisdave i've recently inherited a Gluster-on-zfs environment that I may need to re-build unless I can speed it up, and I've honed in on ensuring rdma is actually used.
18:16 vpshastry left #gluster
18:16 jclift Maybe I should try tcp,rdma and see what happens. :)
18:16 jclift thisisdave: Which version of Gluster is it running?
18:16 thisisdave 3.3.1 all around
18:16 jclift k, that's not exactly ancient then
18:18 jclift thisisdave: If you don't get a suitable answer here on IRC, definitely ask on the gluster users mailing list as well.  It's good for picking the brains of people in completely different time zones. :D
18:19 thisisdave excellent, thanks for the advice. I've got two of the cluster nodes reserved as a microcosm sandbox of the two gluster nodes, so I'll set up a volume there that's rdma only and will play around.
18:21 thisisdave jclift: you can see on this test matrix (using test scripts that were also inherited, may just use iozone moving forward) exactly how performance is suffering: http://postimg.org/image/tohlgnmiv/full/
18:21 glusterbot Title: View image: Screen Shot 2013 06 18 at 11 19 34 AM (at postimg.org)
18:22 zoldar joined #gluster
18:22 DWSR joined #gluster
18:22 DWSR joined #gluster
18:22 NeonLicht joined #gluster
18:23 jclift thisisdave: It sounds like you're doing single threaded dd testing?
18:23 NcA^ joined #gluster
18:23 jclift thisisdave: I keep on getting told that it's not a valid test.  But I'm still not exactly sure why myself.
18:24 thisisdave on the dd test, yes, but it's not limited by cpu as i found it to be with /dev/random...
18:24 jclift thisisdave: For the "dd zeros 10GB" test, it doesn't seem to have info on whether that's read speed or write speed.
18:25 forest joined #gluster
18:25 jclift thisisdave: For me with very recent (dev build) code, I'm getting 330MB/s write speed with 2 nodes, but only 70-80MB/s _read_ speed back from the same two nodes.
18:25 thisisdave jclift: it's writing to a 10GB file, reading from /dev/zero on the local fs
18:25 jclift thisisdave: Which is making no sense to me at all
18:26 jclift thisisdave: Yeah, that's how I've been trying to validate my local testing setup too.
18:27 thisisdave perhaps one day there'll be a gluster-bench repo on github that'd hold a standardized battery of tests that the gluster folks can sign off on as being 'valid'...
18:27 jclift thisisdave: You know, that's not a bad idea
18:28 thisisdave what I lack in *nix-fu, I make up for with reasonable desire :-)
18:28 jclift :)
18:29 jclift thisisdave: With the NFS results there, is that using the Gluster NFS server, or something else?
18:29 thisisdave native v4 (in CentOS 6)
18:29 kkeithley_ thisisdave,jclift: dpaste? Is this on Debian or Ubuntu? There's a bug in rdma in 3.3.1. (Which is fixed/patched in the RPMs at @yum)
18:30 thisisdave this is on CentOS 6
18:30 StarBeast joined #gluster
18:31 jclift thisisdave: Hmmm, since you have two test boxes available, maybe you'd be game to try creating your own rpms from the latest source?  It's very, very simple to do.
18:31 kkeithley_ okay, using the RPMs from http://download.gluster.org/pub/gluster/glusterfs/repos/YUM/glusterfs-3.3/  ?
18:31 glusterbot <http://goo.gl/9tJvd> (at download.gluster.org)
18:31 kkeithley_ (or from my fedorapeople.org repo?)
18:33 thisisdave jclift: I could do that; it'll be a tad painful perhaps since the rootfs's are on usb flash, but I'm not in a huge rush.
18:34 jclift thisisdave: Heh, yeah that won't be super fast. :)
18:34 jclift thisisdave: These are the instructions though: http://www.gluster.org/community/documentation/index.php/CompilingRPMS
18:34 glusterbot <http://goo.gl/aXOjy> (at www.gluster.org)
18:34 thisisdave ...wanted the three bays for the storage devices...
18:34 jclift thisisdave: It's all completely cut-n-paste-able.
18:36 jclift thisisdave: On non-usb-flash-drive hosts, it takes about 10 mins from start to end, just by cut-n-pasting the commands there.  The result is gluster*rpms in the dir you end up with, ready to install.
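For anyone following along, the CompilingRPMS page jclift links boils down to roughly this on an EL6 box; the repository URL and dependency list here are approximations, not quoted from the page:

    yum install git autoconf automake libtool flex bison rpm-build   # plus gluster's usual -devel deps
    git clone https://github.com/gluster/glusterfs.git && cd glusterfs
    ./autogen.sh && ./configure --enable-fusermount
    make dist                                  # builds the tarball the RPMs are generated from
    cd extras/LinuxRPM && make glusterrpms     # resulting gluster*.rpm files land in this directory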
18:37 jclift thisisdave: The RDMA code in 3.4 and above has changed quite a lot since 3.3.  I'm finding it _generally_ more stable.  But haven't yet found a bunch of _performance_ improvements in my (very limited) testing so far.
18:37 thisisdave I'm guessing I should uninstall the pre-existing RPMs first... but before doing this, I'll build an rdma-only volume on ZFS just to get a like-for-like comparison...
18:38 jclift thisisdave: Yeah, definitely.
18:38 jclift thisisdave: That's a good idea too.  While you're at it, you might as well try out Gluster's NFS server too, to see if the perf numbers there are useful as well.
18:39 jclift kkeithley_: Mellanox have been kind enough to ship me (from the US) some modern FDR IB cards and a 32 port switch, so I can demo things at a conference next month.
18:40 jclift kkeithley_: They didn't include any QDR cables though (!), so I haven't yet tried the cards out to see what happens.
18:41 kkeithley_ jclift: nice. IIRC they've been very accommodating for us in the past. We need to say nice things about them.
18:41 jclift kkeithley_: It's kind of sad that the SSD drives I have in my test boxes are probably the slowest components in my setup (or the cpus).  The cards themselves are PCIe v3 cards, so my PCIe slots can't even max them out. :)
18:41 kkeithley_ accommodating ~== generous
18:42 jclift kkeithley_: Yeah, I'm helping them out in their Community Forums in return.  Seems to be working out.
18:43 kkeithley_ jclift: are your rpm build instructions for 3.3.x or 3.4.x? If for 3.3.x, do they include the rdma fix?
18:43 * jclift will have to buy some more ram for the boxes soon, to have some decent sized ram drives
18:43 jclift kkeithley_: The build instructions target only upstream git master
18:43 kkeithley_ okay, I was about to look for myself, but you saved me a mouse click. ;-)
18:44 jclift :)
18:44 jclift kkeithley_: Someone that knows what they're doing could definitely adjust which branch they're on after the initial git clone
18:45 kkeithley_ you mean, e.g. doing a `git checkout release-3.3` ;-)
18:45 jclift kkeithley_: And I still haven't looked into the breakout of the gluster swift stuff yet, which I should get done this week hopefully
18:45 kkeithley_ well, if you're building master/HEAD, there's no more gluster swift stuff in there
18:45 jclift Exacltly
18:45 jclift Exactly
18:46 kkeithley_ Just get swift from Fedora Updates or from http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/
18:46 glusterbot <http://goo.gl/24dOQ> (at repos.fedorapeople.org)
18:46 jclift kkeithley_: Also.  There seem to be slight changes needed for each of the releases. For example the 3.3 series seems to need "--enable-fusermount", but latest git doesn't (its the default).  3.4 series I don't remember.  Could be either way. ;)
18:47 jclift Yeah, it's not a focus atm.  Soon.
18:47 kkeithley_ we're in process of switching to --without-fusermount (default == --with-fusermount)
18:48 kkeithley_ release-3.4 still defaults to --without-fusermount atm
18:48 kkeithley_ But you could review my patch at http://review.gluster.org/5179 and help move that along
18:48 glusterbot Title: Gerrit Code Review (at review.gluster.org)
18:48 thisisdave jclift: and with the rdma-only volume, all I get are "connection refused" log messages.
18:48 jclift Yeah, I retested my instructions with the "--enable-fusermount" option right after the commit which changed that.  As it still works, I've left it that way, since it'll work for everything (to date)
18:49 jclift thisisdave: That's weird.  iptables blocking stuff?
18:49 kkeithley_ right
18:49 thisisdave jclift: not running on either of the two replicas, nor on the client.
18:51 jclift thisisdave: Hmmm, might be just my level of knowledge with Gluster 3.3 is crap.  I'm pretty much working with 3.4 and dev code only, so it could be just my knowledge isn't broad enough
18:52 bennyturns joined #gluster
18:56 jclift thisisdave: Actually... what's the exact command you're using to create the new gluster rdma volume, and what's the command you're using for mounting it?
18:57 thisisdave creation: gluster volume create Test replica 2 transport rdma grey0081:/Tank/brick grey0082:/Tank/brick
18:57 jclift thisisdave: Kind of wondering how you're specifying the host name, and if there's a mismatch there in how Gluster is seeing things.
18:57 jclift thisisdave: So, "grey0081" and "grey0082".  Are the IP addresses associated with those host names on the IB card interfaces?
18:57 thisisdave yes
18:58 thisisdave also, since this is an HPC (diskless), all internal resolution is through /etc/hosts
18:58 neofob so folks ask about glusterfs on reddit @ http://www.reddit.com/r/linux/comments/1ggf4g/any_real_world_glusterfs_experience/
18:58 glusterbot <http://goo.gl/CUid7> (at www.reddit.com)
19:01 thisisdave jclift: also, mount is `mount -t glusterfs grey0081:/Test /mnt/gluster`
19:01 jclift thisisdave: Damn, that looks right.
19:03 thisisdave also, rdma is active on the hosts and the client: the output (for all of them) is: https://dpaste.de/0jK65/
19:03 glusterbot Title: dpaste.de: Snippet #231872 (at dpaste.de)
19:03 jclift thisisdave: Oh, one small difference from Gluster 3.3 to Gluster dev code... you'll need to create a subdir in each of your brick mount points (eg: grey0081:/Tank/brick/somedirname) and use the subdir for gluster.
19:04 jclift thisisdave: sudo ibv_devinfo ?
19:04 thisisdave https://dpaste.de/4JvwA/
19:04 glusterbot Title: dpaste.de: Snippet #231873 (at dpaste.de)
19:05 jclift Heh, SuperMicro server
19:05 semiosis johnmark: check out that reddit link above
19:06 semiosis neofob: thx for that link, interesting comments there
19:06 thisisdave jclift: is the "brick" dir within the Tank not sufficient?
19:06 thisisdave The zpool is Tank, not brick...
19:06 jclift thisisdave: Ahhh, that might be ok then.
19:07 jclift It's been such a long, long, long (years) time that I looked at ZFS.  I claim no knowledge there at all. :)
19:07 thisisdave ...that's one issue with our prod environment (again, inherited) is that there's no directory within the zpool that the brick is on.
19:08 thisisdave hopefully i can clean that up but only if I can make the case for keeping gluster around.
19:08 johnmark semiosis: yeah, just saw that
19:09 jclift thisisdave: The point about the subdir thing is just that with newest Gluster dev code, it won't let a direct mount point be used as a brick.  It might be fine with ZFS though (no idea).
19:10 johnmark semiosis: was wondering whether (and if) to respond
19:10 semiosis i just registered so i could add my 2c
19:10 johnmark but I saw lots of positive things
19:10 johnmark semiosis: make sure to include a download link :)
19:11 johnmark what I've done at previous companies is send a link like that to the entire community, encouraging them to respond
19:11 jclift thisisdave: Hmm... "gluster volume status Test" ?
19:11 johnmark but I always felt kind of dirty doing that :)
19:11 jclift johnmark: Do it. :)
19:12 johnmark haha
19:12 johnmark ok! I'll just tell everyone jclift told me to do it ;)
19:12 johnmark ...and I always do what jclift tells me, so...
19:12 jclift Sure.  I have no problem with being blamed for stuff that can generate results.
19:13 jclift My excuse is generally "Damn, it worked last time... " :D
19:14 johnmark LOL
19:14 thisisdave jclift: ouch, it was an obvious omission. apparently not online.
19:15 johnmark jclift: heh. no worries. I wouldn't throw you under the bus, but I'm happy to give you credit if it works :)
19:16 jclift thisisdave: No stress there.  At least the fix is simple, rather than being a new bug. :)
19:16 jclift johnmark: Nah, I don't deserve credit there.  Didn't earn it.
19:18 y4m4 joined #gluster
19:18 thisisdave jclift: hmm, I can start the volume, but can't get it online.
19:21 jclift thisisdave: ok, "gluster volume status Test" and "gluster volume info Test"
19:21 jclift thisisdave: Also "sudo ibv_devinfo -v" because I'm curious. :)
19:22 thisisdave jclift: they're powercycling; figured it was a good idea since they've had so many test envs through them over the past ten days...
19:22 jclift thisisdave: Sure, no worries.  I'll be around for a while. :)
19:22 thisisdave or maybe it's the IT crowd fan in me.
19:22 kkeithley_ hmmm, johnmark always does what jclift tells him. jclift, tell johnmark to find that raspberry pi
19:23 jclift johnmark: kkeithley_ wants some raspberry pie
19:23 kkeithley_ not pie, pi
19:23 jclift :>
19:23 atrius_ joined #gluster
19:23 jclift johnmark: What he said ^^^
19:24 johnmark teehee
19:24 kkeithley_ johnmark: where did those Dell 1950s end up after summit?
19:24 kkeithley_ gb is looking for them
19:24 jclift I'm starting to regret giving away the raspberry pi I was given in Antwerp
19:24 jclift Oh well
19:24 johnmark kkeithley_: they're in my car. I will bring them in.
19:24 johnmark kkeithley_: does he need them back? darn it :(
19:25 jclift johnmark: Oh, how did that all go btw?
19:25 johnmark I was going to set up a testbed in the office
19:25 jclift 1950's... kind of noisy?
19:25 thisisdave jclift: https://dpaste.de/4vwI4/raw/
19:26 kkeithley_ we have a big new lab to put them in, why would you want them in your cubicle?
19:26 jclift thisisdave: k, so grey0081/2 are the ib adapter host names, and eth-grey0081/2 are the eth adapter ones.
19:26 jclift thisisdave: So far, it all sounds ok
19:27 thisisdave correct
19:29 johnmark jclift: a tad
19:29 johnmark kkeithley_: fair point. will this lab be up and running sometime this decade?
19:29 jclift johnmark: Yeah, that's the biggest problem I have with setting up gear.  Need it to be silent or close to.  Completely over noisy stuff. :)
19:30 jclift thisisdave: k, on the box that's trying to mount those volumes but failing, is there anything else with gluster on it running?
19:30 thisisdave yes, it does have the prod ClusterHome vol mounted
19:30 thisisdave Just tried mounting from one of the hosts itself, log snippet incoming
19:30 jclift thisisdave: Asking because what I normally do at this stage is shut down glusterd on the box that's trying to mount stuff, then nuke any contents of /var/log/glusterfs/ before starting it back up and trying stuff.
19:31 jclift So, it sounds like that won't be a go. :)
19:31 thisisdave https://dpaste.de/saQqw/raw/
19:32 jclift Ahhh
19:32 jclift thisisdave: What's the output from rpm -qa|grep -i gluster ?
19:33 thisisdave glusterfs-server-3.3.1-1.el6.x86_64 glusterfs-3.3.1-1.el6.x86_64 glusterfs-fuse-3.3.1-1.el6.x86_64
19:33 jclift thisisdave: This line sounds like the problem "[2013-06-18 12:29:37.622848] E [rpc-transport.c:256:rpc_transport_load] 0-rpc-transport: volume 'Test-client-1': transport-type 'rdma' is not valid or not found on this machine"
19:33 jclift With my boxes here (newer gluster though), there's a glusterfs-rdma rpm installed.
19:34 jclift I'm _guessing_ that's probably installed on the hosts, thus rdma volume creation worked.  But I'm guessing it's not installed on the box you're trying to mount from.
19:34 jclift s/hosts/gluster storage boxes/
19:34 glusterbot What jclift meant to say was: I'm _guessing_ that's probably installed on the gluster storage boxes, thus rdma volume creation worked.  But I'm guessing it's not installed on the box you're trying to mount from.
19:34 thisisdave that log snippet was from one of the hosts trying to mount its own volume.
19:34 jclift Heh
19:34 thisisdave (and that pkg is installed)
19:35 * jclift is now sort of confused
19:35 jclift On the box that snippet came from, it seems to be saying rdma transport isn't there.
19:36 jclift So, on that same box, the glusterfs-rdma.* rpm is or isn't installed?
19:36 kkeithley_ johnmark: you were with me when we got kicked out of the new lab. Beyond that I don't know when it'll be ready. I know eng-ops are eager to get those machines out of the conf room where they've been storing them, so I'd say we have a fair chance of getting them up and running soon.
19:37 jclift thisisdave: Sorry, forgot to direct the above lines to you
19:37 jclift thisisdave: On the box with that log message error, is /usr/lib64/glusterfs/3.3.1/rpc-transport/rdma.so present?
19:37 kkeithley_ @yum
19:38 glusterbot kkeithley_: The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
19:38 thisisdave jclift: no worries, i'm just into RHEL articles and mailing list threads...
19:38 jclift kkeithley_: thx
19:38 jclift kkeithley_: Cool, so 3.3.1 has a glusterfs-rdma package too.  That definitely needs to be around. :)
19:39 thisisdave jclift: it is seemingly not
19:39 kkeithley_ 3.3.1-1 from any source isn't going to do rdma without a hack/workaround. Get the -15s from ^^^ and that should make your life a lot easier.
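Given that error, it is worth confirming the rdma transport bits are actually installed on whichever host does the mounting; a minimal check, with the package version taken from kkeithley's suggestion rather than verified here:

    rpm -q glusterfs glusterfs-fuse glusterfs-rdma
    ls /usr/lib64/glusterfs/3.3.1/rpc-transport/    # should list rdma.so next to socket.so
    yum install glusterfs-rdma                      # from the download.gluster.org repo above, matching the -15 release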
19:40 thisisdave jclift: it must've been a diskless node I checked and not one of the test hosts.
19:40 jclift thisisdave: No worries.  We're making progress.
19:41 jclift kkeithley_: I'm kind of worried about thisisdave mixing gluster rpm versions here between client and server.
19:42 jclift thisisdave: My understanding of stuff is that the version of Gluster on the client and server should always be _exactly_ the same.  ie exactly same rpm versions
19:43 jclift thisisdave: The rpm's that kkeithley_ is pointing to are good ones for rdma.  But, you'll want to make sure that if you update your boxes with them (test env first), that you're not still trying to mount volumes from some nodes with a different version.
19:43 jclift Hopefully I wrote that in a mostly clear way.
19:43 thisisdave all nodes are tftpboot so I don't have to worry about inconsistencies there;
19:43 jclift Cool.
19:44 jclift Are you able to update to the rpms ^^ then?  http://download.gluster.org/pub/gluster/glusterfs/repos/YUM/glusterfs-3.3/epel-6/x86_64/
19:44 glusterbot <http://goo.gl/qilw7> (at download.gluster.org)
19:44 thisisdave their confmgmt is handled via masterless puppet so I should be good on changes-sans-carpal-tunnel
19:44 Alpinist joined #gluster
19:45 kkeithley_ That's a good rule of thumb. Mostly 3.3.1-* changes are in UFO and in general you could use any 3.3.1-*, but in this case, for rdma you want 3.3.1-12 or later to get the fix
19:45 Deformative joined #gluster
19:46 thisisdave (earlier msg didn't send) all nodes are tftpboot so I don't have to worry about inconsistencies there;
19:46 jclift Actually, that msg did show up. :)
19:47 thisisdave :realizes up arrow in webchat does what it does:
19:47 kkeithley_ johnmark: are we going to have a 3.4 readiness call tonight? For real?
19:53 johnmark kkeithley_: word up
19:54 kkeithley_ yo
20:05 jclift thisisdave: Any progress? :)
20:07 Deformative Why am I getting add brick unsuccessful?
20:08 Deformative How do I debug this?
20:08 thisisdave jclift: trying to update the version on the test servers but it's telling me it's the latest version...
20:08 jclift thisisdave: Ahhh, I remember that problem.  It's something to do with how the version numbers in the rpms are calculated.  It's a bug or something I think.
20:09 jclift thisisdave: You can download the rpms you need from the yum repo, nuke your existing installed rpms, then rpm -ivh the new ones.
20:09 jclift thisisdave: ie grab the rpms manually, etc.
20:11 jclift Let's hope that after all this you get better perf #'s. :D
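A sketch of the manual path jclift describes, once the -15 RPMs are downloaded from the repo above (filenames illustrative; test on the sandbox nodes first):

    yum remove glusterfs-server glusterfs-fuse glusterfs-rdma glusterfs
    rpm -ivh glusterfs-3.3.1-15.el6.x86_64.rpm \
             glusterfs-fuse-3.3.1-15.el6.x86_64.rpm \
             glusterfs-server-3.3.1-15.el6.x86_64.rpm \
             glusterfs-rdma-3.3.1-15.el6.x86_64.rpm
    # yum localinstall *.rpm on the downloaded files is an alternative that keeps yum's database consistent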
20:19 tg3 joined #gluster
20:19 tg3 any idea what could cause a file to exist in the gluster volume but in none of the bricks?
20:21 rcoup joined #gluster
20:21 Deformative joined #gluster
20:23 plarsen joined #gluster
20:26 jag3773 joined #gluster
20:26 jclift Deformative: Is there anything useful in the log files? /var/log/glusterfs/*
20:27 Deformative Which file would it be in?
20:28 jclift What's the output of ls -la /var/log/glusterfs/ ?
20:28 Deformative http://www.fpaste.org/19482/58730913/
20:28 glusterbot Title: #19482 Fedora Project Pastebin (at www.fpaste.org)
20:29 nueces joined #gluster
20:29 nueces left #gluster
20:30 jclift Deformative: k, what's the output from ls -la /var/log/glusterfs/bricks/ ?
20:30 Deformative Nothing.
20:32 jclift That's weird.  What's the exact command you're using?
20:32 jclift Also, what's the output of gluster volume info?
20:33 Deformative gluster volume add-brick home m50-001:/gluster/brick m50-002:/gluster/brick
20:33 jclift Deformative: Ahhh... the output of "ls -la /var/log/glusterfs/bricks/" from one of the storage nodes.
20:33 * jclift just checked
20:34 Deformative http://www.fpaste.org/19484/71587634/
20:34 jclift The brick subdir in log dir is empty on clients, but should have files in there on the storage nodes
20:34 glusterbot Title: #19484 Fedora Project Pastebin (at www.fpaste.org)
20:34 jclift thx
20:35 thisisdave jclift: am I asking for trouble if I update 3.3.1-1 to 3.3.1-15 via yum on the existing prod servers?
20:35 bit4man joined #gluster
20:35 jclift thisisdave: I wouldn't want to bet either way.
20:36 nordac I have been using nfs with gluster just fine but the minute I turn on geo-replication all hell breaks loose and the nfs client cannot write, stating "Bad file descriptor"
20:36 jclift thisisdave: I'm kind of used to working in critical infrastructure environments, so I'd normally get shot for even thinking of doing that without a bunch of testing first.
20:36 Deformative http://www.fpaste.org/19486/13715877/
20:36 glusterbot Title: #19486 Fedora Project Pastebin (at www.fpaste.org)
20:37 jclift Deformative: k.  What's the output from "gluster peer status" from one of the storage nodes?
20:37 Deformative http://www.fpaste.org/19487/71587861/
20:37 glusterbot Title: #19487 Fedora Project Pastebin (at www.fpaste.org)
20:40 jclift Deformative: k, keep a tail on /var/log/glusterfs/home.log, and try the brick add command again.  See if useful error info comes through the tail.
20:41 jclift Deformative: Actually, just spotted something.
20:41 jclift In the gluster peer status, it seems like the m50-002 node is offline
20:41 jclift But that's got one of the bricks you're adding on it.
20:41 Deformative I am using m530992
20:41 Deformative m53-002
20:42 Deformative Oh.
20:42 jclift k, I was just going by the line you wrote. :)
20:42 Deformative I did give you the m50 line
20:42 jclift Sure, np.
20:42 Deformative gluster volume add-brick home m53-001:/gluster/brick m53-002:/gluster/brick
20:42 Deformative Should be that.
20:42 jclift k, try the tail thing.
20:42 Deformative Also, it won't let me do volume set on home either.
20:42 Deformative I woudl like to nfs.disable on it.
20:43 Deformative What is the tail command that will watch?
20:43 jclift sudo tail -20f /var/log/glusterfs/home.log
20:43 jclift The -f bit makes it watch
20:43 jclift the "20" number just gives 20 lines of existing history
20:44 Deformative Noting gets printed.
20:44 jclift Damn
20:45 jclift Ok, we've hit the limit of my knowledge for diagnosing this stuff then.  We'll need someone with more experience to chime in next. :D
20:45 Deformative Darn.
20:45 Deformative Thanks.
20:46 jclift np.
20:46 Deformative Does it have to do with asymmetric disks?
20:47 jclift Deformative: It's a good question, but I personally don't know the answer to it. :(
20:48 jclift Deformative: Hmmmm, it's worth taking a look in the /var/log/glusterfs/etc-glusterfs-glusterd.vol.log file too.
20:48 jclift Deformative: That seems to have volume level operations logged into it
20:50 jdarcy joined #gluster
20:51 Deformative I am going to try to bring up all the nodes so that none are disconnected
20:52 Deformative I sort of hope that isn't it.
20:52 Deformative Not sure how the thing can deal with node failure if it just won't let you do anything when one fails.
20:55 Deformative Yeah, that was it.
20:55 Deformative It doesn't like it if any nodes are disconnected.
21:01 badone joined #gluster
21:04 balunasj joined #gluster
21:04 dooder123 joined #gluster
21:10 jclift Deformative: Interesting.  Maybe if a node fails you have to remove the failed node first, then add the replacement?  (unsure)
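Deformative's finding matches how glusterd behaves in this era of releases: most volume management operations refuse to run while any peer is disconnected, even though the volume itself keeps serving data. A quick pre-flight check before add-brick or volume set:

    gluster peer status            # every peer should report: State: Peer in Cluster (Connected)
    gluster volume status home     # confirms which bricks and daemons are actually online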
21:13 Guest41965 joined #gluster
21:15 lalatenduM joined #gluster
21:23 Deformative joined #gluster
21:23 jthorne joined #gluster
21:24 thisisdave jclift: initial testing over rdma only seems to indicate a 60% throughput _drop_.
21:24 thisisdave using 3.3.1-15
21:27 plarsen joined #gluster
21:47 Deformative Ugh!
21:47 Deformative So randomly I can't access the gluster fs.
21:47 Deformative Like when I ls a gluster.
21:47 Deformative It fucks up.
21:49 Deformative It just sits there stalled out.
21:50 kkeithley_ and scrolling back.... you're using ext4!
21:52 kkeithley_ I think you hit the ext4 bug
21:52 glusterbot New news from newglusterbugs: [Bug 975599] enabling cluster.nufa on the fly does not change client side graph <http://goo.gl/CTk2y>
22:05 Deformative kkeithley, Huh?
22:06 tg3 what version of kernel are you running Deformative
22:06 Deformative 3.8.0
22:06 tg3 yeah
22:07 tg3 you have to downgrade
22:07 Deformative Huh?
22:07 Deformative Why?
22:07 tg3 ext4 put a patch in (which they backported stupidly) that causes glusterfs to screw up rebalancing and replication
22:07 tg3 so if you're running ext4 under your bricks
22:07 tg3 you will hit this
22:08 Deformative Can I just use ext3 on the bricks?
22:08 Deformative Or something?
22:08 tg3 you can use xfs
22:08 tg3 as recommended by gluster
22:08 Deformative My kernel doesn't have xfs built in.
22:08 tg3 we use ext4 still
22:08 tg3 just a slightly older kernel
22:09 tg3 3.2.36-hardened
22:09 tg3 anything 3.3 and up
22:09 tg3 has this ext4 bug
22:09 tg3 we use ext4 becuase of the external ssd journaling feature
22:09 tg3 http://www.raid6.com.au/posts/fs_ext4_external_journal/
22:09 glusterbot <http://goo.gl/Ekwcg> (at www.raid6.com.au)
22:10 tg3 if not, compile in xfs support and use that, as that is what is recommended by gluster
22:10 Deformative Ok.
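For the XFS route tg3 recommends, the commonly documented brick setup looks like this (device name and mount point are illustrative):

    mkfs.xfs -i size=512 /dev/sdb1     # 512-byte inodes leave room for gluster's extended attributes
    mkdir -p /gluster
    mount /dev/sdb1 /gluster           # plus a matching /etc/fstab entry so it survives reboots
    mkdir -p /gluster/brick            # use a subdirectory of the mount as the brick path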
22:12 tg3 that should fix the problem
22:13 Deformative I will try that and report back tomorrow...
22:13 tg3 I think there was a patch for this, but I'm not sure if it's committed yet
22:14 tg3 there was some back and forth on ti
22:15 tg3 http://review.gluster.org/#/c/4822/
22:16 glusterbot Title: Gerrit Code Review (at review.gluster.org)
22:19 kkeithley_ It's fixed in 3.4.0 and 3.3.2
22:20 Deformative I have 3.8 though...
22:21 tg3 glusterfs 3.3.2, 3.4.0
22:21 tg3 not kernel
22:21 Deformative I am running 3.3.1
22:22 tg3 if you run 3.3.2 it should play nice with ext4
22:23 kkeithley_ Yeah, neither 3.3.2 or 3.4.0 have been released yet.
22:24 kkeithley_ it's fixed in the release-3.3 branch in commit 490b791f44135db72cba1d8df9b40a66b457bff2. I think that should be in 3.3.2
22:24 Deformative Is there a package for ubuntu or do I need to build it by hand?
22:24 tg3 i think somebody had made packages for 3.4
22:25 kkeithley_ There are 3.4.0beta3 rpms at
22:25 kkeithley_ @yum-beta
22:25 tg3 curse you glusterbot lol
22:25 kkeithley_ @beta-yum
22:25 glusterbot kkeithley_: The official community glusterfs packages for RHEL 6 (including CentOS, SL, etc.), Fedora 17-19 (i386, x86_64, arm, armhfp), and Pidora are available at http://goo.gl/LGV5s
22:25 tg3 semiosis had them
22:25 tg3 https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.4
22:25 glusterbot <http://goo.gl/u33hy> (at launchpad.net)
22:25 kkeithley_ @ppa
22:25 glusterbot kkeithley_: The official glusterfs 3.3 packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.3 QA: http://goo.gl/5fnXN -- and 3.4 QA: http://goo.gl/u33hy
22:26 tg3 ^
22:27 kkeithley_ and fixed in the release-3.4 branch in commit 2a734f92c4f2797523aaf2ec2803ea88382ec1d6, and will definitely be in 3.4.0
22:27 kkeithley_ @learn yum-beta as The official community glusterfs packages for RHEL 6 (including CentOS, SL, etc.), Fedora 17-19 (i386, x86_64, arm, armhfp), and Pidora are available at http://goo.gl/LGV5s
22:27 glusterbot kkeithley_: The operation succeeded.
22:28 tg3 how stable is 3.4 so far?
22:28 tg3 any feedback from large setups? >500tb?
22:31 kkeithley_ If you want you can sit in on the 3.4 readiness meeting later tonight on #gluster-dev and find out.  0200 UTC (10:00 PM EDT)
22:32 kkeithley_ ... and find out
22:33 tg3 is there an official date for 3.4 to release?
22:33 tg3 or when its good 'n ready
22:35 Deformative Ok, I will change over to xfs and report back tomorrow when it is done.
22:35 Deformative o/
22:35 tg3 you could try 3.4 too
22:35 tg3 if you're not in production yet
22:36 Deformative Is it stable?
22:36 Deformative I need as stable as possible.
22:36 tg3 ok
22:37 Deformative Yeah, I think xfs is easier.
22:37 Deformative I am not getting paid to fuck with this stuff.
22:37 Deformative I just need the cluster working asap so that my team can run experiments.
22:38 Deformative Anyway, I''ll be back tomorrow.
22:38 Deformative Thanks for the help tg3.
22:38 Deformative I really hope it is the ext4 bug causing it.
22:39 tg3 np, was kkeithley that pointed it out
22:41 tg3 i would consider trying an older kernel though
22:43 p1ke_ left #gluster
22:44 rcoup joined #gluster
22:51 rjoseph joined #gluster
23:02 isomorphic joined #gluster
23:10 thisisdave arg, no luck in 3.3.1-15 forcing rdma on the mount of a tcp,rdma volume. :-/
23:20 thisisdave it seems that "mount -t glusterfs -o transport=rdma ..." isn't sufficient--the volume needs the .rdma suffix ... can anyone confirm?
23:21 thisisdave ...scratch that; even then, I'm getting tcp transport in the volume log (on the client)
23:28 thisisdave am I right in thinking that the transport type indicated in the client's log is the (only) transport type used by the client?
23:31 StarBeast joined #gluster
23:32 joelwallis joined #gluster
23:32 nightwalk joined #gluster
23:33 thisisdave figured it out: needed not the '.rdma' suffix but rather '.rdma-fuse' suffix on the volume name
23:51 nightwalk joined #gluster
23:54 nightwalk joined #gluster
23:58 ultrabizweb joined #gluster
23:58 eryc joined #gluster
23:58 eryc joined #gluster
