
IRC log for #gluster, 2013-04-04


All times shown according to UTC.

Time Nick Message
00:01 glusterbot New news from newglusterbugs: [Bug 948086] concurrent ls on NFS results in inconsistent views <http://goo.gl/Z5tXN> || [Bug 948087] concurrent ls on NFS results in inconsistent views <http://goo.gl/hH9Ok> || [Bug 948088] concurrent ls on NFS results in inconsistent views <http://goo.gl/vA1wk>
00:12 JoeJulian a2: He said he'll be back at around 1300GMT
00:13 a2 oh ok.. might not be around then
00:15 a2 argh, rpmbuild on glusterfs.spec seems to be broken on RHEL
00:31 glusterbot New news from resolvedglusterbugs: [Bug 948088] concurrent ls on NFS results in inconsistent views <http://goo.gl/vA1wk>
00:33 bit4man joined #gluster
00:44 kevein joined #gluster
00:52 puebele joined #gluster
01:04 __Bryan__ joined #gluster
01:06 bala joined #gluster
02:51 portante joined #gluster
02:54 rmdashrf joined #gluster
03:09 vshankar joined #gluster
03:13 bulde joined #gluster
03:34 sgowda joined #gluster
03:57 bulde joined #gluster
03:58 mohankumar joined #gluster
04:04 lalatenduM joined #gluster
04:12 red_solar joined #gluster
04:15 MinhP_ joined #gluster
04:16 Norky_ joined #gluster
04:16 Zenginee1 joined #gluster
04:18 mnaser joined #gluster
04:25 rastar joined #gluster
04:26 hagarth joined #gluster
04:37 red-solar joined #gluster
04:45 yinyin joined #gluster
04:54 piotrektt_ joined #gluster
05:05 vpshastry joined #gluster
05:05 aravindavk joined #gluster
05:07 bulde joined #gluster
05:14 shylesh joined #gluster
05:20 bala joined #gluster
05:21 shireesh joined #gluster
05:35 xymox joined #gluster
05:36 furkaboo_ joined #gluster
05:37 eryc joined #gluster
05:37 ladd joined #gluster
05:37 eryc joined #gluster
05:40 lkoranda joined #gluster
05:45 hagarth joined #gluster
05:54 raghu joined #gluster
05:56 bharata joined #gluster
05:59 puebele1 joined #gluster
06:07 Nevan joined #gluster
06:07 jtux joined #gluster
06:16 ollivera joined #gluster
06:28 ricky-ticky joined #gluster
06:31 vpshastry joined #gluster
06:44 saurabh joined #gluster
06:57 ekuric joined #gluster
06:58 mohankumar joined #gluster
07:01 bala joined #gluster
07:01 vimal joined #gluster
07:02 glusterbot New news from newglusterbugs: [Bug 920434] Crash in index_forget <http://goo.gl/mH2ks>
07:04 ctria joined #gluster
07:05 tjikkun_work joined #gluster
07:08 14WAALCQM joined #gluster
07:09 andreask joined #gluster
07:19 ngoswami joined #gluster
07:23 yinyin_ joined #gluster
07:24 mohankumar joined #gluster
07:33 ProT-0-TypE joined #gluster
08:01 hagarth joined #gluster
08:07 _ilbot joined #gluster
08:07 Topic for #gluster is now  Gluster Community - http://gluster.org | Q&A - http://community.gluster.org/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - http://irclog.perlgeek.de/gluster/
08:17 yinyin_ joined #gluster
08:31 vpshastry joined #gluster
08:33 glusterbot New news from resolvedglusterbugs: [Bug 948087] concurrent ls on NFS results in inconsistent views <http://goo.gl/hH9Ok> || [Bug 761733] test bug <http://goo.gl/Hl8CK>
08:37 yinyin- joined #gluster
08:47 tryggvil__ joined #gluster
09:01 dobber joined #gluster
09:03 glusterbot New news from newglusterbugs: [Bug 948178] do proper cleanup of the graph <http://goo.gl/BOcLn>
09:12 20WAB68IE joined #gluster
09:22 shylesh joined #gluster
09:27 the-me semiosis: I just committed two updates for your r4436
09:28 shireesh joined #gluster
09:29 mooperd joined #gluster
09:32 edward1 joined #gluster
09:36 duerF joined #gluster
09:50 dustint joined #gluster
09:50 manik joined #gluster
09:51 jtux joined #gluster
09:54 ricky-ticky joined #gluster
09:56 sripathi joined #gluster
10:12 bala joined #gluster
10:13 aravindavk joined #gluster
10:17 manik joined #gluster
10:21 shireesh joined #gluster
10:24 hagarth joined #gluster
10:36 mooperd joined #gluster
10:37 hircus joined #gluster
10:39 yinyin joined #gluster
10:42 mooperd joined #gluster
10:44 mooperd joined #gluster
10:47 ricky-ticky joined #gluster
10:53 clag_ joined #gluster
10:54 manik joined #gluster
10:57 vpshastry joined #gluster
11:03 glusterbot New news from newglusterbugs: [Bug 923540] features/compress: Compression/DeCompression translator <http://goo.gl/l5Y0Z>
11:09 kkeithley1 joined #gluster
11:21 ricky-ticky joined #gluster
11:27 lalatenduM joined #gluster
11:30 lpabon joined #gluster
11:30 aravindavk joined #gluster
11:33 glusterbot New news from newglusterbugs: [Bug 895528] 3.4 Alpha Tracker <http://goo.gl/hZmy9> || [Bug 948039] rebase to grizzly ($HEAD) <http://goo.gl/NK24m> || [Bug 948041] rebase to grizzly (release-3.4) <http://goo.gl/77lzU>
11:35 noche_ joined #gluster
11:36 camel1cz joined #gluster
11:40 yinyin joined #gluster
11:41 ricky-ticky joined #gluster
11:44 Peanut joined #gluster
11:45 andreask joined #gluster
11:46 Peanut Hi folks - I would like to build a pair of machines, each hosting a number of VMs. I would like to be able to 'live migrate' VMs from one machine to the other. Would gluster be a good way to share the storage between both machines, so each server can start all the VM images?
11:48 bala joined #gluster
11:48 ricky-ticky joined #gluster
11:53 aravindavk joined #gluster
11:53 lpabon joined #gluster
11:54 ricky-ticky joined #gluster
11:54 sgowda joined #gluster
11:54 shireesh joined #gluster
11:55 camel1cz joined #gluster
11:58 rcheleguini joined #gluster
11:59 H__ Peanut: I say yes :)
12:04 Peanut H__: thanks - I'm currently reading up on the subject, but it's always difficult to gauge the current state of a project like gluster from Googling.
12:06 Peanut Is that a common setup? It seems IO performance might be an issue?
12:08 Peanut Would the hosts be mounting the gluster filesystem, or each of the VMs?
12:08 mooperd joined #gluster
12:12 H__ you can do both
12:12 H__ here's an interesting read for you : Nice GlusterFS deployment read on overclockers.com.au -> http://forums.overclockers.com.au/showthread.php?t=1078674
12:12 glusterbot <http://goo.gl/DIZeA> (at forums.overclockers.com.au)
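A minimal sketch of the "mount on the hosts" variant of the two-server setup Peanut describes; the server, volume and target paths below are placeholders, and the backup-server mount option name varies between releases:
    # On each host, mount the replicated volume that holds the VM images
    mount -t glusterfs server1:/vmstore /var/lib/libvirt/images

    # Or via /etc/fstab, with a second server to fall back to for fetching the volfile
    # server1:/vmstore  /var/lib/libvirt/images  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0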
12:13 * H__ waves from Eindhoven
12:16 Peanut Ah.. see, your nick looked familiar, H__ ;-)
12:17 H__ yup :) yours too.
12:18 H__ that overclockers blog is a real nice intro IMO, you should read it
12:18 Peanut Yup, reading it now
12:20 camel1cz left #gluster
12:21 ultrabizweb joined #gluster
12:23 bala joined #gluster
12:26 deepakcs joined #gluster
12:26 plarsen joined #gluster
12:39 vpshastry joined #gluster
12:43 hagarth joined #gluster
12:55 hybrid512 joined #gluster
12:58 sgowda left #gluster
13:01 flrichar joined #gluster
13:07 satheesh joined #gluster
13:07 dustint joined #gluster
13:14 dustint joined #gluster
13:15 robos joined #gluster
13:19 portante kkeithley1: you there?
13:19 theron_ joined #gluster
13:21 rwheeler joined #gluster
13:24 hagarth joined #gluster
13:26 theron joined #gluster
13:27 satheesh joined #gluster
13:32 jclift joined #gluster
13:39 satheesh1 joined #gluster
13:47 hagarth joined #gluster
13:48 manik joined #gluster
13:50 nueces joined #gluster
13:54 bennyturns joined #gluster
14:00 Supermathie joined #gluster
14:05 bugs_ joined #gluster
14:05 robos joined #gluster
14:09 plarsen joined #gluster
14:11 bennyturns joined #gluster
14:14 Supermathie avati_: I'm going to try out the patch against the 3.3.1-11.el6 source rpm
14:16 satheesh joined #gluster
14:17 Supermathie gah
14:17 Supermathie 5542 root      20   0     0    0    0 R 100.0  0.0 901:44.08 lockd
14:17 Supermathie on one of my nodes
14:19 Supermathie that's probably the kernel server lockd...  eh?
14:19 jdarcy joined #gluster
14:31 portante joined #gluster
14:35 fleducquede joined #gluster
14:44 johnmark heya - I'm thinking of having a test day next Tuesday - for the beta of 3.4
14:45 dbruhn__ joined #gluster
14:46 hagarth johnmark: sounds good!
14:46 samppah johnmark: has there been gluster test days before?
14:46 samppah any example test cases around? :)
14:50 Peanut For VMs you really should be using 3.3, it seems? Ubuntu Raring will be shipping with 3.2.7 apparently :-(
14:50 mooperd joined #gluster
14:52 samppah Peanut: ,,(ppa)
14:52 glusterbot Peanut: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
14:53 samppah and yes, 3.3 is recommended for VM images
14:53 * Peanut likes glusterbot (and samppah ;-)
14:57 daMaestro joined #gluster
14:57 johnmark samppah: yeah, leading up to 3.3
14:58 johnmark Peanut: really? 3.2.7?
14:58 johnmark that's... disappointing
14:58 Peanut 3.2.7-3ubuntu1
15:00 johnmark gah. I have no idea why they would do that
15:00 johnmark except to screw us over
15:00 samppah Peanut: lol :)
15:01 semiosis yay!  someone merged 3.2.7 \o/
15:01 kkeithley1 Sounds like a good selling point for Fedora18 if you ask me. ;-)
15:01 semiosis been feeling bad i haven't put time into that, but after doing all the hard work I hoped someone else would pick up the ball and do the merge now that its just a matter of following the ubuntu procedures
15:02 semiosis community FTW!
15:02 Peanut johnmark: you know, I get exactly the same responses in e.g. #openldap. 'Why are you asking about such an old version?'
15:04 kkeithley1 btw, swift-1.8.0 (grizzly) is looking good for 3.4.
15:04 kkeithley1 for ufo in 3.4
15:04 Peanut semiosis: that DNS lookup failure in the PPA looks quite suspect, is that affecting anyone else apart from you?
15:04 johnmark Peanut: heh :)
15:05 johnmark semiosis: ah... well, you can be forgiven for having other things to do :)
15:05 johnmark kkeithley1: nice
15:06 semiosis johnmark: i pour my efforts into the ppa packages, thats the best way to get the latest releases to people and be able to fix packaging bugs quickly
15:06 johnmark semiosis: yup yup. and I am very grateful for the work you do there
15:06 johnmark and not just me, I bet
15:06 semiosis johnmark: i love ubuntu, but like any distro their policies and procedures slow things down
15:06 semiosis :)
15:06 johnmark semiosis: yeah, that seems to be what they're for :(
15:07 johnmark and fedora is no better in that dept. Have you seen what packagers have to do upfront?
15:08 ekuric1 joined #gluster
15:08 semiosis mark shuttleworth recently blogged on the topic of rolling releases and while not endorsing a rolling release as such, he did suggest that PPAs would play a larger role going forward.  the LTS + PPA trend seems to be taking off
15:09 semiosis so you can have a stable core system (LTS) for prod, and have the latest stable releases of your most important software from official project PPAs
15:09 semiosis at least that was what i took away from the post
15:10 Supermathie The approach does have a lot of merit. "Give me a stable base, then let me put the latest version of the things I really care about on top of it."
15:10 Peanut That'll work as long as you don't run into any interdependencies. Which is one of the reasons I'm moving to VMs now: I can have one OS install per important service.
15:11 kkeithley1 My take-away is a bit more cynical.
15:12 Peanut Do tell?
15:12 kkeithley1 The community does the work...
15:13 kkeithley1 But who gets the $$$?
15:14 ProT-0-TypE joined #gluster
15:14 semiosis not me lol
15:17 dbruhn__ can anyone tell me what evens cause the <vol>-fuse.vol and trusted-<vol>fuse.vol files to be overwritten?
15:17 dbruhn__ s/evens /events
15:18 ekuric1 left #gluster
15:20 portante` joined #gluster
15:20 johnmark kkeithley1: yeah, but it's mostly self-interest work
15:21 johnmark as in, it improves the working life of community members
15:21 manik joined #gluster
15:21 johnmark as well as, of course, improving the environment for companies that invest in the projects + community
15:21 dbruhn__ I will pay someone to fix a bug in the software for me
15:21 dbruhn__ lol
15:22 dbruhn__ 100% serious
15:22 kkeithley1 sure, but nobody's getting trips to the ISS from the sales of Fedora and the work the community puts in to make Fedora.
15:23 kkeithley1 and I'll just stop before I get my foot even further in my mouth than it already is. ;-)
15:23 semiosis hahahaa
15:23 dbruhn__ no takers?
15:23 dbruhn__ damn
15:23 johnmark teehee
15:24 johnmark kkeithley1: no, but there are entire ecosystems that exist thanks to the success of some of these projects
15:24 johnmark they build businesses on supporting or providing some unique feature
15:24 johnmark in theory :)
15:25 kkeithley1 I have no problem with most of that. I am, after all, a capitalist
15:26 johnmark heh :)
15:29 saurabh joined #gluster
15:31 johnmark kkeithley1: hey - know of any repos with a modern version of ruby? that complement rhel/centos installs?
15:32 wN joined #gluster
15:32 zaitcev joined #gluster
15:32 Supermathie Does gluster NFS rely on the external lockd? Or does it have an internal one?
15:33 kkeithley1 johnmark: er, no......
15:33 ndevos Supermathie: it has an internal one, you should not use the distribution one
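A rough sketch of what ndevos means on a RHEL/CentOS 6 style server: the kernel NFS server (which owns lockd) has to be stopped so the Gluster NFS server can register with rpcbind instead; rpcbind itself stays up. Service names are RHEL6 assumptions:
    # Stop the kernel NFS server so it does not hold the NFS/NLM registrations
    service nfs stop
    chkconfig nfs off

    # rpcbind must keep running; the Gluster NFS server registers with it
    service rpcbind start
    chkconfig rpcbind on

    # The Gluster NFS process may then need a restart to re-register,
    # e.g. by restarting glusterd on that node
    service glusterd restart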
15:33 johnmark kkeithley1: what? you're not a ruby guy??? heh ;)
15:33 Supermathie johnmark: rvm works rather nicely
15:33 johnmark Supermathie: ah, thanks for the top
15:33 johnmark er tip
15:33 * johnmark investigates
15:33 Supermathie lemme point you at something…
15:34 ndevos johnmark: sometimes EPEL has newer versions of packages too, not sure about ruby though
15:34 glusterbot New news from newglusterbugs: [Bug 920332] Mounting issues <http://goo.gl/TYQNi>
15:35 Supermathie johnmark: MMeh, the site is good: https://rvm.io/
15:35 glusterbot Title: RVM: Ruby Version Manager - RVM Ruby Version Manager - Documentation (at rvm.io)
15:35 * Supermathie uses RVM on the Discourse servers
15:36 kkeithley1 @repos
15:36 glusterbot kkeithley1: See @yum, @ppa or @git repo
15:36 kkeithley1 @yum
15:36 glusterbot kkeithley1: I do not know about 'yum', but I do know about these similar topics: 'yum repo', 'yum repository', 'yum33 repo', 'yum3.3 repo'
15:36 kkeithley1 @yum repo
15:36 glusterbot kkeithley1: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
15:36 johnmark Supermathie: nice! thanks
15:39 tryggvil__ joined #gluster
15:39 lh joined #gluster
15:41 hagarth joined #gluster
15:44 aravindavk joined #gluster
15:45 kkeithley1 Ruby, that's kinda like Perl, right? ;-)
15:45 kkeithley1 But not as good as Python
15:48 semiosis johnmark: use rvm, but dont install rvm with rpm/yum, do the curl | sh from rvm.io
15:48 semiosis it's really nice
15:48 semiosis johnmark: what are you doing with ruby?
15:48 Supermathie johnmark: Yeah, it may be a good idea to clear any distro ruby from your system first. Keeps things clean.
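A minimal sketch of the curl-based install semiosis and Supermathie describe; the installer URL and Ruby version are examples only, so check rvm.io for the current command (and read the script before piping it into a shell):
    # Install RVM for the current user
    curl -L https://get.rvm.io | bash -s stable
    source ~/.rvm/scripts/rvm

    # Install a newer Ruby and make it the default (1.9.3 is just an example version)
    rvm install 1.9.3
    rvm use 1.9.3 --default
    ruby -v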
15:57 manik joined #gluster
15:58 aliguori joined #gluster
15:59 _NiC joined #gluster
16:26 wN joined #gluster
16:33 Mo____ joined #gluster
16:51 johnmark semiosis: putting up an instance of octopress
16:51 johnmark which requires ruby 1.9.x
16:52 semiosis cool
17:03 InnerFIRE joined #gluster
17:04 plarsen joined #gluster
17:07 failshell joined #gluster
17:10 failshell hello. does anyone know of monitoring plugins for gluster?
17:11 bala joined #gluster
17:12 semiosis my ,,(puppet) module has some nagios checks in it
17:12 glusterbot (#1) https://github.com/semiosis/puppet-gluster, or (#2) https://github.com/purpleidea/puppet-gluster
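As an illustration of the sort of check those modules wrap, here is a small sketch that alarms on disconnected peers; it assumes the plain-text `gluster peer status` output contains the word "Disconnected" for an unreachable peer, which can differ between releases:
    #!/bin/bash
    # Nagios-style peer check (sketch only; run on a node in the trusted pool)
    disconnected=$(gluster peer status | grep -c 'Disconnected')
    if [ "$disconnected" -gt 0 ]; then
        echo "WARNING: $disconnected gluster peer(s) disconnected"
        exit 1
    fi
    echo "OK: all gluster peers connected"
    exit 0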
17:14 camel1cz joined #gluster
17:18 johnmark failshell: and soon https://forge.gluster.org/puppet-gluster
17:18 glusterbot Title: puppet-gluster - Gitorious (at forge.gluster.org)
17:19 failshell https://code.google.com/p/glusterfs-status/
17:19 glusterbot Title: glusterfs-status - CLI Tool and Nagios Plugin to monitor connection state of GlusterFS Nodes - Google Project Hosting (at code.google.com)
17:19 failshell this seems interesting
17:20 H__ indeed
17:20 johnmark failshell: that is interesting
17:20 * johnmark looks
17:20 H__ now an error log parser
17:20 johnmark something else to mirror on the forge :)
17:20 failshell well, it would be better in Ruby
17:20 failshell but it'll have to do : )
17:20 H__ where most real errors seem to be buried in fake error noise
17:21 H__ gluster has a world to win on that matter
17:21 johnmark H__: +1000
17:32 camel1cz joined #gluster
17:36 camel1cz left #gluster
17:39 bala joined #gluster
17:53 ramkrsna joined #gluster
17:53 ramkrsna joined #gluster
17:54 ctria joined #gluster
18:01 disarone joined #gluster
18:02 flrichar joined #gluster
18:06 lpabon joined #gluster
18:20 jskinner_ joined #gluster
18:20 hagarth joined #gluster
18:29 jskinn___ joined #gluster
18:33 failshell is there some kind of API to talk to the cluster?
18:33 failshell something that would allow me to get information about peers for example
18:36 semiosis failshell: gluster command --xml
18:36 semiosis failshell: gluster peer status --xml
18:36 semiosis for example
18:37 failshell gluster: unrecognized option '--xml'
18:39 semiosis what version are you using?
18:39 failshell 3.2.7
18:40 failshell part of EPEL6
18:40 semiosis hmm i thought 3.2 had --xml
18:42 failshell i could always upgrade to 3.3
18:42 failshell there's RPMs available
18:43 semiosis probably a good idea, but see ,,(3.3 upgrade notes)
18:43 glusterbot http://goo.gl/qOiO7
18:45 failshell ish
18:45 64MACX37H joined #gluster
18:49 92AAACSSD joined #gluster
18:49 66MAAHUPZ joined #gluster
18:56 Supermathie failshell: [michael@fearless1 ~]$ sudo gluster peer status --xml
18:56 Supermathie <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
18:56 Supermathie <cliOutput><opRet>0</opRet><opErrno>0</opErrno><opErrstr></opErrstr><peerStatus><peer><uuid>767bf095-3d21-42c0-a5ff-eff60b22c34f</uuid><hostname>fearless2</hostname><connected>1</connected><state>3</state><stateStr>Peer in Cluster</stateStr></peer></peerStatus></cliOutput>
18:56 Supermathie gluster 3.3.1 on RHEL6, yep.
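A minimal sketch of pulling fields out of that --xml output, assuming an xmllint build with --xpath support and the element names shown in Supermathie's paste:
    # Number of peers reporting connected=1
    gluster peer status --xml | xmllint --xpath 'count(/cliOutput/peerStatus/peer[connected=1])' -

    # Hostname and state string of every peer
    gluster peer status --xml | xmllint --xpath '/cliOutput/peerStatus/peer/hostname | /cliOutput/peerStatus/peer/stateStr' -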
18:58 Supermathie Hey guys, is getting this every 3 seconds normal or indicative of an error?
18:58 Supermathie [2013-04-04 14:58:22.707925] I [client.c:2090:client_rpc_notify] 0-gv0-client-5: disconnected
18:58 semiosis I means INFO
18:58 Supermathie s/error/misconfiguration
18:59 semiosis so it's probably not indicative of an error
19:00 Supermathie [2013-04-04 15:00:13.194324] E [nfs3.c:4741:nfs3svc_fsinfo] 0-nfs-nfsv3: Error decoding arguments
19:00 Supermathie Now *that*'s an error :)
19:00 NuxRo http://www.h-online.com/open/news/item/OpenStack-goes-Grizzly-with-seventh-release-1835369.html
19:00 glusterbot <http://goo.gl/qcvxA> (at www.h-online.com)
19:01 NuxRo they mention gluster, anybody has more details about grizzly & gluster?
19:01 semiosis @ufo
19:01 glusterbot semiosis: I do not know about 'ufo', but I do know about these similar topics: 'ufopilot'
19:04 disarone left #gluster
19:05 johnmark NuxRo: there's also cinder integration
19:05 johnmark NuxRo: see http://www.gluster.org/community/documentation/index.php/GlusterFS_Cinderhttp://www.gluster.org/community/documentation/index.php/GlusterFS_Cinder
19:05 glusterbot <http://goo.gl/Xk568> (at www.gluster.org)
19:06 johnmark oops
19:06 johnmark NuxRo: http://www.gluster.org/community/documentation/index.php/GlusterFS_Cinder
19:06 glusterbot <http://goo.gl/Q9CVV> (at www.gluster.org)
19:15 kkeithley1 wrt ufo and grizzly, I can tell you this https://bugzilla.redhat.com/show_bug.cgi?id=948039
19:15 glusterbot <http://goo.gl/NK24m> (at bugzilla.redhat.com)
19:15 glusterbot Bug 948039: low, high, future, kkeithle, NEW , rebase to grizzly ($HEAD)
19:15 kkeithley1 and this https://bugzilla.redhat.com/show_bug.cgi?id=948041
19:15 glusterbot <http://goo.gl/77lzU> (at bugzilla.redhat.com)
19:15 glusterbot Bug 948041: low, high, future, kkeithle, NEW , rebase to grizzly (release-3.4)
19:18 NuxRo thanks johnmark !
19:24 kkeithley1 and this http://review.gluster.org/4779
19:24 glusterbot Title: Gerrit Code Review (at review.gluster.org)
19:24 jskinner_ joined #gluster
19:26 robinr joined #gluster
19:34 Supermathie How do I get more detail out of gluster than just:
19:34 Supermathie [2013-04-04 15:29:29.449260] E [nfs3.c:4741:nfs3svc_fsinfo] 0-nfs-nfsv3: Error decoding arguments
19:35 glusterbot New news from resolvedglusterbugs: [Bug 823836] s3ql based brick added but no access to it <http://goo.gl/T2dnj>
19:42 xiu left #gluster
19:48 dougalb joined #gluster
19:49 dougalb i have a gluster 3.3 question; how do I manually make changes to the volume configuration i.e. without using the gluster volume set command?
19:57 ferrel joined #gluster
19:58 semiosis dougalb: that's not recommended
19:58 semiosis not supported
19:59 ferrel Hi all, I may have asked this or something similar before but I'm still a but fuzzy...
19:59 ferrel Can I indeed store 100+GB size images files in a GlusterFS volume and expect geo-replication to save usable backups of these images files for disaster recovery if the Virtual Machines are constantly running?
19:59 avati_ Supermathie: the 3-second message is likely because the client is unable to connect to the server. 3 seconds is the default connection retry interval.
20:02 robos joined #gluster
20:04 Supermathie So every time my NFS client (Oracle) is calling NFS FSINFO, I get back 'procedure can't decode params'
20:05 avati_ odd
20:06 avati_ what nfs client is it? what do you mean by "oracle"?
20:06 avati_ is it a solaris nfs client?
20:06 Supermathie Oracle 11g
20:06 Supermathie doing DNFS
20:06 avati_ ah
20:06 avati_ i must admit we have never tested against that
20:08 avati_ i'm wondering if there is some kind of incompatibility b/w our xdr and oracle's
20:08 avati_ though our XDR code is taking the definitions straight out of the rfc and rpcgen'ed it
20:08 Supermathie I can provide packet captures :)
20:09 Supermathie and I am tasked with Getting This Working.
20:09 avati_ can you send a packet capture from oracle and another from df from a linux nfs client?
20:10 ramkrsna joined #gluster
20:10 avati_ you do know that we have never tested gluster as a datbase backend, right?
20:11 Supermathie Now's your chance :)
20:11 avati_ and if oracle issues stable writes, all our releases have this weird inefficient handling of nfs stable writes.. you really want to use the git HEAD which has fixed the issue
20:11 Supermathie It's working decently over kNFS
20:11 avati_ sure would work over knfs
20:11 Supermathie I believe it's sending out unstable writes...
20:12 avati_ ok
20:12 Supermathie Linux df is sending a FSSTAT RPC, not FSINFO
20:13 Supermathie (well those unstable writes are kNFS, but thanks I'll keep that in mind)
20:15 avati_ hmm, no wonder, fsinfo codepath is probably never tested on gluster
20:15 rastar joined #gluster
20:16 dougalb semiosis: i want to set options that are not available in the gluster command like option transport.socket.nodelay on
20:16 avati_ can you send me the packet capture from oracle?
20:17 avati_ the request is only supposed to carry a filehandle and nothing else
20:17 avati_ i wonder how *that* failed decoding at xdr
20:17 semiosis dougalb: iirc there's a way to pass options like that to an individual client
20:18 semiosis why would you want to set that option?
20:18 dougalb semiosis: it is a server option, typically in the vol files
20:18 dbruhn__ Does anyone have an idea of what events cause the .vol files to be written out?
20:19 semiosis dbruhn__: setting volume options, creating/deleting volumes
20:19 dougalb semiosis: working on building out an architecture and performance is a key requirement. extensive digging shows this changes the TCP connection behaviour and shows improvements. it is in the code but off by default as it would change how it behaves from the defaults of the past
20:20 dbruhn__ semiosis , sorry more specifically the fuse.vol files
20:20 semiosis dbruhn__: yeah i know, same answer
20:20 avati_ dbruhn__: nodelay must be on by default
20:20 elyograg I thought everything these days defaulted to tcp nodelay. nagle was good stuff when links were super slow, but this is a different world.
20:20 avati_ what version are you using?
20:20 * semiosis waves to avati_
20:21 avati_ semiosis!
20:21 * avati_ waves back
20:21 dbruhn__ weird, I've made no such changes and all of my files were overwritten by the defaults
20:21 dbruhn__ it breaks my rdma mounts with the defaults
20:21 avati_ dbruhn__: what version are you using?
20:21 semiosis maybe others idk about
20:21 dbruhn__ 3.3.1
20:21 ctria joined #gluster
20:22 avati_ are you sure setting nodelay on improved performance? it is supposed to be on by default already
20:23 avati_ yes, nodelay is on by default, checked 3.3.1 code just now
20:23 dbruhn__ I'm not changing the nodelay settings, I am having issues with the RDMA mounts not working unless I specify the port
20:24 dbruhn__ thats the other guy in the room
20:25 avati_ yeah.. rdma does not (yet) work out of the box
20:25 avati_ would really love to find an rdma maintainer
20:27 dbruhn__ That would be awesome! lol, I am obviously fighting a bit with it and was really confused when all of my fuse.vol files reverted back to the defaults after running for a couple weeks unchanged.
20:27 semiosis @later tell dougalb <avati_> yes, nodelay is on by default, checked 3.3.1 code just now
20:27 glusterbot semiosis: The operation succeeded.
20:28 andreask joined #gluster
20:28 avati_ argh, i just realized i was confusing dbruhn__ with dougalb :(
20:28 * avati_ feels old
20:32 dbruhn__ lol
20:32 Supermathie Does Linux NFS have trouble locking files sitting on an underlying gluster-fuse mount?
20:35 Supermathie Was going to try serving it out with Linux NFS but now I'm having locking problems. BLARGHH
20:36 csu2 joined #gluster
20:44 avati_ Supermathie: you have a firewall on your linux nfs client, i think
20:46 semiosis ,,(ports)
20:46 glusterbot glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
20:47 tryggvil joined #gluster
20:54 avati_ that ports message is now stale.. we use port numbers much higher up (in the IANA private range) for bricks
20:55 avati_ and for NLM, we need the server to be able to connect to the NFS client's NLM port as advertised by rpcbind on the client
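For reference, a sketch of iptables rules matching the port list glusterbot gives above, on a RHEL6-style setup; as avati_ notes, newer builds move brick ports up into the IANA private range (49152+), so confirm the real ports with `gluster volume status` before opening anything:
    # rpcbind/portmap (needed for NFS)
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    # glusterd management (24008 only if rdma is used)
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
    # bricks: one port per brick starting at 24009 on 3.3 (upper bound here is an example)
    iptables -A INPUT -p tcp --dport 24009:24050 -j ACCEPT
    # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT
    service iptables save
    # Note: for NLM the server must also be able to reach each NFS client's
    # NLM port as advertised by rpcbind on that client (see avati_ above).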
21:28 Matthaeus1 joined #gluster
21:56 nueces joined #gluster
21:58 duerF joined #gluster
22:06 tryggvil joined #gluster
22:10 georgeh|workstat joined #gluster
22:19 duerF joined #gluster
22:19 robos joined #gluster
22:29 tristanz joined #gluster
22:29 badone joined #gluster
22:30 tristanz is there any viable solution in gluster right now for snapshotting?  Something akin to GlusterFS + ZFS snapshots.
22:33 jskinner joined #gluster
22:42 elyograg tristanz: I don't think so.  If you're willing to be in "unsupported" land, you can put zfs on linux or go really bleeding edge and use btrfs for the bricks.  Then you can get snapshots on the bricks, but they would not be accessible from the volume mount. this would work reasonably well for non-distributed volumes, but for distributed volumes, you'd have to check snapshots on all the bricks to find the file you need.
22:43 elyograg there would be no central automation for creating snapshots, it would have to be done per-server.
22:47 tristanz I see. Does that have any advantages over multiple servers with ZFS volumes shared with NFS?
22:48 elyograg if a storage server goes down and your volume is replicated, the volume stays up.  By creating a shared IP and using that for NFS mounts, you can have NFS failover too.
22:49 tristanz but if volume is replicated I cannot use ZFS to snapshot?
22:50 elyograg you can't use snapshots directly with Gluster volumes.  The bricks that make up the volume can be snapshotted, though.
22:51 elyograg so you can access the data on the back end, but not from the clients.
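A minimal sketch of the per-brick approach elyograg describes; dataset names are placeholders, the snapshot has to be taken on every brick server separately, and the result is only visible on the bricks, not through the Gluster mount:
    # On each brick server
    zfs snapshot tank/brick1@nightly-20130404

    # Old file versions are then reachable on the brick itself
    ls /tank/brick1/.zfs/snapshot/nightly-20130404/

    # For a distributed volume, a given file lives on only one brick,
    # so you may have to check every server's snapshots to find it.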
22:53 mooperd joined #gluster
22:54 tristanz Thanks, that's helpful.  The goal is to allow users to mount old versions of their home directories, so that's probably not enough.
22:57 tristanz are you aware of any solution that would allow this that scales beyond a single server?
22:57 elyograg Ceph claims to have snapshots.  It's a linux-only solution, though. They recommend that you don't share it with NFS, which is why I'm here and not there. :)
22:58 elyograg I've also heard that Ceph is in a state that some people think should be called Alpha, not even Beta.
22:58 elyograg Because it didn't meet my requirements, I never got around to testing it.
23:00 tristanz I've been looking at them.  The mailing list suggests that snapshotting FS is definitely alpha and may even be cut from stable release.
23:00 tristanz ZFS is perfect, but scaling is open question.
23:01 elyograg yes, that's why we're looking into gluster.  We have SANs with ZFS on them, and Solaris 10 doing a failover NFS cluster to access them.
23:02 elyograg Oracle bought Solaris and made it expensive, plus we have been having occasional stability problems with our SAN hardware. if it weren't for those, we'd just keep doing that, because it does everything we need.
23:03 elyograg The freeware equivalent to Sun Cluster is apparently not there, or hard to do, so we'd have to give Oracle a huge pile of money to upgrade or add to it.
23:05 elyograg At $1000 per year per CPU, adding two dual-CPU boxes means $4000.  every year.
23:06 glusterbot New news from newglusterbugs: [Bug 948643] gluster volume status --xml outputs wrong xml structure <http://goo.gl/0k527>
23:07 tristanz Yeah, not fun.  We have a ton of flexibility but have yet to find something that works.  We're doing scientific HPC where snapshotting is a very desirable feature.  There are some huge Lustre+ZFS installs but the documentation around them is weak.
23:23 apatil joined #gluster
23:23 elyograg Lustre also has no built in redundancy.  I looked heavily at Lustre, got quite a ways into the documentation before I discovered that little gem.
23:24 tristanz apatil actually just asked about this on lustre mailing list, it's the same situation as gluster:  ZFS for the bricks.
23:25 apatil Hi elyograg, yeah, see Andreas Dilger's response at https://lists.01.org/pipermail/hpdd-discuss/2013-April/000126.html. I was wondering if you looked at running ZFS on top of distributed Ceph RBD's?
23:25 glusterbot <http://goo.gl/rEsIs> (at lists.01.org)
23:29 elyograg I never tested Ceph. I need a filesystem that will dynamically grow, replicated block devices don't really have that property.
23:33 elyograg Ceph has some really cool technology, and if I were in a Linux-only shop, I would probably have chosen it.
23:34 apatil I was thinking that one could create new RBD's and add them to the upper ZFS storage pool to grow the filesystem, but don't have much intuition for performance.
