
IRC log for #gluster, 2013-05-29


All times shown according to UTC.

Time Nick Message
00:11 yinyin joined #gluster
00:16 a2 joined #gluster
00:17 devoid1 joined #gluster
00:48 jclift_ joined #gluster
00:49 kevein joined #gluster
01:01 erik49 joined #gluster
01:10 john1000 joined #gluster
01:21 majeff joined #gluster
01:26 john1000 hi, I am trying to set up geo-replication with gluster.  Do I need passwordless authentication, and do the master and slave need to be the root user?  I am using AWS EC2 instances that don't allow signing in as root, so I'm not sure how to deal with that.
01:43 JoeJulian john1000: Interesting question... I'm sure the answer's yes, but there's got to be a way around it...
01:43 JoeJulian What if you created a volume for the remote host and geo-replicated to that?
01:47 bharata joined #gluster
01:52 john1000 JoeJulian: I don't think I understand, how would that help
01:53 john1000 I can't georeplicate to a slave without sshing in as root on the slave
01:53 portante joined #gluster
01:53 john1000 the docs also said I could use an alias to a superuser, but I don't know what that means
01:53 JoeJulian Because if it were a volume, the master would mount the slave and replicate without needing a ssh tunnel.
01:53 john1000 surely this has been addressed by others using georeplication with AWS before
01:53 john1000 hmm
01:53 JoeJulian Probably, but nobody's told me about it.
01:54 john1000 i am trying to keep it as vanilla as possible
01:54 JoeJulian An alias to superuser would be another user in /etc/passwd whose uid=1.
01:54 john1000 isn't root uid=0?
01:54 JoeJulian er, 0
01:54 JoeJulian lol
01:54 john1000 hmm
01:54 john1000 that's an idea
01:54 john1000 maybe it will let me do that
01:55 JoeJulian If not, perhaps you can just change "PermitRootLogin" in /etc/ssh/sshd_config
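
A minimal sketch of the sshd change suggested above, assuming a stock Ubuntu EC2 image (the authorized_keys detail varies by AMI and is an assumption here):

    # /etc/ssh/sshd_config on the slave
    PermitRootLogin yes        # or "without-password" to allow key-based root logins only
    # many EC2 images also prepend a command="..." restriction to /root/.ssh/authorized_keys;
    # that usually has to be trimmed back to just the key before root can log in
    service ssh restart        # reload sshd after editing the config
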
01:55 john1000 do you know if there's a way I tell glusterfs what username to try to login to the slave with?
01:56 john1000 ok, I got root login to work
01:56 JoeJulian cool
01:56 john1000 but, I am still showing "faulty" as the status
01:56 JoeJulian Try waiting like 10min
01:57 john1000 I wish it would be more descriptive as to what is faulty
01:57 john1000 10 minutes?
01:57 john1000 that seems like a really long time, there's not even any data to sync
01:57 JoeJulian I think... maybe it's 2... I don't really use it, so my support ability kinda sucks.
01:57 JoeJulian Every time I learn something about it, nobody ever asks me again. :D
01:59 john1000 hmm, there's an actual error in the georeplication/volumename log
02:01 john1000 http://pastebin.com/1Uj46GbS
02:01 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
02:01 john1000 http://fpaste.org/15140/
02:01 glusterbot Title: #15140 Fedora Project Pastebin (at fpaste.org)
02:02 JoeJulian eof error... check the other end?
02:03 john1000 i am trying to figure out which logfile to check on the other end
02:04 theron joined #gluster
02:06 john1000 there's nothing in the geo-replication-slaves directory on the other end
02:07 JoeJulian Is there anything under /var/log/glusterfs?
02:07 john1000 it's weird because it says the start and stop are successful
02:07 JoeJulian I wonder if it's because there's no files?
02:08 john1000 well, i meant there are like 2 files
02:08 john1000 0 byte
02:08 john1000 0 byte for test
02:08 john1000 I can see by checking the auth.log on the slave that the master is connecting when I do the start command
02:10 john1000 etc-glusterfs-glusterd.vol.log just has a bunch of "reading from socket failed. Error (Transport endpoint is not connected)
02:11 john1000 i wish these log messages weren't so cryptic
02:11 john1000 makes it very hard to debug
02:19 JoeJulian That error means that a read function was performed against the socket. All glusterfs received for that read was an ENOTCONN (which isn't very descriptive either)
02:20 majeff joined #gluster
02:20 john1000 but that means it connected to the slave, right?
02:21 JoeJulian Looks like it, yes.
02:21 JoeJulian Sorry, normally I'd be digging through the source by now to see if I could find anything, but I'm having to write a very detailed letter to a programmer who doesn't realize he's going to do it my way.
02:22 john1000 nah it's ok, I appreciate the help
02:22 john1000 I just have a bit of a problem with an http cluster that is killing a lone NFS server, trying to spread out the load and I thought this might be a good way
02:23 JoeJulian seems reasonable.
02:23 john1000 yes, but only if it will stop being faulty :)
02:26 john1000 I found an old message on the list where someone got the same message with AWS, but no one replied, hah
02:29 JoeJulian I'll help you figure this out, but it might have to be after dinner... :/
02:30 john1000 I'll be here until I get it, or until I pass out from tiredness :)  Thank you!
02:41 __Bryan__ joined #gluster
02:43 john1000 anyone else in here care to give it a try in the meantime?
02:45 vshankar joined #gluster
03:07 majeff joined #gluster
03:07 hjmangalam1 joined #gluster
03:14 lanning joined #gluster
03:18 anands joined #gluster
03:25 majeff joined #gluster
03:35 mohankumar__ joined #gluster
03:35 hagarth joined #gluster
03:37 marmoset joined #gluster
03:38 marmoset left #gluster
03:39 hjmangalam1 joined #gluster
03:52 lalatenduM joined #gluster
04:16 aravindavk joined #gluster
04:19 john1000 JoeJulian, you still around?
04:23 awheeler_ joined #gluster
04:26 john1000 or anyone who can help me debug a faulty geo-replication volume?  I am going on about 3.5 hours with no progress, i can't figure out why it is "faulty"
04:28 hagarth john1000: did you check this out - http://gluster.org/community/documentation/index.php/Gluster_3.2:_Troubleshooting_Geo-replication ?
04:28 glusterbot <http://goo.gl/kJsbK> (at gluster.org)
04:28 hjmangalam1 joined #gluster
04:30 john1000 hagarth: Thank you for that link.  I see my error message there, and it says that the solution involves the RPC communication
04:32 john1000 hagarth: but I don't see that any of the pre-reqs are not satisfied, passwordless ssh works fine, fuse is installed, the slave is just a plain directory on the slave, and it currently has 777 permissions, and glusterfs should be in the default location on both master and slave because I installed from the ubuntu packages
04:34 john1000 hagarth: does this look like a correct command to start up the georeplication?  Do I have to tell it to use the root account on the slave? "gluster volume create wwwsrvdata replica 2 peter1.wayfm.com:/export/wwwsrvdata "
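
For comparison, the documented 3.2-era geo-replication commands take an existing master volume plus a slave URL rather than a volume create line; a rough sketch, with the slave host and path invented:

    # run on the master, after the master volume exists and is started
    gluster volume geo-replication wwwsrvdata slavehost:/export/georep-target start
    gluster volume geo-replication wwwsrvdata slavehost:/export/georep-target status
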
04:35 deepakcs joined #gluster
04:46 hagarth john1000: have you started the volume on the slave?
04:46 hagarth oops, disregard that. I did not notice the part about the slave being a plain directory.
04:47 john1000 hagarth: yeah, it's just a plain directory, I didn't see a reason to use a volume, but if there is a reason to, I am open to it
04:48 hagarth john1000: what is the path to gsyncd on the slave?
04:49 hagarth and is the remote_gsyncd set to that path in /var/lib/glusterd/geo-replication/gsyncd.conf ?
04:50 john1000 hagarth: there is a gsyncd that is a bash script, and a gsyncd.py
04:54 john1000 hagarth: when I look at /etc/glusterd/geo-replication/gsyncd.conf, there are two lines with remote_gsyncd in them, one is right under [peersrx . .] and another is under a different peersrx code
04:55 vpshastry joined #gluster
04:57 hagarth john1000: need to specify the path to gsyncd without the .py extension.
04:58 john1000 it is correct under the [peersrx . .] section, but under another peersrx section it is incorrect
05:00 hagarth what is the traceback that you see in the geo-rep logs?
05:04 john1000 hagarth: http://fpaste.org/15147/
05:04 glusterbot Title: #15147 Fedora Project Pastebin (at fpaste.org)
05:05 saurabh joined #gluster
05:07 john1000 hagarth: should I be updating the path to gsyncd on the master or the slave?
05:15 john1000 hagarth: I changed the remote_gsyncd path on the master under the second peersrx section to the correct path, and I got a starting and then an ok under status! So that is exciting.  on the downside, no files are showing up in the slave directory
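
The stanza being edited is roughly the one sketched below; the gsyncd path differs between packages, so the value shown is only an example to be replaced with whatever the install actually provides:

    # /var/lib/glusterd/geo-replication/gsyncd.conf (the chat also references
    # /etc/glusterd/...; use whichever path exists on the install)
    # locate the wrapper script first, e.g.:
    #   dpkg -L glusterfs-server glusterfs-common 2>/dev/null | grep '/gsyncd$'
    [peersrx . .]
    remote_gsyncd = /usr/lib/glusterfs/glusterfs/gsyncd    # example path only, without the .py extension
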
05:17 hagarth john1000: we made some progress ;)
05:17 john1000 hagarth: yes!  And possibly found a bug in the ubuntu packages :)
05:19 john1000 the slave does not ssh back to the master and launch anything, right?  the master just logs in with ssh and runs the gsyncd command, right?
05:26 kshlm joined #gluster
05:32 majeff joined #gluster
05:33 hagarth john1000: yes .. no reverse communication from the slave to the master.
05:34 lalatenduM joined #gluster
05:34 john1000 hagarth: hmm, ok.  Do you have any ideas on what I should be looking at if I am getting status ok, but no copying of files?
05:36 hchiramm_ joined #gluster
05:37 lala_ joined #gluster
05:44 satheesh joined #gluster
05:52 glusterbot New news from newglusterbugs: [Bug 961856] [FEAT] Add Glupy, a python bindings meta xlator, to GlusterFS project <http://goo.gl/yCNTu>
06:01 edoceo JoeJulian: On Geo: Oh, I get it, on Master1 my Slave becomes glusterfs://Slave1/gluster_volume - and because Slave1 also has Slave2 as part of its replica, then Master1 would start talking to Slave2/gluster_volume - right?
06:02 ricky-ticky joined #gluster
06:04 satheesh joined #gluster
06:08 rastar joined #gluster
06:10 guigui1 joined #gluster
06:12 john1000 the geo-replication log on my master is full of Gmaster waiting for being synced, and then done... but nothing is actually showing up on the slave
06:17 jtux joined #gluster
06:27 bulde joined #gluster
06:33 rastar joined #gluster
06:33 vimal joined #gluster
06:33 StarBeast joined #gluster
06:41 jtux joined #gluster
06:52 ekuric joined #gluster
06:58 dobber_ joined #gluster
07:03 lkoranda_ joined #gluster
07:04 majeff joined #gluster
07:07 hybrid5121 joined #gluster
07:08 hchiramm_ joined #gluster
07:11 Ramereth joined #gluster
07:12 StarBeast joined #gluster
07:17 ramkrsna joined #gluster
07:17 ramkrsna joined #gluster
07:22 andreask joined #gluster
07:29 kshlm joined #gluster
07:39 Gugge joined #gluster
07:41 guigui1 joined #gluster
07:55 m0zes joined #gluster
07:56 ctria joined #gluster
07:57 NuxRo http://www.slideshare.net/fullscreen/randybias/state-of-the-stack-april-2013/74 glusterfs at an honourable 8% :)
07:57 glusterbot <http://goo.gl/AhtT7> (at www.slideshare.net)
08:01 samppah hmm.. where is this collected from? :)
08:03 samppah ah, some survey i guess :)
08:03 samppah nice
08:10 ngoswami joined #gluster
08:13 ujjain joined #gluster
08:18 guigui3 joined #gluster
08:19 NuxRo i dont know where they got the data
08:19 NuxRo but looks legit :)
08:20 NuxRo it's good to read all the slide if you are interested in openstack
08:26 mooperd joined #gluster
08:32 hagarth NuxRo: interesting!
08:36 vpshastry joined #gluster
08:36 shireesh joined #gluster
08:40 jiku joined #gluster
08:41 hchiramm_ joined #gluster
08:46 jurrien joined #gluster
08:50 Norky morning
08:51 Norky if I have created a volume with only TCP transport (the default), can I make it support RDMA as well after the fact?
08:51 mtanner_ joined #gluster
08:51 Airbear joined #gluster
08:51 sjoeboo_ joined #gluster
08:53 vpshastry joined #gluster
08:57 vpshastry joined #gluster
08:58 ngoswami joined #gluster
09:01 isomorphic joined #gluster
09:03 ccha joined #gluster
09:05 bulde1 joined #gluster
09:07 coredumb joined #gluster
09:14 hchiramm_ joined #gluster
09:16 hchiramm_ joined #gluster
09:21 bala1 joined #gluster
09:28 rb2k joined #gluster
09:29 bala1 joined #gluster
09:38 duerF joined #gluster
09:40 majeff joined #gluster
09:41 Norky if I have created a volume with only TCP transport (the default), can I make it support RDMA as well after the fact?
09:44 hagarth joined #gluster
09:45 rastar1 joined #gluster
09:56 Rocky__ joined #gluster
10:03 badone joined #gluster
10:07 kshlm joined #gluster
10:11 bala1 joined #gluster
10:14 hagarth joined #gluster
10:16 majeff joined #gluster
10:18 guigui1 joined #gluster
10:22 manik joined #gluster
10:28 ricky-ticky joined #gluster
10:29 bulde joined #gluster
10:42 edward1 joined #gluster
10:44 ninkotech joined #gluster
10:44 andreask joined #gluster
10:44 ninkotech__ joined #gluster
10:48 rastar joined #gluster
10:48 ricky-ticky joined #gluster
10:50 36DAAPQUW joined #gluster
10:53 glusterbot New news from newglusterbugs: [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
11:05 tziOm joined #gluster
11:08 Maskul joined #gluster
11:19 hagarth joined #gluster
11:28 majeff joined #gluster
11:34 shylesh joined #gluster
11:41 isomorphic joined #gluster
11:48 spider_fingers joined #gluster
11:53 balunasj joined #gluster
12:07 robo joined #gluster
12:09 vpshastry1 joined #gluster
12:18 Norky if I have created a volume with only TCP transport (the default), can I make it support RDMA as well after the fact?
12:19 rastar joined #gluster
12:24 glusterbot New news from newglusterbugs: [Bug 962619] glusterd crashes on volume-stop <http://goo.gl/XXzSY> || [Bug 968301] improvement in log message for self-heal failure on file/dir in fuse mount logs <http://goo.gl/GI3SX>
12:26 deepakcs joined #gluster
12:35 morse joined #gluster
12:42 bennyturns joined #gluster
12:44 sysconfig joined #gluster
12:53 bulde joined #gluster
13:02 hagarth joined #gluster
13:03 kbsingh joined #gluster
13:06 aliguori joined #gluster
13:09 piotrektt joined #gluster
13:15 loke joined #gluster
13:17 andrewjs1edge joined #gluster
13:18 majeff joined #gluster
13:44 yinyin joined #gluster
13:49 JoeJulian Norky: I don't think you can add rdma after the volume is created. At least I don't find any command that could do that in my first cursory pass.
13:53 rastar1 joined #gluster
13:56 hjmangalam1 joined #gluster
13:57 bugs_ joined #gluster
14:00 kaptk2 joined #gluster
14:05 spider_fingers left #gluster
14:09 majeff joined #gluster
14:10 zwu joined #gluster
14:25 Norky JoeJulian, nor me, I was wondering if it would be possible to stop, edit the server volfiles and restart
14:26 JoeJulian Probably remove the vol files and edit the info file. The vol files will be recreated.
14:26 JoeJulian Or, of course, you can just delete and recreate the volume, resetting the xattrs to avoid the "already part of a volume" message
14:27 Norky yeah, that's an option I was trying to avoid, it's 20TB of quite important data :)
14:28 JoeJulian The data won't be touched.
14:28 joelwallis joined #gluster
14:28 Norky I know, but I'm still afraid  - external customer
14:29 Norky I suppose I can test locally before doing it properly
14:29 * JoeJulian points out the irony of being afraid of using the software as intended and instead hacking a solution behind the scenes...
14:29 Norky no IB hardware to actually test RDMA, but at least I can turn a vol that is TCP only into one that reports RDMA as an option
14:29 Norky heh, fair enough :)
14:30 JoeJulian I'd probably hack it too. :D
14:30 Norky if deleting volumes and recreating really is "use as intended"
14:30 JoeJulian It has been since 3.1
14:30 Norky I always got the feeling (p[robably groundl;ess assumption on my part) that one shoudl start with empty bricks when creating volumes
14:31 Norky cursed chubby fingers
14:31 JoeJulian As long as you rebuild the volume in the same order, you'll be good.
14:32 JoeJulian ... but, like I say, I personally have nothing against hacking the info files. Just make sure you sync the same hack across all the servers.
14:32 Norky indeed
14:35 JoeJulian Looks like you'll have to create a sample volume with rdma support to see what the value of transport-type should be. Since tcp=0 I'm not sure what rdma or both would look like.
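
A sketch of that probe with throwaway names; the numeric transport-type values are exactly what it is meant to reveal, so none are asserted here:

    # on a test machine, create a scratch volume that requests both transports
    gluster volume create scratch transport tcp,rdma testhost:/data/scratch-brick
    grep transport-type /var/lib/glusterd/vols/scratch/info
    # then, with glusterd stopped, mirror that value into the real volume's info file
    # on every server, restart glusterd, and check "gluster volume info" again
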
14:35 manik joined #gluster
14:37 Norky we've made rdma and rdma,tcp volumes before, but never got rdma working for this customer
14:37 Norky I suppose I should have just created them using both and only used tcp until RH get rdma working properly
14:38 Norky I don't suppose you, or any one, know of a way of testing rdma without InfiniBand hardware? There is such a thing as "RDMA over Converged Ethernet" (RoCE - stupid name) but that, so far as I can tell, depends on certain (10Gb) Ethernet NICs
14:38 nueces joined #gluster
14:39 JoeJulian I've seen other pseudo IB devices. Someone once came in here and asked about them, and tried getting several working, but never reported any success.
14:40 lpabon joined #gluster
14:46 majeff joined #gluster
14:55 __Bryan__ joined #gluster
14:59 majeff joined #gluster
15:00 mohankumar__ joined #gluster
15:01 mohankumar__ joined #gluster
15:08 portante joined #gluster
15:08 daMaestro joined #gluster
15:09 krishna joined #gluster
15:19 devoid joined #gluster
15:22 ccha is it possible to display which fuse clients are connected to the glusterfs volume ?
15:23 semiosis you could look at the tcp connections
15:23 ccha there is no gluster command for that ?
15:23 JoeJulian gluster volume status $vol clients
15:26 ccha oh I mean client as "glusterfs-client"
15:27 erik49 joined #gluster
15:28 JoeJulian Only if you join that client to the peer group
15:28 ccha clients which use mount -t gluserfs on the volume
15:28 JoeJulian There's no way to get that information from the filesystem, no.
15:28 tg2 Joe, any idea why the rebalance is going uber slow? The array it's on can read/write at over 1GB/s... this has been running for nearly 8 hours
15:29 tg2 only 1.1Tb copied
15:29 tg2 http://pastebin.com/GRcJzABz
15:29 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
15:29 tg2 http://pastie.org/pastes/7979917/text?key=bof2wpj4pzoyo8kv85wda
15:29 glusterbot <http://goo.gl/sotxx> (at pastie.org)
15:31 JoeJulian No clue. :/
15:32 JoeJulian I would look at the code to see if there's a schedule or something. Perhaps there's some rate-limit default to prevent overloading the system? I'd look but I have to leave for the day.
15:32 H__ Question : how does one prevent the self-heal daemon from starting ?
15:34 JoeJulian Like a one-off thing, or permanently?
15:52 14WAAW1UM joined #gluster
15:56 erik49 joined #gluster
16:02 H__ permanently
16:02 bala1 joined #gluster
16:12 H__ JoeJulian: permanently
16:20 heuristik joined #gluster
16:21 krishna joined #gluster
16:27 heuristik Oops -- I may have made an error.   New to this.   Am I posting to #gluster?
16:29 dbruhn You are indeed chatting in #gluster
16:29 dbruhn What's your error
16:30 semiosis hello
16:30 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:30 semiosis heuristik: ^^^
16:32 hagarth joined #gluster
16:42 Mo_ joined #gluster
16:46 heuristik Hard to know where to start.   (Thank you for advice -- I understand it is volunteer)   Added new bricks to existing distributed volume for replication and started a self-heal.  After about 20 minutes, a large number of "possible split-brain" errors began to appear in the log files.   This is a large (for us) 3-brick (now 6 with replication) volume with 74TB of data.    Directories, as well as individual files, were listed as being in
16:46 heuristik "possible split brain" state.  Some files that were on the bricks are no longer there.  For now, I remounted the volume by NFS, readonly, and I'm rsyncing 30TB to another server.   Where do I begin trying to find out what happened and what went wrong?  There are many, many "unable to self-heal" errors that look like this: "afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 0-gf1-replicate-1: background meta-data entry self-heal
16:46 heuristik failed on /cfce1/home/bradner" and others that look like this: "[afr-self-heal-metadata.c:472:afr_sh_metadata_fix] 0-gf1-replicate-1: Unable to self-heal permissions/ownership of '/cfce1/home/cl512' (possible split-brain). Please fix the file on all backend volumes."   (Is this too long a question?)
16:52 thomaslee joined #gluster
16:52 samppah heuristik: have you checked if those missing files exist on backend?
16:55 heuristik Unfortunately, yes.  Some that we know were present are no longer on any of the bricks, neither on the original volume bricks nor on the replicated … replication had not completed -- I halted the self-heal daemon when these errors started -- afraid more damage would occur.
17:00 vpshastry joined #gluster
17:00 vpshastry left #gluster
17:12 georgeh|workstat joined #gluster
17:14 portante joined #gluster
17:16 portante left #gluster
17:35 meunierd1 joined #gluster
17:39 meunierd1 Is there a planned timeframe for the 3.4 release?
17:42 partner joined #gluster
17:42 tg2 http://i.imgur.com/Zut2S23.png
17:43 tg2 that's the speed of my remove-brick redistribution... it's on a 10Gbps interface and one of the bricks is on the same server so it's using the loopback interface -- the brick itself can read/write at 5Gbps easily... is there a reason the file redistribution is so slow?  Can it be sped up?
17:45 tg2 I have in 9 hours of redistribution, about 1.4Tb transferred... this seems abnormally slow given the infrastructure
17:50 krishna joined #gluster
17:58 rb2k joined #gluster
18:11 jdarcy joined #gluster
18:12 jdarcy Finally done with my $#@! slides for Summit.
18:15 zaitcev joined #gluster
18:15 H__ Question : how does one prevent the self-heal daemon from ever starting ?
18:16 jdarcy Does cluster.self-heal-daemon not control that?
18:18 H__ looking.   yes. that promises exactly what i need
18:18 H__ testing now
18:22 leaky joined #gluster
18:22 leaky left #gluster
18:26 zwu joined #gluster
18:27 \_pol joined #gluster
18:28 tziOm joined #gluster
18:31 aliguori joined #gluster
18:40 erik49 joined #gluster
18:45 H__ I see no self-heal happening, but I do see a glusterfs on one server that still mentions the /var/lib/glusterd/glustershd/run/glustershd.pid
18:47 H__ correction : on all servers. there is 1 server with an extra glusterfs with this : --xlator-option *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off --xlator-option *replicate*.entry-self-heal=off
18:47 H__ i wonder if this is OK
18:52 H__ after a gluster full stop and start this extra glusterfs process is gone. volume info shows "cluster.self-heal-daemon: off" for all servers. seems good.
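
The option H__ is toggling is set per volume; a minimal example with a placeholder volume name:

    gluster volume set myvol cluster.self-heal-daemon off
    gluster volume info myvol | grep self-heal-daemon    # should now show: cluster.self-heal-daemon: off
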
19:00 jdarcy Cool.  :)
19:03 krishna joined #gluster
19:09 Airbear joined #gluster
19:29 bennyturns joined #gluster
19:30 ixmun1 joined #gluster
19:31 ixmun1 Is it possible to change the replica count of a volume, online?
19:32 tylerflint joined #gluster
19:34 notxarb joined #gluster
19:34 y4m4 joined #gluster
19:35 willy_wolf joined #gluster
19:38 tylerflint does anybody know if hekafs is still alive?
19:44 willy_wolf has anyone made some tests with splunk?
19:45 ixmun joined #gluster
19:47 mtanner__ joined #gluster
19:49 semiosis ixmun: since 3.3.0 yes that is possible, it's done using the add-brick/remove-brick commands.  iirc the relevant parts of ,,(rtfm) show it
19:49 glusterbot ixmun: Read the fairly-adequate manual at http://goo.gl/E3Jis
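
A sketch of the add-brick form that changes the replica count in 3.3+; the server and brick path are invented, and the manual glusterbot links has the full procedure:

    # grow an existing replica 2 volume to replica 3 by adding one brick per replica set
    gluster volume add-brick myvol replica 3 server3:/export/brick1
    # shrinking works the same way via remove-brick with the lower replica count
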
19:50 mindbndr joined #gluster
19:50 semiosis tylerflint: my impression is that the hekafs features will (some day) be merged into glusterfs
19:51 meunierd1 joined #gluster
19:51 tylerflint semiosis: thanks
19:51 ixmun Excellent semiosis, thanks for RTFMing me.
19:51 bet_ joined #gluster
19:51 semiosis any time :)
19:52 johnmark_ joined #gluster
19:52 tylerflint love glusterfs, I think we prematurely jumped on board and used it as a multitenant filesystem
19:52 ninkotech_ joined #gluster
19:52 a2_ joined #gluster
19:52 tylerflint we got it to work with bindmounts, but throttling is a huge issue
19:52 semiosis tylerflint: i'm sure jdarcy would like to hear about your multi-tenant use
19:52 semiosis but seems he's not here
19:55 rosco joined #gluster
19:57 jbrooks joined #gluster
19:57 dbruhn joined #gluster
20:00 badone joined #gluster
20:01 clutchk tylerflint: Are you just mounting subdirectories in fuse using bindmounts
20:01 clutchk ?
20:03 tylerflint yep
20:03 semiosis i've done that (not for multitenant)
20:03 semiosis works pretty good
20:03 clutchk sweet
20:03 clutchk anything i should look out for if i try it?
20:03 tylerflint yeah, it works well, until one client hammers the IO and all the other tenants suffer
20:04 semiosis clutchk: it should "just work"
20:04 clutchk cool thx
20:04 tylerflint also, if you're using linux, be aware the load is disk io bound
20:05 tylerflint so when gluster is struggling because one client is issuing a thousand stat() ops triggering a health check, your server will look like it's on fire
20:05 semiosis s/will look like it's/be/ ;)
20:06 tylerflint yeah :)
20:08 rb2k joined #gluster
20:08 stickyboy haha
20:09 mooperd joined #gluster
20:09 tylerflint so yeah, bind-mounts accomplish isolation quite well, but fair share scheduling is up in the air
20:09 tylerflint oh wait, yes there's one thing to be aware of!
20:10 tylerflint if you bind-mount a subdir of a fuse client mount, and then a umount is issued on the subdir, it will kill the parent mount
20:10 tylerflint then all the other subdirs get transport not connected errors
20:10 semiosis wow
20:10 tylerflint you'll have to patch the kernel
20:11 tylerflint it's not a bad patch though
20:11 stickyboy wow...
20:12 tylerflint it's a fuse issue, not a gluster issue
20:12 tylerflint and it's a "feature"
20:12 tylerflint let's see if I can look it up
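
The bind-mount pattern being described boils down to something like this; the paths and volume name are invented:

    # one fuse mount of the whole volume, then one bind mount per tenant subdirectory
    mount -t glusterfs server1:myvol /mnt/myvol
    mount --bind /mnt/myvol/tenant-a /srv/tenant-a
    mount --bind /mnt/myvol/tenant-b /srv/tenant-b
    # caveat from above: on an unpatched kernel, unmounting one bind mount can take
    # the parent fuse mount down and leave every other tenant with transport errors
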
20:16 ska Can a single server gluster fs perform as well as an NFS server?
20:19 semiosis ska: why would that even matter?
20:19 semiosis if performance matters, scale out
20:20 mooperd joined #gluster
20:21 Chocobo joined #gluster
20:24 tylerflint anybody "successfully" run gluster on ipv6?
20:24 andreask joined #gluster
20:25 ixmun If I `mount -t glusterfs 127.0.0.1:/test /mnt/te/`, nothing happens. If I `mount -t glusterfs 127.0.0.1:/test te/`, Transport endpoint is not connected if I ls. Any ideas why?
20:26 semiosis ixmun: check your client log files... /var/log/glusterfs/mnt-te-.log and te-.log (the log file is named for the mount point path)
20:27 semiosis ixmun: usual causes of Transport endpoint not connected are 1) volume stopped, 2) iptables blocking, 3) hostname resolution
20:27 ixmun 0-glusterfs: failed to get the 'volume file' from server
20:27 ixmun uhm, maybe one of those
20:27 semiosis ixmun: ,,(pasteinfo)
20:27 glusterbot ixmun: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
20:28 semiosis oh also when mounting with -t glusterfs you shouldn't put a / in the volume name, so 127.0.0.1:test instead of 127.0.0.1:/test (though it shouldn't cause problems if it's there)
20:28 johnmark tylerflint: heya - you should definitely check out jdarcy when he's here
20:28 johnmark hrm... he usually is
20:28 ixmun semiosis: http://fpaste.org/15324/59329136/, thanks for help
20:29 semiosis ixmun: your volume name is "testvol" so your mount command should specify 127.0.0.1:testvol
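
Putting those points together, a working mount for this volume would look roughly like this (the client log name is inferred from the mount point and is an assumption):

    mount -t glusterfs 127.0.0.1:testvol /mnt/te
    # if it still fails, check the client log named after the mount point, e.g.:
    tail -n 50 /var/log/glusterfs/mnt-te.log
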
20:30 ixmun &%#&%&$# that tutorial is not clear then!!!!!
20:30 ixmun (one that I found myself and won’t keep reading for that reason :P)
20:30 semiosis link?
20:30 ixmun http://www.howtoforge.com/high-availability-storage-with-glusterfs-3.2.x-on-ubuntu-11.10-automatic-file-replication-across-two-storage-servers
20:30 glusterbot <http://goo.gl/kxfdQ> (at www.howtoforge.com)
20:31 semiosis howtofail
20:31 semiosis that site consistently causes trouble
20:31 ixmun I recall someone here telling me that I should avoid howtoforge... most likely it was you.
20:31 semiosis lost count of how many people turn up in here stumbling like yourself because of their lackluster articles
20:31 semiosis :)
20:32 semiosis glusterbot: meh
20:32 glusterbot semiosis: I'm not happy about it either
20:32 ixmun now that we are talking about this... is it (or will it be) possible to not always have to specify the absolute path to a brick when adding one to a volume?
20:32 ixmun (or should it be a good idea to)
20:37 dbruhn what is the command to see which bricks a file is stored on?
20:38 semiosis ~pathinfo | dbruhn
20:38 glusterbot dbruhn: find out which brick holds a file with this command on the client mount point: getfattr -d -e text -n trusted.glusterfs.pathinfo /client/mount/path/to.file
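
As a usage note, running that against a concrete file on the client mount prints the backend brick locations holding it; the path below is invented for illustration:

    getfattr -d -e text -n trusted.glusterfs.pathinfo /mnt/myvol/some/dir/file.txt
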
20:42 erik49 joined #gluster
20:44 erik49_ joined #gluster
20:46 Airbear joined #gluster
21:00 ixmun Damn... cannot get inotify to work with a glusterfs mount.
21:00 rb2k joined #gluster
21:08 semiosis yeahhh
21:10 ixmun It is like Murphy having fun with me... every time I seek for a feature, it just happens to be the only one not implemented in the software I use...
21:18 tylerflint left #gluster
21:34 andrewjsledge joined #gluster
21:51 devoid joined #gluster
21:55 notxarb left #gluster
22:12 rb2k joined #gluster
22:28 Airbear joined #gluster
22:32 robo joined #gluster
22:41 duerF joined #gluster
22:46 krishna joined #gluster
22:50 daMaestro joined #gluster
23:02 plarsen joined #gluster
23:19 hjmangalam1 joined #gluster
23:25 sonne joined #gluster
23:35 StarBeast joined #gluster
23:41 theron joined #gluster
23:42 majeff joined #gluster
23:55 vpshastry joined #gluster
23:57 hjmangalam1 joined #gluster
