
IRC log for #gluster, 2013-02-08


All times shown according to UTC.

Time Nick Message
00:18 andrei joined #gluster
00:25 zaitcev joined #gluster
00:38 raven-np joined #gluster
01:06 glusterbot New news from newglusterbugs: [Bug 859861] extras don't respect autotools ${docdir} variable <http://goo.gl/jUqNE>
01:10 bala joined #gluster
01:36 glusterbot New news from newglusterbugs: [Bug 882127] The python binary should be able to be overridden in gsyncd <http://goo.gl/cnTha>
01:37 Hymie left #gluster
02:07 kevein joined #gluster
02:32 raven-np joined #gluster
02:33 kkeithley1 joined #gluster
03:07 glusterbot New news from newglusterbugs: [Bug 903873] Ports show as N/A in status <http://goo.gl/0oB6Y>
03:37 shylesh joined #gluster
04:22 lala joined #gluster
04:25 SteveCooling joined #gluster
04:25 pull joined #gluster
04:25 Gugge joined #gluster
04:25 helloadam joined #gluster
04:25 haidz joined #gluster
04:25 DWSR joined #gluster
04:25 flowouffff joined #gluster
04:25 kkeithley joined #gluster
04:25 gm__ joined #gluster
04:25 avati joined #gluster
04:26 mtanner joined #gluster
04:26 joeto joined #gluster
04:26 eightyeight joined #gluster
04:26 trmpet1 joined #gluster
04:26 ackjewt joined #gluster
04:26 VeggieMeat joined #gluster
04:26 johnmark joined #gluster
04:26 mjrosenb joined #gluster
04:26 GLHMarmot joined #gluster
04:26 thekev joined #gluster
04:26 rnts joined #gluster
04:26 JordanHackworth joined #gluster
04:26 haakond joined #gluster
04:26 stigchristian joined #gluster
04:26 hybrid5121 joined #gluster
04:26 kleind joined #gluster
04:26 mnaser joined #gluster
04:26 wNz joined #gluster
04:26 the-me joined #gluster
04:26 m0zes joined #gluster
04:26 tjikkun_ joined #gluster
04:26 cicero joined #gluster
04:26 Zengineer joined #gluster
04:26 Humble joined #gluster
04:26 tru_tru joined #gluster
04:26 morse joined #gluster
04:26 niv joined #gluster
04:26 jiqiren joined #gluster
04:26 stat1x joined #gluster
04:26 elyograg joined #gluster
04:26 maxiepax joined #gluster
04:26 arusso joined #gluster
04:26 Shdwdrgn joined #gluster
04:26 purpleidea joined #gluster
04:26 Guest82249 joined #gluster
04:26 jmara joined #gluster
04:26 smellis joined #gluster
04:26 Ryan_Lane joined #gluster
04:26 johndescs joined #gluster
04:26 errstr joined #gluster
04:26 stigchristian joined #gluster
04:26 raven-np joined #gluster
04:26 hattenator joined #gluster
04:26 slowe joined #gluster
04:26 polenta joined #gluster
04:26 jgillmanjr joined #gluster
04:26 portante joined #gluster
04:26 mynameisbruce_ joined #gluster
04:26 tjikkun joined #gluster
04:26 zoldar joined #gluster
04:26 LoadE_ joined #gluster
04:27 jds2001 joined #gluster
04:27 nhm joined #gluster
04:27 partner joined #gluster
04:27 ndevos joined #gluster
04:27 erik49 joined #gluster
04:27 kevein joined #gluster
04:27 sashko joined #gluster
04:27 badone joined #gluster
04:27 bauruine joined #gluster
04:27 theron joined #gluster
04:27 SEJeff_work joined #gluster
04:27 juhaj joined #gluster
04:27 al joined #gluster
04:27 _Bryan_ joined #gluster
04:27 bronaugh joined #gluster
04:27 social_ joined #gluster
04:27 kkeithley1 joined #gluster
04:27 jiffe98 joined #gluster
04:27 lkoranda joined #gluster
04:27 masterzen joined #gluster
04:27 kshlm|AFK joined #gluster
04:27 dec joined #gluster
04:27 yosafbridge joined #gluster
04:27 efries joined #gluster
04:27 hagarth joined #gluster
04:27 ultrabizweb joined #gluster
04:27 bulde joined #gluster
04:27 romero joined #gluster
04:27 samppah joined #gluster
04:27 furkaboo joined #gluster
04:27 zwu joined #gluster
04:28 Ryan_Lane joined #gluster
04:28 sripathi joined #gluster
04:43 jds2001 joined #gluster
05:12 rastar joined #gluster
05:24 vpshastry joined #gluster
05:29 bala1 joined #gluster
05:32 raghu joined #gluster
05:39 mohankumar joined #gluster
05:45 hagarth joined #gluster
05:58 sgowda joined #gluster
05:59 overclk joined #gluster
06:14 vpshastry joined #gluster
06:32 rastar joined #gluster
06:45 ramkrsna joined #gluster
06:45 ramkrsna joined #gluster
06:58 Nevan joined #gluster
07:00 guigui joined #gluster
07:02 rastar joined #gluster
07:06 vimal joined #gluster
07:24 jtux joined #gluster
07:38 edong23_ joined #gluster
07:47 ekuric joined #gluster
07:53 puebele joined #gluster
07:55 ctria joined #gluster
08:03 jtux joined #gluster
08:03 RobertLaptop joined #gluster
08:04 tjikkun_work joined #gluster
08:06 vpshastry joined #gluster
08:12 puebele1 joined #gluster
08:28 vpshastry joined #gluster
08:29 fghaas joined #gluster
08:29 fghaas left #gluster
08:31 zetheroo joined #gluster
08:32 zetheroo this may be a noobish question, but what if you have 6 physical HDD's in your gluster and one of them has a hardware failure ... what happens to the data that was on that physical drive?
08:33 zetheroo and how hard/easy is it to replace that HDD with another one?
08:55 mgebbe_ joined #gluster
08:55 Staples84 joined #gluster
08:57 ndevos zetheroo: that is why it is advised to use RAID6 on your bricks
08:58 ndevos zetheroo: otherwise, you'll need to ,,(replace brick)
08:58 glusterbot ndevos: Error: No factoid matches that key.
08:59 ndevos hmm, well, there is a gluster command to replace a brick, maybe glusterbot just does not know a url for that
09:00 ndevos but, when you use some RAID it is definitely easier, gluster does not need to know that there was a hardware failure that way
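[Editor's note: a minimal sketch of the brick replacement ndevos refers to, using the GlusterFS 3.3-era CLI; the volume name "myvol" and the brick paths are hypothetical:
    # start migrating data from the failed brick onto its replacement
    gluster volume replace-brick myvol server1:/bricks/old server1:/bricks/new start
    # poll until the migration has completed, then finalize the swap
    gluster volume replace-brick myvol server1:/bricks/old server1:/bricks/new status
    gluster volume replace-brick myvol server1:/bricks/old server1:/bricks/new commit
With RAID under each brick, as ndevos notes, the disk swap is handled below gluster and none of this is needed.]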
09:00 mgebbe_ joined #gluster
09:08 gc joined #gluster
09:13 andrei joined #gluster
09:30 vikumar joined #gluster
09:34 Ryan_Lane joined #gluster
09:36 xymox joined #gluster
09:39 bauruine joined #gluster
09:44 Norky joined #gluster
09:52 hybrid512 joined #gluster
10:03 rastar joined #gluster
10:03 zetheroo but without a RAID configuration, if an HDD failed would the gluster setup fail as well?
10:21 rcheleguini joined #gluster
10:25 rastar joined #gluster
10:32 Staples84 joined #gluster
10:41 dobber joined #gluster
10:46 vikumar joined #gluster
10:46 andrei joined #gluster
10:49 ndevos zetheroo: no, your raid would run in a degraded mode, but all data should stay available, replace the disk, rebuild the raid-data and that's it
10:49 ndevos normally those procedures can be done without gluster noticing anything at all
10:50 zetheroo Sorry, I mean without the "raid6" ... what would occur then?
11:18 joeto joined #gluster
11:25 bitsweat_ joined #gluster
11:25 raven-np joined #gluster
11:28 overclk joined #gluster
11:28 vpshastry joined #gluster
11:36 Norky_ joined #gluster
12:01 w3lly joined #gluster
12:02 overclk joined #gluster
12:15 vpshastry joined #gluster
12:45 fghaas joined #gluster
12:57 edward1 joined #gluster
13:01 fghaas left #gluster
13:35 Staples84 joined #gluster
13:39 steinex joined #gluster
13:41 x4rlos joined #gluster
13:47 theron joined #gluster
13:51 sjoeboo joined #gluster
13:54 plarsen joined #gluster
14:02 fghaas joined #gluster
14:08 rwheeler joined #gluster
14:08 disarone joined #gluster
14:12 aliguori joined #gluster
14:12 fghaas joined #gluster
14:13 jack joined #gluster
14:21 fghaas left #gluster
14:28 ctria joined #gluster
14:30 tjikkun_work joined #gluster
14:35 nueces joined #gluster
14:38 hagarth joined #gluster
14:42 jskinner_ joined #gluster
14:44 stopbit joined #gluster
15:09 andrei hello guys
15:09 andrei i've got a question about adding a 2nd server to gluster volume for a replicated setup
15:10 andrei my question is as follows: can i manually replicate files from the current server to a replica server and after that add the replica server?
15:10 x4rlos andrei: Thinking of saving network bandwidth?
15:10 andrei the reason for doing it manually - when I try doing this automatically using the heal process my clients' mountpoints hang
15:11 andrei they freeze until I stop the replica server
15:11 andrei and because i've got around 5tb of data to copy across I can't wait 10+ hours for the data to get across
15:12 andrei while the clients are unable to access the data
15:12 x4rlos Hmmm. There are better people here who can answer you.
15:12 x4rlos Not sure if an rsync would quite fit the bill.
15:12 andrei so, I thought of copying things across manually and connecting the replica afterwards
15:12 x4rlos out of interest, what version of gluster you running?
15:13 andrei 3.3.0
15:13 andrei i know that 3.3.1 is out
15:13 andrei but it has a bug that stops me using it for the time being
15:13 andrei rdma is broken with 3.3.1
15:13 x4rlos hmmm. Wait for a better answer from the other guys on here.
15:19 wushudoin joined #gluster
15:19 bluefoxxx joined #gluster
15:19 bluefoxxx am I correct in understanding that, as long as there is space for rebalancing, I can modify a volume by removing bricks and gluster will migrate data from those bricks to other bricks?
15:20 foster andrei: I don't think that would be appropriate, you'd skip some backend infrastructure that gluster creates to track files (i.e., .glusterfs/...), and/or potentially copy xattrs that are brick specific, which I think would lead to serious confusion in afr.
15:20 foster andrei: can you elaborate on your replication "hang?" is it a server load issue?
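[Editor's note: a sketch of how to inspect the brick-private metadata foster mentions; the brick path is hypothetical and getfattr comes from the attr package:
    # per-file xattrs gluster maintains on a brick (trusted.gfid, trusted.afr.* changelogs, ...)
    getfattr -d -m trusted -e hex /bricks/brick1/path/to/file
    # the hard-link store gluster keeps alongside the data
    ls /bricks/brick1/.glusterfs
Copying file contents without this metadata is what foster warns would confuse afr.]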
15:21 ctria joined #gluster
15:24 bugs_ joined #gluster
15:26 rastar joined #gluster
15:33 balunasj joined #gluster
15:34 zetheroo2 joined #gluster
15:35 chouchins joined #gluster
15:36 vpshastry joined #gluster
15:40 H__ question : why does the glusterfs-glusterd logfile name start with a - ? It looks like something is missing :  -etc-glusterfs-glusterd.vol.log
15:42 bitsweat_ left #gluster
15:44 ndevos H__: the log filenames are based on paths, / is replaced by a - -> /etc/glusterfs/glusterd.vol was used (but normally the first - is indeed missing)
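[Editor's note: a quick illustration of the path-to-logname mapping ndevos describes; the leading "/" is what produces the leading "-":
    echo "/etc/glusterfs/glusterd.vol" | tr '/' '-'
    # prints: -etc-glusterfs-glusterd.vol
]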
15:49 H__ can I fix the name in config somehow ? It messes up shell globbing. (this is on 3.2.5)
15:50 nocko joined #gluster
15:55 Humble joined #gluster
15:59 luckybambu joined #gluster
15:59 hateya joined #gluster
16:21 zetheroo2 left #gluster
16:21 Staples84 joined #gluster
16:22 zaitcev joined #gluster
16:25 Staples84 joined #gluster
16:28 Humble joined #gluster
16:37 elyograg bluefoxxx: yes - with a caveat.  bug 862347 ... if your volumes are anywhere near (or over) half full, you'll run out of space during the remove-brick (which is under the covers a rebalance) at least once.
16:37 glusterbot Bug http://goo.gl/QjhdI medium, medium, ---, sgowda, ASSIGNED , Migration with "remove-brick start" fails if bricks are more than half full
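[Editor's note: a minimal sketch of the remove-brick procedure being discussed, using 3.3-era syntax; the volume and brick names are hypothetical:
    # decommission a brick; its data is migrated off by an internal rebalance
    gluster volume remove-brick myvol server1:/bricks/brick2 start
    # watch the migration; per bug 862347 it may fail part-way if bricks are over half full
    gluster volume remove-brick myvol server1:/bricks/brick2 status
    # commit only once the migration reports completion
    gluster volume remove-brick myvol server1:/bricks/brick2 commit
]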
16:37 bluefoxxx elyograg, will that halt the operation with no data loss?
16:37 bluefoxxx elyograg, also what if I remove a peer?  Will it remove all its bricks?
16:38 elyograg bluefoxxx: there should be no data loss, but if you have anything actively writing to the volume, you may get errors when it fills up.
16:38 bluefoxxx well yeah
16:38 bluefoxxx full disk = go away
16:39 elyograg I haven't done an extensive before/after file verification, but it looked to me like there was no loss.  I just had to run it several times before I was able to commit it.
16:39 elyograg it was all on a testbed, not production.
16:39 bluefoxxx nod
16:40 bluefoxxx I'm trying to work out automation and brainless admins who change things and hope the automation does it without destroying everything.  Isn't as easy as it looks.
16:40 bluefoxxx Given that it'll handle it somehow though, there's two ways to do this:  eventual consistency and optimization.
16:41 bluefoxxx speaking of eventual consistency
16:41 bluefoxxx is there an eventual consistency rebalance mode?
16:42 bluefoxxx i.e. instead of an actual rebalance, just rebalance things as events happen (a brick starts to fill, we access files that are in the wrong place, etc)
16:42 bluefoxxx this is called 'lazy' I think
16:42 elyograg I believe that gluster attempts to be consistent through the whole process.  eventual consistency is probably a good way to describe it.
16:43 bluefoxxx nod
16:43 bluefoxxx well I mean if you add bricks, you can rebalance distribution so it will actively move files around, which is a BIG operation, if I'm reading the docs right
16:43 elyograg I'm just a recent inductee to the church of gluster, though ... not a member of the clergy. :)
16:43 jdarcy bluefoxxx: I've met several users who prefer that kind of "natural rebalance" strategy.  Just direct new allocations to the new bricks and things will work out.
16:43 bluefoxxx nod
16:44 bluefoxxx jdarcy, the argument is between continuous throughput and scheduled downtime essentially
16:44 jdarcy It's actually possible at the I/O level, but it's far from convenient to set up.
16:44 bluefoxxx lazy stuff tends to cost more when shit is out of convergence, but only the first time you touch it.  It gets better.
16:44 bluefoxxx Immediate stuff does a lot of work at once so stuff gets slow.
16:44 elyograg jdarcy: is that accomplished just by doing a rebalance fix-layout rather than the full rebalance?
16:44 bluefoxxx But these are well known CS terms so you probably knew that
16:45 jdarcy elyograg: Unfortunately, no.  It's on my list to add support for that some day.
16:45 JoeJulian bluefoxxx: iirc, rebalance operations happen with a lower priority so they /shouldn't/ interfere with operations. Right jdarcy?
16:46 bluefoxxx nod.
16:46 jdarcy elyograg: For now, you'd need to set the trusted.glusterfs.dht xattrs manually so that the entire hash range for each directory belongs to the new bricks.
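[Editor's note: before attempting the manual layout edit jdarcy describes, the current per-directory hash range can be read back like this (brick path hypothetical); writing it back with setfattr is possible but the binary encoding is version-specific, so it is not shown here:
    # the hash range this brick owns for this directory, as a hex blob
    getfattr -n trusted.glusterfs.dht -e hex /bricks/brick1/some/directory
]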
16:46 Staples84 joined #gluster
16:46 bluefoxxx JoeJulian, though that still incurs disk IO and thus wears the life of the disk :)
16:46 jdarcy JoeJulian: Sadly, the rebalance operations still do interfere with normal operation to a large extent.  We try to avoid the worst conflicts, but there's a lot more to be done in that area.
16:47 jdarcy JoeJulian: Also, http://review.gluster.org/#change,3908
16:47 glusterbot Title: Gerrit Code Review (at review.gluster.org)
16:47 JoeJulian bluefoxxx: Have you looked at http://joejulian.name/blog/dht-misses-are-expensive/ which shows how dht actually works?
16:47 glusterbot <http://goo.gl/A3mCk> (at joejulian.name)
16:47 bluefoxxx of course not
16:47 JoeJulian Might be of interest.
16:48 bluefoxxx "Jeff Darcy created a sample python plugin translator that caches entries that just don't exist and saves all those lookups by just replying that the file wasn't there a second ago so it's still not there."
16:48 bluefoxxx that's funny :)
16:49 jdarcy Actually the Python version was just imitating functionality I already had in C, so I could compare.
16:49 bluefoxxx JoeJulian, so in other words, if you wanted to implement a sane, usable lazy rebalance, you'd have to first implement a better way to handle misses during a rebalance
16:50 andrei foster:  i've not done much debugging to be honest. I've just tested it several times and the behaviour seems to be the same
16:50 jdarcy http://www.linuxjournal.com/content/extending-glusterfs-python
16:50 glusterbot <http://goo.gl/m5f3x> (at www.linuxjournal.com)
16:50 andrei as soon as the heal process starts, a few seconds later all of the clients are unable to access the mountpoint.
16:50 jdarcy bluefoxxx: I have scripts to do related things (e.g. size-weighted rebalance), so those might help.
16:50 andrei there are no unusual system logs
16:50 bluefoxxx nod
16:51 bluefoxxx this is irrelevant to my use case currently, but interesting.
16:51 JoeJulian Probably... I haven't had my coffee yet so don't expect any serious concentration from me... :)
16:51 jdarcy http://review.gluster.org/#change,3573
16:51 glusterbot Title: Gerrit Code Review (at review.gluster.org)
16:51 jdarcy I was actually surprised (but pleased) that it got merged.
16:51 andrei lots of entries regarding locking of files, so i assume it's locking all new files that should go to the replica server
16:52 jdarcy We usually don't merge "rfc" patches, but since it's in "extras" it's sort of FYI so I guess we can make exceptions.
16:53 foster andrei: have you filed a bug? that would be a good way to start characterizing the problem
16:53 jdarcy Holy crap.  I just realized I could do these kinds of rebalance tricks "on the fly" using Yet Another Translator.  Why hadn't I thought of that before?
16:54 JoeJulian Where's the concentric ring rebalance?
16:54 jdarcy JoeJulian: In my head.  :-P
16:54 foster andrei: starting with your volume setup, dataset, and including some basic top/iftop stats from clients/servers when the problem occurs would be nice
16:55 sashko joined #gluster
16:56 JoeJulian You showed it to a packed room... Seems like it would be embarrassing to have not cobbled something together by the next summit.... ;)
16:58 lala joined #gluster
16:59 jdarcy Showing it to a packed room doesn't mean it's done.  It means I want to put pressure on management to let me do it.  ;)
16:59 andrei foster: i need to find the time to take the storage down as it's a production server
16:59 JoeJulian Oh, speaking of putting pressure.... johnmark!!!
17:00 andrei so i will try to find the time this weekend to do it
17:01 foster andrei: fair enough. it still can't hurt to describe as much as possible in a bug. people might read it and make some suggestions for you :)
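[Editor's note: a rough sketch of the data foster is asking for, gathered while the heal is running; the volume name is hypothetical and top/iftop are assumed to be installed:
    gluster volume info myvol
    gluster volume heal myvol info          # files pending or undergoing heal (3.3+)
    top -b -n 1 > top-snapshot.txt          # CPU/memory snapshot on servers and clients
    iftop -t -s 10 > iftop-snapshot.txt     # 10-second network traffic sample
]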
17:16 jskinner joined #gluster
17:23 Norky_ is it possible to remove bricks from a distributed volume without losing data (assuming the remaining bricks have enough space to hold all the data)?
17:25 jdarcy Norky_: Yes, using the remove-brick CLI command.  That forces a (sort of) rebalance onto the remaining bricks before decommissioning.
17:25 Norky_ I assumed that brick removal would trigger a rebalance... but it doesn't seem to have happened
17:25 Norky_ this is a test system, so the data is unimportant
17:26 Norky_ find /mnt/gluster/| wc -l
17:26 Norky_ 200
17:26 Norky_ before the removal
17:26 jdarcy You mean just pulling a disk or powering off a server but leaving them in the volume's config?
17:26 Norky_ now it shows 104
17:26 Norky_ I mean with "gluster vol brick-remove server:/brick"
17:27 jdarcy Did you wait for the remove-brick operation to complete?
17:27 Norky_ du -s before and after show a significant reduction too, so the data is no longer visible to the client
17:28 Norky_ yes
17:28 Norky_ all the bricks are in fact different FSes on the same host
17:28 Norky_ so nothing has been rebooted
17:29 jdarcy So you followed *all* of the steps in section 7.3 of the admin guide (http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf) and files are missing?  That would be a bug.
17:29 glusterbot <http://goo.gl/bzF5B> (at www.gluster.org)
17:32 Norky_ I went through those steps
17:32 Norky_ though I did remove two bricks at a time
17:32 Norky_ I'll try again one at a time
17:33 Humble joined #gluster
17:34 Norky_ actually I'll do this at home
17:35 Norky_ cheers jdarcy
17:39 sjoeboo joined #gluster
17:43 sjoeboo joined #gluster
17:53 jskinner_ joined #gluster
17:57 Humble joined #gluster
17:57 johndescs joined #gluster
18:02 drockna joined #gluster
18:03 drockna Is there a guide somewhere how to setup glusterfs using IPv6?
18:03 drockna it seems to want to 'look up a dns' whenever i put in an IPv6 address on the peer probe command.
18:03 hagarth joined #gluster
18:04 cicero it doesn't support ipv6 according to http://www.gluster.org/category/ipv6/
18:04 glusterbot Title: UFO Pilot web app for Gluster UFO/OpenStack Swift Object Storage | Gluster Community Website (at www.gluster.org)
18:04 cicero As an aside, Gluster UFO and OpenStack Swift (pretty much one and the same), does not support IPv6. This is due to a deficiency in the socket library that was used. I was pretty disappointed when I discovered that.
18:04 cicero dated oct 2012 so probably still true
18:05 jdarcy cicero: That's unfortunate.
18:05 cicero tis
18:06 drockna according to this http://patches.gluster.com/patch/7564/ it would appear it does. I checked the source and it appears as if this patch has been added in 3.3.1
18:06 glusterbot Title: [BUG:2456] Glusterd: IPV6 support for glusterfs. - Patchwork (at patches.gluster.com)
18:06 cicero ah nice
18:06 drockna cicero: but the patch is older than the article you linked… so what the heck?
18:06 cicero article could be wrong
18:07 w3lly1 joined #gluster
18:07 cicero ah or maybe it's specific to "gluster ufo"
18:07 cicero whatever that is ;\
18:07 drockna either way, when i try ipv6 on 3.3.1 it thinks the ipv6 address is a dynamic name. then tries looking up the ipv4 address and fails.
18:10 cicero what happens if you hardcode the ip in /etc/hosts and use a hostname?
18:11 drockna https://gist.github.com/lyondhill/c912814de3c8ac16ac11
18:11 glusterbot <http://goo.gl/1tXvk> (at gist.github.com)
18:12 drockna if i place it in etc/hosts it looks exactly the same except the ipv6 address is replaced with gluster1 (or whatever i put in as the ip address)
18:12 cicero ah
18:12 cicero crappy
18:14 drockna yah. and i know gluster is listening on the ipv6 address because i see it in netstat on ::1
18:15 johndescs joined #gluster
18:19 nueces joined #gluster
18:28 sjoeboo joined #gluster
18:30 jskinner_ joined #gluster
18:30 sjoeboo joined #gluster
18:37 cicero I KNOW YOURE LISTENING, GLUSTER
18:37 drockna cicero: haha
18:39 jskinn___ joined #gluster
18:48 hagarth joined #gluster
18:59 sjoeboo joined #gluster
19:05 chouchin_ joined #gluster
19:17 hagarth joined #gluster
19:23 bluefoxxx ok, why the heck am I having no luck here
19:25 bluefoxxx hosts file.
19:33 nueces joined #gluster
19:33 chouchins joined #gluster
19:35 hagarth joined #gluster
19:36 theron joined #gluster
19:37 mkultras hey, have you guys run gluster with replicated volumes over amazon ec2 instances before? ever have to keep remounting the mountpoint as it hangs?
19:37 mkultras i'm running kaltura on top of it
19:38 mkultras for its content folder
19:38 mkultras maybe it needs elastic ips for the gluster nodes
19:39 mkultras ips havent changed tho
19:39 mkultras this was working with gluster 3.0.2 but now i have 3.3 it's hanging, course i had only 1 node before, now i have 2
19:39 Humble joined #gluster
19:42 bluefoxxx ok I don't understand why this says that the peer is connected, but the peer is not connected.
19:42 mkultras does netstat show it connected?
19:42 ctria joined #gluster
19:42 bluefoxxx Hostname: hq-ext-store-1.sbgnet.com
19:42 bluefoxxx Uuid: d3baf6c6-f528-49cd-96b6-b5d03b1a4b5c
19:42 bluefoxxx State: Accepted peer request (Connected)
19:42 bluefoxxx ^^^ gluster peer status
19:42 JoeJulian I'm back. Where are we.
19:43 bluefoxxx Host hq-ext-store-1.sbgnet.com not connected  <-- from  # gluster volume create web transport tcp replica 2 hq-ext-store-1.sbgnet.com:/mnt/silo0 hq-ext-store-2.sbgnet.com:/mnt/silo0
19:43 JoeJulian "Accepted peer request" is, indeed, not connected. There needs to be a clearer indication of where they are in the peering process. As it is now, I'm not even sure what stages it actually goes through. :/
19:44 elyograg JoeJulian: not in Kansas.
19:44 bluefoxxx JoeJulian, ok I need to read the docs again.
19:45 JoeJulian Should be "State: Peer in Cluster (Connected)"
19:45 JoeJulian I've found that sometimes if there's a state hang like that it's useful to restart all your glusterd.
19:46 bluefoxxx JoeJulian, glusterd doesn't communicate exclusively over tcp, does it.
19:46 JoeJulian It's not what I like to perceive as a "solution" but until someone has effectively diagnosed why this happens it's a workaround.
19:46 JoeJulian bluefoxxx: I'm not sure.
19:47 bluefoxxx oh, I just noticed my firewall is being funny
19:47 JoeJulian I do know that it does at least have to start that way.
19:48 bluefoxxx nope, you're right, needed to restart glusterd.
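[Editor's note: the workaround sequence discussed above, assuming CentOS 6-style service management and a hypothetical peer name:
    gluster peer status                     # peer stuck in "Accepted peer request (Connected)"
    service glusterd restart                # restart glusterd on both peers
    gluster peer status                     # should now show "Peer in Cluster (Connected)"
]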
19:48 JoeJulian Also rdma handshaking in 3.3.1 is broken so your volumes won't mount correctly if you're rdma only.
19:48 bluefoxxx that's a bug.
19:48 bluefoxxx no, I didn't turn rdma on
19:48 bluefoxxx rdma looks confusing and like insanity to me.
19:49 JoeJulian It may be, but it's the "best way" if you can throw money at a system.
19:50 bluefoxxx ok i have to do something else now to get glusterfsd to start
19:54 bluefoxxx JoeJulian, how the heck do I mount nfs?  Or, troubleshoot why I can't.
19:54 bluefoxxx the glusterfsd service won't start and I can't find useful logs :|
19:56 elyograg if you're trying to do something like service glusterfsd start, that doesn't do anything.  from what I've been able to tell, only 'stop' has any effect on that service, and even that should be automatically run when required.
19:56 bluefoxxx oh ok
19:57 bluefoxxx sudo mount -o mountproto=tcp,vers=3 -t nfs hq-ext-store-1:/web /mnt/export/web/
19:57 bluefoxxx mount.nfs: requested NFS version or transport protocol is not supported
19:57 glusterbot bluefoxxx: make sure your volume is started. If you changed nfs.disable, restarting your volume is known to work.
19:58 bluefoxxx glusterbot, nfs.disable, I didn't see that anywhere
19:59 JoeJulian @nfs
19:59 glusterbot JoeJulian: To mount via nfs, most distros require the options, tcp,vers=3 -- Also portmapper should be running on the server, and the kernel nfs server (nfsd) should be disabled
19:59 bluefoxxx JoeJulian, the docs say -o mountproto=tcp
20:00 JoeJulian I think that's longform.
20:00 mkultras tcp is the default
20:00 bluefoxxx well, either way, I get that error.  :|
20:01 JoeJulian Plus different distros may have different option words for nfs. I don't know why they can't all be consistent on that.
20:01 mkultras oh, not for version 3 it's not, actually
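[Editor's note: a hedged example of the NFS mount form being discussed; gluster's built-in NFS server speaks v3 over TCP, rpcbind must be running, and the kernel nfsd must be disabled. Server, volume, and mountpoint names are hypothetical:
    mount -t nfs -o vers=3,tcp server1:/myvol /mnt/myvol
]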
20:02 hateya joined #gluster
20:03 bluefoxxx ok well my current assessment is 'nfs doesn't work'
20:04 JoeJulian Does the native mount work?
20:04 bluefoxxx no but it gives me a huge log file
20:05 bluefoxxx JoeJulian, it's notable I don't have portmap installed/running
20:05 JoeJulian paste it to fpaste.org
20:05 bluefoxxx Redhat says Portmap was replaced completely by rpcbind, therefore absolutely nothing supplies portmap.
20:05 bluefoxxx (CentOS 6)
20:06 bluefoxxx http://fpaste.org/SWu4/
20:06 glusterbot Title: Viewing Paste #275539 (at fpaste.org)
20:06 bluefoxxx sudo mount -t glusterfs hq-ext-store-1:/web /mnt/export/web/
20:06 bluefoxxx was my command, on hq-ext-store-1
20:06 bluefoxxx (trying to mount it locally)
20:07 JoeJulian gluster volume status
20:07 bluefoxxx ah.  Not started.
20:07 * JoeJulian whacks bluefoxxx with a wet trout.
20:07 bluefoxxx well it was obscured from me
20:08 bluefoxxx ok, I think I'm done being noisy for now.  Have to make a few tweaks to my puppet module.
20:08 JoeJulian "[11:57] <glusterbot> bluefoxxx: make sure your volume is started" ;)
20:09 bluefoxxx hehe
20:09 JoeJulian Maybe you need moar caffeine! :D
20:09 bluefoxxx Strong coffee can actually make me drunk
20:09 bluefoxxx I'm not great with coffee
20:21 Humble joined #gluster
20:23 sjoeboo joined #gluster
20:33 georgeh|workstat joined #gluster
20:41 bluefoxxx Can't make it mount from a rhel5 box but that's probably our broken stupid shit more than anything else.
20:41 bluefoxxx it mounts from everywhere else.
20:51 piotrektt_ joined #gluster
20:53 lh joined #gluster
20:53 lh joined #gluster
20:57 Humble joined #gluster
21:02 piotrektt_ hey. what does it mean that on a VM the disks have to use the VirtIO driver? can't find such a setting in Virtualbox. - sorry if that's a noob question.
21:06 w3lly joined #gluster
21:15 Humble joined #gluster
21:18 NuxRo piotrektt_: means paravirtualised devices, this is better suited for #virtualbox
21:20 piotrektt_ thx.
21:59 sjoeboo joined #gluster
22:13 Mo___ joined #gluster
22:15 SpeeR joined #gluster
22:30 polenta joined #gluster
22:33 Staples84 joined #gluster
22:54 cjohnston_work joined #gluster
22:55 sashko joined #gluster
22:59 sjoeboo joined #gluster
23:04 hagarth joined #gluster
23:08 Humble joined #gluster
23:14 raven-np joined #gluster
23:35 sjoeboo joined #gluster
23:42 sjoeboo joined #gluster
