
IRC log for #gluster, 2012-11-12

All times shown according to UTC.

Time Nick Message
00:13 nightwalk joined #gluster
01:03 kevein_ joined #gluster
01:19 nightwalk joined #gluster
01:23 lng joined #gluster
01:27 lng Hi! Gluster upgrade advice needed. I have 3.3.0 which was installed from a deb package. Is it okay if I just remove it and install 3.3.1 from PPA?
01:51 TSM2 joined #gluster
01:51 TSM2 joined #gluster
01:51 TSM2 joined #gluster
02:03 kevein joined #gluster
02:22 sunus joined #gluster
02:30 nightwalk joined #gluster
03:01 bala joined #gluster
03:08 nightwalk joined #gluster
03:28 hagarth joined #gluster
03:34 nightwalk joined #gluster
03:47 kevein joined #gluster
04:09 nightwalk joined #gluster
04:11 sripathi joined #gluster
04:13 glusterbot New news from newglusterbugs: [Bug 861306] Stopping the volume does not clear the locks <http://goo.gl/rHyBd>
04:18 koodough joined #gluster
04:19 JoeJulian lng: yes
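
A rough sketch of the upgrade lng is asking about, one node at a time; the PPA name and service name are assumptions based on the 3.3 Ubuntu packages glusterbot links later in this log, and the volume config under /var/lib/glusterd plus the bricks themselves should be left in place by the package upgrade.

    sudo service glusterfs-server stop
    sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.3   # assumed PPA name
    sudo apt-get update
    sudo apt-get install glusterfs-server glusterfs-client      # upgrades the 3.3.0 deb in place
    sudo service glusterfs-server start
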
04:32 nightwalk joined #gluster
05:17 vpshastry joined #gluster
05:18 shylesh joined #gluster
05:40 shireesh joined #gluster
05:46 sripathi joined #gluster
06:02 16WABBAEW joined #gluster
06:13 glusterbot New news from newglusterbugs: [Bug 874348] mount point broke on client when a lun from a storage backend offline or missing . After there the data are scrap <http://goo.gl/CjwrE>
06:25 kevein joined #gluster
06:35 ramkrsna joined #gluster
06:53 pranithk joined #gluster
06:58 mohankumar joined #gluster
07:00 pranithk joined #gluster
07:02 deepakcs joined #gluster
07:07 Technicool joined #gluster
07:07 Jippi joined #gluster
07:12 lkoranda joined #gluster
07:13 lkoranda joined #gluster
07:19 manik joined #gluster
07:21 ika2810 joined #gluster
07:22 ika2810 joined #gluster
07:29 sripathi1 joined #gluster
07:33 dobber joined #gluster
07:41 ctria joined #gluster
07:45 sshaaf joined #gluster
07:48 sripathi joined #gluster
08:07 Azrael808 joined #gluster
08:07 lkoranda joined #gluster
08:07 ekuric joined #gluster
08:19 tjikkun_work joined #gluster
08:25 inodb joined #gluster
08:26 Humble joined #gluster
08:31 kevein joined #gluster
08:44 glusterbot New news from newglusterbugs: [Bug 874498] execstack shows that the stack is executable for some of the libraries <http://goo.gl/NfsDK>
08:49 ika2810 joined #gluster
08:50 ika2810 joined #gluster
08:50 ika2810 left #gluster
09:02 1JTAAQD6F joined #gluster
09:02 sunus hi, i want to make sure of one thing: is the last (bottom) volume section the top of the graph?
09:10 kd1 joined #gluster
09:12 lng sunus: which graph?
09:12 lng JoeJulian: thanks
09:15 sunus lng: i mean, glusterd.vol has volumes, right?
09:17 lng yes
09:17 sunus lng: i mean, a transport will go through all the volumes(xlator), right?
09:18 sunus lng: the order is from the top-most down to the bottom, right?
09:20 lng your volumes are here: /var/lib/glusterd/vols/
09:23 ndevos sunus: yes, you are correct, if you check a .vol file for the brick, you will see storage/posix on top (which is the last xlator)
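
To illustrate ndevos' point: a brick volfile is written top-down but a request travels bottom-up, so the block at the bottom (protocol/server) is where a request enters the graph and storage/posix at the top is the last xlator it reaches. The volume names and path below are hypothetical and the file is heavily trimmed.

    volume myvol-posix                # first in the file, last xlator a request reaches
        type storage/posix
        option directory /mnt/ebs-volume/myvol
    end-volume

    volume myvol-locks
        type features/locks
        subvolumes myvol-posix
    end-volume

    volume myvol-server               # bottom of the file, entry point of the graph
        type protocol/server
        option transport-type tcp
        subvolumes myvol-locks
    end-volume
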
09:26 lng is it ok port is N/A: 'Self-heal Daemon on gluster-2bN/AY1396'?
09:26 DaveS joined #gluster
09:26 lng is it ok port is N/A: 'Self-heal Daemon on gluster-2b N/A Y 1396'?
09:31 sunus thank you all
09:35 ndevos lng: the self-heal daemon does not listen on a port, hence the N/A
09:35 lng ndevos: ah, ok
09:38 lng when I `gluster volume heal storage info split-brain`, I have 'Number of entries: 1023; Segmentation fault (core dumped)'. what can I do about it?
09:39 ndevos lng: you might want to check if that is a bug that has been fixed in a newer version, I think it is
09:40 lng ndevos: so I need to upgrade it...
09:40 ndevos lng: or search for bugs and/or read the changelog
09:40 pranithk joined #gluster
09:46 lng thanks
09:46 lng ndevos: is it ok if I upgrade all the nodes first and clients after that?
09:47 lng 3.3.0 > 3.3.1
09:47 kd joined #gluster
09:48 ndevos lng: I *think* so, haven't tested it myself though
09:52 Nr18 joined #gluster
09:54 lng ndevos: I will test it later
09:56 sripathi joined #gluster
09:57 lng or maybe it's better to start from clients
10:00 66MAADKQY joined #gluster
10:04 nightwalk joined #gluster
10:14 lng ndevos: upgraded client w/o any issues
10:17 ndevos lng: ah, cool, good to know :)
10:20 lng ndevos: it's minor version upgrade - so it should work :-)
10:21 ndevos lng: yeah, but unfortunately *should* is not a guarantee
10:21 lng %100
10:21 lng :-)
10:21 lng our world is too fargile
10:22 lng fragile*
10:22 lng so we build up clusters
10:45 kd left #gluster
10:51 tryggvil joined #gluster
10:53 20WABJLOG joined #gluster
11:02 duerF joined #gluster
11:05 nightwalk joined #gluster
11:07 rcheleguini joined #gluster
11:13 lng how to check if gluster is doing some self-healing or rebalancing?
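
The question above goes unanswered in the log; on 3.3.x the usual checks would look roughly like this, using the "storage" volume name from earlier in the conversation.

    gluster volume heal storage info           # entries still waiting to be self-healed
    gluster volume heal storage info healed    # entries healed recently
    gluster volume rebalance storage status    # rebalance progress, if one was started
    gluster volume status storage              # per-node view, including the self-heal daemon
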
11:13 puebele1 joined #gluster
11:33 inodb_ joined #gluster
11:51 guigui1 joined #gluster
11:52 Triade joined #gluster
12:18 puebele joined #gluster
12:41 rwheeler joined #gluster
12:46 balunasj joined #gluster
12:48 puebele joined #gluster
13:11 Nr18_ joined #gluster
13:16 puebele1 joined #gluster
13:17 duerF joined #gluster
13:17 nightwalk joined #gluster
13:27 FU5T joined #gluster
13:29 FU5T joined #gluster
13:35 puebele1 joined #gluster
13:46 dblack joined #gluster
13:46 Nr18 joined #gluster
13:48 ninkotech_ joined #gluster
13:54 dblack joined #gluster
14:00 bulde joined #gluster
14:00 lh joined #gluster
14:02 aliguori joined #gluster
14:09 dstywho joined #gluster
14:24 nueces joined #gluster
14:27 johnmark samkottler: /me waves
14:27 samkottler johnmark: hey, still waiting to hear back about what we talked about
14:27 samkottler I'll prod today :)
14:34 rwheeler joined #gluster
14:36 JoeJulian lng, ndevos: Yes, that crash bug you were discussing is fixed in 3.3.1.
14:41 ndevos JoeJulian: right, thanks!
14:48 johnmark samkottler: prod prod prod :)
14:48 samkottler johnmark: gotcha :P
14:59 sjoeboo_ joined #gluster
15:01 stopbit joined #gluster
15:13 wushudoin joined #gluster
15:15 guigui1 left #gluster
15:19 lkoranda joined #gluster
15:40 robert7811 joined #gluster
15:42 robert7811 Can anyone help with an Amazon Ec2 Ubuntu 11 glusterfs install question? I am having problems with "Host xxx.xxx.xxx.xx not a friend" on the first node server and cannot get past it.
15:43 ika2810 joined #gluster
15:48 semiosis robert7811: you need to probe the host first before you can create volumes.  see ,,(hostnames)
15:48 glusterbot robert7811: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
15:48 semiosis also ,,(rtfm) :)
15:48 glusterbot Read the fairly-adequate manual at http://goo.gl/E3Jis
15:56 jbrooks_ joined #gluster
16:02 robert7811 So I have to use DNS to support gluster fully? I have done the /etc/hosts modifications and got the same results. So setting up a set of A records and then using those subdomain host names should resolve what looks to be a dns lookup issue?
16:03 semiosis robert7811: you don't *have* to, but i recommend it
16:04 semiosis furthermore, i recommend using CNAME records to point to the EC2 public-hostname of your instances, these will resolve to local-ipv4 from within ec2 and resolve to public-ipv4 from outside ec2
16:05 semiosis that split-horizon is a feature of ec2 btw
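
Putting semiosis' advice together, the probe sequence would look roughly like this, using the hostnames that appear later in this log: probe every other server by name from the first node, then probe the first node by name from one of the others so it too becomes known by hostname rather than IP.

    # on gluster1.coetruman.com
    gluster peer probe gluster2.coetruman.com
    # on gluster2.coetruman.com
    gluster peer probe gluster1.coetruman.com
    # on either node
    gluster peer status    # both peers should show State: Peer in Cluster (Connected)
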
16:18 daMaestro joined #gluster
16:19 robert7811 I have set up the CNAME records and verified that they resolve correctly but still have a problem with host is not a friend "gluster volume create local replica 2 transport tcp gluster1.coetruman.com:/mnt/ebs-volume/local gluster2.coetruman.com:/mnt/ebs-volume/local". Should I trash my current configuration and start with a fresh set? I did probe as you and the documentation describe.
16:19 semiosis you need to probe peers like i said before
16:20 semiosis ,,(rtfm)
16:20 glusterbot Read the fairly-adequate manual at http://goo.gl/E3Jis
16:20 robert7811 Could the servers being in different zones at Amazon be the problem? Node 1 is in zone d and Node 2 is in zone a
16:20 ndevos robert7811: is gluster2.coetruman.com listed in 'gluster peer status' on gluster1.coetruman.com?
16:21 Bullardo joined #gluster
16:21 robert7811 gluster peer status Number of Peers: 2  Hostname: 54.243.241.38 Uuid: 00000000-0000-0000-0000-000000000000 State: Peer in Cluster (Connected)  Hostname: gluster2.coetruman.com Uuid: ce95ca4a-061c-456c-a442-b1d7803a9a99 State: Peer in Cluster (Connected)
16:21 semiosis ,,(hostnames)
16:21 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
16:21 semiosis robert7811: ^^^
16:22 ndevos robert7811: a uuid of 00000000-0000-0000-0000-000000000000 is wrong, are the uuids in $SERVER:/var/lib/glusterfs/glusterd.info different?
16:24 ndevos and yeah, if the IP-address changed, you'll want to use hostnames and the re-probe as semiosis mentioned
16:25 robert7811 The glusterd.info only list one uuid and it is ce95ca4a-061c-456c-a442-b1d7803a9a99
16:25 ndevos robert7811: both servers should have their own uuid in that file, and they should be different
16:28 semiosis whether the ip changed or not -- yet -- it will eventually, so imo best to get that changed to a hostname before any volumes depend on it
16:36 jbrooks_ joined #gluster
17:37 robert7811 The uuid in glusterd.info for Node1 and Node2 is the same. I am not sure how this can be as they are two separate installs. Is there a way to have it generate a new uuid on node 2 so they are unique, as I am thinking this is the problem?
16:38 jdarcy robert7811: You could probably hack them to be different, then you'd have to hack the .../peers/... stuff as well.  Probably easier to nuke the whole thing and try again.
17:40 robert7811 So nuke Node 2 and start again, or do you mean nuke them both? I would assume rebuilding Node 2 would resolve the problem and then just re-probing would do the trick, yes?
16:44 kkeithley I believe you can just delete /var/lib/glusterd/glusterd.info, restart glusterd and it will generate a new uuid and write a new file with the new uuid in it.
16:45 kkeithley And if it comes up with the same uuid as the other node — again — then go buy a lottery ticket.
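
A sketch of kkeithley's suggestion, run on the node whose UUID duplicates the other's; the service name is assumed for Ubuntu, and as jdarcy notes the stale UUID may also need cleaning out of the peers/ directory on the other node.

    sudo service glusterfs-server stop
    sudo rm /var/lib/glusterd/glusterd.info    # a fresh UUID is generated when glusterd restarts
    sudo service glusterfs-server start
    cat /var/lib/glusterd/glusterd.info        # compare with the other node; the UUIDs should now differ
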
16:46 semiosis there's no way two separate installs could have the same uuid...
16:47 semiosis robert7811: how did you install glusterfs?  did you clone a machine?
16:47 atrius joined #gluster
16:49 robert7811 I created a new instance from Amazon AMI for each server but had a snapshot ebs volume with the content created when each instance was spun up. Nothing I have not done before but this is my first time with gluster.
16:51 seanh-ansca joined #gluster
16:58 davdunc joined #gluster
16:58 davdunc joined #gluster
17:01 ste99 left #gluster
17:02 Mo__ joined #gluster
17:07 dstywho joined #gluster
17:11 Technicool joined #gluster
17:17 glusterbot New news from newglusterbugs: [Bug 875860] Auto-healing in 3.3.1 doesn't auto start <http://goo.gl/U4mpv>
17:33 robert7811 Ok. I have rebuilt both my instances from scratch and the both contain a unique uuid. I probed and each see each other. I then create a volume " gluster volume create assets replica 2 transport tcp gluster1.coetruman.com:/mnt/ebs-volume/assets gluster2.coetruman.com:/mnt/ebs-volume/assets" which I end up with "Host gluster1.coetruman.com not a friend". I have verified everything is resolving as it should. Thoughts?
17:34 Technicool robert, nothing in /etc/hosts?  have you tried setting up by IP as a sanity check?
17:34 semiosis robert7811: please ,,(pastestatus) from both machines
17:34 glusterbot robert7811: Please paste the output of "gluster peer status" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
17:34 jdarcy robert7811: When you do a "gluster peer status" from gluster1*, what *exactly* do you see?
17:35 Technicool jdarcy, good point...the peer probe was only run from one machine, correct robert7811 ?
17:36 semiosis the only way to find out is to get a copy of the output from the peer status command
17:36 benner left #gluster
17:36 semiosis ...from both hosts
17:36 JoeJulian Technicool: Want to see if we can do this: http://joejulian.name/blog/2013-cascadia-it-conference/
17:36 glusterbot <http://goo.gl/293Qh> (at joejulian.name)
17:37 Technicool JoeJulian, yes, this is exactly the sort of thing we want to be doing
17:37 JoeJulian I've already talked to jmw.
17:37 Technicool i'll forward it up the chain for approvals but should be an easy sale
17:38 Technicool ooh, seattle!
17:38 elyograg joined #gluster
17:38 Technicool i think this should be our biggest priority now ;)
17:42 Technicool JoeJulian, I already have a lab setup guide written that we could pick clean
17:43 Technicool it would be good to extend it beyond just qemu/kvm though depending on who is attending
17:44 robert7811 The output from the peer status is "Number of Peers: 1  Hostname: gluster2.coetruman.com Uuid: 67a10bc9-9a92-43dd-9749-404ddd2be4b1 State: Peer in Cluster (Connected) ". The primary node Uuid is "2ad57d38-01ca-4859-9ba8-d404ae2d41c7"
17:46 semiosis robert7811: need to see the peer status from *both* servers
17:47 semiosis robert7811: there is no "primary" -- glusterfs is fully distributed
17:47 Technicool robert, what is the output from /var/lib/glusterd/peers/* on both?
17:48 johnmark ah, I heard my name
17:48 johnmark yes, we should totally do it
17:48 robert7811 Here is the status from gluster2 "Number of Peers: 1  Hostname: gluster1.coetruman.com Uuid: 2ad57d38-01ca-4859-9ba8-d404ae2d41c7 State: Peer in Cluster (Connected) "
17:50 Humble joined #gluster
17:51 robert7811 The path "/var/lib/glusterd/peers/*" does not exist on either system but is actually "/etc/glusterd/peers/*". gluster1 => bash: /etc/glusterd/peers/67a10bc9-9a92-43dd-9749-404ddd2be4b1. gluster2 => bash: /etc/glusterd/peers/2ad57d38-01ca-4859-9ba8-d404ae2d41c7
17:52 Nr18 joined #gluster
17:52 semiosis robert7811: what version of glusterfs is this?  how did you install it?
17:53 kkeithley that's a seriously old version if it's got /etc/glusterd/
17:54 nick5 joined #gluster
17:55 robert7811 I installed it from the Ubuntu repos. This is the version info " glusterfs --version glusterfs 3.2.5 built on Jan 31 2012 07:39:58 Repository revision: git://git.gluster.com/glusterfs.git Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com> GlusterFS comes with ABSOLUTELY NO WARRANTY. You may redistribute copies of GlusterFS under the terms of the GNU General Public License. "
17:55 glusterbot Title: Red Hat | Red Hat Storage Server | Scalable, flexible software-only storage (at www.gluster.com)
17:55 semiosis robert7811: yeah what kkeithley said... you should install the latest from the ,,(ppa)
17:55 glusterbot robert7811: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
17:56 gcbirzan should the gluster 3.3.1 client work with 3.3.0 servers?
17:57 JoeJulian /should/ but it's always better to upgrade the servers first.
17:57 robert7811 I will create the instances again but installing the 3.3 version rather than the Ubuntu default version and see were we are at then. In case I have not mentioned it before but thank you for the help so far.
17:58 gcbirzan yeah, the problem is, restarting the servers will break the 3.3.0 clients
17:59 semiosis robert7811: btw, use ubuntu precise or quantal
17:59 semiosis robert7811: and yw
18:00 kkeithley gcbirzan: 3.3.0 and 3.3.1 are 100% compatible.
18:00 gcbirzan I'm trying to come up with a plan of upgrading, hm, 8 servers, each two paired together with a replicated volume (2 per server pair, actually, but that's kind of irrelevant) to 3.3.1, preferably without taking more than a pair offline at a time. however, all hosts mount each other's volumes, so if the 3.3.0 client can't talk to 3.3.1 or the other way around, that plan... well, won't go too well :P
18:01 * JoeJulian considers betting kkeithley on that 100% number....
18:01 gcbirzan ideally, actually, I would've hoped to restart servers individually, but past experience has shown that if we restart glusterd on a host, it'll break the volume in hilarious ways
18:02 JoeJulian gcbirzan: That seems to depend on how long it's down, but yeah. I agree with what you're saying and I'd say it's definitely worth a try.
18:02 gcbirzan though, actually, meh. i just realised, if we bring both down, 3.3.0 won't be able to reconnect
18:03 gcbirzan since it seems that if both hosts go down for a volume, the gluster 3.3.0 client can't reconnect once they come back up.
18:03 gcbirzan so I think we're stuck with all at once, or host by host, hoping things won't break too badly.
18:04 gcbirzan one server running 3.3.1, one 3.3.0, could that work?
18:04 gcbirzan (also, I shouldn't have shut down everything and done it all at once in the staging env, teehee :P)
18:07 kkeithley sigh, okay, <weasel-words>3.3.1  client is compatible with 3.3.0 server. I wasn't talking about bugs in things like geo-rep and quorum that could break between 3.3.0 and 3.3.1 servers</weasel-words>
18:10 tryggvil_ joined #gluster
18:15 Bullardo joined #gluster
18:21 Azrael808 joined #gluster
18:28 purpleidea joined #gluster
18:28 purpleidea joined #gluster
18:33 y4m4 joined #gluster
18:37 balunasj joined #gluster
18:46 JoeJulian kkeithley: I only noticed because I found a problem migrating a brick from a 3.3.0 server to a 3.3.1 one. I probably should have filed the bug report, but the fix seemed pretty obvious. I upgraded the 3.3.0 one.
18:48 nightwalk joined #gluster
18:50 Bullardo joined #gluster
18:52 puebele joined #gluster
18:53 semiosis jdarcy: (OT) gitlab doesn't have a "fork" button
18:53 kkeithley fair enough, but if you didn't file a bz then you can't prove it ever really happened. :-p
18:54 JoeJulian :)
18:54 jdarcy semiosis: Is that being presented as a positive or a negative?  ;)
18:55 rwheeler joined #gluster
18:55 jdarcy semiosis: But seriously, what is the gitlab work flow equivalent to fork and pull request, then?
18:55 bulde joined #gluster
18:56 semiosis jdarcy: well if you were thinking it was an in-house github, it's not quite... whether that's good or bad is up to you...
18:57 seanh-ansca joined #gluster
18:59 semiosis jdarcy: https://groups.google.com/forum/?fromgroups=#!topic/gitlabhq/u78CXvLGUbg  :(
18:59 glusterbot <http://goo.gl/Sdp0m> (at groups.google.com)
18:59 robert7811 I have now created new instance with version 3.3.1. I have created a two node setup gluster1.coetruman.com and gluster2.coetruman.com. I have done a probe from each and they have identified each other just fine. gluster1 => "Number of Peers: 1  Hostname: gluster2.coetruman.com Uuid: 6a67ddd1-44d2-47b1-90d6-b125529a0ddd State: Peer in Cluster (Connected)". gluster2 => "Number of Peers: 1  Hostname: gluster1.coetruman.com Uuid: b1
19:00 robert7811 I then try to create a new volume "gluster volume create assets replica 2 transport tcp gluster1.coetruman.com:/mnt/ebs-volume/assets gluster2.coetruman.com:/mnt/ebs-volume/assets". Resulting in "Host gluster1.coetruman.com not a friend". Seems to be the same results regardless of version used. Thoughts?
19:01 semiosis robert7811: please use a pastebin-type site (pastie,fpaste,dpaste...) for showing us command output :)
19:01 purpleidea joined #gluster
19:03 JoeJulian robert7811: Do both fqdn resolve correctly on both servers?
19:04 robert7811 I am using the web interface (webchat.freenode.net) and I am not sure where or how those commands work. Should I be using a true irc client to send details with?
19:04 JoeJulian Just open another window and go to http://fpaste.org . Paste your output and submit. It'll return with a link that you paste here.
19:05 glusterbot Title: Fedora Pastebin (at fpaste.org)
19:05 esm_ joined #gluster
19:12 robert7811 Here are the links for the output you need. http://fpaste.org/5OXt/ http://fpaste.org/du0E/ http://fpaste.org/n2Az/ http://fpaste.org/38GO/
19:12 glusterbot Title: Viewing Paste #251617 (at fpaste.org)
19:14 robert7811 Here is the output when I try to create the volume and mirror data http://fpaste.org/Ve3Q/
19:14 glusterbot Title: Viewing Paste #251618 (at fpaste.org)
19:17 seanh-ansca joined #gluster
19:26 robert7811 Here is an updated ping for gluster1 to gluster2. I created it incorrectly on fpaste http://fpaste.org/bi0A/
19:26 glusterbot Title: Viewing Paste #251624 (at fpaste.org)
19:27 inodb joined #gluster
19:27 inodb left #gluster
19:28 ika2810 left #gluster
19:29 JoeJulian was on a phone call.
19:30 JoeJulian robert7811: So are you using dns or hosts?
19:30 DaveS_ joined #gluster
19:32 robert7811 DNS. Host files have not been touched in these new instances.
19:33 DaveS_ joined #gluster
19:38 seanh-ansca joined #gluster
19:47 JoeJulian robert7811: Does "host gluster1.coetruman.com" resolve on both servers?
19:49 robert7811 same output for both servers http://fpaste.org/OEv3/
19:49 glusterbot Title: Viewing Paste #251634 (at fpaste.org)
20:02 davdunc joined #gluster
20:02 davdunc joined #gluster
20:05 JoeJulian hmm, that's special...
20:05 robert7811 I created a quick list of steps I am doing to create the instance and install glusterfs on them. I hope this helps http://fpaste.org/u2j1/
20:05 glusterbot Title: Viewing Paste #251652 (at fpaste.org)
20:05 JoeJulian Oh...
20:05 JoeJulian Let me see /etc/hosts on gluster1
20:08 inodb joined #gluster
20:11 robert7811 Here you go http://fpaste.org/Fy8y/
20:11 glusterbot Title: Viewing Paste #251655 (at fpaste.org)
20:11 JoeJulian hmm, not that then...
20:12 JoeJulian Ok, let's see what's in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
20:14 nightwalk joined #gluster
20:14 andreask joined #gluster
20:16 robert7811 Had to do it in two pastes http://fpaste.org/PV6c/ http://fpaste.org/Aogf/
20:16 glusterbot Title: Viewing Paste #251659 (at fpaste.org)
20:18 TSM2 joined #gluster
20:20 JoeJulian This doesn't make any sense. It says it did it.... Try restarting both glusterd.
20:23 robert7811 I have bounced the service on both boxes. Would you like me to try and create the volume again on gluster1?
20:24 JoeJulian yep
20:25 robert7811 http://fpaste.org/3DOQ/
20:25 glusterbot Title: Viewing Paste #251664 (at fpaste.org)
20:31 robert7811 Here is the updated log file contents if needed http://fpaste.org/85vQ/
20:31 glusterbot Title: Viewing Paste #251666 (at fpaste.org)
20:31 JoeJulian Try shutting down glusterd and running "glusterd --debug" on each and see if you can spot anything obvious. I have to go beat my head against a wall (supporting a windows box). I'll be back in a little while.
20:42 Bullardo joined #gluster
20:43 NcA^ If I point the FUSE client at a geo-replicated brick, will it attempt to connect to the main volume? And is there a way to tune the timeout/tolerance of this?
20:47 robert7811 I have some debugging details that may help http://fpaste.org/mB0C/ . It seems glusterfs does not know gluster1.coetruman.com resolves to the local machine when asked to create a volume. Is there a way to tell it the name is local? I even tried running the volume creation command on gluster2, with the same result: it does not recognize one of the defined instances as the local machine.
20:47 glusterbot Title: Viewing Paste #251670 (at fpaste.org)
20:48 semiosis robert7811: i add a hosts entry to map the systems dns name to 127.0.0.1
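
The workaround semiosis describes amounts to an /etc/hosts line on each server mapping that server's own DNS name to loopback; the hostnames below are the ones used in this log.

    # /etc/hosts on gluster1
    127.0.0.1   localhost gluster1.coetruman.com
    # /etc/hosts on gluster2
    127.0.0.1   localhost gluster2.coetruman.com
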
20:50 zaitcev joined #gluster
20:52 robert7811 That did the trick. The volume is now created. Is this something that will always need to be done in the future? Just want to document if this is norm to do for glusterfs.
20:53 semiosis hmm, not sure
20:56 semiosis though fwiw i do have puppet set this up on all my machines
21:01 robert7811 Either way I will make a note to run the debug to verify it does not need to be set. Thanks for all the help. Does the glusterfs community have a donation box?
21:06 johnmark robert7811: oh uh.... sure! *cough* why yes, I believe it's at my home address
21:06 johnmark robert7811: let me fish that out for you... ;)
21:28 nightwalk joined #gluster
21:33 rwheeler joined #gluster
21:41 berend` joined #gluster
21:42 JoeJulian fwiw, I don't have my hostname or fqdn in my hosts file.
21:44 m0zes anyone going to sc12? /me waves from the convention center
21:44 elyograg i need to get back to keystone with gluster-swift.  I finally figured out what JoeJulian meant when he said that i needed to name my volumes according to the keystone ID ... it's a 32 byte hex string mapping to the tenant name.  would it be a really messy thing to have it use a meaningful name, such as the tenant name itself?
21:45 JoeJulian I have no idea. :/
21:45 semiosis m0zes: sounds like fun
21:46 semiosis i'm going to monitorama in march
21:46 m0zes semiosis: it certainly looks fun. this is my first big convention.
21:47 semiosis should have asked johnmark for a pack of stickers :)
21:48 elyograg having volume names like 1918b675fa1f4b7f87c2bb3688f6f2f7 would really be counter-intuitive, unless I could have two different names in gluster refer to the same volume, so that swift could have its ugly name but I could mount FUSE/NFS/Samba with a name that actually means something.
21:56 inodb joined #gluster
22:03 Bullardo joined #gluster
22:03 NcA^ Does anyone have issues on RHEL 6.3 with the FUSE client hanging boot, waiting to mount "local" filesystems?
22:05 twx_ _netdev in fstab I belive
22:08 inodb joined #gluster
22:10 JoeJulian +1
22:11 elyograg NcA^: as twx_ just said, though somewhat cryptically, add _netdev to the options for the FUSE mount in /etc/fstab.  This will make the mount process skip that filesystem during normal mounting, and wait to mount it later.  You're likely to see a complaint about _netdev being an unknown option, but despite that message, the mount will work, unless there's some other problem.  I think that problem might be fixed in 3.1.1, but I am not certain.
22:12 elyograg 3.3.1, not 3.1.1.
22:12 JoeJulian Nope, it still makes that noise.
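
An example of the fstab entry elyograg describes, with an assumed mount point and the volume name from earlier in this log; _netdev keeps the mount out of the normal "local filesystems" pass at boot and defers it until the network is up.

    gluster1.coetruman.com:/assets   /mnt/assets   glusterfs   defaults,_netdev   0 0
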
22:15 NcA^ thanks twx_ elyograg & JoeJulian
22:16 NcA^ odd thing is that it only happens on some VM's in one specific environment
22:16 Bullardo joined #gluster
22:21 mnaser joined #gluster
22:22 twx_ sorry for the short answer, kinda googleable though, wasn't it? :)
22:22 wintix can someone shed some light on the gid-timeout=1 mount option? i saw it on the mailinglist regarding a known bottleneck with performance issues but couldn't find out what exactly it does.
22:24 NcA^ no worries twx_ , was a spur-of-the-moment kind of thing and I happened to be idling
22:27 nueces joined #gluster
22:30 Mo_ joined #gluster
22:47 xillver joined #gluster
22:47 nightwalk joined #gluster
23:25 dag_ joined #gluster
23:26 rbergeron joined #gluster
23:26 Triade1 joined #gluster
23:27 TSM2 joined #gluster
23:28 aliguori joined #gluster
23:29 dstywho joined #gluster
23:29 xymox joined #gluster
23:29 JordanHackworth joined #gluster
23:29 joeto joined #gluster
23:29 circut joined #gluster
23:34 XmagusX joined #gluster
23:35 duerF joined #gluster
23:35 nightwalk joined #gluster
23:35 sjoeboo joined #gluster
23:40 arusso joined #gluster
23:56 rubbs joined #gluster
23:59 jayunit100_ joined #gluster
