
IRC log for #gluster, 2013-06-12


All times shown according to UTC.

Time Nick Message
00:07 mooperd joined #gluster
00:10 badone__ joined #gluster
00:18 andreask joined #gluster
00:19 MrNaviPa_ joined #gluster
00:23 mooperd joined #gluster
00:25 bulde joined #gluster
00:28 Hchl joined #gluster
00:29 jag3773 joined #gluster
00:36 MrNaviPacho joined #gluster
00:45 Hchl joined #gluster
00:52 bala joined #gluster
01:09 harish joined #gluster
01:29 bulde joined #gluster
01:31 hjmangalam1 joined #gluster
01:50 hjmangalam1 joined #gluster
01:51 harish joined #gluster
01:57 nightwalk joined #gluster
02:03 hjmangalam1 joined #gluster
02:07 puebele1 joined #gluster
02:13 hjmangalam1 joined #gluster
02:14 purpleidea joined #gluster
02:14 purpleidea joined #gluster
02:36 jbrooks joined #gluster
02:46 jiffe1 joined #gluster
02:48 hjmangalam1 joined #gluster
02:50 m0zes joined #gluster
02:54 bharata joined #gluster
03:05 hagarth joined #gluster
03:27 mohankumar__ joined #gluster
03:27 badone joined #gluster
03:35 Hchl joined #gluster
03:44 hjmangalam1 joined #gluster
03:47 sgowda joined #gluster
03:49 jbrooks joined #gluster
03:50 hjmangalam1 joined #gluster
03:54 Hchl joined #gluster
04:13 glusterbot New news from newglusterbugs: [Bug 958781] KVM guest I/O errors with xfs backed gluster volumes <http://goo.gl/4Goa9>
04:24 Hchl joined #gluster
04:24 y4m4 joined #gluster
04:38 deepakcs joined #gluster
04:39 hajoucha joined #gluster
04:50 Hchl joined #gluster
04:50 vpshastry joined #gluster
04:52 CheRi joined #gluster
05:12 badone joined #gluster
05:14 glusterbot New news from newglusterbugs: [Bug 962619] glusterd crashes on volume-stop <http://goo.gl/XXzSY>
05:17 Hchl joined #gluster
05:22 lalatenduM joined #gluster
05:24 lala_ joined #gluster
05:29 sgowda joined #gluster
05:33 lalatenduM joined #gluster
05:34 shireesh joined #gluster
05:40 VeggieMeat joined #gluster
05:43 psharma joined #gluster
05:48 guigui3 joined #gluster
05:49 pkoro joined #gluster
06:03 bala1 joined #gluster
06:05 StarBeast joined #gluster
06:06 shylesh joined #gluster
06:09 rgustafs joined #gluster
06:10 bala1 joined #gluster
06:11 rastar joined #gluster
06:13 satheesh joined #gluster
06:16 jtux joined #gluster
06:16 vimal joined #gluster
06:20 ngoswami joined #gluster
06:20 raghu joined #gluster
06:28 Hchl joined #gluster
06:30 ollivera_ joined #gluster
06:32 sgowda joined #gluster
06:44 ramkrsna joined #gluster
06:44 ramkrsna joined #gluster
06:50 ekuric joined #gluster
06:57 6JTAAU2BJ joined #gluster
07:03 rastar joined #gluster
07:06 satheesh joined #gluster
07:08 rotbeard joined #gluster
07:10 andreask joined #gluster
07:36 aravindavk joined #gluster
07:48 CheRi joined #gluster
07:55 rb2k joined #gluster
08:12 a2 joined #gluster
08:12 Hchl joined #gluster
08:17 mooperd joined #gluster
08:21 jbrooks joined #gluster
08:50 Hchl joined #gluster
09:10 vshankar joined #gluster
09:14 satheesh joined #gluster
09:14 hajoucha joined #gluster
09:21 realdannys1 joined #gluster
09:24 realdannys1 Can anyone see from the cli.log what's going wrong with my volume create?  http://fpaste.org/18073/37098943/
09:24 glusterbot Title: #18073 Fedora Project Pastebin (at fpaste.org)
09:24 realdannys1 been trying for 3 days now to create a simple volume!
09:24 ricky-ticky joined #gluster
09:30 vrturbo joined #gluster
09:31 vrturbo joined #gluster
09:36 satheesh joined #gluster
09:43 ctria joined #gluster
09:49 zetheroo joined #gluster
09:50 zetheroo we had a power outage (unscheduled) and one of the two gluster servers went down. The gluster on the gluster server that did not go down is read-only.
09:50 zetheroo how do I fix this?
09:51 zetheroo I have powered up the other gluster server but it won't mount the gluster
09:51 zetheroo the brick is mounted though
09:51 zetheroo /dev/mapper/vgmars-lvmars  5.5T  882G  4.6T  16% /mnt/brick
09:52 ekuric joined #gluster
09:55 ekuric joined #gluster
09:55 social__ is there any schedule for gluster releases? I'd like to know how long it'll take to release 3.4.0?
09:58 Hchl joined #gluster
10:08 ujjain joined #gluster
10:14 shylesh joined #gluster
10:17 waldner joined #gluster
10:37 arusso_znc joined #gluster
10:42 edward1 joined #gluster
10:47 shylesh joined #gluster
11:00 matiz joined #gluster
11:01 manik joined #gluster
11:03 andreask joined #gluster
11:07 isomorphic joined #gluster
11:07 purpleidea joined #gluster
11:07 purpleidea joined #gluster
11:08 sgowda joined #gluster
11:13 ccha joined #gluster
11:14 puebele1 joined #gluster
11:18 kanagaraj joined #gluster
11:18 kanagaraj can someone pls tell me where can i get gluster-swift rpms for fedora 18?
11:27 Hchl joined #gluster
11:28 kkeithley @yum
11:28 glusterbot kkeithley: The official community glusterfs packges for RHEL/CentOS/SL (and Fedora 17 and earlier) are available here http://goo.gl/s077x
11:29 kkeithley kanagaraj: ^^^
11:29 kanagaraj kkeithley, thanks
11:30 kkeithley Fedora 18 is there too
11:31 rastar joined #gluster
11:32 kkeithley You can also get them from an ordinary yum update.
11:32 CheRi joined #gluster
11:33 kkeithley You can also get them from an ordinary yum update on Fedora 18
11:33 kanagaraj kkeithley, got it, thanks
11:34 kanagaraj kkeithley, looks like this will work with only glusterfs-3.3 not with the recent 3.4 beta releases
11:35 kkeithley 3.4.0beta2 is at http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.0beta2/.  3.4.0beta3 will be there as soon as I finish assembling the repo
11:35 glusterbot <http://goo.gl/bM4y5> (at download.gluster.org)
11:36 kkeithley I.e. in about 10 minutes
11:39 kanagaraj kkeithley, thanks a lot.
11:47 kkeithley 3.4.0beta3 repo is ready now
11:55 kanagaraj kkeithley, thanks, it worked.
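For anyone following kkeithley's pointers above, a minimal sketch of wiring up the qa-releases repo on Fedora 18; the repo filename and the exact subdirectory layout under qa-releases/ are assumptions, so check the directory index first:

    # /etc/yum.repos.d/glusterfs-qa.repo -- hypothetical filename; verify the real
    # path under qa-releases/ (e.g. 3.4.0beta3/) before using it as baseurl
    [glusterfs-qa]
    name=GlusterFS QA releases (3.4.0 betas)
    baseurl=http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.0beta3/
    enabled=1
    gpgcheck=0

    # then install from it; the exact package names for the swift integration
    # are listed in the repo index
    yum install glusterfs glusterfs-server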
11:55 satheesh1 joined #gluster
11:56 ujjain2 joined #gluster
12:09 satheesh joined #gluster
12:10 ricky-ticky joined #gluster
12:18 Hchl joined #gluster
12:22 kkeithley @forget yum
12:22 glusterbot kkeithley: The operation succeeded.
12:22 fleducquede joined #gluster
12:23 kkeithley @learn yum as The official community glusterfs packges for RHEL, CentOS, SL, Fedora 17 and earlier, and Fedora 18 arm/armhfp are available here http://goo.gl/s077x
12:23 glusterbot kkeithley: The operation succeeded.
12:23 kkeithley @yum
12:23 glusterbot kkeithley: The official community glusterfs packges for RHEL, CentOS, SL, Fedora 17 and earlier, and Fedora 18 arm/armhfp are available here http://goo.gl/s077x
12:23 satheesh1 joined #gluster
12:24 kkeithley @forget yum
12:24 glusterbot kkeithley: The operation succeeded.
12:25 kkeithley @learn yum as The official community glusterfs packges for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x
12:25 glusterbot kkeithley: The operation succeeded.
12:26 mohankumar__ joined #gluster
12:28 kkeithley @learn beta-yum as The official community glusterfs packges for RHEL 6 (including CentOS, SL, etc.) Fedora 17-19 (i386, x86_64, arm, armhfp), and Pidora are available at http://download.gluster.org/pub/gluster/glusterfs/qa-releases/
12:28 glusterbot kkeithley: The operation succeeded.
12:28 kkeithley @beta-yum
12:28 glusterbot kkeithley: The official community glusterfs packges for RHEL 6 (including CentOS, SL, etc.) Fedora 17-19 (i386, x86_64, arm, armhfp), and Pidora are available at http://goo.gl/LGV5s
12:29 kkeithley @forget beta-yum
12:29 glusterbot kkeithley: The operation succeeded.
12:29 kkeithley @learn beta-yum as The official community glusterfs packges for RHEL 6 (including CentOS, SL, etc.), Fedora 17-19 (i386, x86_64, arm, armhfp), and Pidora are available at http://goo.gl/LGV5s
12:29 glusterbot kkeithley: The operation succeeded.
12:30 kkeithley @forget yum
12:30 glusterbot kkeithley: The operation succeeded.
12:32 kkeithley @learn yum as The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
12:32 glusterbot kkeithley: The operation succeeded.
12:33 kkeithley @yum
12:33 glusterbot kkeithley: The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
12:33 realdannys1 @kkeithley - I had the same problem on a new EC2 instance, I think I must be having issues formatting the EBS volume or something
12:34 ndevos @yum repo
12:34 glusterbot ndevos: The official glusterfs packages for RHEL/CentOS/SL are available here: http://goo.gl/s077x
12:34 kkeithley @forget yum repo
12:34 glusterbot kkeithley: The operation succeeded.
12:34 ndevos hey!
12:34 kkeithley @learn yum repo as The official community glusterfs packages for RHEL (including CentOS, SL, etc), Fedora 17 and earlier, and Fedora 18 arm/armhfp are available at http://goo.gl/s077x. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates repository.
12:34 glusterbot kkeithley: The operation succeeded.
12:35 ndevos haha, you cant have enough of them!
12:35 kkeithley ;-)
12:36 kkeithley realdannys1: You might be right. I haven't ever used EC2 so I don't have any good suggestions for you.
12:37 jbrooks joined #gluster
12:37 kkeithley you should be able to create a brick pretty much anywhere though, so I'm kinda stumped atm
12:39 kkeithley 3.4.0beta3 rpms are available now from http://goo.gl/LGV5s
12:39 glusterbot Title: Index of /pub/gluster/glusterfs/qa-releases (at goo.gl)
12:40 Dave2 Hrm. I seem to have a pair of bricks which reckon they're in a split-brain state, but "gluster volume heal <volume> info split-brain" shows a load of entries where the path on brick is "/". Any idea what might cause this?
12:41 Dave2 (This is in a volume with 4 nodes, each with one brick, with nodes 1 and 2 replicating, and nodes 3 and 4 replicating)
12:43 Dave2 (It's GlusterFS 3.3.1)
12:43 duerF joined #gluster
12:43 JoeJulian Dave2: I've seen that and I still am not entirely sure why. Make sure that ${brick_root}/.glusterfs/00/00/0000*0001 is a symlink (there's a rare bug that makes it a directory). If it is, 0 out trusted.afr.* ,,(extended attributes)
12:43 glusterbot Dave2: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
12:45 edward1 joined #gluster
12:46 Dave2 JoeJulian: it appears to be a symlink; should I be looking at the extended attributes for that symlink?
12:46 JoeJulian No, extended attributes for the brick root
12:48 satheesh joined #gluster
12:52 Dave2 JoeJulian: zero out = "setfattr -n trusted.afr.gv0-client-2 -v 0x0 brick1" (and the same for client-3), or would that be wrong?
12:53 JoeJulian You have to have the full set of 0s or you'll get an error.
12:54 Dave2 OK, I was wondering about that
12:55 Dave2 Should that just clean itself up?
12:55 aliguori joined #gluster
12:57 _ndevos joined #gluster
12:58 jbrooks joined #gluster
13:00 Dave2 (And what might be the root cause behind this?)
13:01 JoeJulian Once they're set you shouldn't get any *new* entries in the info split-brain list. The old ones remain as if it's a log.
13:02 Dave2 ah, right
13:02 JoeJulian And if I could have figured that out, I would have filed the bug report. I think I remember seeing something about this, though, so I think it'll be fixed in 3.4 and 3.3.2.
13:04 Dave2 Hah, yes, always the way.
13:05 Dave2 Is the split-brain output meant to be a log, or is it just a side-effect of what's going on here?
13:06 JoeJulian It's meant to be a log. That one I did file an enhancement request on.
13:06 JoeJulian So is freenode using gluster?
13:06 Dave2 This is a work thing, rather than a freenode thing
13:07 JoeJulian cool
13:08 Dave2 And probably my last question, as I can't find it in the docs, is there a way to clear out the log once you've fixed the condition?
13:10 JoeJulian That's the point of my enhancement request. I think downing all glusterd and bringing them back up will do that.
13:10 Dave2 ahh, OK, thanks.
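A sketch of the inspect-and-reset steps JoeJulian describes above. The brick path /export/brick1 is hypothetical, and the gv0-client-2/3 attribute names follow Dave2's example at 12:52; as noted at 12:53, the value must be the full-length run of zeros (trusted.afr holds three 4-byte counters):

    # read the changelog attributes on the brick root (glusterbot's getfattr factoid)
    getfattr -m . -d -e hex /export/brick1

    # zero the pending counters with a full-length value; a short 0x0 is rejected
    setfattr -n trusted.afr.gv0-client-2 -v 0x000000000000000000000000 /export/brick1
    setfattr -n trusted.afr.gv0-client-3 -v 0x000000000000000000000000 /export/brick1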
13:11 vpshastry left #gluster
13:11 satheesh joined #gluster
13:15 dewey joined #gluster
13:17 dewey_ joined #gluster
13:20 saurabh joined #gluster
13:24 bulde joined #gluster
13:33 chouchins joined #gluster
13:34 ekuric joined #gluster
13:34 chouchins Is there a Gluster booth this year at the Rh summit?  It's not where it was last year.
13:35 purpleidea joined #gluster
13:35 purpleidea joined #gluster
13:37 manik joined #gluster
13:42 chouchins joined #gluster
13:46 kkeithley A Gluster booth? There was a RHS booth last year. johnmark had a dev room last night. I wasn't able to get down there for it.
13:46 charlescooke_ joined #gluster
13:47 mohankumar__ joined #gluster
13:47 guigui1 joined #gluster
13:49 semiosis Expo Hall b is where all the open source projects are
13:49 semiosis Including gluster
13:58 vpshastry joined #gluster
13:58 vpshastry left #gluster
13:59 bugs_ joined #gluster
14:04 rastar joined #gluster
14:06 badone joined #gluster
14:08 MrNaviPacho joined #gluster
14:12 dbruhn joined #gluster
14:12 mohankumar__ joined #gluster
14:26 hajoucha joined #gluster
14:29 dbruhn is there a way to force a brick to resync after it's been offline from a replication group for a while?
14:30 hajoucha so we have built gluster from git on fedora 18 today and tried it with new kernel 3.9.5. Glusterfs native client fails to copy big files (4GB) and nfsmount as well. We copy those files from local filesystem to a mounted volume. Transport is rdma.
14:30 hajoucha rather randomly the copied file is smaller than original.
14:32 hajoucha for copying we use either "cp -av" or a quickly made program which only opens a file, reads and writes. Both experience similar behaviour - simply the resulting file is (significantly) smaller.
14:35 clutchk any reason why a brick remove would cause the brick to take on more data?
14:36 hajoucha jclift_: .
14:38 hagarth joined #gluster
14:38 chirino joined #gluster
14:43 rgustafs joined #gluster
14:47 mohankumar__ joined #gluster
14:57 hagarth joined #gluster
15:03 failshell joined #gluster
15:05 portante joined #gluster
15:08 bambi23 joined #gluster
15:10 manik joined #gluster
15:10 hjmangalam1 joined #gluster
15:13 bulde joined #gluster
15:16 mynameisbruce joined #gluster
15:19 * bulde @ Gluster Developer Lounge, Red Hat Summit, Boston - MA
15:21 bala joined #gluster
15:28 chirino joined #gluster
15:31 jthorne joined #gluster
15:41 zaitcev joined #gluster
15:49 realdannys1 FfS!!
15:51 realdannys1 Three days on this and all I had to do was use the public DNS rather than the elastic IP and the volume creates
15:51 realdannys1 I could have done that on the last 4 instances I had!
15:51 kkeithley glad you figured it out
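A sketch of the sequence realdannys1 describes: creating the volume against the instance's EC2 public DNS name instead of the elastic IP. The hostname, volume name and brick path here are hypothetical:

    # on the gluster server (single-brick volume, matching the single instance above)
    gluster volume create gv0 ec2-203-0-113-10.compute-1.amazonaws.com:/mnt/brick
    gluster volume start gv0
    gluster volume info gv0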
15:56 MrNaviPa_ joined #gluster
16:06 zetheroo left #gluster
16:07 bambi23 joined #gluster
16:14 vpshastry joined #gluster
16:15 hjmangalam1 joined #gluster
16:19 vpshastry left #gluster
16:21 neofob left #gluster
16:24 MrNaviPa_ joined #gluster
16:39 Koma joined #gluster
16:41 bennyturns joined #gluster
16:44 bulde joined #gluster
16:44 stickyboy joined #gluster
17:04 DEac-_ joined #gluster
17:08 Koma is it possible to grow a volume while adding more and more nodes to the cluster?
17:10 jdarcy joined #gluster
17:23 ctria joined #gluster
17:26 kkeithley It is possible to add more nodes, yes, and thereby grow the volume. Read the docs about re-balancing after you add nodes.
17:26 Koma cool
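A sketch of what kkeithley describes: add bricks, then rebalance. The volume name, hostnames and brick paths are hypothetical; for a replicated volume, bricks must be added in multiples of the replica count.

    gluster volume add-brick myvol server5:/export/brick server6:/export/brick
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status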
17:27 Koma and  gluster can determine where to place the files in order to create less network usage?
17:27 mtanner__ joined #gluster
17:28 Koma for example, if I've 10 nodes and there's more usage of a particular file on node "6", will the file be synced onto node 6?
17:28 kkeithley gluster uses elastic hashing to place files. Network usage doesn't enter into it
17:28 Koma ?
17:29 kkeithley Don't know what you mean by 'synced on the node six'
17:29 Koma the files are spread and made available on the whole cluster
17:30 Koma but the single file is copied 2, 3 or many times based on the redundancy that I choose
17:31 Koma if a file is cloned 3 times I've 3/10 possibilities that the file is on the same node that requests the file itself
17:31 Koma will gluster move the file to the more convenient node based on the request?
17:31 MinhP joined #gluster
17:32 H___ joined #gluster
17:32 kkeithley I don't know what you mean by sparsed.  If you've got a 10 brick distribute volume the file is written to one brick based on the hash. It never moves
17:33 jdarcy Koma: GlusterFS does not use all possible combinations of N drives for replication.  It divides the bricks into N-brick sets, then distributes randomly across those.
17:33 H__ joined #gluster
17:33 kkeithley If you've got a 10 brick replica 2 volume, (5x2) then the client, where ever that is, writes to two bricks, one from each set of five.
17:33 jdarcy Koma: There is a feature to *create* files locally, instead of on a random replica set, but subsequent usage won't cause it to move.
17:33 kkeithley In general nothing moves files until you add bricks and rebalance.
17:34 Koma mmm 'k
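To illustrate the 5x2 layout kkeithley mentions: with replica 2, consecutive pairs of bricks on the create command form the replica sets, and files are then distributed across the five pairs. Hostnames and paths are hypothetical.

    gluster volume create myvol replica 2 \
        server1:/export/brick server2:/export/brick \
        server3:/export/brick server4:/export/brick \
        server5:/export/brick server6:/export/brick \
        server7:/export/brick server8:/export/brick \
        server9:/export/brick server10:/export/brick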
17:34 neofob joined #gluster
17:34 Koma just wondering because if I create a file of like 10gb and i request it every time from another node that will cause a lot of network traffic
17:35 jdarcy AFAIK very few distributed file systems do anything much fancier.  Some let you replicate across more different combinations of servers, but dynamic migration can induce its own set of problems so most forego it.
17:35 jdarcy It can be a real shame to migrate a file just after its peak usage has ended, away from the node where it will be used next.
17:35 kkeithley If you're asking about a situation where the clients are on the server, you're correct, you're going to incur a copy over the network when you read it.
17:36 jdarcy Koma: Another thing you can do is move the jobs to the data instead of vice versa.  That's the basic principle of Hadoop, and seems to work pretty well in practice.
17:37 Koma jdarcy please explain
17:37 kkeithley If you're asking about a situation where the clients are on the servers, you're correct, it's likely you're going to incur a copy over the network when you read it.
17:38 jdarcy Koma: If you know a virtual machine or compute job is going to need certain data, you start it up on a machine that has the data.
17:39 Koma and in case of failure to relocate the data?
17:39 Koma ...I'm an idiot
17:39 jdarcy ?
17:39 Koma cp on the same node with the same permission and the data will be automatically on the same node..
17:40 Koma jdarcy thank you
17:40 Koma BRB
17:40 jdarcy Koma: Always happy to help.
17:40 * Koma AFK
17:41 jdarcy (OK, not always.  Some days I'd rather be a snarky twit, just not today.)
17:44 chirino joined #gluster
17:45 kkeithley why, because it's Saint Onouphrius day? Or something else?
17:46 jdarcy kkeithley: Saving it up for Summit.
17:48 arusso joined #gluster
17:51 duerF joined #gluster
18:01 daMaestro joined #gluster
18:07 chouchins joined #gluster
18:08 ramkrsna joined #gluster
18:09 hajoucha joined #gluster
18:21 portante joined #gluster
18:35 duerF joined #gluster
18:37 joelwallis joined #gluster
18:40 y4m4 joined #gluster
18:45 mooperd joined #gluster
18:50 realdannys1 Hmmm I got this on my client EC2 when trying to finally mount a GlusterFS volume - any ideas? @kkeithly http://fpaste.org/18260/37106298/
18:50 glusterbot Title: #18260 Fedora Project Pastebin (at fpaste.org)
18:53 H__ left #gluster
18:56 NcA^ joined #gluster
19:07 JoeJulian @shutdown
19:07 JoeJulian @quit
19:07 JoeJulian hmm...
19:11 glusterbot joined #gluster
19:14 sohoo joined #gluster
19:18 realdannys1 hmmm just keep getting mount failed no matter what I try
19:32 tg2 @realdanny, looks like one of the gluster servers isn't running
19:32 tg2 or not accessible from the client
19:32 tg2 > failed (No route to host)
19:35 vedranm joined #gluster
19:35 vedranm hey guys
19:36 vedranm did I understand the documentation correctly that distributed volume means "my files are somewhere across the disks, but a single file is always in a single disk"
19:36 vedranm while stripe means "my files are in parts all over, a single file is spread across multiple disks"
19:39 bambi23 joined #gluster
19:43 ccha joined #gluster
19:44 MrNaviPa_ joined #gluster
19:48 realdannys1 @tg2 I've only got one instance running, I've turned Selinux off on it just to make sure, it's on EC2 and I've setup the security groups to allow all ports needed and allowed each IP to the other...
19:50 chirino joined #gluster
19:57 jones_d joined #gluster
19:59 rb2k joined #gluster
20:08 joshcarter joined #gluster
20:09 joshcarter joined #gluster
20:16 realdannys1 I've opened every port possible in the security settings
20:16 realdannys1 I can easily say out of everything I've ever done on a server, setting up gluster has been the least enjoyable!
20:17 sohoo Anyone here know how to fix volume deviations in a two-brick replication configuration?  For example, two drives have a difference of 1 GB. I ran rsync -PavhS --xattrs --delete on the two bricks but the deviation still exists
20:18 andreask joined #gluster
20:24 sohoo it looks like im stuck with this 1GB deviation forever
20:27 sohoo self-heal didn't help either :)
20:28 sohoo the 2 bricks are not the same, it's stuck as an alert on zabbix
20:45 bulde joined #gluster
20:47 bennyturns joined #gluster
21:21 MrNaviPacho joined #gluster
21:22 RangerRick2 joined #gluster
21:31 mooperd joined #gluster
21:33 realdannys1 Are there ANY EC2 users in here at all??
21:37 brosner joined #gluster
21:39 brosner i am trying to figure out why my replicated setup of two nodes is using so much CPU (both nodes) i did an strace on both and something like https://gist.github.com/brosner/1491b816b2df7b8426cc is appearing in what appears to be an infinite loop. does that make sense to anyone more familiar with the internals? is there anywhere else i can look to figure out the problem.
21:39 glusterbot <http://goo.gl/dOOY0> (at gist.github.com)
21:40 majeff joined #gluster
21:58 realdannys1 joined #gluster
22:04 semiosis realdannys1: i use EC2
22:05 semiosis but i'm at RH summit so my availability on IRC this week will be spottier than usual
22:05 realdannys1 semiosis - how did you get the client to speak to the server (gluster of course)
22:05 realdannys1 I have the volume set up and working finally (I needed to reference the public DNS rather than the elastic IP to have it work!)
22:05 semiosis the regular way... mount -t glusterfs server:volume /mount/point
22:05 semiosis yes i strongly recommend using the public-hostname of ec2 instances
22:05 realdannys1 but now my client won't mount the drive, it keeps giving me an error but I can't find any security measures that would stop it mounting
22:06 semiosis imho elastic ip is unnecessary, but if you want go ahead
22:06 realdannys1 well - it wouldn't even make the volume with the elastic IP
22:06 semiosis in fact, i strongly recommend making new dns CNAMEs for your gluster servers (server1.domain.net...)
22:06 semiosis and mapping those to the public-hostnames of your ec2 instances (elastic or not)
22:06 realdannys1 however I can't get the drive to mount on my client with elastic IP, or public DNS
22:06 realdannys1 yeah? oh i'll do that
22:07 semiosis realdannys1: security groups?
22:07 realdannys1 yeah the EC2 security groups
22:07 semiosis @ports
22:07 glusterbot semiosis: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
22:07 realdannys1 I've opened all the ports I could possibly open, yet still the client won't mount the drive
22:07 realdannys1 opened all those :/
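For comparison with glusterbot's port list above, roughly the equivalent iptables rules on a 3.3.x server; an EC2 security group would need the same TCP ranges opened between the instances. The upper bound of the brick range depends on how many bricks the server has had (deleted volumes don't reset the counter), so 24020 here is an assumption.

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (+ rdma)
    iptables -A INPUT -p tcp --dport 24009:24020 -j ACCEPT   # bricks, one port each, counting up
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS and NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT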
22:08 realdannys1 [root@ip-10-202-210-190 ~]# mount -a
22:08 realdannys1 Mount failed. Please check the log file for more details.
22:08 semiosis also you'll probably want to map those new dns CNAMEs to 127.0.0.1 in their respective hosts' hosts file
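A sketch of the setup semiosis describes, with hypothetical names (server1.domain.net standing in for a CNAME that points at the instance's EC2 public-hostname):

    # on the gluster server itself: point its own name at loopback in /etc/hosts
    127.0.0.1   localhost server1.domain.net

    # on the client: mount by that name, or use the equivalent fstab entry
    mount -t glusterfs server1.domain.net:/myvol /mount/point
    # /etc/fstab:  server1.domain.net:/myvol  /mount/point  glusterfs  defaults,_netdev  0 0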
22:10 realdannys1 this is what my log is kicking out - http://fpaste.org/18318/10750151/
22:10 glusterbot Title: #18318 Fedora Project Pastebin (at fpaste.org)
22:12 ferringb joined #gluster
22:15 clag_ joined #gluster
22:15 realdannys1 So @semiosis if you were to mount a volume on a client - what would you use for the server, your domain names?
22:15 semiosis yes
22:16 semiosis ec2 provides split horizon dns, so the instance's (or EIP's) public-hostname resolves to local-ipv4 from within ec2, and public-ipv4 from outside on the net
22:19 realdannys1 ok, so I'm using Cloudflare, so at the minute I'm creating a cname "uploads" so I have uploads.myurl.com
22:20 realdannys1 which will point to the elastic IP for the instance with gluster on
22:20 rb2k joined #gluster
22:21 realdannys1 having said that it won't let me point a cname to an IP so I've had to set up an A record instead
22:22 realdannys1 oh, sorry just realised you said to make a cname pointing to the DNS
22:23 semiosis ec2 has four kinds of addresses local-vs-public & ipv4-vs-hostname... for example, local-ipv4 (the 10.x addr) or public-hostname (the ec2-ip-ip-ip-ip.amazonaws.com)
22:23 semiosis even elastic ips have their own hostnames too
22:23 semiosis and you can map a cname to one
22:24 semiosis just ping the ip (or do dig -x) to get a reverse dns lookup for the EIP's public-ipv4, and you'll get its public-hostname
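An example of the reverse lookup semiosis suggests; the address and output here are hypothetical:

    dig -x 203.0.113.10 +short
    # -> ec2-203-0-113-10.compute-1.amazonaws.com.
    # that public-hostname is what the DNS CNAME should point at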
22:26 realdannys1 gotcha, i didn't know the public dns of an instance always stayed the same - which is why I've always assigned elastic IPs straight away
22:26 realdannys1 anyway I'm not sure if this is the root of my problem anyway but I've set up a cname anyway i'll give it time to propagate and go from there
22:27 realdannys1 did you see the pastebin with the log errors?
22:27 realdannys1 I even probed the client from the server to check and it can probe it just fine so I don't understand why it won't mount
22:34 realdannys1 Ok, cname created, same problem occurs, mount failed :(
22:39 badone joined #gluster
22:44 gmcwhistler joined #gluster
22:44 gmcwhistler 2
22:57 semiosis realdannys1: the public-hostname of an elastic ip always stays the same.  instance public-ips change
22:58 realdannys1 oh ok, so for cnames, you'd reverse the elastic IP and use that hostname?
22:59 semiosis yep
23:18 realdannys1 @semiosis I did that and it just gave me the same public DNS that I can see in my EC2 panel
23:19 gmcwhistler joined #gluster
23:21 realdannys1 @semiosis - same error message with cname, I'm not sure gluster is supposed to work for me!
23:21 realdannys1 [2013-06-12 19:19:22.589118] E [socket.c:1715:socket_connect_finish] 0-glusterfs: connection to  failed (No route to host)
23:21 realdannys1 [2013-06-12 19:19:22.589219] E [glusterfsd-mgmt.c:1787:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: Transport endpoint is not connected
23:26 y4m4 joined #gluster
23:30 brosner joined #gluster
23:54 StarBeast joined #gluster
