
IRC log for #gluster, 2013-11-16


All times shown according to UTC.

Time Nick Message
00:22 tyl0r joined #gluster
00:25 tyl0r Has anyone here successfully used cluster.min-free-disk with non-uniform bricks? Or rather, has anyone here successfully setup distributed replication on non-uniform disks? I'm looking to add 2 new bricks that are twice the capacity.
00:26 nueces joined #gluster
01:05 semiosis @qa releases
01:05 glusterbot semiosis: The QA releases are available at http://bits.gluster.com/pub/gluster/glusterfs/ -- RPMs in the version folders and source archives for all versions under src/
01:06 semiosis tyl0r: it's all fun & games until a brick fills up.  my advice is not to count on gluster to save you from that.  you might want to use lvm to split your new disks into logical volumes the same size as the other bricks
01:06 semiosis i would, if i could
01:07 tyl0r Okay, thanks for the heads up
01:09 tyl0r While creating the LVM volumes, would you recommend keeping the OS on a separate LV ?
01:09 tyl0r RAID5 => (LV1 for OS, LV2 for brick1, LV3 for brick2) ?
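The LVM layout tyl0r describes could be sketched roughly as below. All device names, volume-group names, and sizes here are assumptions for illustration; adjust them to the actual RAID5 array and to the size of the existing bricks.

```shell
# Sketch of the RAID5 => (OS LV, brick1 LV, brick2 LV) layout discussed
# above. /dev/md0, "gluster_vg", and the sizes are assumed values.
pvcreate /dev/md0
vgcreate gluster_vg /dev/md0

# One LV for the OS, plus brick LVs sized to match the existing bricks
lvcreate -L 50G  -n os_lv     gluster_vg
lvcreate -L 500G -n brick1_lv gluster_vg
lvcreate -L 500G -n brick2_lv gluster_vg

# Format the brick LVs (xfs is a common choice for gluster bricks)
mkfs.xfs /dev/gluster_vg/brick1_lv
mkfs.xfs /dev/gluster_vg/brick2_lv
```

Keeping the OS on its own LV, as tyl0r asks, also means a full brick cannot starve the root filesystem.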
01:28 semiosis oh well
01:45 dkorzhevin joined #gluster
01:55 harish_ joined #gluster
02:14 davidbierce joined #gluster
02:14 semiosis JoeJulian: if you're out there... HALP!
02:14 semiosis or anyone really
02:14 semiosis trying to build 3.5qa1 and getting this: logging.c:29:28: fatal error: gf-error-codes.h: No such file or directory
02:14 semiosis and in fact i can't find gf-error-codes.h in the source tree
02:15 semiosis building on debian wheezy
02:15 semiosis (for packages)
02:30 skered- semiosis: Did you run autogen.sh before configure?
02:30 skered- Based on the srpm file, it's autogen.sh that generates gf-error-codes.h
02:31 semiosis skered-: hmm, i thought that was only for builds from git, not release tarballs
02:31 semiosis i will try though, thanks!!!
03:13 semiosis skered-: looks like that did the trick, thanks again
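The build sequence that resolved the missing-header error above amounts to running autogen.sh before configure, so that generated files such as gf-error-codes.h exist. The tarball name here is an assumption based on the version mentioned in the log.

```shell
# Build from the 3.5qa1 source tarball; autogen.sh regenerates files
# (including gf-error-codes.h) that configure/make expect to find.
tar xf glusterfs-3.5qa1.tar.gz   # archive name is an assumption
cd glusterfs-3.5qa1
./autogen.sh
./configure
make
```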
04:13 _Bryan__ joined #gluster
04:42 tyl0r joined #gluster
05:16 vpshastry joined #gluster
05:16 vpshastry left #gluster
07:45 getup- joined #gluster
07:46 ekuric joined #gluster
08:07 ngoswami joined #gluster
08:55 vpshastry1 joined #gluster
10:12 vpshastry joined #gluster
10:20 diegol__ joined #gluster
10:33 dkorzhevin joined #gluster
10:40 davidbierce joined #gluster
10:55 davidbierce joined #gluster
11:01 ndevos semiosis: hmm, I guess that 'make dist' should generate a tarball that includes that file, did you file a bug for that?
11:01 glusterbot http://goo.gl/UUuCq
11:15 getup- joined #gluster
11:39 juhaj_ Any ideas why "mount -t glusterfs 10.0.4.13:/dir /glusterfs/dir" produces "E [socket.c:2157:socket_connect_finish] 0-dir-client-0: connection to 10.0.6.1:49152 failed (Connection timed out)"?
11:40 juhaj_ Of course, the mount does not work either
11:41 dkorzhevin joined #gluster
11:48 XpineX joined #gluster
12:07 vpshastry joined #gluster
12:07 ndevos juhaj_: can the client that mounts the volume reach 10.0.6.1?
12:08 ndevos juhaj_: upon mounting, the client gets the volume layout; that layout contains the paths to the bricks as they were given on 'gluster volume create'
12:18 hateya joined #gluster
12:22 hateya_ joined #gluster
13:00 mohankumar__ joined #gluster
13:00 juhaj_ I already destroyed the whole volume, but that would explain it. Now, 10.0.6.1 pings from the failing host, but I cannot tell (anymore) if the server was bound to that address (I think it was, because I had not limited it in any way)
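ndevos's point can be checked with a few commands: the brick address in the error comes from the volume definition, not from the address given to mount, so the client must be able to reach every brick. The volume name "dir" and the address/port here are taken from the error message in the log, not from a live setup.

```shell
# On a gluster server: show the brick addresses that clients receive
# in the volume layout at mount time.
gluster volume info dir

# On the failing client: basic reachability, then the brick port itself
# (the timeout above was against 10.0.6.1:49152).
ping -c 3 10.0.6.1
nc -zv 10.0.6.1 49152
```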
13:00 vpshastry joined #gluster
13:00 ThatGraemeGuy joined #gluster
13:00 vpshastry left #gluster
13:08 RedShift joined #gluster
13:16 crashmag joined #gluster
13:31 davidbierce joined #gluster
13:40 ThatGraemeGuy joined #gluster
14:02 hagarth joined #gluster
14:17 DV joined #gluster
14:19 TDJACR joined #gluster
14:22 diegol__ joined #gluster
14:26 vpshastry joined #gluster
14:27 diegol__ joined #gluster
14:29 hateya joined #gluster
14:32 diegol__ joined #gluster
14:34 DV joined #gluster
14:37 diegol__ joined #gluster
14:43 TDJACR joined #gluster
14:44 diegol__ joined #gluster
14:51 hateya joined #gluster
14:52 diegol__ joined #gluster
15:00 diegol__ joined #gluster
15:27 zerick joined #gluster
15:32 _BryanHm_ joined #gluster
15:40 diegows joined #gluster
15:40 vpshastry joined #gluster
15:43 mohankumar__ joined #gluster
15:47 hateya joined #gluster
16:16 vpshastry joined #gluster
16:33 ndevos joined #gluster
17:06 vpshastry left #gluster
17:11 hagarth joined #gluster
17:19 geewiz joined #gluster
17:37 tg2 joined #gluster
17:41 hateya joined #gluster
18:10 vpshastry joined #gluster
18:22 lkoranda joined #gluster
18:27 ndevos joined #gluster
18:27 ndevos joined #gluster
19:07 zerick joined #gluster
20:49 kseifried joined #gluster
20:49 kseifried ping, anyone around?
20:49 kseifried semiosis, ping, you specifically, are you here?
20:50 kseifried semiosis, http://www.gluster.org/community/documentation/index.php/Getting_started_setup_aws mentions you, specifically about not using the internal IP in AWS deployments, but the public IP, trying to track down the exact reason, is it the peer discovery/announce IP, or something else?
20:50 glusterbot <http://goo.gl/MMQwpo> (at www.gluster.org)
21:04 samppah kseifried: afaik the internal IP isn't reserved for a specific host and it can change if you shut down the host
21:04 kseifried right, I'm not talking EIP's though, just the externally facing ones
21:04 kseifried vs the internal 10.* ones
21:04 samppah ahh
21:05 kseifried plus with EIP you then add latency/cost which is not so good =)
21:05 kseifried I've been using the internal 10.* IP's for my gluster in AWS for like... 2-3 years now, no problems noticed
21:06 kseifried trying to figure out what corner/case/etc this is, unfortunately I can't find any good data via google
21:06 calum_ joined #gluster
21:06 kseifried as best I can tell maybe the peer probe gets messed up sometimes but I can't find confirmation of that
21:09 samppah is there any qos or other traffic when using internal IP?
21:10 samppah it's recommended to use hostnames when doing peer probe and creating volumes.. easier to point a hostname to a new IP address than to change the IP in the gluster setup :)
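samppah's hostname-based approach looks roughly like the sketch below. The server names and brick paths are made up for illustration; the point is that if an instance's IP changes (as it can in AWS), only DNS or /etc/hosts needs updating, not the gluster configuration.

```shell
# From one server, probe the peer by hostname rather than IP.
gluster peer probe gluster2.example.com

# Create the volume with hostname-qualified bricks, then start it.
gluster volume create myvol replica 2 \
    gluster1.example.com:/export/brick1 \
    gluster2.example.com:/export/brick1
gluster volume start myvol
```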
21:10 hateya joined #gluster
21:10 kseifried again, not really a concern when using the IP's, especially in a VPC configuration
21:11 samppah no, i mean in general
21:11 kseifried right. I'm not talking in general. I'm talking specifically about AWS =)
21:11 samppah :)
21:14 kseifried I hate this, someone authoritative says "don't do X, it's dangerous" but then leaves out why it is dangerous, or the data to support it.
21:14 samppah nod
21:14 samppah well semiosis is very active here so just wait a little ;)
21:15 samppah although it tends to be a bit quiet here during weekends
21:16 kseifried yah
21:17 lbalbalba joined #gluster
21:21 kseifried hmm the man pages seem out of date or did the backupvolfile-server option go away?
21:22 JoeJulian @tutorial
21:22 glusterbot JoeJulian: I do not know about 'tutorial', but I do know about these similar topics: 'semiosis tutorial'
21:22 JoeJulian @semiosis tutorial
21:22 glusterbot JoeJulian: http://goo.gl/6lcEX
21:22 JoeJulian Not sure if he explains in there, but ...
21:23 kseifried he doesn't, but man that sucks. I'm gonna go file some RFEs
21:23 kseifried the RHS docs cover this, but not the man pages. great
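For reference, the option kseifried is asking about is a mount option: backupvolfile-server names a fallback server the client can fetch the volfile from if the primary is unreachable at mount time. The server names and mount point below are assumptions.

```shell
# Mount sketch using backupvolfile-server -- if gluster1 is down when
# the client mounts, the volfile is fetched from gluster2 instead.
mount -t glusterfs -o backupvolfile-server=gluster2.example.com \
    gluster1.example.com:/myvol /mnt/myvol
```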
21:29 geewiz joined #gluster
21:33 kseifried JoeJulian, : no, I'm just looking at upgrading to 3.4, read the man pages, checked older versions too, a lot of stuff was never documented: https://bugzilla.redhat.com/show_bug.cgi?id=1031328
21:33 glusterbot <http://goo.gl/4e1PjH> (at bugzilla.redhat.com)
21:33 glusterbot Bug 1031328: medium, medium, ---, amarts, NEW , Gluster man pages are out of date.
21:33 kseifried JoeJulian, : I just never really noticed before
21:37 Guest97247 left #gluster
22:00 glusterbot New news from newglusterbugs: [Bug 1031328] Gluster man pages are out of date. <http://goo.gl/4e1PjH>
22:22 lbalbalba joined #gluster
23:04 lbalbalba joined #gluster
23:15 jbrooks joined #gluster
23:22 Liquid-- joined #gluster
23:51 diegol__ joined #gluster
23:57 diegows joined #gluster
