IRC log for #gluster, 2013-09-26

All times shown according to UTC.

Time Nick Message
00:03 andrewklau left #gluster
00:37 recidive joined #gluster
00:43 _pol joined #gluster
00:47 sprachgenerator joined #gluster
00:51 primusinterpares joined #gluster
01:05 bala joined #gluster
01:06 _pol_ joined #gluster
01:14 failshell joined #gluster
01:18 harish_ joined #gluster
01:32 RicardoSSP joined #gluster
01:32 RicardoSSP joined #gluster
01:40 DV joined #gluster
01:52 harish_ joined #gluster
01:55 _pol joined #gluster
02:11 andrewklau joined #gluster
02:36 hagarth NuxRo: around?
02:46 ndarshan joined #gluster
02:56 kanagaraj joined #gluster
03:07 saurabh joined #gluster
03:10 vshankar joined #gluster
03:11 wgao joined #gluster
03:18 _pol joined #gluster
03:20 shubhendu_ joined #gluster
03:23 bulde joined #gluster
03:27 kshlm joined #gluster
03:28 eesullaw joined #gluster
03:33 jag3773 joined #gluster
03:34 davinder joined #gluster
03:35 recidive joined #gluster
03:58 mohankumar joined #gluster
03:58 andrewklau joined #gluster
03:59 itisravi joined #gluster
04:03 vpshastry1 joined #gluster
04:05 andrewklau joined #gluster
04:06 CheRi joined #gluster
04:07 ppai joined #gluster
04:14 _pol joined #gluster
04:15 Shyam joined #gluster
04:25 dusmant joined #gluster
04:39 ndarshan joined #gluster
04:45 shruti joined #gluster
04:46 lalatenduM joined #gluster
04:48 rjoseph joined #gluster
04:48 shubhendu_ joined #gluster
04:58 gdubreui joined #gluster
05:00 bala joined #gluster
05:00 vpshastry1 joined #gluster
05:04 dusmant joined #gluster
05:08 andrewklau joined #gluster
05:11 gdubreui hi, 2 servers are part of a cluster and provide a brick to a volume. How do I make a 3rd server join the cluster?
05:14 shubhendu_ joined #gluster
05:16 pithagorians_ joined #gluster
05:17 nshaikh joined #gluster
05:24 rastar joined #gluster
05:28 aravindavk joined #gluster
05:28 psharma joined #gluster
05:31 raghu joined #gluster
05:31 dusmant joined #gluster
05:34 vpshastry2 joined #gluster
05:36 shubhendu_ joined #gluster
05:36 JoeJulian @later tell gdubreui If you'd stayed long enough to get an answer, it would have been that you need to probe the 3rd server from either of the first two.
05:36 glusterbot JoeJulian: The operation succeeded.
05:36 JoeJulian Ironically, I have no patience for impatience... Hmmm...
05:36 micu1 joined #gluster
05:37 gdubreui JoeJulian, thanks, but it didn't work, resetting things :)
05:37 JoeJulian Wait... you ARE here??? weird. Tab completion didn't work on your nick...
05:38 micu2 joined #gluster
05:39 gdubreui JoeJulian, are you talking about the bot?
05:40 lalatenduM joined #gluster
05:40 JoeJulian No. I'm using XChat. When I start typing a nick I usually hit about 3 letters then the tab key. When the user is still online it finishes the name. If they're not, it doesn't.
05:40 ababu joined #gluster
05:40 JoeJulian I typed gdu<tab> and it stayed gdu so I thought you were gone.
05:41 JoeJulian weird.
05:41 JoeJulian Anyway.... what happened when it didn't work?
05:43 gdubreui JoeJulian, just testing again
05:46 hagarth joined #gluster
05:47 gdubreui JoeJulian, ok I was trying to probe from the 3rd (new) srv
05:48 saurabh joined #gluster
05:50 gdubreui JoeJulian, Is there a way for the 3rd server to join by itself? What's weird is the 2nd server can join by probing the first but the next one can't
05:53 mohankumar joined #gluster
05:57 kshlm gdubreui: you have to always probe a new peer from the cluster, not the other way round.
05:58 JoeJulian gdubreui: an untrusted server cannot join a trusted pool.
05:59 shubhendu_ joined #gluster
05:59 JoeJulian gdubreui: The first two establish a trusted pool. Once you have that trust, only trusted servers can invite new servers into that pool. It's a security thing.
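A minimal sketch of the probe JoeJulian describes, assuming two servers already in the trusted pool (server1 and server2 here are placeholders) and a new server3:

    # run on server1 or server2, i.e. on a machine that is already trusted
    gluster peer probe server3
    # confirm from any pool member that server3 now shows up as connected
    gluster peer status

Running the probe on server3 itself is refused, since an untrusted server cannot invite itself into the pool.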
06:01 JoeJulian If you're looking for a way to do that through ,,(puppet), the only way we've thought of so far is through exported resources or by specifying every server in the manifests.
06:01 glusterbot (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
06:01 gdubreui JoeJulian, yeah that's what I thought, security.. But if the second one can join, is that not an issue? Of course, why would anyone have only one brick anyway?
06:02 JoeJulian Yeah, it's not perfect.
06:02 Durzo JoeJulian, any news on qa3 going stable?
06:02 gdubreui Yeah, you're reading my mind *puppet*, I tried providing the whole cluster list but it's a real orchestration issue. Ansible better for that
06:02 JoeJulian ... of course, none of the security model is perfect.
06:02 JoeJulian Durzo: rc1 was pushed today.
06:03 Durzo yaaaaay
06:03 JoeJulian @qa
06:03 JoeJulian @qa releases
06:03 glusterbot JoeJulian: The QA releases are available at http://bits.gluster.com/pub/gluster/glusterfs/ -- RPMs in the version folders and source archives for all versions under src/
06:03 anands joined #gluster
06:03 JoeJulian yeah, that's the link I was looking for...
06:04 JoeJulian The rc is there too.
06:04 Durzo waiting on debs... i guess semiosis doesnt do them for qa/rc so waiting for final
06:04 JoeJulian You could use alien on the rpms....
06:04 Durzo heh, not for clients' production clusters .. no thanks
06:05 JoeJulian Well, not qa or rc releases for production either.....
06:05 Durzo well 3.4.0 stable is causing memleaks at 14gb every 2 days.. so rc1 on prod is fine with me
06:06 JoeJulian hmm, what are you doing that leaks? I haven't had any issues.
06:06 * JoeJulian touches wood.
06:06 Durzo geo-repl
06:06 JoeJulian Oh, right...
06:07 JoeJulian I forgot I knew that.
06:07 Durzo :P
06:07 JoeJulian It's been a few crazy weeks for me...
06:10 gdubreui JoeJulian, just out of curiosity, why not use the same rule for the 2nd node as well: Can only get in if probed from the first?
06:10 JoeJulian which one's the first?
06:10 gdubreui :)
06:10 JoeJulian like, how would it know?
06:11 vimal joined #gluster
06:11 gdubreui well in my case that was the first one who started the volume since I was using add-brick
06:11 JoeJulian But what if they both have volumes?
06:12 JoeJulian Actually... I think if they both have volumes it'll fail to peer anyway...
06:12 gdubreui Yeah, I see the light now!
06:12 JoeJulian Honestly, it should probably all be certificate based anyway.
06:13 gdubreui Fair enough!
06:13 gdubreui JoeJulian: Thanks for your help and time!
06:13 JoeJulian You're welcome.
06:15 JoeJulian I'm going to eat some apple pie and head for bed. I'm up way too late for somebody who's trying to adjust his timezone from his native GMT-7 to GMT-5.
06:17 jtux joined #gluster
06:17 andrewklau left #gluster
06:17 shylesh joined #gluster
06:18 rgustafs joined #gluster
06:19 satheesh joined #gluster
06:23 kshlm joined #gluster
06:30 glusterbot New news from newglusterbugs: [Bug 990028] enable gfid to path conversion <http://goo.gl/1HwiQc> || [Bug 969461] RFE: Quota fixes <http://goo.gl/XFSM4>
06:33 davinder joined #gluster
06:40 satheesh2 joined #gluster
06:45 ricky-ticky joined #gluster
06:52 davinder joined #gluster
06:55 ngoswami joined #gluster
06:58 davinder joined #gluster
06:58 eseyman joined #gluster
07:07 ababu joined #gluster
07:12 jag3773 joined #gluster
07:12 dmojoryder joined #gluster
07:15 eseyman joined #gluster
07:15 ndarshan joined #gluster
07:22 vpshastry1 joined #gluster
07:24 ctria joined #gluster
07:40 NuxRo hagarth: around now
07:40 NuxRo also replied to email
07:40 pithagorians_ joined #gluster
07:44 andreask joined #gluster
07:45 zwu joined #gluster
07:48 badone_ joined #gluster
07:54 mgebbe_ joined #gluster
07:56 chirino joined #gluster
08:00 rgustafs joined #gluster
08:04 badone_ joined #gluster
08:04 lalatenduM joined #gluster
08:07 purpleidea @tell gdubreui i have a puppet gluster::simple class that isn't published yet that automatically uses all hosts without specifying them individually. JoeJulian wanted this feature, and it's a good idea. Also, don't think about it as two peers trusting each other, but rather as one peer initiates creating a cluster, peer 2 accepts, and then each member can invite other members who are willing.
08:07 glusterbot purpleidea: Error: I haven't seen gdubreui, I'll let you do the telling.
08:07 purpleidea darnit. glusterbot doesn't do that. people need to stay in channel, and glusterbot needs that remember for later feature...
08:12 ndevos @later tell purpleidea thats how glusterbot works
08:12 glusterbot ndevos: The operation succeeded.
08:13 purpleidea @later gdubreui i have a puppet gluster::simple class that isn't published yet that automatically uses all hosts without specifying them individually. JoeJulian wanted this feature, and it's a good idea. Also, don't think about it as two peers trusting each other, but rather as one peer initiates creating a cluster, peer 2 accepts, and then each member can invite other members who are willing.
08:14 ndevos *later tell*
08:14 purpleidea ndevos: oh sweet. thanks!
08:14 purpleidea *later tell* ?
08:14 ndevos just @later isnt sufficient, I think
08:15 purpleidea oh
08:15 purpleidea @later tell gdubreui i have a puppet gluster::simple class that isn't published yet that automatically uses all hosts without specifying them individually. JoeJulian wanted this feature, and it's a good idea. Also, don't think about it as two peers trusting each other, but rather as one peer initiates creating a cluster, peer 2 accepts, and then each member can invite other members who are willing.
08:15 glusterbot purpleidea: The operation succeeded.
08:15 chirino joined #gluster
08:15 ndevos that look better :)
08:15 purpleidea @later tell g1234 this is a test
08:15 glusterbot purpleidea: The operation succeeded.
08:16 ndevos and now join as g1234?
08:16 g1234 joined #gluster
08:16 ndevos hehe
08:16 purpleidea ndevos: yep
08:17 g1234 seems to have worked, sweet.
08:17 ndevos nice!
08:18 badone_ joined #gluster
08:21 tryggvil joined #gluster
08:33 vpshastry joined #gluster
08:35 raghu joined #gluster
08:36 satheesh1 joined #gluster
08:46 mbukatov joined #gluster
08:51 vpshastry joined #gluster
08:55 ekuric joined #gluster
08:59 tryggvil joined #gluster
09:00 Shyam left #gluster
09:04 ababu joined #gluster
09:10 bulde joined #gluster
09:18 eseyman joined #gluster
09:20 VerboEse joined #gluster
09:23 VerboEse Hi. I don't understand why disk space is cut in half if I set up a replicated volume. Say SRV1 and SRV2 each have 100GB. With gluster volume create myname replica 2 .... the new volume only has 50 GB? What am I missing?
09:24 Shyam joined #gluster
09:26 kshlm VerboEse: replica 2 means all data you write to the volume is duplicated once. So effectively you have half the total space available to write data.
09:27 dneary joined #gluster
09:27 chirino_m joined #gluster
09:27 VerboEse Ah,so it's not only duplicated to the other bricks, but also on the local volume?
09:27 Shyam left #gluster
09:28 Shyam joined #gluster
09:28 kshlm it's duplicated to the other bricks.
09:29 VerboEse From http://goo.gl/YUq1Ft I understood that I have to define replica as the number of servers.
09:29 glusterbot Title: Gluster 3.2: Creating Replicated Volumes - GlusterDocumentation (at goo.gl)
09:30 kshlm Can you give the command you used to create the volume?
09:30 VerboEse gluster volume create GFS_1 replica 2 transport tcp 10.0.0.10:/mnf/gfs 10.0.0.11:/mnt/gfs
09:31 kshlm 10.0.0.{10,11}:/mnt/gfs are 100GB partitions?
09:31 VerboEse All I need is a simple remote mirror ...
09:32 VerboEse Ah, 100GB was a simplification :) One has 66, one has 168. The 66GB should get extended when I have the space available ...
09:33 VerboEse but the created volume only has 33GB
09:33 kshlm I'm not sure how replicate calculates size when we have bricks of non uniform size.
09:34 kshlm ideally you should have the space of the smallest brick in the replica set,
09:35 VerboEse Yes, that is, what I expected to be: 66G
09:35 kshlm I need to check this out.
09:35 VerboEse So you would suggest to resize the bigger partition to fit the size of the smaller and then recreate the volume?
09:41 tryggvil joined #gluster
09:42 kshlm I just tested and it seems to be working correctly.
09:43 VerboEse ok, I will try that. Thanks :)
09:43 kshlm I created a replica volume with bricks of size 1GB and 2GB, and the volume size was 1GB as expected.
09:43 VerboEse oh.
09:43 kshlm i tested on v3.4.0 btw
09:43 VerboEse yes, me too: glusterfs 3.4.0
09:43 VerboEse strange.
09:44 VerboEse I will go through my complete configuration then. I think, there has to be an error ...
09:45 VerboEse thanks again kshlm
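A quick way to reproduce the sizing test kshlm mentions, with hypothetical hostnames and brick paths:

    # replica 2 across one ~1GB brick and one ~2GB brick (all names are placeholders)
    gluster volume create testvol replica 2 host1:/bricks/small host2:/bricks/big
    gluster volume start testvol
    mount -t glusterfs host1:/testvol /mnt/testvol
    df -h /mnt/testvol   # should report roughly the size of the smallest brick

If the mounted size comes out much lower than the smallest brick, re-checking the brick paths and the filesystems actually mounted there is a reasonable first step.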
09:48 spandit joined #gluster
10:01 jskinner_ joined #gluster
10:03 bulde1 joined #gluster
10:04 manik1 joined #gluster
10:10 tryggvil joined #gluster
10:13 mooperd joined #gluster
10:14 vpshastry1 joined #gluster
10:16 jag3773 joined #gluster
10:17 zwu joined #gluster
10:22 dusmant joined #gluster
10:23 badone_ joined #gluster
10:29 shruti joined #gluster
10:31 SachinPandit joined #gluster
10:31 glusterbot New news from newglusterbugs: [Bug 1011662] threads created by gluster should block signals which are not used by gluster itself <http://goo.gl/q9M1z5>
10:42 vpshastry joined #gluster
10:47 kkeithley1 joined #gluster
10:55 jtux joined #gluster
10:56 harish_ joined #gluster
11:00 ngoswami joined #gluster
11:01 glusterbot New news from newglusterbugs: [Bug 1010874] Dist-geo-rep : geo-rep config log-level option takes invalid values and makes geo-rep status defunct <http://goo.gl/om4qdi>
11:02 kPb_in_ joined #gluster
11:04 jclift_ joined #gluster
11:04 mbukatov joined #gluster
11:14 purpleidea joined #gluster
11:20 andreask joined #gluster
11:20 ppai joined #gluster
11:22 mooperd joined #gluster
11:24 ababu joined #gluster
11:28 edward1 joined #gluster
11:31 satheesh1 joined #gluster
11:34 SachinPandit joined #gluster
11:39 bulde joined #gluster
11:40 tryggvil joined #gluster
11:40 dusmant joined #gluster
11:42 Shyam left #gluster
11:44 nshaikh joined #gluster
11:44 hagarth @channelstats
11:44 glusterbot hagarth: On #gluster there have been 185915 messages, containing 7675275 characters, 1281204 words, 5081 smileys, and 677 frowns; 1114 of those messages were ACTIONs. There have been 74254 joins, 2283 parts, 71951 quits, 23 kicks, 170 mode changes, and 7 topic changes. There are currently 224 users and the channel has peaked at 239 users.
11:45 VerboEse Hm. Which filesystems are supported as a base for the bricks? I just tried ext4, but even though the fs was just created gluster complains that it or a prefix of it is already part of a volume ...
11:45 glusterbot VerboEse: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
11:45 VerboEse thanks bot, I already tried :)
11:46 chirino joined #gluster
11:46 VerboEse the first link that means. The second one I'm not sure I understand ...
11:48 VerboEse but the bug mentioned is closed in glusterfs-3.4.0, which I use ...
11:49 gdubreui joined #gluster
11:52 VerboEse Ah, just found this note: "#ext4 is not recommomended by the gluster team !" on http://goo.gl/UGQLVv. I will try xfs then. Maybe this should be mentioned somewhere in documentation and/or FAQ. Didn't find it there :(
11:52 glusterbot Title: jayunit100: A completely rebuildable Fedora16 Gluster development box. (at goo.gl)
11:56 Shyam1 joined #gluster
11:57 kkeithley_ VerboEse: ext4 is fine. We recommend xfs because it works better with large file systems.
11:58 VerboEse ok, so why am I not allowed to create my gluster then? It's true that I had defined a volume with these settings, but not with these underlying disk-volumes. I recreated the volumes via LVM before trying to create the gluster volume.
12:00 Shyam1 left #gluster
12:02 kkeithley_ hard to say. I presume you did a new mkfs on the lvm volumes. But gluster sees its xattrs in the directories so it believes they're in use or have previously been used.
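The usual fix (per the page glusterbot links above) is to clear those leftover attributes and the internal .glusterfs directory from the brick path before reusing it; a sketch, with /mnt/gfs standing in for the brick directory:

    # inspect any leftover gluster xattrs on the would-be brick
    getfattr -m . -d -e hex /mnt/gfs
    # remove them and the internal .glusterfs tree (only on a brick you intend to reuse)
    setfattr -x trusted.glusterfs.volume-id /mnt/gfs
    setfattr -x trusted.gfid /mnt/gfs
    rm -rf /mnt/gfs/.glusterfs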
12:02 sprachgenerator joined #gluster
12:02 Shyam joined #gluster
12:03 VerboEse but it's not even possible to setfattr -x these attributes. :-/
12:04 VerboEse as no attributes are found (with setfattr)
12:04 VerboEse xfs doesn't work either.
12:05 VerboEse I wonder why it always seems to be me who gets in trouble with FS ...... oO
12:06 kanagaraj joined #gluster
12:09 andreask joined #gluster
12:12 VerboEse well. Just did a mkfs.ext3 and voilá: volume creation works. I don't like ext3 for big volumes though ...
12:12 kkeithley_ So you've got an lvm, e.g. /dev/mapper/gluster-brick1. You did `mkfs.xfs -i size=512 -n size=8192 /dev/mapper/gluster-brick1`. Then you mounted it, e.g. `mount /dev/mapper/gluster-brick1 /bricks/1`. Then you did `gluster volume create theVol myhostname:/bricks/1`
12:12 kkeithley_ ?
12:13 VerboEse well, not exactly these options, but this order, correct.
12:13 kkeithley_ N.B. new advice for gluster w xfs. Current best practices are -i size=512 and -n size=8192
12:14 VerboEse I have /dev/pveext/gluster, did mkfs.xfs -f /dev/pveext/gluster, mounted it with mount /dev/pveext/gluster /mnt/gfs and did gluster volume create GFS_1 replica 2 transport tcp 10.0.0.10:/mnf/gfs 10.0.0.11:/mnt/gfs
12:15 VerboEse last command failed with "volume create: GFS_1: failed: /mnf/gfs or a prefix of it is already part of a volume"
12:15 glusterbot VerboEse: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
12:15 VerboEse nice bot btw :)
12:16 hagarth joined #gluster
12:16 VerboEse this is on debian 7.1 btw. (a upgraded/modified proxmox installation)
12:17 RicardoSSP joined #gluster
12:17 kkeithley_ that's weird
12:17 vpshastry2 joined #gluster
12:18 VerboEse you can say ...
12:19 VerboEse well, as the virtualization cluster will only hold testing machines I think I can go with ext3, but for production this would be a no go :(
12:22 kkeithley_ wild stab in the dark. What happens if you `dd if=/dev/zero of=/dev/pveext/gluster count=$bignum`, then mkfs.xfs, etc.
12:23 VerboEse *sigh*, yeah, I could give this one a last try, but I need my FS soon :)
12:23 kkeithley_ $bignum doesn't need to be the whole lvm, just the first few MB.
12:24 shruti joined #gluster
12:24 VerboEse omygod
12:24 VerboEse I think I found my error.
12:24 kkeithley_ ?
12:24 VerboEse Its a simple typo ...
12:24 kkeithley_ ;-)
12:25 VerboEse tcp 10.0.0.10:/mnf/gfs against " tcp 10.0.0.10:/mnt/gfs" ....
12:26 kkeithley_ glad you found it
12:27 itisravi joined #gluster
12:31 vpshastry2 left #gluster
12:33 kkeithley_ 3.4.1rc1 RPMS are available in the YUM repo at http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.1rc1/ (Fedora, RHEL, CentOS, etc.)
12:33 glusterbot <http://goo.gl/G5gui8> (at download.gluster.org)
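If you want to pull those into yum for testing, a repo stanza along these lines is one way to do it (a sketch only -- the exact directory layout under that URL, and whether repodata exists at this level, should be verified against the listing first):

    # /etc/yum.repos.d/glusterfs-qa.repo (hypothetical)
    [glusterfs-3.4.1rc1]
    name=GlusterFS 3.4.1rc1 QA release
    baseurl=http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.1rc1/
    enabled=1
    gpgcheck=0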
12:35 ndarshan joined #gluster
12:42 itisravi_ joined #gluster
12:50 morse joined #gluster
12:50 vpshastry joined #gluster
12:51 tryggvil joined #gluster
12:56 rcheleguini joined #gluster
13:01 glusterbot New news from newglusterbugs: [Bug 1010834] No error is reported when files are in extended-attribute split-brain state. <http://goo.gl/HlfVxX>
13:05 pdrakeweb joined #gluster
13:05 chirino joined #gluster
13:16 ProT-0-TypE joined #gluster
13:19 dusmant joined #gluster
13:24 shylesh joined #gluster
13:26 lpabon joined #gluster
13:30 bdperkin joined #gluster
13:34 pithagorians joined #gluster
13:43 manik joined #gluster
13:43 vpshastry joined #gluster
13:54 jclift joined #gluster
13:56 bugs_ joined #gluster
13:59 mmalesa joined #gluster
14:04 NuxRo semiosis: did the reject peer stuff, but glusterd will not start now:
14:04 NuxRo [2013-09-26 14:01:55.494995] E [xlator.c:390:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
14:04 NuxRo [2013-09-26 14:01:55.495015] E [graph.c:292:glusterfs_graph_init] 0-management: initializing translator failed
14:04 NuxRo [2013-09-26 14:01:55.495027] E [graph.c:479:glusterfs_graph_activate] 0-graph: init failed
14:08 bennyturns joined #gluster
14:09 jclift NuxRo: I think you'll need to fpaste your .vol file and the .log file it's crapping out on.
14:10 jclift NuxRo: Note though, I'm in a meeting right now so I'm not the right person to assist. (and I might not be skilled enough anyway). :(
14:12 NuxRo thanks jclift
14:14 ndk joined #gluster
14:18 NuxRo jclift: http://www.fpaste.org/42416/02049791/ if you have any ideas, i also bugged some of the guys via email
14:18 glusterbot Title: #42416 Fedora Project Pastebin (at www.fpaste.org)
14:20 ndevos NuxRo: make sure the files in /var/lib/glusterd/peers/ are valid and not duplicates or soe
14:20 ndevos s/soe/something/
14:20 glusterbot ndevos: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
14:21 ndevos glusterbot: you are wrong!
14:23 johnmark lulz
14:25 samppah @split
14:25 glusterbot samppah: I do not know about 'split', but I do know about these similar topics: 'split-brain', 'splitbrain'
14:25 samppah @splitbrain
14:25 glusterbot samppah: To heal split-brain in 3.3+, see http://goo.gl/FPFUX .
14:28 NuxRo ndevos: in peers I can only see the one peer that I probed per the wiki instructions
14:33 NuxRo there is also something weird, on my older servers I have a  /etc/init.d/glusterfsd running
14:33 NuxRo on the servers installed recently there is no such file
14:34 NuxRo all of this is 3.4.0 but from different RPMs I guess
14:35 NuxRo not sure if this is meaningful in any way
14:37 NuxRo /etc/glusterfs/glusterd.vol seems to be the same on all servers, so I'm ruling that one out
14:41 sprachgenerator joined #gluster
14:41 vpshastry joined #gluster
14:41 vpshastry left #gluster
14:44 inodb_ joined #gluster
14:45 joaquim__ joined #gluster
14:45 zwu joined #gluster
14:46 msciciel joined #gluster
14:47 manik1 joined #gluster
14:53 NuxRo right, I restored the "peers" dir from backup on the rejected peers and glusterd restarted successfully.
14:54 Technicool joined #gluster
14:54 NuxRo semiosis: I think probing all the peers and not just one of them might resolve this, if this makes any sense then maybe update the wiki
14:57 phox joined #gluster
14:57 jag3773 joined #gluster
14:57 phox so... am I likely going to be able to mount something from a 3.4 server with a 3.3 client?
15:01 msciciel_ joined #gluster
15:02 glusterbot New news from newglusterbugs: [Bug 1012503] Too many levels of symbolic links <http://goo.gl/WEF3gN>
15:03 phox testing out 3.4 locally on one server for now and it would be "convenient" if I could poke it remotely as well
15:03 phox also anything I need to do when moving a volume from 3.3 to 3.4?  doubt there's much but I don't know for sure...
15:06 ekuric1 joined #gluster
15:06 msciciel joined #gluster
15:09 davinder joined #gluster
15:12 semiosis NuxRo: it's not about what makes sense, but rather what works
15:13 semiosis i've not yet found a consistent fix for peer rejected
15:13 phox a common problem amongst geeks.
15:13 semiosis rofl
15:14 phox couldn't help myself there.
15:14 semiosis and on that note, i'm afk for a bit
15:14 phox you have any comments either way on 3.3 working with 3.4 at all?
15:14 phox testing 3.4 on one machine prior to upgrading all of them
15:14 phox oh right also, do I have to -do- anything to existing volumes when I upgrade? :)
15:14 samppah @upgrade
15:14 glusterbot samppah: I do not know about 'upgrade', but I do know about these similar topics: '3.3 upgrade notes', '3.4 upgrade notes'
15:15 samppah @3.4 upgrade notes
15:15 glusterbot samppah: http://goo.gl/SXX7P
15:15 daMaestro joined #gluster
15:15 phox excellent, thankya
15:15 samppah err
15:15 samppah oh.. ok
15:16 * phox blinks
15:16 hagarth joined #gluster
15:16 phox ok, so boring... just swap out software
15:16 phox and I guess heal is kinda irrelevant if I'm not using mirroring
15:16 phox probably?
15:17 samppah distributed only setup?
15:17 phox neither
15:17 phox ... heh.
15:17 samppah oh
15:17 phox standalone
15:17 samppah heh
15:17 samppah no need for heal then :)
15:17 * phox nods
15:17 phox wasn't sure if there were also potentially some metadata implications of heal or something
15:18 phox i.e. "go make sure your accounting crap is all kosher"
15:18 phox oops, I made a joke there too :l  heh.
15:18 phox that was not intentional
15:19 vpshastry1 joined #gluster
15:20 vpshastry1 left #gluster
15:21 bulde joined #gluster
15:21 LoudNoises joined #gluster
15:27 DV joined #gluster
15:30 zerick joined #gluster
15:32 recidive joined #gluster
15:38 mooperd joined #gluster
15:42 hybrid5121 joined #gluster
15:42 NuxRo right, after a reboot glusterd will not start again, same errors about not being able to initialise management volume
15:43 hagarth NuxRo: the same fpaste as before?
15:44 NuxRo hagarth: yes
15:44 NuxRo i upgraded gluster as well to 3.4.0-8 (latest) from 3.4.0-1
15:45 NuxRo so now i have exactly the same version on all servers
15:47 hagarth [glusterd-store.c:2472:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore -- seems to indicate a DNS/peer problem
15:48 vpshastry joined #gluster
15:49 NuxRo hagarth: there's no firewall between them and i can ping them just fine
15:50 hagarth NuxRo: can you check if the output of "gluster peer status" is same on all nodes?
15:51 mooperd joined #gluster
15:52 NuxRo hagarth: no, it's not, on the other server from pair1 on which I'm yet to restart glusterd (after yum update) some peers are listed 2 times
15:52 ababu joined #gluster
15:52 NuxRo it does say "3 peers" which is correct, but lists some of them 2 times
15:53 hagarth NuxRo: that doesn't seem right
16:00 zaitcev joined #gluster
16:02 itisravi_ joined #gluster
16:02 an joined #gluster
16:12 joaquim__ joined #gluster
16:14 derelm joined #gluster
16:17 derelm i am running a gluster v3.2.7 setup with two nodes - what i am experiencing is that the interconnect between the two nodes requires exactly the same bandwidth as the (readonly) bandwidth on the glusterfs. is that expected? nodes are set up to replicate to each other
16:18 failshell joined #gluster
16:18 vpshastry joined #gluster
16:27 kkeithley_ when you say they're set up to replicate to each other, you mean you're using geo-replication? Or your volume is "replica 2"?
16:27 mohankumar joined #gluster
16:28 PatNarciso so -- a gluster volume in a gluster volume is a documented no-no.  how no-no is this?
16:28 PatNarciso I'm considering making a large cloud gluster (redundant, distributed) for never-ending storage capacity.  cool.  gluster can rock this easily.  I'll call this "remote-gluster-cloud".
16:28 PatNarciso I'd like to setup lan-server-a && lan-server-b that would replicate some portions of the remote-gluster-cloud.
16:28 glusterbot PatNarciso: You've given me 5 invalid commands within the last minute; I'm now ignoring you for 10 minutes.
16:28 rwheeler joined #gluster
16:29 PatNarciso [sorry glusterbot]
16:29 PatNarciso In my mind, and to reduce management headaches, I see "the" remote-gluster-cloud as one big volume, that houses redundant volumes for lan-servers-[a,b].
16:29 PatNarciso But, again, in the docs: this is a no-no.  Is this something I should abide by, and simply suck up and overcome the management headache?
16:29 PatNarciso I define management headache as wanting to mitigate the requirement for setting up a volume each time a new project comes along.
16:29 phox PatNarciso: you could pull some other jackass move like putting a file-based ZFS vdev on the "cloud" storage
16:30 phox heh
16:30 Mo__ joined #gluster
16:30 derelm kkeithley: gluster volume info shows "Type: Replicate" - 2 Bricks one on each node
16:31 krishnan_p joined #gluster
16:34 rotbeard joined #gluster
16:34 derelm kkeithley_: gluster volume info shows "Type: Replicate" - 2 Bricks one on each node - not entirely sure this answers your question correctly
16:35 dusmant joined #gluster
16:36 hagarth joined #gluster
16:39 PatNarciso phox: hmm. I'm not a large fan of that idea.
16:42 bdperkin joined #gluster
16:44 __Bryan__ joined #gluster
16:44 PatNarciso phox: I mean, it would work.  it's just I'm having trouble buying into a plan labeled "jackass move".
16:44 PatNarciso altho to be completely honest -- for that reason I'm also considering doing it.
16:46 Bullardo joined #gluster
16:46 shylesh joined #gluster
16:53 eesullaw__ joined #gluster
16:55 DV joined #gluster
16:56 hagarth joined #gluster
16:58 failshell joined #gluster
17:02 vpshastry joined #gluster
17:03 vpshastry left #gluster
17:10 rcheleguini joined #gluster
17:19 Peanut joined #gluster
17:19 Peanut Hi folks - I just did an apt-get dist-upgrade on my Ubuntu 13.04 gluster node, and it turned into an interesting disaster.
17:20 PatNarciso oh yah?
17:20 Peanut The OS for some reason chokes on the /gluster entry in /etc/fstab, and because of this, never enables networking but waits for a prompt (skip mounting, manual recover?)
17:21 kkeithley_ derelm: okay, so. replication is handled by the client(s). There's no replication traffic between the nodes.
17:21 Peanut At the same time, the new kernel seemed to mix up ttyS0 and ttyS1, so I had no output from grub/console anymore, and a machine that's unpingable, unresponsive.. and the XFS filesystem for the /gluster needed xfs_repair.
17:22 semiosis you should always use 'nobootwait' on your remote mounts in fstab
17:22 semiosis see man fstab for details
17:22 Peanut semiosis: ah, good point, I guess glusterfs can be considered remote.
17:22 semiosis absolutely
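Purely as an illustration (the server, volume, and mount point names are made up), an fstab entry along the lines semiosis suggests might look like:

    # nobootwait tells Ubuntu's mountall not to block the boot waiting on this mount
    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,nobootwait  0  0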
17:23 Peanut So, after this mini-rant, my question is: how can I now verify that both bricks are in sync again, after one half of the cluster having been AWOL for an hour or two?
17:23 semiosis some kinda command to show self heal info
17:23 semiosis i forget what it is
17:24 semiosis gluster volume heal $volname info
17:24 semiosis maybe?
17:24 derelm kkeithley_: i export the fuse.glusterfs mounted directory via http - i currently have around 250mbit/s traffic there - and i see 25mbit/s traffic on the interconnect between the two gluster nodes
17:26 derelm kkeithley_: sorry, 250mbit/s also on the interconnect, a zero missing in my last message
17:27 B21956 joined #gluster
17:27 B21956 left #gluster
17:27 kkeithley_ I suggest you use tcpdump or wireshark to figure out what it is. As before, gluster replication (afr, "replica 2") is all done between the clients and the servers. There is no server-to-server replication traffic unless you're doing geo-replication too for some reason.
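A rough starting point for that, assuming the interconnect is eth1 and the pre-3.4 brick ports start at 24009 (adjust both to your setup):

    # watch GlusterFS management and brick traffic between the two nodes
    tcpdump -ni eth1 'tcp port 24007 or portrange 24009-24050'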
17:29 derelm kkeithley_: what exactly do you mean with client and server? both my servers host a replica set and both mount their local fuse.glusterfs - so they are both servers and clients
17:30 kkeithley_ aha. that explains things.
17:30 derelm kkeithley_: i wanted to be able to turn off one server and still have everything running
17:31 kkeithley_ the client mount on A issues reads (and writes) to both nodes in the replica set.
17:32 derelm kkeithley_: ok i see, i thought it would use what is on the local disk and only speak to the other server on writes
17:32 kkeithley_ I'd think you could turn off one server and things should continue to work. When you turn it back on gluster will heal the out of date side.
17:32 kkeithley_ In newer versions than 3.2.7 it will try to use the local copy first.
17:33 kkeithley_ 3.2.7 is pretty old
17:33 derelm kkeithley_: oh ok, so in case i upgrade - to e.g. 3.4 - i will see a reduced amount of bandwidth used between my two servers?
17:33 derelm kkeithley_: 3.2.7 is what comes with debian wheezy
17:34 kkeithley_ right, you can get newer bits for debian from http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/Debian/
17:34 glusterbot <http://goo.gl/TmmIeG> (at download.gluster.org)
17:34 DV joined #gluster
17:35 kkeithley_ With 3.4 I expect you'd see much reduced bandwidth between the two.
17:35 derelm kkeithley_: i'll have to read and test the upgrade path first as i don't want to interrupt the service. thanks for your explanations
17:35 lpabon joined #gluster
17:36 kkeithley_ yes, updating to 3.3 or 3.4 will require downtime.
17:36 samppah kkeithley_: do you know if this fix is included in 3.4.1rc1 http://review.gluster.org/#/c/5601/?
17:36 glusterbot Title: Gerrit Code Review (at review.gluster.org)
17:37 derelm kkeithley_: do you have any idea what will happen if the read requests exceed the interconnect bandwidth?
17:37 kkeithley_ derelm: everything will get slow
17:37 kkeithley_ samppah: not sure, let me look
17:38 derelm kkeithley_: with 3.2.7 -- my current setup is based on the assumption that the link between the two servers doesn't need to be as big as the uplink of the servers. hm ok i see
17:38 kkeithley_ if your servers weren't also your clients that would be true.
17:43 * kkeithley_ wonders why the interwebs are so slow
17:45 derelm kkeithley_: the servers in 3.2.7 (and probably up) only exchange data in case of healing required?
17:45 derelm kkeithley_: clients will always send read/write to all bricks?
17:46 derelm kkeithley_: is that summed up correctly?
17:46 B21956 joined #gluster
17:46 B21956 left #gluster
17:47 kkeithley_ yes, clients send a lookup to both/all servers in a replica set. The first server to respond to the lookup is the one that handles the read or write
17:48 kkeithley_ I have a 50/50 chance of answering the self heal question correctly, which means I'd probably tell you the wrong answer ;-)
17:49 derelm :)
17:49 derelm i am a little confused
17:49 kkeithley_ about?
17:49 derelm generally ;)
17:49 derelm no, i am still somewhat concerned about the bandwidth requirements of glusterfs
17:50 kkeithley_ are you using php on your web server?
17:51 derelm no it is just big chunks of static data
17:51 kkeithley_ samppah: looks like that change is in the release-3.4 branch in commit 873ac7b37b0b6c18a14969286ebcf89bb67dfee2
17:52 derelm so if i expect to have 100mbit/s bandwidth for http downloads available, at the same time i need 200mbit/s to satisfy the glusterfs lookups, right?
17:52 dneary joined #gluster
17:53 derelm so i need 300mbit/s bandwidth in the end
17:53 samppah kkeithley_: hmm
17:54 samppah kkeithley_: ok, thanks.. i hope i got time to do more testing tomorrow
17:54 derelm well that calculation is a little off, with full duplex i only need 200mbit/s ...
17:55 rwheeler joined #gluster
17:55 phox can I mess with transport type after creating a volume?
17:55 phox seems I should be able to but I get the impression I cannot
17:55 kkeithley_ confirmed, that change is in the release-3.4 source.
17:56 hagarth joined #gluster
17:57 kkeithley_ samppah: for reference, that change in the 3.4 branch was  http://review.gluster.org/5880
17:57 glusterbot Title: Gerrit Code Review (at review.gluster.org)
17:59 samppah kkeithley_: it was included at 10th of september, right?
18:01 hagarth NuxRo: ping
18:01 NuxRo hagarth: pong
18:01 hagarth NuxRo: what's the latest?
18:02 NuxRo we established the nfs on this server kind of works (remote servers can mount it and use it), but fills the logs with "Bad file descriptor"
18:02 NuxRo and shows as "N" in volume status
18:03 hagarth that looks very strange!
18:04 NuxRo i think i got a stuck xenserver trying to use it, might be related#
18:07 NuxRo krishnan_p is helping, thanks
18:11 kkeithley_ samppah: yes
18:26 semiosis derelm: a lookup is just a metadata operation, it adds a tiny bit of latency when opening a file, but doesn't multiply the bandwidth consumed by read data ops
18:41 samppah kkeithley_: thank you :)
18:41 kkeithley_ yw
18:43 quique joined #gluster
18:46 NuxRo hagarth: where could a mere mortal read more about dht?
18:47 mooperd joined #gluster
18:47 tg2 joined #gluster
18:48 NuxRo hagarth & krishnan_p, thanks for the help, and when in london, beers are on me :)
18:48 hagarth NuxRo: http://hekafs.org/index.php/2012/03/glusterfs-algorithms-distribution/
18:48 glusterbot <http://goo.gl/MLB8a> (at hekafs.org)
18:48 hagarth NuxRo: https://github.com/gluster/glusterfs/blob/master/doc/features/rebalance.md
18:48 glusterbot <http://goo.gl/MHAogF> (at github.com)
18:48 hagarth these should help in understanding dht
18:48 NuxRo roger, looks like tomorrow's reading
18:48 semiosis NuxRo: not sure about beer
18:49 NuxRo cudre?
18:49 NuxRo *cidre?
18:50 phox CIDR?
18:50 NuxRo cider :)#
18:50 semiosis phox: you're on a roll with the jokes today.  +1
18:50 NuxRo geez
18:50 phox :l
18:50 jcsp joined #gluster
18:51 NuxRo now that you mention it, CIDR should be every network engineer's favourite beverage :)
18:51 phox CIDR around here is usually about a /14
18:51 hagarth lol
18:51 phox or I guess if it's actually a mask it's like a /4, but that's more approximate
18:51 phox ~7% ABV
18:51 krishnan_p Earn a beer, check!
19:12 davinder joined #gluster
19:19 kaptk2 joined #gluster
19:31 anands joined #gluster
19:35 polfilm joined #gluster
19:43 samppah i'd like to share different volumes to different vlans.. are there any other ways for access control than using different ip addresses?
19:43 samppah ie. i'd like to have separate vlan for VM storage and web hosting
19:43 dneary joined #gluster
19:44 phox generally you'd use different IPs on different VLANs.
19:44 samppah and i don't want web servers to have any kind of access to VM images
19:46 samppah phox: yes, i'm thinking about whether that is enough.. for example if there is a rogue server that is able to fake its ip address etc
19:48 phox you can have VLAN tagging but once that's implemented and assuming your switch supports forcing port VLANs so it's even an actual barrier, then I don't see why the gluster side of it is any more complex than "using different IPs [that happen to be on different VLANs]"
19:48 * phox blinks
19:50 samppah true
19:50 * phox nods
19:50 B21956 joined #gluster
19:57 saltsa joined #gluster
20:06 anands joined #gluster
20:13 purpleidea samppah: sure, although i don't think it's a good access control method.
20:13 purpleidea samppah: you could selectively allow the various volume specific ports to the vlan's you're interested in
20:14 purpleidea samppah: the tricky thing is you can't reliably predict them until your volume is running. look at my puppet module for more details
20:14 purpleidea @ports | samppah
20:14 purpleidea @ports
20:14 glusterbot purpleidea: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
20:14 purpleidea @puppet
20:14 glusterbot purpleidea: (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
20:14 semiosis ~ports | purpleidea
20:14 glusterbot purpleidea: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
20:15 purpleidea semiosis: ah thanks, that's the trick
20:15 purpleidea ~ports | samppah
20:15 glusterbot samppah: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
20:15 purpleidea ~puppet | samppah
20:15 glusterbot samppah: (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
20:18 samppah purpleidea: thanks, i'll take a look at those tomorrow (time to get some sleep now :)
20:19 purpleidea ok night
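Two common ways to approach the per-VLAN restriction samppah asked about, with made-up volume names and subnets; note that auth.allow is still IP-based, so it shares the spoofing caveat discussed earlier:

    # per-volume allow list of client addresses
    gluster volume set vmstore auth.allow 10.10.10.*
    gluster volume set webdata auth.allow 10.10.20.*
    # see which port each brick actually listens on, if you also want per-subnet firewall rules
    gluster volume status vmstore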
20:23 DV joined #gluster
20:24 semiosis marcoceppi: ping
20:43 tryggvil joined #gluster
20:50 a2 joined #gluster
20:55 tg2 does anybody have videos up of the gluster talks?
20:57 semiosis johnmark: ^^ ?
21:02 marcoceppi semiosis: pong
21:03 semiosis marcoceppi: hi!
21:03 semiosis i was pinging you earlier re: https://bugs.launchpad.net/ubuntu/+source/glusterfs/+bug/1205075
21:03 glusterbot <http://goo.gl/Bp1zjD> (at bugs.launchpad.net)
21:03 semiosis but i got in touch with slangasek about it
21:06 Peanut Ah, I just updated that bug because I can't find a way to make glusterfs mount on boot.
21:06 semiosis i will make some time to look into this tonight
21:07 semiosis this mount options stuff is surprising, i've never run into a problem with that before
21:07 Peanut Let me know if I can help, testing e.g. I could not find any documentation about the whole mountall thing.
21:07 semiosis there isn't much
21:08 Peanut I'm thinking maybe I should put a mount command in /etc/rc.local for now.
21:08 semiosis and it's scattered around related man pages, mount, fstab, etc
21:08 semiosis Peanut: many people do that when they give up on the package working correctly
21:09 Peanut I am not very impressed with plymouth, mountall and that kind of stuff. The mount errors didn't even show up on my console until I stopped using the serial console.
21:09 semiosis seems like i've been fighting mount-on-boot problems for ever
21:09 cjh joined #gluster
21:10 Peanut I had a system that worked fine, installed the latest patches *boom* took me 2 hours to get it booting again, and that was before I even got around to getting glusterfs to mount again.
21:10 semiosis yep
21:10 phox hey is there any way to definitively tell if a client is using RDMA or not?
21:10 Peanut Scary. But, thanks to glusterfs, all my kvm guests were on the other half of the cluster, and they're fine.
21:10 phox you just connect with the one address and it does its own thing so it would be useful to be able to find out what's going on
21:11 cjh is it possible to manually generate the glustershd-server.vol file?
21:11 semiosis Peanut: neat!
21:11 semiosis phox: netstat?
21:11 phox semiosis: you mean to check IP throughput?
21:11 l0uis semiosis: unless im missing something it mounts ok on boot if you include the mounting-glusterfs.conf from the 3.3 release
21:11 johnmark tg2: videos? no
21:11 semiosis phox: just check to see what connections are open
21:11 johnmark I'm posting presentations, though
21:11 phox semiosis: would be nice if it would just _say_ either way :)
21:12 phox semiosis: right, I guess it will either connect to a brick over TCP or it won't?
21:12 semiosis phox: client log file?
21:12 Peanut Are these packages available for Debian perhaps? I'm seriously considering ditching Ubuntu for these servers.
21:12 phox semiosis: maybe but I don't think it says either way there
21:12 phox I'll see how it goes
21:12 phox wish I could just switch transports on and off dynamically instead of having to recreate the volume :)
21:13 * phox dislikes opacity :)
21:13 DV joined #gluster
21:13 semiosis l0uis: yeah i'm skeptical about Peanut's mount options issue... but i will dig into it to be sure.  my suspicion is that if that were a problem it would affect everyone, on all versions of glusterfs & ubuntu, going back a long time -- which it doesn't
21:13 semiosis Peanut: can you pastie a client log file from a failed mount attempt at boot time?
21:14 l0uis semiosis: also, gluster mounts in 3.4 might fail on boot for non-related reasons since the gluster log directory isn't being created by the package
21:14 semiosis Peanut: there are debian packages, but there's no PPA.  i know i haven't been as responsive with updates lately, but i think the debian packages would be updated even less frequently
21:14 semiosis l0uis: i'll fix that tonight
21:15 semiosis tonight is the night!
21:15 l0uis so i suspect including the mounting-glusterfs.conf and that log dir will make it more stable
21:15 l0uis sweet!
21:15 * l0uis fist pumps
21:15 Peanut semiosis: I did not author that ticket, so there's at least 2 of us ;-)
21:15 phox that always sounds like a euphemism
21:15 Peanut semiosis: which logfile would you like?
21:15 semiosis Peanut: client log file, from a mount attempt that failed during boot
21:16 Peanut Yes, which is the 'client log file' though?
21:16 phox semiosis: I think kkeithley or someone builds .debs
21:16 semiosis phox: i do
21:16 semiosis :)
21:16 phox mm
21:16 phox tyvm :)
21:16 * phox is running wheezy debs
21:16 semiosis cool!
21:16 phox well, on testing.  production as of next Monday when I've scheduled a service interruption.
21:16 semiosis Peanut: client log file is located in /var/log/glusterfs/the-mount-point.log or something close to that
21:17 phox nobody likes 3.3
21:17 phox well, I do, because it punishes stupid people who makes 100,000 files for no good reason.
21:17 phox but I don't like its apparent lack of RDMA :)
21:17 * semiosis switches to a terminal window, in a directory with 62k files
21:17 semiosis :)
21:17 semiosis but i have my reasons
21:18 Peanut semiosis: /var/log/glusterfs/bricks/bla.log then.
21:18 semiosis no that's a brick log file
21:18 phox semiosis: and is taht on gluster? :P
21:18 semiosis phox: no way
21:18 phox this is people using 2D files instead of 3/4D files
21:18 phox because zomg Exsmell can read them!
21:21 polfilm joined #gluster
21:21 polfilm joined #gluster
21:23 Peanut Oh, "dns resolution failed on localhost", that's a familiar one...
21:25 Peanut "Failed to connect with remote-host: Succes"
21:25 phox hahaha yeah I love successfailures.
21:25 Peanut "glusterfsd-mgmt: -1 connect attemts left" is also a good one.
21:26 phox heh
21:26 phox that's just an off-by-one
21:26 phox some sillyass...
21:30 DV joined #gluster
21:31 semiosis Peanut: thanks for that log file
21:31 semiosis Peanut: problem seems to be the mount is tried before networking is ready
21:31 marcoceppi semiosis: awesome, gotchya
21:31 semiosis the dns resolution error
21:31 phox heh
21:31 phox yeah I have a special mount-gluster-shit init script for that :l
21:31 quique when a gluster node crashes and then boots up again, what is supposed to happen?
21:32 phox which deps on glusterd, and glusterd has grown an internal delay because the process that launches the daemon exits before the daemon is ready :l
21:32 semiosis Peanut: i made an upstart job which blocks mounting until the static-network-up event, that should be /etc/init/mounting-glusterfs.conf and is provided (since 3.4.0) by the glusterfs-client package
21:32 phox quique: typically it's transparent
21:32 phox quique: apart from EIO or whatever on the clients _until_ the server is back
21:32 phox unless of course you have mirroring in which case you shouldn't even get that.
21:33 quique phox: how does the node get back into the pool?
21:33 semiosis phox: there's an upstart expect line for that iirc, expect fork vs expect daemon
21:33 Peanut semiosis: but I am using your package from the PPA
21:33 semiosis Peanut: :(
21:33 phox semiosis: hm
21:33 phox quique: it remembers its peers and vice versa?
21:33 phox I can't comment on how that's implemented but that's what I've observed
21:35 quique i have gluster1, gluster2, gluster3, gluster4, i took gluster2 down and then brought it back up, it is a different server, but with the same brick and hostname
21:35 quique 1,3, and 4 say 2 is connected
21:35 quique but 2 does not
21:35 semiosis quique: restart glusterd on 2
21:36 quique semiosis: restarted and 2 is the same
21:36 kkeithley_ 3.4.1rc1 RPMS are available in the YUM repo at http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.1rc1/ (Fedora, RHEL, CentOS, etc.)
21:36 glusterbot <http://goo.gl/G5gui8> (at download.gluster.org)
21:37 quique where is the info about other nodes stored?
21:37 semiosis quique: /var/lib/glusterd/peers
21:40 badone joined #gluster
21:40 quique semiosis: is that documented anywhere? like what does state=3 mean?
21:41 semiosis doubt it
21:41 semiosis use the source
21:43 Peanut semiosis: I will replace 'localhost' with '127.0.0.1' in my fstab tomorrow, and see if that helps?
21:45 semiosis Peanut: that's a great idea... if dns resolution is the problem, use an IP address!  but it doesn't help
21:45 semiosis or at least, didn't help when I tried
21:45 semiosis but do try, and if it works for you please let me know
21:50 polfilm joined #gluster
21:56 dberry joined #gluster
21:59 fidevo joined #gluster
22:00 phox heh, FWIW I'm using /etc/hosts for most/all of what gluster needs to care about
22:00 phox partly because I'm faking it and making foo-ib be two different IPs depending whether a client machine is or is not on the IB segment :)
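A sketch of the /etc/hosts trick phox describes, with made-up names and addresses -- the same hostname resolves to the InfiniBand address on IB-attached clients and to the Ethernet address everywhere else:

    # /etc/hosts on a client that is on the IB segment
    10.10.0.10   server1-ib
    # /etc/hosts on a client that is not
    192.168.0.10 server1-ib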
22:14 DV joined #gluster
22:27 khushildep joined #gluster
22:45 rwheeler joined #gluster
22:56 glusterbot New news from resolvedglusterbugs: [Bug 862082] build cleanup <http://goo.gl/pzQv9M>
23:01 * phox ponders in-kernel gluster
23:01 phox :P
23:09 a2 :O
23:10 derelm joined #gluster
23:13 a2 in-kernel?
23:19 phox would be nice
23:19 phox but I'm sure it's a good long ways off
23:19 phox well
23:19 phox long
23:19 phox maybe not so good :)
23:19 phox anywho
23:20 phox bbl.
23:21 dtyarnell joined #gluster
23:22 roo9 joined #gluster
23:22 roo9 when enabling gluster quotas, does gluster calculate the existing size of the directory?
23:29 roo9 quota also doesn't seem to accept exabytes as a valid unit
23:30 B21956 left #gluster
23:30 rcheleguini joined #gluster
23:33 dtyarnell when you integrate gluster into openstack and cinder my understanding is that the running cinder-volume mounts the fuse glusterfs and then uses tgtd to share the 'lun' out to the instance
23:34 dtyarnell i am getting a error on the local nova node that makes it look like libvirt is trying to attach with a local file
23:35 StarBeast joined #gluster
23:37 failshell joined #gluster
