
IRC log for #gluster, 2013-01-04


All times shown according to UTC.

Time Nick Message
00:12 elyograg haidz: to ensure a consistent number of bricks per host, the best way is to have/add hosts in a multiple of your replica count.
00:13 elyograg haidz: replica sets are determined by the order in which you specify the bricks on the create or the add-brick commands.
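
(A hedged illustration of the brick-ordering rule elyograg describes above, using made-up hostnames and brick paths: with replica 2, each consecutive pair of bricks on the command line becomes one replica set.)

    # hostA/hostB form one replica set, hostC/hostD the next
    gluster volume create myvol replica 2 \
        hostA:/bricks/b1 hostB:/bricks/b1 \
        hostC:/bricks/b1 hostD:/bricks/b1
    # grow later in multiples of the replica count; the same pairing rule applies
    gluster volume add-brick myvol hostE:/bricks/b1 hostF:/bricks/b1
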
00:22 semiosis FyreFoX: in & out but start talkin & i'll catch up when i can
00:27 haidz elyograg, thats the trick! awesome thanks. That makes this so much easier to conceive
00:43 kevein joined #gluster
00:48 raven-np joined #gluster
01:22 chirino joined #gluster
01:23 FyreFoX semiosis: k, the process for building ubuntu packages you gave me the other day. will that work for 3.4.0qa6 ?
01:28 bdperkin joined #gluster
01:31 H__ joined #gluster
01:31 _pol JoeJulian: if someone were starting from scratch, what OS would you recommend for gluster?
01:39 balunasj joined #gluster
01:43 noob2 joined #gluster
01:47 m0zes I had an interesting thought. since glusterfs needs a privileged port to mount, would it be possible to modify the rpc a little to open two ports, one privileged one not, and negotiate the original mount via the privileged port and then hand off to the unprivileged? then you could free the privileged port until a reconnection is necessary.
01:49 * m0zes is gonna fork and look at the feasibility. :)
01:50 ajm what would the advantage be?
01:51 m0zes freeing the privileged port for other uses. currently you *can* run into issues where services won't start because the glusterfs process stole a registered privileged port. 993 is a common complaint.
01:52 ajm couldn't you just stop listening after you accept?
01:52 m0zes the glusterfs process needs to be able to communicate to the server
01:53 ajm its tcp, no?
01:53 m0zes and it needs to keep listening in the event that the volume changes. the server has to push out the new vol-file.
01:53 ajm (i'm not intimately familiar with glusterfs internals)
01:53 ajm ah, ok
01:53 ajm so negotiate to use a non-priv port over a privileged port
01:54 m0zes yes. I think that could be a very useful feature. and given the fact that the client needs a privileged port per server, this could heal a lot of issues with big volumes.
01:55 ajm again, not familiar with internals, but what's the requirement for using a privileged port ?
01:56 m0zes they want a privileged port to be sure an administrator is communicating to the glusterd server iirc. there isn't much in the way of security otherwise atm...
01:58 m0zes it may be that this should be tackled differently, I am not sure. perhaps implementing some sort of keyed communication for the initial communication.
01:58 ajm mmmh, an option that simply discards that requirement with a big "don't use this on multiuser systems"
02:00 m0zes there is this, https://bugzilla.redhat.com/show_bug.cgi?id=764314 but I am not sure that is a wonderful option, because I wouldn't be able to use it ;) I am running in an HPC environment and I don't trust my users.
02:00 glusterbot <http://goo.gl/221FB> (at bugzilla.redhat.com)
02:00 glusterbot Bug 764314: medium, medium, ---, amarts, ON_QA , allow option to accept messages from insecure ports
02:02 m0zes I have plenty of users that are researching security and would try to break things, and I've got a good many non technical users that would do something silly and really break things ;)
02:02 ajm m0zes: mmh, that kinda sucks. i guess rpc-auth-allow-insecure is in the source now even
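
(For reference, a hedged sketch of enabling the insecure-ports options referred to above; the volume name is a placeholder and the exact option names are worth verifying against your GlusterFS version.)

    # let clients connect to bricks from unprivileged (>1024) ports
    gluster volume set myvol server.allow-insecure on
    # for glusterd itself, add this line to /etc/glusterfs/glusterd.vol and restart glusterd:
    #   option rpc-auth-allow-insecure on
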
02:05 ajm m0zes: are you actually running out of ports, or do you just want to exclude some from being used
02:05 m0zes I've been working on designing our bigger-better storage setup. I'm using glusterfs now with 2 servers and ~500TB of space. This next one looks like it will have 100-250 servers and 4-10PB of space. I would quickly get bitten by the privileged port issue.
02:06 m0zes I'm not getting hit yet, but in the future... perhaps.
02:06 ajm you're going to have that many active mounts per box?
02:06 ajm in that case I guess I do like your solution
02:07 m0zes these servers would be fileservers only. I've got 150 clients right about now.
02:13 dhsmith joined #gluster
02:27 raven-np1 joined #gluster
02:42 bitsweat_ joined #gluster
02:44 efries_ joined #gluster
02:44 Zenginee1 joined #gluster
02:48 dhsmith joined #gluster
02:50 bharata joined #gluster
02:56 dhsmith joined #gluster
03:08 y4m4 joined #gluster
03:11 overclk joined #gluster
03:27 shylesh joined #gluster
03:32 dhsmith joined #gluster
03:57 mohankumar joined #gluster
03:58 Humble joined #gluster
04:09 glusterbot New news from newglusterbugs: [Bug 890618] misleading return values of some functions. <http://goo.gl/WsVnD>
04:14 sripathi joined #gluster
04:21 Humble joined #gluster
04:25 deepakcs joined #gluster
04:55 sgowda joined #gluster
04:57 Humble joined #gluster
04:58 jvyas joined #gluster
04:58 jvyas hi guys: is libglusterfs going to entirely replace FUSE mounting?
05:01 vpshastry joined #gluster
05:04 semiosis imho, doubt it
05:05 jvyas but it has some performance benefits over fuse i guess... ?
05:08 semiosis i'd think it may, in cases where fuse is the bottleneck (usually the bottleneck is disk and/or network)
05:08 semiosis @ext4
05:09 glusterbot semiosis: Read about the ext4 problem at http://goo.gl/PEBQU
05:09 semiosis i needed that
05:09 jvyas hmm
05:09 jvyas yeah... i thought FUSE was reasonably fast nowadays
05:09 jvyas reading hekafs.org/ post by jeff darcy
05:11 semiosis i use glusterfs in ec2, so for me fuse is not the bottleneck... disk/network is
05:11 semiosis but some people use glusterfs with infiniband & ssd
05:11 semiosis lots of range
05:12 hagarth joined #gluster
05:21 hagarth left #gluster
05:21 hagarth joined #gluster
05:31 jvyas oh okay i see what you mean
05:32 jvyas why is the gfapi faster than fuse ? is it just a small latency cost ? or is it more significant (in the infiniband scenario) ?
05:34 semiosis when your network has +1ms latency, cpu cycles are hardly noticed.  when your network has 1us latency, cpu cycles are more noticeable
05:34 semiosis that's how i understand it at least
05:35 jvyas but i assume fuse would have the same approx # of cpu cycles as the API ?  or am i missing something?
05:38 dhsmith joined #gluster
05:38 semiosis http://en.wikipedia.org/wiki/Filesystem_in_Userspace
05:38 glusterbot <http://goo.gl/6KWE7> (at en.wikipedia.org)
05:53 sripathi joined #gluster
06:08 harshpb joined #gluster
06:18 shireesh joined #gluster
06:31 gm__ joined #gluster
06:34 kkeithley joined #gluster
06:48 shireesh joined #gluster
06:50 vimal joined #gluster
06:55 hagarth joined #gluster
06:56 bala1 joined #gluster
07:08 ngoswami joined #gluster
07:20 jtux joined #gluster
07:20 dhsmith joined #gluster
07:37 bala1 joined #gluster
07:53 hagarth joined #gluster
07:57 ekuric joined #gluster
08:01 ctria joined #gluster
08:08 sripathi joined #gluster
08:11 bala1 joined #gluster
08:12 passie joined #gluster
08:13 jjnash joined #gluster
08:13 nightwalk joined #gluster
08:16 passie left #gluster
08:25 bala1 joined #gluster
08:29 bala1 joined #gluster
08:31 tjikkun_work joined #gluster
08:31 guigui1 joined #gluster
08:50 ramkrsna joined #gluster
08:50 ramkrsna joined #gluster
08:51 dobber joined #gluster
08:52 duerF joined #gluster
08:53 bala1 joined #gluster
09:01 Humble joined #gluster
09:03 berend` joined #gluster
09:03 dblack joined #gluster
09:04 jdarcy joined #gluster
09:05 kkeithley joined #gluster
09:06 stigchristian joined #gluster
09:06 circut joined #gluster
09:06 hagarth_ joined #gluster
09:06 ndevos joined #gluster
09:10 wintix joined #gluster
09:10 wintix joined #gluster
09:11 andreask joined #gluster
09:13 bala1 joined #gluster
09:14 mohankumar joined #gluster
09:17 _Bryan_ joined #gluster
09:20 shireesh joined #gluster
09:49 vpshastry left #gluster
09:54 vpshastry joined #gluster
09:59 ndevos joined #gluster
10:01 kshlm joined #gluster
10:01 kshlm joined #gluster
10:05 wN joined #gluster
10:07 jdarcy joined #gluster
10:08 dblack joined #gluster
10:08 bdperkin joined #gluster
10:16 vpshastry joined #gluster
10:16 sgowda joined #gluster
10:20 dhsmith joined #gluster
10:31 puebele1 joined #gluster
10:34 sripathi1 joined #gluster
10:46 sgowda joined #gluster
11:05 vpshastry joined #gluster
11:18 Humble joined #gluster
11:19 sripathi joined #gluster
11:24 hagarth joined #gluster
11:27 hchiramm_ joined #gluster
11:33 ngoswami joined #gluster
11:57 andreask joined #gluster
12:09 puebele joined #gluster
12:41 chirino joined #gluster
12:43 balunasj joined #gluster
12:43 hagarth joined #gluster
13:20 raven-np joined #gluster
13:23 monkey joined #gluster
13:26 chirino joined #gluster
13:41 Alpinist joined #gluster
13:42 vpshastry joined #gluster
13:44 rwheeler joined #gluster
13:53 jtux joined #gluster
14:01 plarsen joined #gluster
14:03 manik joined #gluster
14:05 chirino joined #gluster
14:08 aliguori joined #gluster
14:15 vpshastry left #gluster
14:17 andreask1 joined #gluster
14:17 andreask1 left #gluster
14:18 andreask joined #gluster
14:37 jtux joined #gluster
14:40 hagarth joined #gluster
14:47 chirino joined #gluster
14:48 greylurk joined #gluster
14:50 lurpy joined #gluster
14:54 noob2 joined #gluster
14:55 bdperkin joined #gluster
14:57 ctria joined #gluster
15:01 sjoeboo sigh, still having lots of geo-replication setup woes
15:01 sjoeboo gluster volume geo-replication <volume> status shows nothing
15:01 sjoeboo but, i can see 3 non-functioning sessions going
15:01 sjoeboo can't turn off indexing to stop them
15:02 sjoeboo and haven't found a way to stop them; if i kill the procs, they return w/ a restart of glusterd
15:02 sjoeboo (i would have posted this to the mailing list, but i'm not getting any join emails from it!)
15:11 stopbit joined #gluster
15:12 sjoeboo hm, in /var/lib/glusterd/vols/$volname/info, there are slaves listed
15:12 sjoeboo how would one remove those, and propagate to all peers?
15:17 bitsweat_ left #gluster
15:21 wushudoin joined #gluster
15:26 bugs_ joined #gluster
15:27 nueces joined #gluster
15:43 hateya joined #gluster
15:56 johnmark sjoeboo: hey - what happens when you try to join the mailing list?
15:57 sjoeboo nothing, i get no confirmation emails
15:57 johnmark I added you to the "accept" list
15:57 sjoeboo i've checked spam too
15:57 sjoeboo hm
15:57 johnmark hrm, ok
15:57 sjoeboo ah, see i think i was following a different link to the mailing list management page than what was in the reply you just sent...
16:01 sjoeboo well, my mail got through, that's what matters...
16:01 johnmark oh? odd
16:01 johnmark yeah, but if you don't see replies, it's all for nought :)
16:01 johnmark er naught?
16:01 sjoeboo yah...
16:01 haidz im trying to determine the best way to load balance front end consumers (using the native gluster client) across the backend gluster storage servers. What would be the best way to do this? with a load balancer? or does gluster handle this itself? any documentation or help would be great.
16:02 johnmark sjoeboo: ok, I manually subscribed you
16:02 johnmark you should be getting a notification
16:02 sjoeboo cool
16:02 johnmark haidz: our default replication should take care of this
16:02 johnmark it sends requests to whichever server answers first
16:03 haidz johnmark, yes, but im referring to the actual mounts.. to distribute mounts across backend servers
16:03 johnmark right
16:03 johnmark that's what I meant
16:03 haidz so how does it handle a hard down node?
16:03 johnmark you mount a distributed-replicated volume
16:03 sjoeboo haidz: we do a DNS round robin on the servers, and use fetch-attempts=X where X= the number of servers
16:04 sjoeboo so it will retry the mount (fetching the volume info), X number of times
16:04 haidz sjoeboo, ah awesome
16:04 johnmark sjoeboo: good to know - but are you using NFS? or the GlusterFS client?
16:04 sjoeboo so if one node is down, the mount should just try a few times, each time getting a new ip to try until its gets the volume info
16:04 sjoeboo we do it with both nfs and gluster
16:04 johnmark haidz: without that, there's a timeout setting which will have to be exceeded before it moves on automatically
16:04 sjoeboo though nfs doesn't have the fetch-attempts, of course,
16:05 johnmark sjoeboo: interesting
16:05 haidz sjoeboo, any issues with using a load balancer?
16:06 sjoeboo haidz: not sure, we don't use one
16:06 johnmark haidz: and you can lengthen or shorten that timeout period depending on your preferences, although it sounds like a load balancer would lead to better results
16:06 haidz johnmark, the native gluster code uses http or rest on the backend?
16:06 johnmark sjoeboo: sounds like you created a script to load balance?
16:06 Humble joined #gluster
16:07 johnmark haidz: it uses good ole' RPC
16:07 johnmark it's a native protocol
16:07 robinr joined #gluster
16:07 johnmark er native GlusterFS protocol
16:07 haidz ah ok
16:07 ndevos haidz: like http://tools.ietf.org/html/rfc5531
16:07 glusterbot Title: RFC 5531 - RPC: Remote Procedure Call Protocol Specification Version 2 (at tools.ietf.org)
16:08 ndevos with the data encoded in http://tools.ietf.org/html/rfc4506
16:08 glusterbot Title: RFC 4506 - XDR: External Data Representation Standard (at tools.ietf.org)
16:08 haidz i see
16:09 haidz ok ill give the round robin a try.. is there a full list of mount options? doesnt appear to be a man page for gluster in the rpms
16:09 haidz lemme try google for that
16:09 robinr hi, i did a "gluster volume heal RedhawkHome info" and got number of entries: 1 '<gfid:9ed83644-cae6-4d16-a5b7-7ccb48c41695>'. What's the quickest way to see what file was affected ? I ended up walking the entire file system to find which file to correspond to that gfid. I'm at 3.3.1-1 from download.gluster.org. The link is the output: http://www.dpaste.org/avrYO/
16:10 glusterbot Title: dpaste.de: Snippet #216037 (at www.dpaste.org)
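
(One hedged way to resolve robinr's gfid without walking the whole tree, assuming the 3.3 brick layout where every file is hardlinked under <brick>/.glusterfs/; the brick path below is a placeholder, and this is a bash sketch rather than a supported tool.)

    GFID=9ed83644-cae6-4d16-a5b7-7ccb48c41695
    BRICK=/bricks/redhawkhome            # placeholder brick path
    # for regular files the .glusterfs entry is a hardlink, so -samefile finds the real name
    find "$BRICK" -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" ! -path '*/.glusterfs/*'
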
16:10 ndevos "glusterfs --help" and the script /sbin/mount.glusterfs is readable too
16:13 haidz sjoeboo, im not seeing that mount option.. is that in a gluster config somewhere? outside of fstab?
16:13 sjoeboo no, its a mount option, can't remember where i found it, but it works
16:13 sjoeboo fetch-attempts=X
16:13 haidz ah
16:13 haidz i see it in the mount.glusterfs
16:14 haidz awesome thanks
16:14 haidz ill give that a go
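
(A minimal fstab sketch of the round-robin mount described above; hostname, volume name and mount point are placeholders, and fetch-attempts is the option listed in /sbin/mount.glusterfs.)

    # gluster-rr resolves round-robin to all storage servers; retry the volfile
    # fetch a few times so one down server doesn't block the mount
    gluster-rr:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,fetch-attempts=3  0 0
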
16:16 chirino joined #gluster
16:21 andrei_ joined #gluster
16:21 andrei_ hello
16:21 glusterbot andrei_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:21 andrei_ is anyone here? I have some questions about setting up glusterfs on an existing file server
16:22 andrei_ and migrating data from the current setup
16:22 andrei_ which is a single nfs server to a setup with two glusterfs servers
16:25 andrei_ anyone here?
16:26 kkeithley should be pretty straight forward. Do you have specific questions?
16:27 andrei_ kkeithley: thanks, yeah, I do
16:27 andrei_ basically, i've got an existing file server for kvm images
16:27 andrei_ a live environment
16:27 andrei_ the file server uses nfs to run two kvm host servers and around 30vms
16:28 andrei_ i want to create a fault tolerant setup
16:28 andrei_ with 2 file servers
16:28 andrei_ and I want to move to glusterfs without any downtime
16:29 andrei_ all the guides that i've seen describe how to create a glusterfs setup without any previous data
16:29 andrei_ however, as I already have the data on the file server
16:29 andrei_ how do I make glusterfs use that data?
16:29 andrei_ as an example, all my nfs mount points are stored under /zfs-mirror mount point
16:30 andrei_ i want glusterfs to use /zfs-mirror as a brick
16:30 andrei_ obviously I can't lose any data
16:31 andrei_ and copying data from let's say /zfs-mirror over to /glusterfs-brick folder is not an option as I need to preserve the virtual machines and I can't simply copy it
16:31 andrei_ how do i tell gluster to use /zfs-mirror mount point as a brick and do the necessary to index it or whatever to make sure the data in /zfs-mirror would show up as a glusterfs volume
16:33 chirino joined #gluster
16:33 ctria joined #gluster
16:34 kkeithley maybe JoeJulian or semiosis have a trick for doing that, but generally speaking that's not a use case that glusterfs tries to address.
16:34 kkeithley IOW there isn't a tool for doing what you want at this point in time
16:35 andrei_ what will happen to the data if I simply create a new glusterfs brick and point it to an existing mountpoint like fileserver:/zfs-mirror
16:35 andrei_ will it not automatically make all the data that is currently stored under that mountpoint a glusterfs mountpoint?
16:38 sjoeboo johnmark: still no confirmation...
16:43 kkeithley No, it's not going to work. Gluster adds xattrs to the files and directories. Without them gluster won't recognize your volume as a gluster volume. That's a rather simplistic explanation, there's obviously lots more to it than that.
16:44 andrei_ kkeithley: thanks
16:45 semiosis andrei_: if you want to move to glusterfs you need to point your application (vm, whatever) to go through the glusterfs mount point, afaik you'd need downtime to make that change
16:46 andrei_ I am surprised there is no tool in glusterfs that lets you make an existing mountpoint a glusterfs mountpoint
16:46 semiosis also you really should build a test environment and try out all the different failure scenarios you can think of before moving your production setup onto glusterfs
16:46 semiosis andrei_: i cant imagine how that would be possible
16:47 semiosis kkeithley?
16:47 kkeithley er, yes?
16:47 andrei_ semiosis: not sure myself. But from the glusterfs developer point of view, I would imagine there will be a lot of people who would need to migrate their existing data over to glusterfs
16:47 andrei_ not simply start from scratch
16:48 semiosis is it even possible to "convert" an existing mount of one kind to another kind?
16:48 semiosis kkeithley: ^
16:48 andrei_ i can't be the only person with existing data that wants to use glusterfs
16:48 semiosis andrei_: everyone schedules downtime
16:48 kkeithley do you mean, e.g. changing an ext4 to xfs?
16:48 semiosis kkeithley: or nfs to glusterfs... while in use
16:49 andrei_ kkeithley: nfs over to glusterfs
16:49 andrei_ semiosis: i understand about the downtime. a few minutes of downtime would not be a problem
16:50 andrei_ the problem would be to switch all vms off
16:50 andrei_ and copy 10TB+ of data from one partition on the file server to a glusterfs enabled mountpoint
16:50 andrei_ this will take ages!
16:51 andrei_ plus finding an extra 10TB+ of storage space
16:51 kkeithley if I understand andrei_ correctly, he just wants to use his current nfs as the bricks. The existing data on them aside, I don't know why it wouldn't work. I wouldn't like the performance or reliability issues that that would probably have
16:51 semiosis it should be possible to create a new pure replica volume with data pre-loaded on one of the bricks, then glusterfs self heal should sync that to the other replica(s)
16:52 johnmark sjoeboo: are you getting mailing list emails now?
16:52 semiosis andrei_: glusterfs over nfs?  i wouldn't recommend it
16:52 andrei_ semiosis:  at the moment I've got a single server with just over 10tb of data
16:53 sjoeboo johnmark: nothing
16:53 andrei_ i've got a spare server which I want to use for glusterfs to create a replica setup
16:53 andrei_ two glusterfs servers
16:53 andrei_ at the moment all my vm data is served over nfs
16:54 semiosis do you have a test environment where you can experiment?
16:54 andrei_ and I am trying to find out the best way to migrate this over to two server glusterfs setup
16:54 andrei_ semiosis: i can create several VMs with some disk space and experiment
16:55 Mo___ joined #gluster
16:55 johnmark sjoeboo: ah, well your message was the last one, actualy
16:55 andrei_ but I do not have a separate physical environment
16:55 sjoeboo ha, okay, but i never got a confirm from you/the list
16:55 kkeithley keeping in mind that the self heal will take a long time too. TANSTAAFL.
16:56 * semiosis had to look up TANSTAAFL
16:56 semiosis but now that i did... +1 to that
16:56 kkeithley ;-)
16:56 andrei_ how would self heal help me?
16:57 semiosis it would copy the data from your existing server to the new replica
16:57 semiosis after the gluster volume was mounted
16:58 semiosis but it would necessarily cause a performance hit (idk how bad) and lock some parts of the vm image file, which may interrupt vm ops... hard to say for sure
16:58 kkeithley what is the current nfs server? You say the underlying volume is a zfs fs?
16:58 semiosis then there's that
16:59 andrei_ semiosis: so, what are the steps that i need to do to migrate from a live nfs server (call it server1) to a setup where I would have server1 and server2 acting as a replica glusterfs setup?
16:59 andrei_ don't I need to make the current live server a glusterfs server first? and then add the second replica server?
17:00 andrei_ kkeithley: yes, the underlying fs is zfs
17:00 andrei_ it's running on ubuntu lts
17:01 kkeithley you'll need to install gluster on both machines, and start gluster on server1 without nfs initially. Once the self heal finishes you can disable knfs on server1 and start gnfs on both. You'll have to manage the logistics of doing that somehow so as not to disrupt your running vms.
17:02 semiosis create volume, then set nfs.disable on before starting the volume -- should do it
17:02 andrei_ kkeithly: thanks
17:03 semiosis see ,,(options)
17:03 glusterbot http://goo.gl/dPFAf
17:04 andrei_ but can I use the existing mount point (currently stored on server1  under /zfs-mirror) and make it glusterfs enabled?
17:04 semiosis the only way to get a sure answer is to try
17:05 semiosis but yes, that's definitely worth trying
17:05 andrei_ semiosis: thanks, I will give a go with a test vm server
17:05 semiosis just specify it as a brick such as server1:/zfs-mirror when creating the volume
17:05 andrei_ so, the option that I need is called self heal
17:05 puebele joined #gluster
17:05 andrei_ semiosis: thanks a lot!
17:05 semiosis no the option you'd need is nfs.disable (iirc)
17:06 andrei_ yeah, I will disable the nfs when creating a glusterfs volume
17:06 semiosis self heal happens automatically when glusterfs recognizes that your replicas aren't in sync
17:06 andrei_ semiosis: okay, so I can start with just one glusterfs server and add a replica to it?
17:06 semiosis uh, maybe?
17:07 semiosis bbiab
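
(Pulling the suggestions above into one hedged, untested sketch; server2:/brick1 is a made-up second brick, and as semiosis says this is something to rehearse in a test environment before touching production.)

    # the existing data on server1:/zfs-mirror becomes the first brick
    gluster volume create vmstore replica 2 server1:/zfs-mirror server2:/brick1
    # keep gluster's built-in NFS off so it doesn't collide with the running knfsd
    gluster volume set vmstore nfs.disable on
    gluster volume start vmstore
    # trigger a full self-heal so the pre-loaded data syncs to server2
    gluster volume heal vmstore full
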
17:11 sjoeboo so, hate to be a huge bug, but, really looking to get this geo-replication problem(s) solved, anyone have any thoughts/insight?
17:11 sjoeboo basically, looking currently for a sure-fire way to STOP all georeplication/clear if from all configs
17:11 sjoeboo so i can try again, and, if i don't get it right again, have a way to back out
17:15 andreask joined #gluster
17:19 johnmark sjoeboo: I've forwarded your inquiry to our dev list, in the hopes that they'll pipe up on gluster-users
17:19 sjoeboo cool
17:30 chirino joined #gluster
17:41 lh joined #gluster
17:41 lh joined #gluster
18:03 zaitcev joined #gluster
18:05 chirino joined #gluster
18:09 erik49 joined #gluster
18:20 tqrst joined #gluster
18:23 robinr joined #gluster
18:25 * elyograg skips today's backlog. tl;dr
18:51 _pol joined #gluster
18:51 _pol Hi all, I tossed this question out yesterday but didn't stick around to see if there was an answer.  If someone were starting a gluster setup from scratch, what OS is recommended?
18:52 _pol My hunch is something like RHEL > CentOS > Redhat-ish > Ubuntu > Debian-ish > everything else
18:52 _pol Is that accurate?
18:53 sjoeboo _pol: I'm just a user, but I'd think so. RHEL even has their storage appliance OS you can buy
18:53 sjoeboo we do mostly Centos here for storage
18:54 ajm gluster seems to be well done enough to be very much not os-specific
18:54 _pol ajm: true, but when you are trying to limit your variables...
18:55 _pol Is there a consensus around the channel about the most common distro for gluster?
18:55 ajm for me, that's making the OS one I know/understand :) (gentoo)
18:55 _pol It's a lot easier to get help when you are using a distro that is popular in the community.
18:56 ajm i've seen that with some projects, not here
18:56 jdarcy _pol: Definitely RHEL/Fedora.  That's the platform that we build packages for all the time, and then those packages get used for all of the hard-core testing.
18:56 elyograg I use CentOS.  If I could get management to pay, I'd probably use RHEL.  Gluster is a redhat project, so I think it makes sense to stay within that universe.
18:56 jdarcy There are Debian packages, but they don't get anywhere near the same level of scrutiny.
18:56 neofob_laptop joined #gluster
18:57 jdarcy I'm thinking of installing Arch on my next laptop, and making packages for that just because I'm an utterly insane bastard.
18:57 ajm arch is very nice
18:57 _pol This was my thinking too, though I didn't think to put Fedora ahead of CentOS
18:59 _pol Is there any experience with a mixed OS gluster setup (some nodes with Ubuntu, some with Fedora, etc)?
19:00 jdarcy I wouldn't run mixed servers, with respect to either distro or 32-bit vs. 64-bit, but it doesn't matter too much if the clients are different.
19:01 kkeithley As much as I'd like to say use Fedora (it's what I use for my development work) I hear a lot of people say the lack of a LTS version makes it a non-starter for production, and I'm kinda inclined to agree.
19:01 bauruine joined #gluster
19:04 polfilm joined #gluster
19:22 y4m4 joined #gluster
19:31 sjoeboo anyone in here using/have experiance w/ geo-rep ? really hit the wall this week w/ it
19:38 johnmark jdarcy: can I call you the resident geo-rep expert?
19:38 johnmark jdarcy: ^^^^ see sjoeboo's pleas for helpo
19:38 JoeJulian sjoeboo: I haven't seen a lot of people that use it hanging out in here. I'll take a look. You want an idiot's guide to removing geosync from a volume if I understand correctly.
19:39 sjoeboo basically, yes
19:39 sjoeboo i've tried setting it up a few times, to no avail, but my main concern is being able to stop the failed attempts so i can systematically see where i'm failing
19:40 Technicool joined #gluster
19:41 johnmark Technicool: w00t
19:41 johnmark Technicool: what's shakin', bacon?
19:41 Technicool well all i can say for certain is, its not the bacon apple pie from Williams of Sonoma
19:42 Technicool which apparently has deceptively little bacon
19:42 Technicool happy new year everyone....except, of course, you dirty, lying Mayans
19:42 jvyas "nm" seems to give method names . Its output is funny though - no arguments..
19:42 jvyas also it is prefixed with a really weird hexadecimal # .
19:43 JoeJulian Technicool: The Mayans couldn't even predict the end of their own civilization, not sure why anyone would trust any of their predictions after that.
19:45 Technicool JoeJulian, dunno, i really thought they had a lock on it...kind of like, if 17 Black hasn't come up in roulette 10 times in a row, you better put money down there since its totally about to come up the next time
19:47 JoeJulian 35 times...
19:47 JoeJulian 10 times doesn't leave very good odds. ;)
19:49 lh joined #gluster
19:49 lh joined #gluster
19:54 kkeithley jvyas: what were you expecting nm to output?
20:01 jvyas kkeithley, function name + arg names.
20:01 jvyas i think -s does that.
20:02 jvyas im sorry guys i was in the wrong room.
20:02 johnmark teehee :)
20:05 ninkotech_ joined #gluster
20:05 JoeJulian Gah! Seriously? That static path is still in geo-sync?
20:07 chirino joined #gluster
20:08 lhawthor_ joined #gluster
20:09 lh joined #gluster
20:10 lhawthor_ joined #gluster
20:15 ninkotech_ joined #gluster
20:20 JoeJulian Ok, at least it's not hardcoded anymore. It's in a conf file.
20:21 kkeithley which path are you referring to?
20:21 JoeJulian /var/lib/glusterd/geo-replication/gsyncd.conf:remote_gsyncd = /usr/local/libexec/glusterfs/gsyncd
20:22 JoeJulian Of course it's actually installed at /usr/libexec/glusterfs/gsyncd
20:22 kkeithley hmmm. yuck indeed
20:26 johnmark jvyas: is there a particular reason you're compiling from source? I mean, it should work and all, just wondering why you're not using hte packages
20:27 kkeithley he wants to use libglapi (nee libgfapi) which isn't in any released packages yet
20:31 JoeJulian When did that change? You have to run glusterd on the slave now for geo-rep.
20:31 johnmark oh oh right
20:31 johnmark JoeJulian: that wasn't always the case?
20:32 JoeJulian I don't think so.
20:32 * sjoeboo reads w/ interest...
20:32 johnmark Technicool: ^^^ any words of wisdom?
20:32 JoeJulian But it's been over a year since I last tried figuring anything out related to this so I could have just lost my mind.
20:32 johnmark Technicool: I know you've done a thing or two with geo-rep back in your day
20:35 _pol kkeithley: (to resurrect my earlier topic) so then there is an LTS for CentOS?  Or is RHEL the only RHish distro with that feature?
20:36 erik49 Has anyone here tested various AWS gluster setups?
20:36 erik49 In the process of doing that and it'd be nice to share notes
20:37 sjoeboo _pol: what do you mean an LTS for centos ?
20:38 JoeJulian _pol: CentOS has no support, let alone long-term. That's kind-of the point to it.
20:38 kkeithley CentOS is a clone of RHEL, so it's presumably got the same sort of longevity as the associated RHEL release
20:38 dustint joined #gluster
20:39 JoeJulian Oh, that... right.
20:39 _pol I guess I need to brush up on my information wrt CentOS vs Fedora.  If CentOS is a clone/fork of RHEL, then Fedora is...
20:39 hateya joined #gluster
20:39 JoeJulian The upstream distro that becomes RHEL.
20:39 JoeJulian more or less
20:40 kkeithley I keep hearing that it's possible to download RHEL — for free — but just don't expect support. If that were demonstrably true I'd say go with real RHEL
20:40 bugs_ fedora12 became rhel6 which became centos6
20:40 _pol Ooh. So it is like the "Sid of RedHat"
20:40 kkeithley that's correct
20:40 _pol kkeithley: oh really.  Hm.
20:40 * JoeJulian raises an eyebrow...
20:40 _pol I am with an edu, so maybe getting RHEL for free isn't so rough.
20:42 JoeJulian well there's a good bug... http://fpaste.org/dbcw/
20:42 glusterbot Title: Viewing Paste #264654 (at fpaste.org)
20:43 kkeithley Fedora 18 will be out in a couple weeks. I forget whether it's F17 or F18 that will be RHEL7, probably a bit of both.
20:43 JoeJulian 18
20:43 sjoeboo i think 18, mostly for the kvm/libvirt bits
20:43 sjoeboo JoeJulian: yeah, that's what i'm hitting, basically
20:44 sjoeboo (along with some seemingly "ghost" ones i cannot stop)
20:48 JoeJulian sjoeboo: Are you Matthew Nicholson?
20:49 chirino joined #gluster
20:49 sjoeboo JoeJulian: yes :-)
20:49 JoeJulian Ok, just wanted to make sure I'm not duplicating my efforts.
20:54 JoeJulian Looks like this is why I can't stop geosync... [2013-01-04 12:52:17.978751] E [glusterd-op-sm.c:2716:glusterd_op_ac_stage_op] 0-: Validate failed: -1
20:55 JoeJulian sjoeboo: Restarting glusterd on the slave allowed me to stop geo-replication.
20:56 sjoeboo hm, okay
20:58 JoeJulian Oh, I know why I keep vacillating between geo-rep and geosync in my terminology... gsyncd.
20:59 JoeJulian ... and now start, status, stop works every time for me.
20:59 JoeJulian I'll wipe this and try again.
21:02 sjoeboo JoeJulian: restarting glusterd on my slaves didn't help me much(at all), but i'm willing to try whatever you come up with!
21:03 duffrecords joined #gluster
21:11 duffrecords I started getting really slow performance on my newly-installed Gluster system so I decided to scrap it and start from the ground up, running some tests on my software RAID volume before installing Gluster (different stripe sizes, different file transfer commands, etc.)  I'm still seeing slow speeds and bad concurrency.  can someone recommend an appropriate RAID stripe size for handling multi-gigabyte virtual disk images?
21:14 _pol kkeithley: Is this what you were talking about as far as freely downloadable RHEL: ftp://ftp.redhat.com/pub/redhat/linux/enterprise/
21:14 _pol kkeithley: maybe you are just not supposed to run it without a license, but you can download it... I found the link on the CentOS page.
21:17 JoeJulian Without a license I don't think you can install updates.
21:17 _pol Well, that would certainly be a problem.
21:26 copec joined #gluster
21:28 JoeJulian sjoeboo: Damn, I can't make it fail again.
21:29 sjoeboo hrm
21:29 JoeJulian Here's the step-by-step I used. http://www.gluster.org/community/documentation/index.php/HowTo:geo-replication
21:29 glusterbot <http://goo.gl/5SXU0> (at www.gluster.org)
21:30 sjoeboo yeah, not only can I not stop that one (the one geo-rep that shows up in status), but there are others attempting to run that DON'T show up that i can't stop either
21:30 sjoeboo that is basically what i've done..
21:30 JoeJulian stop ALL your glusterd then start them all again.
21:30 sjoeboo so...if going gluster to gluster...what is ssh used for ?
21:31 sjoeboo all = master and slave ?
21:31 JoeJulian So all glusterd are not running at some point in this process.
21:31 sjoeboo okay
21:31 sjoeboo stopped, but on one of my master side nodes, still tons of procs running
21:32 JoeJulian glusterd, the management process, seems to communicate directly now but gsyncd uses ssh for the tunnel over which it passes rsync.
21:32 JoeJulian sjoeboo: yeah, that's expected.
21:33 sjoeboo okay, so start them back up  then? or kill those procs?
21:33 JoeJulian start them back up.
21:33 sjoeboo (the procs are mostly/all gsync procs)
21:34 JoeJulian ps ax | grep gsyncd | fpaste (if you have it installed)
21:35 JoeJulian brb... gotta change a poopy diaper.
21:36 JoeJulian nm, false alarm.
21:36 sjoeboo https://gist.github.com/4456582
21:36 glusterbot Title: gist:4456582 (at gist.github.com)
21:39 JoeJulian yuck.
21:40 sjoeboo yeah, there are a few bad attempts in there i need to get out
21:41 JoeJulian kill 12854 19630 19711 49917 58938
21:41 JoeJulian then let's see where that leaves us.
21:41 sjoeboo done, status change to faulty
21:41 sjoeboo lets see if i can stop it..
21:41 sjoeboo nope
21:42 JoeJulian let's see the ps again
21:42 sjoeboo and they came back
21:42 JoeJulian I was afraid of that.
21:42 JoeJulian That's probably what /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py --monitor manages.
21:43 sjoeboo https://gist.github.com/4457118
21:43 glusterbot Title: gist:4457118 (at gist.github.com)
21:44 JoeJulian Oh, that's actually better.
21:44 sjoeboo i had previously gone as far as stopping glusterd on all masters, (didn't try slaves), AND killing all the gsyncd procs, but they all came back
21:45 sjoeboo i would have even edited the gsyncd.conf, but i wasn't sure how to propagate that to the rest of the nodes ...
21:46 JoeJulian It looks like "kill $(pgrep -f /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py)" should work.
21:48 sjoeboo w/ glusterd stopped all over again?
21:49 JoeJulian Yeah, that sounds good.
21:52 sjoeboo okay, all gone, restarting...
21:52 sjoeboo they all came back
21:53 sjoeboo even the bad ones/the ones not even showing in status
21:53 sjoeboo what about gsyncd.conf ?
21:54 JoeJulian Let me try something here and see if it works...
21:56 JoeJulian Ok, kill all glusterd
21:56 sjoeboo okay
21:57 JoeJulian (this assumes you want NO geo-rep remaining) rm -rf /var/lib/glusterd/geo-replication
21:58 sjoeboo yeah, i want it all gone so i can get it right
21:58 JoeJulian edit /var/lib/glusterd/vols/*/info and remove "geo-replication.indexing=on" and any line that starts with "slave"
21:58 sjoeboo on just one node or all of them (masters)
21:58 sjoeboo ?
21:58 sjoeboo cool
21:59 JoeJulian All members of the trusted pool.
21:59 JoeJulian (all peers)
22:01 JoeJulian Did the info files have multiple slave lines?
22:01 sjoeboo yeah
22:01 sjoeboo and not all the same # !
22:01 JoeJulian That's what I was guessing.
22:01 sjoeboo done
22:01 sjoeboo restart i assume..
22:01 JoeJulian Should be able to start glusterd and be sane.
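
(The cleanup JoeJulian walks through above, condensed into a hedged sketch to run on every peer; it removes ALL geo-replication state for every volume, so treat it as a last resort and adjust paths to your install.)

    service glusterd stop
    pkill -f /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py
    rm -rf /var/lib/glusterd/geo-replication
    # drop the indexing flag and any slave lines from each volume's info file
    sed -i '/^geo-replication.indexing=on$/d; /^slave/d' /var/lib/glusterd/vols/*/info
    service glusterd start
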
22:02 chirino joined #gluster
22:02 sjoeboo yes, no gsync procs, status shows nothing, whew!
22:02 JoeJulian Do you know how you got into this state?
22:02 * sjoeboo owes JoeJulian many beers
22:02 sjoeboo a few failed attempts, this was all part of ripping a 20x2 into 2 20's
22:03 sjoeboo while keeping the data intact
22:03 sjoeboo in a hurry
22:03 sjoeboo in production
22:03 JoeJulian Yeah, I know how that goes.
22:03 JoeJulian I get things into some broken state and file a bug but since I don't know how I was able to get it into that state, the bug report kind-of sucks.
22:03 glusterbot http://goo.gl/UUuCq
22:03 sjoeboo okay, cool, going to follow that how-to, to the letter, then head out....
22:04 JoeJulian You should probably still file one stating that it's possible to get multiple identical slaves defined.
22:04 kkeithley _pol: I believe you can run it. You won't be able to register it to get updates probably, and if you call for support, don't expect to get much help unless you're ready to open your wallet. ;-)
22:05 chirino joined #gluster
22:06 JoeJulian I'm going to go take my daughter to a local coffee shop that has a kids play area so she can socialize. See y'all later.
22:06 sjoeboo JoeJulian: just to be extra sure...since i want to do a volume to the root of another volume, i can do:
22:06 sjoeboo gluster volume geo-replication gstore ox60-gstore01:gstore-rep start
22:06 sjoeboo right?
22:06 sjoeboo (no need for a /gstore-rep ? )
22:07 JoeJulian I highly doubt it.
22:07 sjoeboo okay, leading / it is then!
22:07 sjoeboo some of the docs are kinda wishy-washy about this...
22:07 JoeJulian I would expect you'll need to target the fuse mount
22:08 sjoeboo oh, really? weird, maybe that was one of my problems, as it seemed from the docs you should just target the volume, basically like a mount ...slave:volume , not slave:/mount/point/of/fuse_vol
22:09 JoeJulian I hope I can find some time to work on the docs this weekend. I REALLY want to convert the docbook to asciidoc.
22:11 JoeJulian Yeah, every example lists "remote_dir" but nothing about a remote volume, so I'm sure that it has to be a client mountpoint.
22:12 JoeJulian Would be a nice feature request though.
22:17 sjoeboo okay, went for the remote dir (so now it's listed as via ssh://) and a test file was replicated
22:17 sjoeboo good enough for the weekend!
22:20 m0zes I used gluster://slave-host:rep-vol for geo-rep in 3.2... no ssh.
22:25 JoeJulian Heh, yes. I see how that works now.
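
(For the record, the two slave-target forms that came up here, hedged as 3.2/3.3-era syntax using the hostnames/volume names from the conversation; worth double-checking against the geo-replication docs before relying on it.)

    # replicate into a plain directory on the slave, reached over ssh:
    gluster volume geo-replication gstore ssh://ox60-gstore01:/gstore-rep start
    # or target a slave volume directly, as m0zes describes:
    gluster volume geo-replication gstore gluster://ox60-gstore01:gstore-rep start
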
22:29 neofob_laptop left #gluster
22:38 tc00per joined #gluster
23:13 johnmark sjoeboo: be sure and send JoeJulian a beer :)
23:13 johnmark via air mail
23:36 hattenator joined #gluster
23:57 noob2 joined #gluster
23:57 noob2 left #gluster
