
IRC log for #gluster, 2014-02-01


All times shown according to UTC.

Time Nick Message
23:09 mattappe_ joined #gluster
23:16 ira joined #gluster
23:28 mattapperson joined #gluster
23:29 mattapp__ joined #gluster
23:32 jiffe98 that is kind of confusing though
23:32 jiffe98 because when I think of replicas I think there is one original and one replica so if I were to list how many replicas there were in a 2 node mirrored setup I would think 1
23:41 mattapp__ joined #gluster
23:46 sarkis joined #gluster
23:50 dewey joined #gluster
00:06 andrewklau joined #gluster
00:26 sarkis hey guys
00:30 edong23 joined #gluster
00:34 sarkis so i am dealing with a gluster 3.0.x setup, trying to get to 3.4, looks like along the way glusterfsd.conf and glusterfs.conf were merged? is that right?
00:34 sarkis s/conf/vol
00:52 purpleidea jiffe98: it's just you that's confused about this, sorry.
00:53 purpleidea once you've setup at least one test cluster, then you'll figure it out. do this first, and then come back and ask questions.
00:54 gmcwhistler joined #gluster
00:54 purpleidea even _if_ setting up gluster is easy and worked perfectly, you would still need to do trial and error clusters so that you learn how it works, and the semantics
00:55 uebera|| joined #gluster
01:02 mattappe_ joined #gluster
01:07 andrewklau Hi, is there any setting that'll make glusterfsd not so io intensive that it hogs all the resources? eg. adding a new brick to a 2 host replica will absolutely bring the first host to a halt while it replicates
01:12 sarkis purpleidea: do you know if i can startup with 1 server? for some reason i get this... https://gist.github.com/sarkis/90f83860dfbea57f4960
01:12 glusterbot Title: gluster 3.0 to 3.4 (at gist.github.com)
01:15 sarkis or anyone else that has a moment ;)
01:33 sarkis anyone still around?
01:34 mattappe_ joined #gluster
01:35 purpleidea sarkis: you have to ask your question properly and be patient. pinging random people won't help your case
01:41 mattappe_ joined #gluster
01:48 sarkis purpleidea: sorry :(
01:48 sarkis purpleidea: are u doing the puppet gluster talk at SCALE?
01:49 sarkis was trying to figure out who it could be, you have the nicest puppet gluster module so i assumed it was you hehe
01:50 purpleidea sarkis: flattery :P
01:50 purpleidea yeah it's me
01:50 purpleidea glad you like the module
01:52 vpshastry joined #gluster
02:01 sarkis woot
02:01 sarkis well i'm trying to convert us to using it currently.. first step though is to get 3.4.x working with what we have currently
02:03 sarkis so if anyone has a quick second to look: https://gist.github.com/sarkis/90f83860dfbea57f4960, trying to figure out why i get the failure upon starting glusterd... i will be forever in debt. I have to step away but will be back after dinner hacking away, im sure i can figure this all out tonight :)
02:04 purpleidea sarkis: are you setting up these files manually?
02:04 purpleidea sarkis: 3.4 is much different than 3.0. everything happens with commands now. try building a test setup with something like ,,(vagrant)
02:04 glusterbot sarkis: (#1) Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/, or (#3) Part 3 @ https://ttboj.wordpress.com/2014/01/02/vagrant-clustered-ssh-and-screen/, or (#4) Part 4 @
02:04 glusterbot https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/, or (#5) https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/
02:05 purpleidea sarkis: whether you use puppet or not, you shouldn't really have to edit config files by hand. you want the gluster command instead. you run it from any host in the cluster for management purposes.
02:06 purpleidea i've got to check out for tonight. good luck!
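The command-driven workflow purpleidea describes can be sketched roughly as below. Hostnames, the volume name, and the brick path are placeholders, not taken from a real setup:

```shell
# Sketch of the 3.4-era command workflow: no hand-edited vol files,
# everything through the gluster CLI from any host in the cluster.

# On one node, add the other node to the trusted pool:
gluster peer probe server02

# Create a 2-way replicated volume from one brick per server:
gluster volume create myvol replica 2 \
    server01:/data/brick1 server02:/data/brick1

# Start it and check that both bricks came online:
gluster volume start myvol
gluster volume info myvol
```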
02:12 sprachgenerator joined #gluster
02:25 tokik joined #gluster
02:44 sprachgenerator joined #gluster
02:50 glusterbot New news from resolvedglusterbugs: [Bug 1060208] [RHSC] Skipped file count in remove-brick status dialog not shown <https://bugzilla.redhat.com/show_bug.cgi?id=1060208>
02:51 _pol joined #gluster
02:54 davinder joined #gluster
02:56 kdhananjay joined #gluster
03:07 kdhananjay joined #gluster
03:12 bala joined #gluster
03:19 kc6000 joined #gluster
03:20 sprachgenerator joined #gluster
04:01 mattappe_ joined #gluster
04:01 kc6000 left #gluster
04:50 dbruhn joined #gluster
04:53 bala joined #gluster
04:57 vpshastry joined #gluster
05:08 hagarth joined #gluster
05:12 jporterfield joined #gluster
05:23 vpshastry joined #gluster
05:23 ira joined #gluster
06:00 swaT30 joined #gluster
06:18 vpshastry joined #gluster
07:23 toki joined #gluster
08:06 ngoswami joined #gluster
08:32 andrewklau joined #gluster
08:35 StarBeast joined #gluster
08:53 avati joined #gluster
08:59 saltsa joined #gluster
08:59 ngoswami joined #gluster
09:10 jclift_ joined #gluster
09:10 jclift_ joined #gluster
09:18 saltsa_ joined #gluster
09:18 qdk joined #gluster
09:19 mikedep333 joined #gluster
09:19 sarkis_ joined #gluster
09:20 mkzero_ joined #gluster
09:29 cfeller_ joined #gluster
09:30 Jayunit100_ joined #gluster
09:31 jclift joined #gluster
09:34 andrewklau joined #gluster
09:34 juhaj joined #gluster
09:34 sputnik13 joined #gluster
09:34 mojorison joined #gluster
09:34 Krikke joined #gluster
09:46 tdasilva joined #gluster
10:43 jporterfield joined #gluster
10:51 jporterfield joined #gluster
10:53 tdasilva left #gluster
11:16 jporterfield joined #gluster
11:36 jporterfield joined #gluster
12:17 sputnik13 joined #gluster
12:19 diegows joined #gluster
12:33 mattappe_ joined #gluster
12:34 burn420 joined #gluster
12:36 burnalot joined #gluster
13:04 vpshastry joined #gluster
13:16 mattappe_ joined #gluster
13:19 saltsa joined #gluster
13:21 mattappe_ joined #gluster
13:53 realdannys joined #gluster
13:54 jporterfield joined #gluster
13:57 burnalot joined #gluster
14:05 [o__o] left #gluster
14:08 [o__o] joined #gluster
14:30 [o__o] left #gluster
14:32 [o__o] joined #gluster
14:52 mattappe_ joined #gluster
15:11 mikedep333 left #gluster
15:12 jporterfield joined #gluster
15:41 Dave2 joined #gluster
15:44 mattapperson joined #gluster
15:58 vpshastry joined #gluster
16:18 mattappe_ joined #gluster
16:20 jporterfield joined #gluster
16:26 jporterfield joined #gluster
16:37 sarkis i am trying to replicate these settings from 3.0.x: https://gist.github.com/sarkis/90f83860dfbea57f4960 on 3.4.1... so far i started gluster on both servers, probed peers, setup a volume via command "gluster volume create puppet-shared replica 2 server01:/opt/mt/data/gluster-storage server02:/opt/mt/data/gluster-storage"
16:37 burn420alot joined #gluster
16:37 glusterbot Title: gluster 3.0 to 3.4 (at gist.github.com)
16:38 sarkis how do i know it did all the "magic" of posix1, locks, and also how would i add stuff like performance/io threads and performance/write behinds?
16:53 vpshastry joined #gluster
16:57 rotbeard joined #gluster
17:05 sarkis https://gist.github.com/sarkis/7c24068a2b0eabb3f091 trying this out now... when i mount -a the fstab line: /etc/glusterfs/puppet-shared.vol        /etc/puppet     glusterfs       defaults,_netdev 0 0
17:05 glusterbot Title: etc-puppet.log (at gist.github.com)
17:05 sarkis i get errors detailed in the gist
17:07 samppah sarkis: umm.. do you have some special reason why you are trying to do 3.0 style config?
17:08 samppah you can set most of the options with gluster volume set command
17:10 sarkis samppah: im moving a 3.0.x cluster to 3.4.1
17:10 sarkis samppah: seems like it would be easier at this point but i could be wrong :(
17:12 sarkis i see the command now gluster volume help for the win
17:12 sarkis but the set command doesn't show the options set really or available options
17:13 samppah gluster volume set help
17:13 sarkis doh
17:13 sarkis thanks
17:13 samppah np :)
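Following samppah's pointer, the old hand-edited translator tuning maps onto per-volume options. The option names below are illustrative examples; confirm the full list and current defaults with `gluster volume set help`:

```shell
# List all settable options with descriptions and defaults:
gluster volume set help

# io-threads and write-behind are built in; they can be tuned per
# volume rather than wired into a vol file, e.g. (volume name is a
# placeholder):
gluster volume set myvol performance.io-thread-count 16
gluster volume set myvol performance.write-behind-window-size 1MB

# Options explicitly set on a volume appear in its info output:
gluster volume info myvol
```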
17:14 davinder joined #gluster
17:14 sarkis so i guess the other question in 3.0.x i would have like a /opt/storage
17:14 sarkis then i'd mount that at /etc/foobar
17:14 sarkis or /etc/puppet
17:16 sarkis do i still need to do that with a glusterfs filesystem
17:17 sarkis i think the unique thing about our systems is that we have these 2 servers which are also the clients... seems in most cases the servers are separate from the clients in a gluster setup
17:18 dbruhn sarkis, when the brick servers are also clients you still need to mount the file system
17:19 sarkis i see
17:20 sarkis hmmm so i don't even have the 2 servers currently replicating
17:20 sarkis i just checked the 2 bricks and 1 is completely empty :(
17:22 sarkis oh duh i used replica 2 i need replica 1 i think
17:25 vpshastry left #gluster
17:26 sarkis nvm that wasn't it :*
17:26 sarkis :(
17:31 sarkis how can i see why the files aren't replicating?
17:31 sarkis all status and info shows ok
17:32 jporterfield joined #gluster
17:34 dbruhn sarkis, are you copying the files in through the mounted file system, or into the bricks directly?
17:36 sarkis ok worked when i mounted it
17:37 sarkis ya i was trying bricks, dbruhn thanks!
17:37 dbruhn np, you really never want to touch the bricks directly unless you are trying to fix a problem with the underlying storage
17:37 sarkis i see
17:37 sarkis so i mounted the 2 server/clients like so
17:38 sarkis [root@pm01 gluster-storage]# mount -t glusterfs pm01:/puppet-shared /etc/puppet
17:38 sarkis [root@pm02 gluster-storage]# mount -t glusterfs pm02:/puppet-shared /etc/puppet
17:38 sarkis all works as expected...
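The working native-client mount from the exchange above, with an fstab equivalent. This is the 3.4 style (server:/volume), not the old 3.0 vol-file path sarkis tried earlier:

```shell
# Mount a gluster volume with the native FUSE client; the source is
# server:/volume-name, never a brick path.
mount -t glusterfs pm01:/puppet-shared /etc/puppet

# Equivalent /etc/fstab entry; _netdev delays the mount until
# networking is up:
# pm01:/puppet-shared  /etc/puppet  glusterfs  defaults,_netdev  0 0
```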
17:39 sarkis well how can i kick pm02 off the cluster to see what happens
17:39 dbruhn yep, when the client mounts to the first server a manifest is returned to the client with all of the servers/bricks
17:39 dbruhn the client then connects to all of the servers and bricks directly
17:39 sarkis i see
17:39 sarkis just to test what would i use to bring down a brick
17:40 dbruhn the client actually deals with the replication to the server
17:40 dbruhn you can stop the service on pm02
17:40 dbruhn or power pm02 down
17:41 sarkis whoa
17:41 sarkis i think pm01 hung up lol
17:41 dbruhn there is a 42 second timeout
17:41 sarkis oh no all good
17:42 sarkis ah interesting
17:42 sarkis bad idea to lower that?
17:42 dbruhn There is a 42 second timeout, if a brick server goes down, you'll get a 42 second hang
17:42 sarkis only on the servers huh?
17:43 sarkis so makes sense why the servers are separate from clients heh
17:43 dbruhn Depends; if it gets too low, you will break your locks on files and other things, and re-establishing them can be expensive in resources if it's happening all the time
17:43 dbruhn A lot of guys are running the server and client together
17:43 sarkis i see
17:43 dbruhn I have several clusters, and have had several cluster that run with the clients that are only the brick servers
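The 42-second hang dbruhn mentions corresponds to the network.ping-timeout volume option; it can be lowered, with the caveat he gives above (volume name is a placeholder):

```shell
# Show or change the client ping timeout; 42 seconds is the default.
gluster volume set myvol network.ping-timeout 42

# Lowering it makes failover faster, but too low a value breaks file
# locks and forces expensive re-establishment under transient blips.
```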
17:54 sarkis dbruhn: do you know if the gluster port now changes?
17:55 dbruhn I believe they have in 3.4.x
17:55 sarkis ah
17:55 sarkis seems like its random unless i specify one?
17:56 dbruhn no, each brick increases the count by one I think
17:56 sarkis ah
17:56 dbruhn @ports
17:56 sarkis damn that sucks
17:56 glusterbot dbruhn: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
17:56 sarkis oh anyway
17:56 sarkis hmmm this sucks for firewalls hah
17:58 mattappe_ joined #gluster
17:58 dbruhn Do you need a firewall between your servers/clients?
17:59 sarkis well between the servers
18:00 sarkis i guess not
18:00 sarkis nvm :)
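If a firewall is needed after all, glusterbot's port summary for 3.4 translates into roughly these rules. This is a sketch: the brick-port range must be sized to the number of bricks you actually run:

```shell
# Based on glusterbot's 3.4 port list; adjust ranges to your setup.
iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT  # glusterd mgmt (+rdma)
iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT  # brick ports, 3.4+
iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT  # gluster NFS + NLM
iptables -A INPUT -p tcp --dport 111 -j ACCEPT          # rpcbind/portmap
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT         # NFS
```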
18:00 diegows joined #gluster
18:13 ilbot3 joined #gluster
18:13 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
18:22 mattappe_ joined #gluster
18:25 xeed0 joined #gluster
18:50 mick271 joined #gluster
19:18 mattappe_ joined #gluster
19:21 mattapperson joined #gluster
19:26 sarkis so question about replica
19:26 sarkis if i have 4 hots
19:26 sarkis hosts*
19:26 dbruhn ok?
19:26 sarkis i want them all to be in sync.. do i still use replica 2?
19:26 dbruhn So you want to have four copies of your data?
19:27 dbruhn and is server only providing a single brick?
19:27 sarkis yea they all need to have the same data
19:27 sarkis no 4 bricks
19:27 dbruhn 4 bricks per server?
19:27 sarkis oh sorry
19:27 sarkis 1 brick pers server
19:27 dbruhn ok then replica 4
19:27 sarkis ah got it..
19:27 sarkis and the question then is can that be changed?
19:28 dbruhn I believe it can, but I've never actually done it myself
19:28 sarkis got it.. i will look.. thanks
19:30 sarkis so the replica COUNT is up to how many servers are going to be replicated?
19:30 mattapperson joined #gluster
19:31 sarkis so if it were 2 and i have 4 nodes there will be 2 sets of 2 servers
19:31 sarkis er that's confusing, let me use the right terminology
19:31 sarkis 4 bricks at replica 2: 2 sets of 2 bricks that will have the same data
19:31 sarkis 4 bricks at replica 4: all bricks share same data
19:33 dbruhn Think of the replica count as how many copies of your data will be on the volume, you obviously need to have the bricks to facilitate the replica counts.
19:34 sarkis i see
19:34 dbruhn So replica 1 = 1 copy, replica 2 = 2 copies, replica 3 = 3 copies, so on and so forth
19:34 dbruhn and yep, replica 4 with 4 bricks will mean they are all sharing the same data
19:35 sarkis got it
19:35 dbruhn one thing to wrap your head around, is start thinking in terms of bricks, not servers
19:35 dbruhn its easier to communicate around that
19:35 sarkis and if i add 1 more server keeping replica 4 i can't guarantee all 5 servers at that point have the same data right?
19:35 sarkis yea haha i just did it again O_O
19:36 dbruhn if you are running a replica 4, you would either need to add 4 more servers, or change it to replica 5
19:36 sarkis dbruhn: thanks for the help... cool little community here
19:36 sarkis ah right cause it needs to be a multiple of replica count
19:36 dbruhn No problem, it tends to be a bit more active during the week
19:36 sarkis # of bricks in a volume that is
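The rule dbruhn and sarkis arrive at can be checked with simple arithmetic: the brick count must be a multiple of the replica count, and bricks divided by replica gives the number of replica sets the data is distributed over:

```shell
# Brick-count sanity check for a distributed-replicated volume.
bricks=4
replica=2
if [ $(( bricks % replica )) -ne 0 ]; then
    echo "invalid: $bricks bricks cannot satisfy replica $replica"
else
    echo "$(( bricks / replica )) replica set(s) of $replica bricks"
fi
# -> 2 replica set(s) of 2 bricks
```

So 4 bricks at replica 2 form 2 replica sets, while 4 bricks at replica 4 form a single set where every brick holds all the data; adding a 5th brick to a replica-4 volume is rejected.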
19:37 sarkis now to actually puppetizing all this... ahhhhhhhhhh
19:37 sarkis thank goodness for puppet-gluster module
19:37 dbruhn semiosis has from what I understand a really solid puppet module
19:37 dbruhn ph yeah
19:37 sarkis purpleidea does no?
19:37 sarkis or does semiosis also have one?
19:37 dbruhn um I think they both work on it
19:38 sarkis o hcool
19:38 dbruhn I honestly manage my stuff from the cli
19:38 dbruhn I really need to spend some time with puppet
19:38 dbruhn just haven't had the time to spend on it
19:38 sarkis yea i hear ya, we just have everything else puppeted, so it makes sense
19:38 sarkis hey thats great, then i can return the favor and answer your puppet questions :)
19:39 sarkis ping me about anything puppet anytime you'd like, i owe you a few :
19:39 dbruhn haha, maybe if I can get my laptop back in order, my driveway shoveled, and my lab re-racked today
19:39 dbruhn no worries, good luck!
19:39 sarkis booo snow
19:39 sarkis what are you in/
19:39 dbruhn minnesota
19:39 sarkis ah cool cool
19:39 dbruhn we got 8 inches this week
19:39 sarkis i'm in los angeles
19:39 * sarkis ducks
19:39 dbruhn I'll be in LA in a couple weeks for scale
19:40 sarkis oh cool, i will be there
19:40 sarkis going to the puppet gluster talk by purpleidea?
19:40 dbruhn planning on it
19:40 sarkis very cool
19:40 sarkis ya theres an entire puppet camp track too, would be great
19:41 dbruhn be back in a few
19:41 sarkis gotta head out real quick
19:41 sarkis hah see ya
20:06 DataBeaver joined #gluster
20:16 DataBeaver I'm seeing some very strange behavior after moving some directories around on the server.  A (fuse) client will occasionally see an EPERM when trying to access such a directory, or files within them, but immediately trying again works.  The directories have 755 permissions and the files 644.  I guess the obvious solution for the future is to only touch things through a client, but how do I rectify the situation for those that already got broken?  Mounting the filesystem on the server itself through glusterfs and copying the files to it through that comes to mind, but is there any simpler and faster way?
20:16 mattappe_ joined #gluster
20:21 DataBeaver Something really weird is going on, lines 7 and 12 in this strace are opening the same directory, but the first one succeeds and the second one fails: http://pastebin.ca/2613386
20:21 glusterbot Title: pastebin - Stuff - post number 2613386 (at pastebin.ca)
20:24 ProT-0-TypE joined #gluster
20:38 mattappe_ joined #gluster
20:55 jporterfield joined #gluster
20:55 diegows joined #gluster
21:00 jporterfield joined #gluster
21:24 dbruhn DataBeaver, what kind of volume?
21:46 burn420alot joined #gluster
21:54 DV joined #gluster
22:06 mattappe_ joined #gluster
22:07 sarkis joined #gluster
22:14 purpleidea dbruhn: in the beginning there were a few different ,,(puppet) modules... atm semiosis doesn't maintain his anymore afaik.
22:14 glusterbot dbruhn: (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
22:15 dbruhn agh ok, I just knew you guys talked puppet a lot
22:15 dbruhn I hear you are doing a talk at scale?
22:17 sarkis hi guys :)(
22:18 dbruhn hey sarkis
22:18 purpleidea dbruhn: indeed i am! feel free to bring plenty of questions or a laptop if you want help trying this stuff out.
22:19 purpleidea sarkis: hai
22:19 dbruhn For sure, I am flying down to San Diego on Thursday and hooking up with an old friend and we are coming up, I ear marked your talk as one I wanted to go to.
22:19 purpleidea awesome, i hope you enjoy it. i've recently posted some screencasts and other material which is a good intro
22:20 purpleidea https://ttboj.wordpress.com/
22:21 dbruhn Are you using gluster for kvm storage?
22:21 purpleidea currently i am not
22:21 purpleidea i do have a pretty cool (unreleased) puppet-libvirt module though :)
22:21 purpleidea if someone wants to use it, i'd release it if they wanted to sponsor gluster integration
22:22 dbruhn sweet
22:22 dbruhn I have been really wanting to spend a good chunk of time with cloudstack
22:23 purpleidea haven't tried it, but i am afraid of products that include the strings 'cloud' or 'multimedia'
22:23 dbruhn haha
22:23 dbruhn I run a cloud backup service
22:23 purpleidea "for multimedia!
22:23 purpleidea "
22:24 dbruhn In a previous life I was a sound engineer, and owned a recording studio
22:24 sarkis purpleidea: id be interested in the libvirt module too haha
22:24 purpleidea dbruhn: what's the cloud backup service called?
22:24 dbruhn offsitebackups.com
22:25 dbruhn Over the last year+ I have moved us completely over to gluster from NetApp
22:25 purpleidea sarkis: i've already published a lot of code at: https://github.com/purpleidea if you want to sponsor me to finish my libvirt module and integrate this with the existing puppet-gluster module, pm or email me
22:25 glusterbot Title: purpleidea (James) · GitHub (at github.com)
22:25 purpleidea dbruhn: cool. it's always nice when netapp dies
22:26 purpleidea dbruhn: is the website image a stock photo or your actual dc?
22:26 dbruhn Now I just have a bunch of older net app systems I use as SAN's in my lab stuff
22:26 dbruhn stock image
22:26 dbruhn lol
22:26 dbruhn I honestly don't like that website
22:26 purpleidea hehe
22:27 purpleidea how do you make a HIPAA compliant system using GlusterFS ?
22:27 purpleidea you actually don't have to answer that
22:28 dbruhn We encrypt the data before it leaves the customers environment, and then our physical access policies are fully audited and tracked under SSAE16 audits
22:28 qdk joined #gluster
22:28 dbruhn The customers set's their own encryption keys
22:29 purpleidea dbruhn: i had to do some work that required CFRp11, and therefore i feel your pain
22:29 dbruhn The thing I love about gluster is it allows me to run my application stack on the same servers I run for storage.
22:29 purpleidea can't agree more...
22:30 purpleidea do you optimize to prioritize local reads?
22:30 dbruhn makes things more complex when an issue arises, but works well and lowers op costs a ton
22:30 dbruhn I am running QDR infiniband
22:30 dbruhn and from my understanding gluster reacts on first response
22:31 dbruhn which works for me
22:31 purpleidea indeed
22:31 dbruhn my systems are all distributed+replication
22:31 purpleidea ~undocumented options | dbruhn
22:31 glusterbot dbruhn: Undocumented options for 3.4: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
22:31 dbruhn I am still on 3.3.2
22:31 dbruhn not sure if all the new options are supported
22:32 purpleidea idk
22:33 dbruhn I really need to get my lab back in order to start debugging RDMA for the project
22:33 purpleidea dbruhn: well, i know of a great tool for fast [re]deploys of GlusterFS setups...
22:33 purpleidea (including formatting physical drives, and other goodness)
22:34 dbruhn the puppet modules do all of that?
22:34 sprachgenerator joined #gluster
22:34 purpleidea dbruhn: it's optional, but yeah, mkfs too... it was invaluable when i first started testing gluster...
22:34 purpleidea i rebuilt the whole thing countless times...
22:34 dbruhn shit... I wouldn't have wasted my time creating build scripts, lol.
22:34 purpleidea well, that's why i wrote my ,,(puppet) stuff...
22:34 glusterbot (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
22:35 purpleidea dbruhn: and if you just want to prototype a setup without pushing to iron yet, you can use the ,,(vagrant) integration
22:35 glusterbot dbruhn: (#1) Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/, or (#3) Part 3 @ https://ttboj.wordpress.com/2014/01/02/vagrant-clustered-ssh-and-screen/, or (#4) Part 4 @
22:35 glusterbot https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/, or (#5) https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/
22:35 dbruhn I am going to have to look at that more
22:36 purpleidea dbruhn: i haven't tested the fdisk/mkfs stuff recently, because i don't have any test iron, so let me know how it goes, and if you can make hardware available for me to test on, or donate some, then i can probably be sure to iron out any issues (pun intended)
22:37 dbruhn As soon as I get my lab equipment back up I can give you access for sure. I have 15 servers, 10 with 20GB IB.
22:38 dbruhn I had to pull it all out of my production racks recently and move it to a different location
22:38 purpleidea just email me, and we'll automate some shit
22:38 dbruhn sweet
22:38 purpleidea dbruhn: https://ttboj.wordpress.com/contact/
22:39 purpleidea anyways, i'm out for now, see you all at scale
22:39 dbruhn ttyl
22:39 purpleidea bring F20 laptops
22:39 dbruhn just sent you an email
22:41 purpleidea and replied
22:51 sarkis purpleidea: i'd love to help with the puppet efforts
22:51 sarkis purpleidea: i'm a newbie with gluster but way more advanced with puppet :) let me know how i can hel
22:51 sarkis help
22:52 purpleidea sarkis: start by testing it out: ,,(puppet)
22:52 glusterbot sarkis: (#1) https://github.com/purpleidea/puppet-gluster, or (#2) semiosis' unmaintained puppet module: https://github.com/semiosis/puppet-gluster
22:52 purpleidea (#1)
22:52 * semiosis should really remove #2 from that factoid
22:54 purpleidea @help forget
22:54 glusterbot purpleidea: (forget [<channel>] <key> [<number>|*]) -- Removes a key-fact relationship for key <key> from the factoids database. If there is more than one such relationship for this key, a number is necessary to determine which one should be removed. A * can be used to remove all relationships for <key>. If as a result, the key (factoid) remains without any relationships to a factoid (key),
22:54 glusterbot purpleidea: it shall be removed from the database. <channel> is only necessary if the message isn't sent in the channel itself.
22:54 semiosis @forget puppet 2
22:54 glusterbot semiosis: The operation succeeded.
22:54 semiosis @puppet
22:54 glusterbot semiosis: https://github.com/purpleidea/puppet-gluster
22:54 * semiosis feels better
22:54 * purpleidea feels alone
22:55 semiosis thats really where all the puppet effort should be focused
22:55 purpleidea semiosis: hey, you coming to scale?
22:55 semiosis purpleidea: no plans yet, doubtful.  when is it?
22:55 purpleidea feb 20-24
22:55 purpleidea LAX airport
22:55 semiosis but going to rh summit, submitted a talk proposal for devnation & very hopeful
22:55 semiosis prob. not scale
22:56 semiosis purpleidea: i cloned & reviewed puppet-gluster/vagrant... wow
22:56 purpleidea sweet, i hope you get a talk in. jmw asked me to submit something to devnation too... it was after the deadline, but apparently there were still slots free
22:56 purpleidea semiosis: wow?
22:56 semiosis i'm waaaay out of touch with the cfg mgmt scene
22:57 purpleidea :P you know i'm down to help you get back your edge
22:57 semiosis i did a degree's worth of puppet study 2-3 years ago and have barely touched it since
22:58 semiosis i tried that maven vagrant plugin the other day, it's a piece of junk afaict
22:58 purpleidea seriously, i'm just going to write the java integration thing for you
22:58 semiosis looks like it hasn't seen any maintenance in a long time & both maven & vagrant have moved on
22:59 purpleidea (not the java part, i mean the puppet part for auto testing)
22:59 semiosis if you really want to help me out, what i need is simple: a vagrant/puppet config that will start one vm, install glusterfs & make one volume with one brick and turn on the insecure ports stuff
22:59 purpleidea semiosis: i already emailed that to you
22:59 semiosis i want to do it myself, but looking at it pragmatically, i could use the help
23:00 semiosis hmm, i must have missed the point of that email
23:00 * semiosis re-reads
23:01 semiosis purpleidea: subject line or date?
23:01 * semiosis searching
23:01 purpleidea semiosis: https://gluster.org/pipermail/gluster-users/2014-January/038794.html
23:01 glusterbot Title: [Gluster-users] Gluster Volume Properties (at gluster.org)
23:01 semiosis woo
23:01 semiosis dude, you rock
23:01 purpleidea yw
23:02 semiosis ok so yeah thats the email i remember reading
23:02 purpleidea you can easily modify the vagrantfile in puppet-gluster/vagrant/gluster/ to make this happen... also you can modify puppet-gluster/vagrant/gluster/puppet/manifests/site.pp
23:02 purpleidea that's it!
23:02 semiosis the vagrant stuff you provide is EPIC
23:03 semiosis i was hoping i could do it with a short Vagrantfile & just a site.pp
23:03 semiosis you even included decoy files that just say "James wuz here" lol ;)
23:03 realdannys joined #gluster
23:03 purpleidea semiosis: which decoy files?
23:04 semiosis hieradata/common.yaml
23:04 purpleidea oh, lol. yeah, i needed it to be non-empty
23:05 semiosis :D
23:05 semiosis i smiled when i saw that
23:05 purpleidea semiosis: send me a patch... the new version can say "semiosis wuz here too"
23:06 semiosis is hiera even used?
23:06 semiosis my vague understanding is it's something to do with exported resources??
23:06 purpleidea semiosis: it's not used for puppet-gluster in my default setup, but it's supported as a method to configure puppet-gluster if you want to use it
23:06 purpleidea not exactly, no
23:06 semiosis ok then i dont even want to know
23:06 semiosis i'll just try to ignore it (distracting)
23:06 purpleidea semiosis: hiera is useful for "filling in" all the types with values from a yaml file...
23:07 semiosis how long do you think you'll be around?
23:07 purpleidea semiosis: https://ttboj.wordpress.com/2013/02/20/automatic-hiera-lookups-in-puppet-3-x/
23:07 purpleidea semiosis: max 2.3 hours
23:07 purpleidea i was going to watch a movie, but we can hack if you prefer
23:07 semiosis ok then, rearranging my todo list to try this vagrant thing now while you're around
23:08 semiosis i think 1 hr should be plenty, but if you have to go of course that's cool with me
23:08 purpleidea *hack mode on*
23:09 purpleidea semiosis: http://imgur.com/FrOFOgy
23:09 glusterbot Title: No hard feelins. - Imgur (at imgur.com)
23:10 semiosis so here's what i did the other night, and resuming now... made a new dir, 'java/' in the cloned puppet-gluster/vagrant next to your puppet-gluster/vagrant/gluster example
23:11 semiosis used vagrant init to get a default vagrantfile, set my preferred box, and uncommented a couple puppet lines.  made manifests & modules dirs, copied the puppet-gluster module into modules & added gluster simple to the default node , with shorewall disabled
23:12 purpleidea semiosis: i'd recommend using the existing (complicated) vagrantfile because it adds lots of features which are very useful (which is why i added them)
23:12 purpleidea semiosis: do you want to join a shared screen session to do this?
23:12 semiosis if my goal is a super simple integration test setup, why would i want extra features / what features would I want?
23:12 semiosis thx for shared screen option but not just yet
23:13 purpleidea the reason you want the extra features i added, is so that it's easy to manage the vagrant stuff... for example, to re-deploy, it will automatically clean the puppet server so you can re-test quickly
23:13 semiosis that would be helpful :)
23:14 purpleidea it does more things too... utsl for details
23:14 purpleidea semiosis: so imo, you should do this: 1) cdmkdir ~/code/javawhatever/
23:14 purpleidea 2) git clone --recursive
23:14 semiosis ok, so i take your big example... can I get rid of the puppetmaster node?  can i get rid of everything but one lonely gluster server?
23:14 purpleidea 3) cd puppet-gluster/vagrant/
23:15 purpleidea mv or 'cp -a' gluster java
23:15 purpleidea cd java
23:16 semiosis done
23:16 purpleidea wget http://paste.fedoraproject.org/73671/13912965 > puppet-gluster.yaml
23:16 glusterbot Title: #73671 Fedora Project Pastebin (at paste.fedoraproject.org)
23:16 purpleidea (check this worked and that you got a yaml file and not html)
23:16 purpleidea cd puppet/manifests
23:16 purpleidea # you should see site.pp <--- do you see this?
23:17 semiosis the wget/yaml goes in the top level java dir next to Vagrantfile?
23:17 semiosis yes i have puppet/manifests/site.pp
23:17 purpleidea okay, edit site.pp to add two things:
23:18 purpleidea both are listed in: https://gluster.org/pipermail/gluster-users/2014-January/038794.html
23:18 glusterbot Title: [Gluster-users] Gluster Volume Properties (at gluster.org)
23:18 purpleidea you want to add rpcauthallowinsecure => true, to the gluster::simple
23:18 purpleidea and underneath gluster::simple add:
23:18 purpleidea gluster::volume::property{ 'yourvolumename#server.allow-insecure': value => on,# you can use true/false, on/off
23:18 purpleidea }err
23:18 purpleidea s/yourvolumename/puppet/
23:18 glusterbot What purpleidea meant to say was: gluster::volume::property{ 'puppet#server.allow-insecure': value => on,# you can use true/false, on/off
23:18 semiosis k
23:19 purpleidea it makes one volume named 'puppet' by default...
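Assembled, the two additions to site.pp might look like this (a sketch based on the mailing-list post linked above; it assumes a class-style gluster::simple declaration already exists in site.pp and uses the default volume name 'puppet'):

```puppet
class { '::gluster::simple':
  # let glusterd accept connections from unprivileged (>1024) ports
  rpcauthallowinsecure => true,
}

# the matching per-volume option on the default 'puppet' volume
gluster::volume::property { 'puppet#server.allow-insecure':
  value => on,   # true/false and on/off both work here
}
```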
23:19 purpleidea fpaste site.pp
23:19 purpleidea cd ../..
23:19 semiosis hang
23:19 zerick joined #gluster
23:20 * purpleidea hanging
23:22 semiosis purpleidea: gluster::volume::property{ 'yourvolumename#server.allow-insecure': value => on
23:22 semiosis oops
23:22 semiosis http://paste.ubuntu.com/6858141/
23:22 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
23:22 purpleidea semiosis: typo ; is a ,
23:23 purpleidea value => on, is correct
23:23 purpleidea not: on;
23:23 semiosis ah
23:23 semiosis that's my puppet 2.7 showing
23:23 purpleidea and your indentation is weird :P
23:23 purpleidea anyways
23:24 semiosis whatever, just trying to follow indentation that's there already
23:24 semiosis fwiw
23:24 * semiosis usually uses an IDE that formats his code automatically
23:24 purpleidea btw, all the ${::vagrant_gluster_foo} things, are facts which vagrant sets automatically... this gets pulled from the puppet-gluster.yaml file
23:24 semiosis do you use geppetto?
23:24 purpleidea no way on geppetto :P
23:24 semiosis haha
23:24 purpleidea vim/gedit
23:24 semiosis figured
23:25 purpleidea it's sort of crap
23:25 semiosis ok so i can vagrant up now?
23:25 purpleidea no
23:25 semiosis oh
23:25 purpleidea sort of
23:25 purpleidea you want to do:
23:25 purpleidea sudo -v && vagrant up puppet
23:25 semiosis wha?
23:25 semiosis why sudo?
23:25 purpleidea sudo -v && vagrant up annex1
23:25 purpleidea # wait, get a beverage...
23:25 semiosis hmm i really dont want a puppetmaster
23:25 purpleidea sudo -v && vagrant up client
23:26 semiosis dont want a client either
23:26 purpleidea ah, well a client is optional...
23:26 DataBeaver dbruhn: Sorry for the delay, did other stuff for a while.  It's a single brick, backed by an ext4 filesystem.
23:26 semiosis purpleidea: so why sudo?
23:27 purpleidea vagrant doesn't run _as_ sudo. sudo -v just warms the cache
23:27 purpleidea you can skip the sudo part
23:27 purpleidea vagrant just prompts you if it wants sudo to mount nfs
23:27 semiosis dont want nfs either :)
23:28 semiosis can we do this without a puppetmaster?
23:28 purpleidea neither do i
23:28 purpleidea you _need_ a puppetmaster somewhere... technically for one host, you probably don't, but i haven't tested it. also you could probably fold the puppetmaster into the single annex1 host, but i haven't done that either...
23:29 purpleidea i'd recommend you just make it once first, and then once you're comfortable, you can hack it to change those things...
23:29 semiosis pretty sure can make it work without the puppetmaster but will follow your advice for now & try it with
23:30 semiosis oh, by the way (should've mentioned this earlier) but of course I'm going to run this with an ubuntu vbox ;D
23:30 purpleidea :P now that is completely untested
23:30 purpleidea probably some stuff will not work
23:30 semiosis i'm sure
23:31 semiosis you can expect patches about that, if nothing else :)
23:31 purpleidea in fact, i'm sure it won't work
23:31 semiosis sooner or later
23:31 semiosis package names, yum/apt, these things are sure to not work
23:31 purpleidea i still don't have an ubuntu test env. so i've been waiting for that... i have some patches that aren't in git master yet... but i'm not applying until i can test them at least once
23:32 purpleidea anyways, the last thing you'll want to do, is write the puppet code that installs/compiles/whatever's your java stuff, and add it into the site.pp file.
23:32 purpleidea everytime you want to re-test, you vagrant provision puppet && vagrant provision annex1
23:32 semiosis well ubuntu supports vagrant as a first class platform with official box files... http://cloud-images.ubuntu.com/vagrant/
23:32 glusterbot Title: Index of /vagrant (at cloud-images.ubuntu.com)
23:33 purpleidea alternatively you can also vagrant provision puppet && vagrant destroy annex1 && vagrant up annex1
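The two re-test loops described above, side by side (the machine names `puppet` and `annex1` come from the Vagrantfile in this setup):

```shell
# fast loop: re-run puppet on the master, then on the gluster host
vagrant provision puppet && vagrant provision annex1

# clean loop: re-provision the master, then rebuild annex1 from scratch
vagrant provision puppet && vagrant destroy annex1 && vagrant up annex1
```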
23:33 purpleidea semiosis: oh cool. did not know about those images! good to know!!
23:33 semiosis hope that helps
23:34 purpleidea can't wait for zippy-zebra
23:34 diegows joined #gluster
23:34 semiosis lol, *then* what?
23:35 purpleidea which of these (animal?) names do you recommend i use for puppet-gluster?
23:35 semiosis probably trusty
23:35 semiosis trusty tahr, the 14.04 LTS release, which will drop in april
23:35 semiosis it's in alpha now
23:35 purpleidea does it have the necessary puppet stuff installed?
23:36 semiosis it must
23:36 semiosis it's a vagrant image
23:36 purpleidea ubuntu sort of loves juju, so i wonder
23:36 purpleidea vagrant doesn't require there be puppet though...
23:37 semiosis in my experience, ubuntu has what you want it to have, most of the time.  i'd bet it has all the vagrant puppet goodness you desire
23:37 purpleidea cool
23:37 semiosis http://paste.ubuntu.com/6858206/
23:37 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
23:38 semiosis idk what thats trying to tell me
23:39 purpleidea semiosis: ssh in and tell me if 'puppet' command exists
23:40 semiosis puppet help ends with Puppet v3.2.4
23:40 purpleidea also looks like you're using virtualbox
23:40 semiosis yes vbox
23:40 purpleidea i think the networking works differently... i only tested with vagrant-libvirt
23:40 semiosis this is an ubuntu saucy vagrant box, latest stable ubuntu release (13.10)
23:41 purpleidea semiosis: http://cloud-images.ubuntu.com/vagrant/saucy/current/saucy-server-cloudimg-amd64-vagrant-disk1.box ?
23:42 semiosis yeah idk much about how vagrant does networking.  i have some permanent vbox guests and for those i use a host-only network and route through the host os
23:42 purpleidea balls, okay, i'ma download the ubuntu box and try it
23:42 semiosis saucy-server-cloudimg-amd64-vagrant-disk1.box
23:42 semiosis the same
23:43 mattapperson joined #gluster
23:43 semiosis host-only -- because routing through linux is easy peasy & I like my VMs to have stable IPs
23:43 semiosis baller indeed
23:44 purpleidea i do this too, with vagrant-libvirt, but it still builds a local network and gives them all ip's... you'll have to patch the vagrantfile to network properly... gluster needs proper networking and dns to work... not sure if it still needs something in the one host case...
23:44 semiosis while you try the ubuntu box, i'm going to go back to trying to set up a minimal config
23:45 purpleidea ok
23:45 badone joined #gluster
23:45 semiosis lmk if you have any ubuntu questions, i know a thing or two about configuring debians ;)
23:45 purpleidea awesome. i have done lots of debian/ubuntu, i just haven't done any puppet on it
23:46 semiosis ah right
23:46 purpleidea cause it's a pita to support a module on multiple platforms when i'm only using one for server stuff anyways
23:46 semiosis i know!  i've made that argument countless times, that multi platform puppet modules are ridiculous, and yet here i am, advocating & helping port one!
23:47 semiosis s/advocating/agitating/
23:47 glusterbot What semiosis meant to say was: i know!  i've made that argument countless times, that multi platform puppet modules are ridiculous, and yet here i am, agitating & helping port one!
23:47 purpleidea since i don't use gluster on ubuntu (although i hear they have an awesome packager) i never bothered to port the puppet stuff there... i was hoping ubuntu.com or someone else would fund that work, but no luck so far
23:49 purpleidea vagrant box add ubuntu-13.10 http://cloud-images.ubuntu.com/vagrant/saucy/current/saucy-server-cloudimg-amd64-vagrant-disk1.box
23:49 semiosis hahahaha
23:49 purpleidea Downloading or copying the box...
23:49 purpleidea Progress: 15% (Rate: 213k/s, Estimated time remaining: 0:30:48))
23:49 semiosis yeah that's not how it works
23:51 semiosis i doubt they'd fund you (but hey, you could ask) however you might be able to recruit some canonical devs to help you.  i'd start with marcoceppi -- the gluster juju guy
23:51 semiosis we met him in NO
23:51 purpleidea semiosis: yeah, i actually emailed him once or twice, but he never replied back.
23:52 purpleidea oh he's in here
23:52 semiosis ping him here on a weekday during regular business hours, he's on irc pretty regularly
23:52 purpleidea he said in nola that he'd be able to do something... i was really just looking for some test hw, or vm access somewhere... anyways
23:53 bennyturns joined #gluster
23:58 purpleidea semiosis: is the 'trusty' stuff, stable enough ? (and do you have gluster packages for it?)
23:59 semiosis havent used it much but it's supposed to be stable, though moving fast (lots of pkg updates) afaik
23:59 purpleidea @deb
23:59 glusterbot purpleidea: I do not know about 'deb', but I do know about these similar topics: 'files edited with vim become unreadable on other clients with debian 5.0.6'
23:59 purpleidea @semiosis ubuntu
23:59 semiosis my ppas do have packages for trusty, in the ubuntu-glusterfs-3.4 & -3.5qa ppas
