
IRC log for #salt, 2013-10-26


All times shown according to UTC.

Time Nick Message
00:00 pipps1 joined #salt
00:03 pdayton joined #salt
00:07 jslatts joined #salt
00:11 taylorgumgum joined #salt
00:16 hazzadous joined #salt
00:19 pdayton joined #salt
00:21 woebtz joined #salt
00:24 mwmnj joined #salt
00:30 m0hit joined #salt
00:32 woebtz left #salt
00:37 redondos joined #salt
00:37 redondos joined #salt
00:41 Thiggy joined #salt
00:44 v0id_ joined #salt
00:52 noob2 left #salt
00:56 ajw0100 joined #salt
01:01 m0hit joined #salt
01:02 ctdawe joined #salt
01:08 Namko joined #salt
01:08 jalbretsen joined #salt
01:08 Namko left #salt
01:12 redondos joined #salt
01:12 redondos joined #salt
01:28 mwillhite joined #salt
01:36 log0ymxm joined #salt
01:42 ipmb joined #salt
01:44 rgarcia_ joined #salt
01:48 ajw0100 joined #salt
01:49 m0hit joined #salt
01:49 deepakmd_oc joined #salt
01:52 m_george|away joined #salt
01:54 bhosmer joined #salt
01:55 Thiggy joined #salt
02:02 dccc joined #salt
02:23 redondos joined #salt
02:23 redondos joined #salt
02:35 xmltok joined #salt
02:41 Ryan_Lane joined #salt
02:54 iguano joined #salt
02:54 iguano http://www.theweeklypay.com/index.php?share=19844/
03:08 Psi-Jack_ joined #salt
03:10 bemehow joined #salt
03:17 Thiggy joined #salt
03:47 jslatts joined #salt
04:17 carmony time for some hacking on salt :)
04:18 taylorgumgum joined #salt
04:25 taylorgumgum joined #salt
04:26 Ryan_Lane joined #salt
04:26 Ryan_Lane joined #salt
04:35 lesnail joined #salt
04:40 taylorgumgum joined #salt
04:57 dthom91 joined #salt
05:33 backjlack joined #salt
05:37 backjlack joined #salt
05:53 taylorgumgum joined #salt
06:00 tulu joined #salt
06:08 luminous pears: using unless is generally pretty good. I sometimes use a script which is smarter about those things, and let salt call it all the time
06:10 anuvrat joined #salt
06:17 pears luminous: ah, that's probably a better idea
06:18 pears I'm trying to weigh whether I should just suck it up and make rpm packages of the tar files we deploy internally
06:18 pears detecting whether a tar file is installed is tricky
06:19 luminous pears: use RPMs or debs, that's way better
06:19 luminous pears: there's a github repo I need to share for this.. one moment
06:20 pears is it this? https://github.com/jordansissel/fpm
06:24 cachedout joined #salt
06:24 luminous pears: exactly
06:24 luminous effin pkg mgmt
06:24 luminous love it
06:24 luminous I haven't used it, but I would for what
06:24 luminous you are talking about
06:26 apergos joined #salt
06:28 m0hit joined #salt
06:28 m0hit_ joined #salt
06:28 berto- joined #salt
06:34 micko joined #salt
06:34 log0ymxm joined #salt
06:37 log0ymxm_ joined #salt
06:44 nn0101 fpm is great but if you need to exec stuff pre/post, you'd have to resort to the distro-specific way of doing it (a .spec file, for example)
06:44 pears fortunately we don't have anything that complex
06:44 nn0101 then go for it
06:44 pears it's just *untar* *fart noise* done
06:44 luminous that'd be much better than a tarball deploy
06:45 luminous pears: and if you have CI set up, you could integrate there.. so a build in CI produces a pkg you can push to a repo / host and then deploy from salt
06:45 pears no we move much more slowly than that
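A minimal sketch of the deploy side of that CI flow, assuming a hypothetical internal yum repo and package name (both placeholders, not anything from the channel):

    internal-repo:
      pkgrepo.managed:
        - humanname: internal CI packages            # repo the CI build publishes to (hypothetical)
        - baseurl: http://repo.example.com/el6/x86_64
        - gpgcheck: 0

    myapp:
      pkg.installed:                                 # the package fpm built from the tarball
        - require:
          - pkgrepo: internal-repo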
06:47 nn0101 salt looks pretty good. besides unlike puppet one would write python code. no use case atm but sure reading docs and watching talks on youtube makes me wanna give it a spin :>
06:47 taylorgumgum joined #salt
06:48 nn0101 if i have a cyclic dependency, how does salt resolve it?
06:48 __number5__ I have no problem writing ruby code, still I prefer Salt to Puppet, or Chef for that matter
06:49 nn0101 __number5__: firstly puppet isn't ruby. secondly i am not starting a flame war.
06:50 nn0101 besides, having been stung by the remote code exec vuln in puppet, does salt not suffer from it? obviously it was due to the way puppet parsed+decoded yaml that came over the wire :>
06:51 __number5__ salt has remote execution builtin, using zeromq as transport
06:51 nn0101 sure
06:52 nn0101 we are talking about different things. i'll let it be for now.
06:53 __number5__ oh, you are talking about ruby's yaml bug
06:53 nn0101 that's right
06:54 __number5__ python's yaml code doesn't have that bug
06:55 nn0101 i assumed the devs handcoded yaml parser.. great.
06:55 nn0101 cool
06:56 nn0101 a->b, b->a, does salt nicely handle this case?
06:56 log0ymxm joined #salt
06:56 nn0101 a might be pkgs/files etc..
06:56 nn0101 and b
06:57 nn0101 maybe it isn't possible to *accidentally* write such a thing in salt :>
06:57 nn0101 (will have to try it out and see for myself)
06:59 aleszoulek joined #salt
07:00 __number5__ IIRC salt will give you an error about recursive requirements
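For reference, the kind of cycle being asked about would look roughly like this (state names and paths invented); salt refuses to run it rather than looping:

    a:
      pkg.installed:
        - name: somepkg
        - require:
          - file: b          # a waits on b

    b:
      file.managed:
        - name: /etc/somepkg.conf
        - require:
          - pkg: a           # b waits on a -> recursive requisite error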
07:02 nn0101 excellent and that's at a compilation phase i assume?
07:02 nn0101 (if there is such phase in salt terminology)
07:03 luminous nn0101: no use case???
07:03 __number5__ there is no compilation phase
07:03 luminous how can you not have a use case?
07:03 nn0101 haha
07:04 luminous hell, my personal desktops are salted
07:04 luminous NO MORE MANUAL SETUP :P
07:04 nn0101 luminous: not yet i mean :>
07:04 luminous bah, I don't believe you
07:04 pears https://github.com/saltstack/salt/blob/develop/salt/state.py#L351
07:04 luminous you haven't thought about it enough :P
07:04 __number5__ luminous: so you need to write new states to update your desktop?
07:04 pears there's where the recursive check happens I think
07:05 luminous pears: verify_high() is ATROCIOUS!
07:05 luminous well, looking it over at least
07:05 luminous NOT readable
07:05 pears looks pretty long a nesty
07:05 luminous __number5__: if you mean add a package, I just update the list of packages, yes
07:05 pears long and nesty, that should be
07:06 luminous __number5__: same for configs, just tweak the template / repo
07:06 __number5__ luminous: nice :)
07:07 luminous yea, we do it for servers, why not for desktops too?
07:07 luminous :P
07:07 luminous salt everything
07:07 __number5__ I might go saltify a Mac Mini at my office...
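The workflow luminous describes boils down to states like these, where adding a package or tweaking a config is a one-line change (package names and paths are purely illustrative):

    desktop-pkgs:
      pkg.installed:
        - pkgs:
          - vim
          - tmux
          - firefox                                  # adding a package = adding a line here

    /home/me/.gitconfig:
      file.managed:
        - source: salt://desktop/gitconfig.jinja     # hypothetical template in the repo
        - template: jinja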
07:07 luminous is this the euro crowd or late-night west-coasters?
07:08 pears I am in the latter category
07:08 EugeneKay I think it's mostly the "I left my client open 24/7" crows
07:08 __number5__ I am in none of those two
07:08 luminous <3 that salt is your friday-night fun :)
07:08 EugeneKay s/crows/crowd
07:08 pears I already did friday night fun
07:08 luminous __number5__: right, you are just a number :P
07:08 __number5__ lol
07:09 luminous pears: well, this is round... X
07:18 bhosmer joined #salt
07:19 bud Hello all.
07:19 bud What does merge=salt['pillar.get']('libvirt:lookup') do in https://github.com/saltstack-formulas/libvirt-formula/blob/master/libvirt/map.jinja?
07:20 bud Does it somehow look for a key named libvirt in pillar and merge with map?
07:20 bud *it
07:22 pears http://docs.saltstack.com/ref/modules/all/salt.modules.grains.html search for "merge with"
07:22 pears I think that's the right thing anyway
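What bud is looking at is the usual formula map.jinja pattern: grains.filter_by picks per-OS defaults, and the merge argument overlays whatever the user put under the libvirt:lookup pillar key, so pillar values win over the map defaults. A trimmed-down sketch (the dictionary values are illustrative, not copied from the formula):

    {% set libvirt = salt['grains.filter_by']({
        'Debian': {'libvirtd': 'libvirt-bin'},
        'RedHat': {'libvirtd': 'libvirtd'},
    }, merge=salt['pillar.get']('libvirt:lookup')) %}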
07:22 bobbyrich joined #salt
07:22 bobbyrich its late and i am trying to wrap my head around this
07:22 bobbyrich salt seems great once a minion is up but how do i automate minion creation
07:23 bobbyrich seems like a chicken/egg problem
07:23 bobbyrich create an image with minion in init?
07:23 pears the easiest and most handwavey solution is to... yeah, that
07:24 pears if your deployed thing has the salt minion in it, and can resolve "salt" to an IP, you should be good
07:25 bobbyrich is that the ideal solution?
07:25 pears I can't think of a better one but I am open to other ideas
07:27 bobbyrich so i guess salt/puppet/chef are more about updating configs on many machines rather than starting on demand cluster nodes?
07:27 pears yes, they're about managing something on a host
07:27 pears whether that host exists or not is up to something else
07:28 bobbyrich gotcha, I appreciate the answers this late
07:34 __number5__ bobbyrich: salt-cloud is more oriented toward the on-demand cluster nodes use case
07:34 __number5__ but it can't replace something like AWS Auto Scaling Group
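The "image with minion in init" approach usually comes down to baking a tiny /etc/salt/minion into the image; something like this, where "salt" is whatever name the master resolves to in your environment:

    # /etc/salt/minion (baked into the image)
    master: salt          # DNS name or IP of the salt-master
    # id is left unset so it defaults to the instance's hostname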
07:39 steveoliver what does file.recurse think of existing files (on minion)? it seems to ignore them (ok with me, i think)
07:40 druonysus joined #salt
07:40 druonysus joined #salt
07:42 log0ymxm joined #salt
07:42 steveoliver …ahhh… i just set replace: true if i need replacement
07:42 __number5__ steveoliver: all file states are about managing remote files (e.g. on the salt master, salt://)
07:42 steveoliver __number5__: thanks
07:42 steveoliver i figured it out
07:42 __number5__ :)
07:43 steveoliver the 'replace' property on file.managed
07:43 steveoliver (cool)
07:44 anuvrat joined #salt
07:45 __number5__ you can also use clean: True in file.recurse to delete all existing files
07:45 steveoliver nice to know
07:45 __number5__ http://docs.saltstack.com/ref/states/all/salt.states.file.html#salt.states.file.recurse
07:46 steveoliver docs++
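A sketch of the two options discussed, with placeholder paths: clean on file.recurse removes minion files that aren't in the source tree, and replace: False on file.managed leaves an already-existing file alone:

    /etc/myapp:
      file.recurse:
        - source: salt://myapp/files      # hypothetical source directory
        - clean: True                     # delete files on the minion that aren't in the source

    /etc/myapp/local.conf:
      file.managed:
        - source: salt://myapp/local.conf
        - replace: False                  # don't touch the file if it already exists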
07:52 hjubal joined #salt
07:52 lemao joined #salt
08:03 linjan joined #salt
08:07 luminous https://github.com/saltstack/salt/blob/develop/salt/modules/service.py#L92 << service.restart wants 'name', so what do you do when using this with the module.wait state?
08:08 luminous eg, module.wait expects name to be the name of the module to run, and then **kwargs
08:09 luminous if you use a custom name for the state id, you need one name for service.restart, and then supposedly a second name for service.restart's parameter..
08:10 luminous this doesn't work with two names :P
08:12 __number5__ I didn't use service.restart but service.running with restart:True as one parameter and wait/watch as another
08:13 luminous hmmmm
08:13 luminous I don't know if I can do that
08:13 luminous I'm restarting the salt-minion
08:13 luminous and it'd be weird to have that service.running in this other state
08:13 luminous I might just use cmd.run for now as a hack..
08:13 __number5__ uh, I'm not sure service.restart can deal with salt-minion...
08:14 luminous yea, that was another complication I was trying to test out
08:14 luminous eg, I wasn't sure if that attempt to restart will interfere with the run of state.highstate
08:17 __number5__ luminous: you might want to look at this github issue https://github.com/saltstack/salt/issues/1888
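Two hedged ways around the "two names" problem: if memory serves, module.run/module.wait let a clashing name argument be passed as m_name so it isn't swallowed by the state's own name, and cmd.wait is the hack luminous mentions. Either way, restarting salt-minion mid-highstate can cut the run short, as discussed above:

    restart-minion:
      module.wait:
        - name: service.restart
        - m_name: salt-minion             # the 'name' argument handed to service.restart
        - watch:
          - file: /etc/salt/minion        # assumes a file state with this ID exists elsewhere

    # or the cmd hack:
    restart-minion-hack:
      cmd.wait:
        - name: service salt-minion restart
        - watch:
          - file: /etc/salt/minion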
08:19 luminous __number5__: knowing we're almost to 8k, eeek...
08:20 luminous hah.
08:20 luminous that's a funny read
08:25 anuvrat joined #salt
08:43 patrek joined #salt
09:33 Furao joined #salt
09:52 lemao joined #salt
09:53 Furao joined #salt
10:00 fspot1 joined #salt
10:03 ako joined #salt
10:03 ako hello, is there a way to pass jinja variable to salt function inside a jinja template file?
10:03 Furao joined #salt
10:05 ako anybody can help?
10:06 __number5__ ako, can you paste your code somewhere?
10:06 ako ok gimme a minute
10:14 ako {% set MAIN_IP = grains['ip_interfaces'][{{ MAIN_IFACE[0] }}]
10:14 arnoldB remove {{ }}
10:14 ako will it work?
10:15 ako will salt understand the variable?
10:15 arnoldB it will destroy your system, that's why I wrote it ;)
10:15 ako i don't get what you want
10:16 ako you kidding/
10:16 ako ?
10:16 arnoldB just try it:     set MAIN_IP = grains['ip_interfaces'][ MAIN_IFACE[0] ]
10:16 ako hmm
10:17 ako didn't work
10:17 ako Comment:   expected token 'block_end', got '{'; line 13 in template
10:18 ako and if i passed as {{}}
10:18 ako salt consider it a string
10:18 arnoldB what's the content of MAIN_IFACE[0] ?
10:18 ako string
10:18 arnoldB not the type, the content
10:18 ako salt just can't parse the syntax
10:18 ako and ip
10:18 ako an ip
10:19 arnoldB could you nopaste      salt-call -g   please?
10:19 __number5__ the index for grains['ip_interfaces'] should be a number
10:20 arnoldB __number5__: I think it should be the name of the interface
10:20 __number5__ you can try salt 'minionid' grains.items
10:20 __number5__ arnoldB: yep, you are right
10:21 arnoldB ip_interfaces: {'lo': ['127.0.0.1'], 'eth0': ['84.201.x.x']}
10:24 arnoldB ako: you might want to use salt['network.ip_addrs'](..)  instead of grains
10:25 arnoldB ako: or salt['network.interfaces'](...)
10:34 ako yea i think i may use this module
10:34 ako lemme try and tell u
10:37 ako set MAIN_IP = grains['ip_interfaces'][ MAIN_IFACE[0] ]
10:37 carlos joined #salt
10:37 ako man sorry, passing variable like that worked
10:37 ako it was syntax error from me
10:38 ako i forgot the end %}
10:42 arnoldB ako: why are you using uppercase names for variables?
10:43 ako arnoldB: so that i can tell them apart from other normal strings
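For anyone skimming, the working form of that line plus the network-module alternative arnoldB suggests, assuming MAIN_IFACE[0] is an interface name like 'eth0' (illustrative); ip_interfaces values are lists, hence the trailing [0] for the first address:

    {% set MAIN_IFACE = ['eth0'] %}
    {% set MAIN_IP = grains['ip_interfaces'][MAIN_IFACE[0]][0] %}

    {# or via the network execution module instead of grains: #}
    {% set MAIN_IP = salt['network.ip_addrs'](MAIN_IFACE[0])[0] %}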
11:50 Furao joined #salt
11:52 mapu joined #salt
11:59 bhosmer joined #salt
12:32 hazzadous joined #salt
12:40 az87c joined #salt
12:41 az87c_ joined #salt
12:43 ddv joined #salt
12:48 donatello joined #salt
12:48 donatello hi all
12:50 arnoldB oi
12:50 donatello is there a way to provide arguments to custom grain functions? (for example to generate grains for aws info like internal ip, instance id, etc)
12:51 donatello (AWS EC2 info i mean)
13:05 Furao donatello: https://github.com/bclermont/states/blob/master/states/_grains/ec2_info.py
13:12 zloidemon joined #salt
13:18 lemao joined #salt
13:20 jergerber joined #salt
13:31 donatello Thanks Furao! That looks good!
13:31 donatello Is there something like that for Elastic IP addresses and Route53 DNS mappings as well?
13:34 Furao I do have a route53 state
13:34 Furao to set IP
13:35 Furao but there is no "reverse" mapping
13:37 derelm joined #salt
13:43 jkleckner joined #salt
13:52 VSpike joined #salt
14:21 mgw joined #salt
14:39 dccc joined #salt
14:45 cachedout joined #salt
14:45 darrend joined #salt
14:53 bhosmer joined #salt
14:55 dcrouch joined #salt
15:00 honestly how do I refresh grains?
15:02 mwillhite joined #salt
15:04 * honestly tries sync_all
15:07 sebgoa joined #salt
15:11 honestly there we go
15:20 mgw joined #salt
15:23 luminous honestly: sync_grains will also do that, and refresh_modules is a variant that might be more specific at times
15:23 scalability-junk anyone having a few tb of data and backing it up to offsite tapes/disks? how do you structure them? perhaps having offsite backups as versions or pulling down data from some distributed storage array aka ceph, swift, s3?
15:25 luminous scalability-junk: structure what? the data?
15:25 luminous eg what comes from where?
15:25 scalability-junk imagine 20tb of data in an object store and you wanna back it up, how would you structure the needed offsite disks?
15:27 luminous I'd structure the data depending on where it came from, what the data was, and how I'd be using it afterwards  (eg in those disaster recovery scenarios).
15:27 luminous the last time I worked with multi-tb of data I setup a homebrew SAN with solaris, and worked with zfs
15:27 luminous if I had to work with 20tb of data, i'd use ZFS in a large array and not care about the structure of the disks
15:28 luminous you can build a home brew SAN for a fraction of the cost of the 'real-deal'
15:28 luminous and something just as robust and awesome
15:28 luminous eg with HA or fail-over, syncing across multiple SANs, iSCSI, FC, etc
15:28 scalability-junk luminous: still it's online backup not offline backups... could be removed via a malicious user too...
15:29 luminous how do you mean?
15:30 luminous scalability-junk: ^^
15:30 scalability-junk data -> encryption -> disk -> unplug -> offsite storage (huge air gap to prevent software bugs deleting or bad people doing bad things with my data)
15:30 scalability-junk automatic sync to an offsite SAN etc. could be "easily" wiped...
15:31 luminous sure, but what's your question there then?
15:31 luminous how to prevent that?
15:31 honestly replicate object store to disk, replace disk every week
15:31 scalability-junk how would anyone structure real offline storage or another solution to prevent that.
15:31 honestly that really depends on your threat model
15:31 luminous scalability-junk: I'd use multiple SANs with zfs snapshots
15:32 luminous and i'd have some monitoring / stats that must be checked
15:32 luminous this isn't something you set and forget
15:32 jdenning joined #salt
15:32 luminous but ZFS snapshots are awesome and could really help there
15:33 scalability-junk honestly: alright interesting model... so imagine having 20tb in s3 you would replicate all data to say 20 1tb disks and change these disks every week and rereplicate with the new ones right.
15:33 luminous and given that ZFS is so well set up on fbsd now too.. you wouldn't be limited to Solaris
15:33 scalability-junk still how would you structure them so you know where files are stored etc.
15:33 luminous scalability-junk: no, I'd get 2 or 3x the data storage and not swap disks, don't move disks around.. I'd use two SAN to replicate
15:34 luminous and again, use snapshots for versioning/etc
15:34 luminous use stats and monitoring to make sure you know what's going on
15:34 scalability-junk luminous: but replication is not preventing from replicating deletion ;)
15:34 flebel joined #salt
15:35 luminous scalability-junk: are you familiar with what zfs can do?
15:35 scalability-junk snapshots for versioning alright, but how do you store the snapshots? manual splitting onto several disks and managing them within a spreadsheet?
15:35 luminous ZFS.ZFS.ZFS
15:35 luminous maybe you ought to spend some time playing with ZFS and see it in action
15:35 scalability-junk you mean using storage pools for that
15:35 luminous that might help clarify why it would do you well here
15:35 jumperswitch joined #salt
15:35 luminous or why I believe it would do you well
15:36 scalability-junk luminous: Just looking at zfs doesn't help me understand which concept I should use.
15:37 luminous sure, to learn more about what it can do would help you sort through the details of an implementation that would work well for your needs
15:37 honestly rent well-guarded storage space at 3 different locations
15:37 luminous I'm not you, and I don't know anywhere near enough about your requirements to be more specific
15:37 honestly deliver your backups there after pulling them
15:38 scalability-junk luminous: I'm trying ;)
15:38 luminous scalability-junk: and what I noted about 2 SANs will certainly help about deleted data
15:38 luminous that's part of your air-gap
15:40 scalability-junk Mhh I don't see 3 replicas in 3 locations as a backup. Especially when they are programmed to be rewritable... and that's mostly the case when they are online SANs or similar storage systems.
15:40 scalability-junk Tape like archivals would be more like it. I just dislike tape.
15:40 luminous you can do what you want with SAN 'online'
15:41 luminous I have, with ZFS snapshots and monitoring/stats
15:41 luminous eg, if something was deleted, I'd see it and be able to review
15:41 luminous how you review depends on how you are creating/removing files in NORMAL operation
15:42 luminous I also had enough space to maintain months worth of data in snapshots
15:42 luminous we had a specific policy for snapshot retention, I don't remember the specifics
15:42 scalability-junk luminous: alright where do you store the snapshots?
15:42 luminous on both SANs
15:43 luminous but the one that really matters is the second
15:43 scalability-junk but then the storage holding your backup of possibly maliciously deleted files is on the same system o0
15:44 scalability-junk so a malicious party could just remove the snapshots too...
15:44 luminous no
15:44 luminous how would they remove snapshots?
15:44 luminous have you used solaris?
15:44 scalability-junk a while ago
15:44 luminous backup clients only had rsync and nfs access to a VM/dummy host which had its real storage on the SAN
15:45 luminous solaris was totally locked down
15:45 scalability-junk still the admin has access...
15:45 * scalability-junk knows he is being paranoid right now, but the interesting things are in the details :D
15:45 honestly and anyone could just shoot their way into the datacenter and put an explosive charge under your server, yes
15:46 scalability-junk honestly: yeah with tapes you would still have your storage facility :D
15:46 luminous scalability-junk: if you can't trust your admin, fsck it
15:47 luminous you can't totally stop an admin in a high position of responsibility
15:47 scalability-junk true true
15:47 honestly ^ as the NSA has found out
15:47 luminous you can only prevent them some of the time in some places by having need to know sort of thing
15:47 scalability-junk hdhd
15:47 scalability-junk *hehe
15:48 honestly well, you *could* build an architecture where data can only ever be appended and not deleted
15:48 luminous right, and even the NSA shows us how limited need to know even is
15:48 honestly I'm sure there are filesystems that can do that
15:48 scalability-junk alright so let's say a distributed storage with 20tb of data could be backed up using a replica in an offsite location and for prevention of replicating deletions you use snapshots.
15:48 honestly and you can lock down the access to the block device
15:49 scalability-junk while the backup replica is pulling in data with read only access.
15:49 scalability-junk sound alright
15:49 scalability-junk honestly: optical media :D
15:49 scalability-junk store all the data on CDs or DVDs \o/
15:49 honestly CDs are shit
15:50 honestly at least CD-Rs are
15:51 luminous optical? are you nuts?
15:51 * scalability-junk I was being sarcastic.
15:51 scalability-junk try to backup 20tb of data onto CDs :D
15:51 scalability-junk could be quite expensive
15:52 * luminous is doing too many other things
15:52 luminous sorry
15:53 scalability-junk thanks for all the suggestions luminous honestly
15:54 jkleckner joined #salt
15:54 luminous goodluck scalability-junk
15:54 luminous don't lose your mind too much :)
15:54 mua joined #salt
15:59 log0ymxm joined #salt
16:01 scalability-junk luminous: too much thinking time right now it seems.
16:09 derelm joined #salt
16:10 Furao joined #salt
16:13 pdayton joined #salt
16:13 linjan joined #salt
16:14 honestly I'm working on my salt setup
16:14 honestly and want to version my salt directory
16:14 honestly what's the right way to do that?
16:15 honestly I could just keep the whole thing in git
16:15 honestly but I want to use salt formulas too
16:20 Furao joined #salt
16:29 IAmNotARobot joined #salt
16:37 bhosmer joined #salt
16:40 mgw joined #salt
16:44 jkleckner joined #salt
16:53 linjan_ joined #salt
16:53 anuvrat joined #salt
16:55 honestly bah
16:55 honestly I don't know how to do this sanely
16:58 IAmNotARobot left #salt
17:15 jkleckner joined #salt
17:20 cachedout joined #salt
17:23 redondos joined #salt
17:24 elfixit joined #salt
17:30 mgw joined #salt
17:30 berto- joined #salt
17:31 Thiggy joined #salt
17:34 log0ymxm joined #salt
17:41 log0ymxm joined #salt
17:42 honestly woo, new formula: https://github.com/duk3luk3/openvpn-client-formula
17:45 log0ymxm_ joined #salt
17:46 jkleckner joined #salt
17:55 woebtz joined #salt
17:57 jumperswitch joined #salt
18:02 ajw0100 joined #salt
18:05 Gareth honestly: nic.
18:05 Gareth er
18:05 Gareth nice. :)
18:08 luminous so I can use pillar.get to retrieve this pillar key (a dictionary) from within a dictionary, but state.sls errors out with: UndefinedError: 'dict object' has no attribute 'oms:deploy_defaults'
18:08 luminous {% for default, value in pillar['oms:deploy_defaults'].items() -%}
18:08 luminous wtf
18:09 luminous honestly: use git, and use submodules? or create your own tree and pull in stuff from formulas manually
18:10 honestly luminous: pillar['oms']['deploy_defaults']
18:10 honestly the colon syntax only works on the salt command line
18:10 honestly I stumbled over that too
18:11 luminous oh, wtf
18:11 Gareth colon syntax does work in the states.
18:11 Gareth one sec
18:11 ckao joined #salt
18:11 luminous ah! got it
18:12 luminous {% for default, value in  salt['pillar.get']('oms:deploy_defaults').items() -%}
18:12 Gareth Yup. there ya go :)
18:12 luminous >.<
18:12 luminous what's wrong with the first version Gareth ?
18:12 Gareth put a {} after the last single quotes as a default in case that pillar item isn't defined.
18:12 honestly the first version accesses the pillar dict directly
18:12 luminous Gareth: ty, yes!
18:13 honestly and it's just a plain python dict
18:13 honestly but pillar.get has salt magic
18:13 luminous honestly: did I need the bracket syntax in the first version?
18:13 honestly well, salt['pillar.get'] means executing the pillar.get function from salt
18:13 honestly so it has salt magic
18:14 honestly luminous: what do you mean?
18:16 luminous honestly: I think I needed: for ... in pillar['oms']['deploy_defaults'].items() as you first noted
18:17 honestly why?
18:18 luminous it was either or, I used : at first
18:19 honestly I still don't understand what you mean
18:19 luminous nevermind
18:19 luminous it's not important :P
18:20 Gareth luminous: iirc the call with the colon, pillar['oms:deploy_defaults'], doesn't work because that key doesn't exist in pillar; the pillar.get does work because it's using the internal function to translate the colon into the right structure.
18:22 luminous gotcha
18:22 luminous yea, I was complaining about these things being confusing the other day
18:23 pears I kind of wish they'd called pillar.get something else, it has extra smarts over the python dict get() method
18:23 Gareth I've come to like using the pillar.get option...since you can use a default value if the pillar item doesnt exist :)
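The distinction the three of them land on, in one place (the pillar key comes from luminous's example; the {} default is Gareth's suggestion):

    {# colon inside a plain dict lookup is just part of a (nonexistent) key name -> UndefinedError #}
    {% for k, v in pillar['oms:deploy_defaults'].items() %}...{% endfor %}

    {# either index level by level... #}
    {% for k, v in pillar['oms']['deploy_defaults'].items() %}...{% endfor %}

    {# ...or let pillar.get do the traversal, with a default in case the key is missing #}
    {% for k, v in salt['pillar.get']('oms:deploy_defaults', {}).items() %}...{% endfor %}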
18:23 luminous I'm of the opinion that the salt[] and pillar[] and grains[] interfaces should all be unified
18:23 luminous there are toooooo many options there
18:23 luminous and/or not clear enough docs
18:23 pears unified into what?  it's clear enough once you understand what each one is for
18:24 luminous the docs show you what's available but don't help clarify why you should use which
18:25 mgw joined #salt
18:26 luminous pears: you access grains and pillar in completely different ways despite the fact that they are nearly identical components
18:43 pears so far I haven't found myself accessing the salt structure at all, I think
18:44 pears right now I have stuff in the salt tree that gets automatically generated from what's in pillar for that host
18:45 pears for me the pillar data is the "what" and the salt data is the "how"
18:46 pears actually, do you access them in different ways?
18:47 pears I guess it depends on what tool you're using
18:48 matanya joined #salt
19:06 mapu joined #salt
19:12 mwillhite joined #salt
19:18 dcrouch joined #salt
19:19 smccarthy joined #salt
19:25 jlecren joined #salt
19:26 Tweeda joined #salt
19:30 mgw joined #salt
19:40 Tweeda I've been experimenting with the gitfs integration for my salt-masters which I find really awesome.  The implicit mapping of the base environment to the master branch is a bit of a curve ball.
19:41 Tweeda Are there any best practice type documents wrt branching -> environment mapping, or git branching and merging for git/saltstack integration?
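For context, a hedged sketch of the master config behind what Tweeda describes (the remote URL is a placeholder): with gitfs as a fileserver backend, each branch or tag shows up as an environment of the same name, and master is served as base:

    # /etc/salt/master
    fileserver_backend:
      - git
    gitfs_remotes:
      - git://github.com/example/salt-states.git   # placeholder remote
    # branch 'master' -> environment 'base'; other branches map to environments by name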
19:55 fllr joined #salt
19:57 ajw0100 joined #salt
19:59 steveoliver master-base makes sense to me, since 'master' namespace in salt is kinda … taken :)
20:00 steveoliver what exists in salt to discover the diff state of a minion, i.e. has new .ssh keys … ?
20:04 gildegoma joined #salt
20:05 apergos ah there's a cro:  I wanted to ask what test environment you used to replicate the behavior in https://github.com/saltstack/salt/issues/8087 and with how many minions
20:05 cro apergos: Hi there.
20:05 apergos hello
20:06 cro 4 VMs in Rackspace, each with 2 GB of RAM, 100 minions each on 3 of the machines, 4th machine as the salt-master.
20:06 apergos ah... ok this is a test environment I cannot possibly duplicate... too bad
20:06 apergos at least not right now
20:06 cro We have a script in the tests directory called 'minionswarm.py' that makes that easy
20:07 * apergos makes a note
20:07 apergos in about a week I should have a hardware setup that will let me run that many vms
20:07 cro I tried on my Macbook Pro, quad core 2.3 GHz core i7, 8 GB of RAM, but I couldn't start enough minions to run a good test.
20:08 steveoliver docker?
20:08 apergos right, I have a laptop with a dual core processor :-D
20:08 cro steveoliver: docker is a good idea.  I just haven't played with it enough to know how to make it do what I want. :-)
20:08 steveoliver cro: i'm still trying to figure that out
20:08 steveoliver http://docs.saltstack.com/ref/states/all/salt.states.dockerio.html#module-salt.states.dockerio
20:09 steveoliver #needslove
20:09 * cro agrees.
20:09 steveoliver exciting, though
20:10 cro yes
20:12 SoR joined #salt
20:14 apergos how would one use docker for such tests if the underlying hardware isn't enough to support the number of minions required?  or do you mean something else?
20:14 apergos (I am reading intro docker stuff now, not familiar with it)
20:14 ajw0100 joined #salt
20:16 jumperswitch_ joined #salt
20:16 cro apergos: I think the issue there is that nobody really runs 100 minions on one machine.  But docker is lightweight enough to possibly run
20:17 cro maybe 50 containers, each with one minion
20:17 cro so if you could create several VMs, each with 50 containers, one minion apiece, it would be an interesting test.
20:18 apergos ok
20:18 apergos (in our case we have one minion on each physical box, that's where the numbers in the bug report came from)
20:20 hjubal joined #salt
20:20 hjubal joined #salt
20:27 steveoliver what's the salt command to get the ip of [a] minion?
20:28 TheSojourner joined #salt
20:28 TheSojourner joined #salt
20:29 cnelsonsic joined #salt
20:30 jdenning joined #salt
20:30 jslatts joined #salt
20:33 apergos I guess you could use salt minion-name grains.item ipv4    or maybe ip_interfaces
20:34 steveoliver saw that - that's it, hu?
20:34 steveoliver works for me
20:34 steveoliver thx
20:35 apergos if there's a better way hopefully someone will chime in with it
20:35 apergos yw
20:35 steveoliver i expected it coming from grains, actually — makes sense
20:37 apergos meh docker doesn't yet run natively on fedora it seems, though it will eventually
20:42 Ryan_Lane cro: does your minion test also make a number of the minions non-responsive?
20:43 cro Ryan_Lane: No--that's a good idea.  Did you have that issue in your env?
20:43 Ryan_Lane yes
20:43 Ryan_Lane it's especially common in our labs environment
20:43 cro were they non-responsive because they failed to restart, stacktraced, ran out of memory?  Something else?
20:43 Ryan_Lane all of the above? :)
20:44 cro :-)
20:44 Ryan_Lane our labs environment is a test/dev environment anyone can use
20:44 cro "she canna hold together Captain..."
20:44 Ryan_Lane so we tend to have lots of instances that are broken in some way
20:44 Ryan_Lane or not properly sized for their workload, etc.
20:44 cro ah, right
20:44 cro # nodes?
20:45 Ryan_Lane ~400 in labs, ~1000 in production
20:45 cro got it
20:46 apergos about 950 in production known to salt
20:46 steveoliver i seem to have a pending key with the wrong name … is it cached on master somewhere i can delete?
20:46 Ryan_Lane steveoliver: salt-key -d <name>
20:46 steveoliver did that
20:46 Ryan_Lane is the minion setting the id in the config?
20:47 cro \/etc/salt/master/minios
20:47 cro oops
20:47 cro I mean /etc/salt/master/minios
20:47 cro grr
20:47 cro I mean /etc/salt/master/minions
20:49 steveoliver id: not set in minion::/etc/salt/minion
20:49 steveoliver ./etc/salt/master/minions does not exist on my master
20:49 Tweeda Can two or more minions share a key?  I'm thinking along the lines of AWS' auto scaling groups.  minion configuration should be identical.
20:50 steveoliver minion_id!
20:51 steveoliver /etc/salt/minion_id
20:51 zloidemon Hello, how to fix that http://pastebin.com/rjtFDUVH ?
20:51 Tweeda so, set minion_id to be something along the lines of "ASG0" for nodes requiring identical configurations?
20:51 redondos joined #salt
20:51 redondos joined #salt
20:52 cro steve: sigh, I'm sorry, should have cut and pasted.  I meant /etc/salt/pki/master/minions.
20:52 cro zloidemon: can you pastebin your .sls file?
20:52 steveoliver cro++
20:53 zloidemon cro: A sec
20:53 zloidemon cro: See that http://pastebin.com/wR4vJnpr
20:54 honestly I'm working on this formula: https://github.com/duk3luk3/dirty-user-sync-formula/tree/master/dirty-users, I'm trying to invoke it via "salt -l debug myminion state.sls dirty-users.users test=True"
20:54 honestly it seems to do nothing
20:54 honestly what am I doing wrong?
20:55 honestly output: https://gist.github.com/duk3luk3/7174415
20:55 honestly it doesn't appear to compile the file at all
20:55 cro zloidemon: so which file has the "interface: 127.0.0.100" line in it?
20:56 zloidemon cro: master config
20:57 zloidemon cro: Fixed, I found miss in config file
20:57 cro cool
21:01 matanya joined #salt
21:03 cro honestly: Is this using gitfs as your file_root?
21:03 honestly no
21:04 honestly I just symlinked the dirty-users folder into my file_root
21:08 honestly :|
21:08 cro ah...I'd need to dig at this a little bit more, but I can't right now.  I'll ping you later, but the first thing that comes to mind is that
21:08 cro oops
21:08 cro try
21:09 cro salt bb1 state.show_sls dirty-users.users
21:09 honestly yeah, just found that
21:09 honestly huh
21:09 cro help at all?
21:09 honestly that fails with a python exception
21:10 cro If the exception doesn't give a glue, gist it and I'll take a look later.
21:10 cro s/glue/clue
21:11 honestly https://gist.github.com/duk3luk3/7174575
21:11 zloidemon A new troubles http://pastebin.com/2fvF8V24 %)
21:11 honestly (copied users.sls into init.sls)
21:29 pouledodue joined #salt
21:34 premera joined #salt
21:45 dcrouch joined #salt
21:53 jalbretsen joined #salt
21:55 diegows joined #salt
22:05 honestly whelp, I think I know why it doesn't work
22:05 honestly because the minion crashes
22:05 honestly on the minion I just get that exception
22:06 lesnail joined #salt
22:07 lesnail Did anyone try to combine the jinja renderer with the py renderer so far? I have tried to do so in a reactor.sls but it seems it's not working
22:08 honestly it *should* work
22:09 godog joined #salt
22:09 godog joined #salt
22:11 dcrouch joined #salt
22:13 lesnail just an extract from my reactor: http://pastebin.com/NUCwaPBS
22:14 honestly lesnail: where is {{data}} supposed to come from?
22:15 godog joined #salt
22:16 lesnail honestly: As far as I understood reactor files, they are by default rendered by jinja + yaml and the data variable contains some metadata and the payload of the event
22:16 lesnail so I thought I could use this variable when using jinja + py as well
22:17 redondos joined #salt
22:17 redondos joined #salt
22:17 honestly no idea about reactor files
22:18 lesnail I think I should try it again with 0.17.1, I ran it on debian which is unfortunately still at 0.16.3
22:19 lesnail thank you anyway
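lesnail's understanding of the plain jinja+yaml case is right: a reactor file gets a data variable holding the event's metadata and payload. A minimal sketch in that default renderer (the tag wiring and target are invented; the cmd. prefix is the old-style LocalClient call in the reactor system):

    # /srv/reactor/start.sls, mapped to an event tag in the master's reactor config
    {% if data['id'].startswith('web') %}
    highstate_new_minion:
      cmd.state.highstate:
        - tgt: {{ data['id'] }}
    {% endif %}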
22:22 steveoliver i'm thinking i need specific pillar data on each minion, e.g. the users on that minion…  should I instead be targeting users to minions using top file(s)?
22:26 oz_akan_ joined #salt
22:27 honestly oh for chrissakes
22:28 honestly steveoliver: you can set pillar-data per minion
22:28 steveoliver same location: /srv/pillar/.. .?
22:29 honestly just use the top.sls
22:29 steveoliver /srv/pillar/top.sls on minion to define local users ?
22:31 honestly https://gist.github.com/duk3luk3/7175324
22:32 steveoliver ok, so if i set users in common-pillar-data, and set users again in minion-two-pillar-data, what happens?
22:33 steveoliver because from my /etc/salt/users/init.sls, I want to enforce all the relevant users
22:33 honestly it should also work with templating: https://gist.github.com/duk3luk3/7175350
22:34 steveoliver if i have foo and bar users in common data, and bar (new name) and baz in minion-two-pillar-data, will each new pillar overwrite the previous?
22:34 honestly I think so
22:34 steveoliver is that how you think it's designed?
22:34 steveoliver so bar overwrites common bar...
22:34 steveoliver and baz is added
22:34 honestly let me find the docs
22:35 honestly I think it works like that
22:35 steveoliver ok, thx
22:35 steveoliver this helps
22:35 steveoliver i /am/ googling over here before i ask these questions, btw… :)
22:35 honestly http://docs.saltstack.com/topics/pillar/index.html
22:35 steveoliver right
22:35 steveoliver i'm there
22:35 honestly http://docs.saltstack.com/topics/pillar/index.html#pillar-namespace-flattened
22:36 steveoliver ok, so i get that..
22:36 steveoliver that's namespacing..
22:36 steveoliver ah!
22:36 steveoliver right
22:36 steveoliver it overwrites it
22:36 steveoliver yep yep
22:36 steveoliver ok thx
22:37 honestly (:
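Condensed, the behaviour steveoliver is asking about (file names are placeholders): the pillar top.sls assigns pillar files per minion, and because the pillar namespace is flattened, a top-level key defined again in a later file replaces the earlier one for that minion:

    # /srv/pillar/top.sls
    base:
      '*':
        - users.common
      'minion-two':
        - users.minion_two

    # /srv/pillar/users/common.sls
    users:
      foo: {}
      bar: {shell: /bin/bash}

    # /srv/pillar/users/minion_two.sls
    users:
      bar: {shell: /bin/zsh}    # on minion-two this whole 'users' key wins over common's
      baz: {}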
22:40 honestly zz_cro: it was a stupid typo in my sls file
22:48 Thiggy joined #salt
22:49 oz_akan_ joined #salt
22:54 avienu joined #salt
23:00 mwillhite joined #salt
23:01 kermit joined #salt
23:05 avienu joined #salt
23:06 jdenning joined #salt
23:08 ctdawe joined #salt
23:28 redondos joined #salt
23:34 Fandekasp joined #salt
