IRC log for #salt, 2015-01-17


All times shown according to UTC.

Time Nick Message
00:00 iggy stevednd: try generating the archive with the version of tar on that server
00:00 iggy there was a header change at some point in tar that could be causing it
00:01 iggy although that error message doesn't seem to imply that... that's the only thing I can think of really
00:04 stevednd the same states worked fine in 2014.1, so I'm pretty sure it's specific to salt
00:04 KyleG left #salt
00:05 pdayton joined #salt
00:06 iggy ahh, then i got nothing
00:09 iggy I suppose you've done all the basic stuff, like making sure that file is there, etc.
00:09 stevednd yep
00:09 stevednd I can untar it myself just fine
00:09 deathbypugs joined #salt
00:11 Raging_fenrir joined #salt
00:12 deathbypugs Hello all. Jumping to the point. We're POCing salt on Ubuntu14. We installed via the bootstrap script. We have 209 minions and the salt master is getting TCP syn floods and is essentially unresponsive
00:12 spookah joined #salt
00:13 deathbypugs Even test.ping on the master to the minion on the master times out.
00:15 iggy 209 minions shouldn't be a problem
00:15 iggy I mean unless you are running it on an f1-micro or something
00:17 otter768 joined #salt
00:19 abrahamrhoffman 1. You need to throttle the number of connections being sent to the master
00:20 abrahamrhoffman 2. You need to ensure that you are using the latest ZeroMQ
00:20 iggy you might have to up worker_threads
00:20 abrahamrhoffman e.g. Do you have a 2048MB instance for the Master?
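
(A rough sketch of the master-side tuning iggy and abrahamrhoffman are pointing at; the option name is the standard /etc/salt/master setting, the value is only illustrative:)

    # /etc/salt/master -- restart salt-master after editing
    worker_threads: 25    # default is 5; more workers help the master keep up with minion returns
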
00:23 jerematic joined #salt
00:25 deathbypugs 2 cores and 2GB. Mem is good. I changed worker threads to 25 but no go.
00:25 deathbypugs root@nafc01:~# salt --versions-report
                             Salt: 2014.7.0
                           Python: 2.7.6 (default, Mar 22 2014, 22:59:56)
                           Jinja2: 2.7.2
                         M2Crypto: 0.21.1
                   msgpack-python: 0.3.0
                     msgpack-pure: Not Installed
                         pycrypto: 2.6.1
                          libnacl: Not Installed
                           PyYAML: 3.10
                            ioflo: Not Installed
                            PyZMQ: 14.0.1
                             RAET: Not Installed
                              ZMQ: 4.0.4
                             Mako: 0.9.1
00:25 deathbypugs Yikes. Sorry about that
00:26 deathbypugs I can add more mem or cpu (it's a vm). I've read the 'salt at scale' doc but for 209 clients it seems like something is broken before needing to change minion settings
00:26 druonysuse joined #salt
00:26 druonysuse joined #salt
00:27 deathbypugs When I run tcpdump for port 4506, i see only inbound and nothing outbound at all. I don't have iptables running
00:28 iggy what provider?
00:30 tracphil deathbypugs: have you accepted they keys for your minions?
00:31 abrahamrhoffman joined #salt
00:32 deathbypugs I have accepted all keys (except two)
00:32 deathbypugs What do you mean 'what provider?'
00:32 yomilk joined #salt
00:33 iggy I was assuming it was a VPS/cloud instance
00:33 deathbypugs Sorry. VMware Enterprise 5.0.3
00:33 deathbypugs Ubuntu 14.04 LTS
00:34 iggy using the paravirt drivers?
00:36 deathbypugs I am running the paravirt scsi controller. And I have the vmxnet 3 drivers, and vmware tools is running and up to date
00:37 deathbypugs Load on the VM is 1.09 which is perfect, but in 'top' the salt-master process is at 100%
00:38 deathbypugs Likely the network queue is causing the high cpu for the master process
00:38 iggy can you kill minions a chunk at a time to see where things settle down?
00:38 tracphil deathbypugs: so on your master you are running something like: tcpdump -nn port 4506 and all you see is incoming traffic right?
00:39 tracphil what does iptables -L give you... I know you have checked this... we are just catching up :)
00:39 deathbypugs yes. only inbound
00:40 deathbypugs I don't have iptables running (accept policy all)
00:40 deathbypugs 1997  netstat -npt | awk '{print $6}' | sort | uniq -c | sort -nr | head
00:41 deathbypugs netstat -npt | awk '{print $6}' | sort | uniq -c | sort -nr | head
                    4309 ESTABLISHED
                      63 SYN_RECV
                       5 CLOSE_WAIT
                       1 Foreign
                       1
00:41 tracphil telnet localhost 4505 and see what you get
00:41 deathbypugs sorry but I have 4309 established connections. They're piling up. Even the test.ping to the master from the master times out
00:42 iggy I'm not sure it's abnormal to only see incoming traffic (depending on how long you trace)... that's the way salt works (the minions connect to the master)
00:43 tracphil if you run salt '*' state.highstate from the master, do you see anything going out?
00:43 shaggy_surfer joined #salt
00:43 deathbypugs telnet to localhost 4505 connection refused; telnet 10.1.249 refused
00:43 tracphil are you sure salt-master is running?
00:44 deathbypugs should 4505 be open? Sorry if that sounds like a dumb question but I'm still learning
00:44 tracphil lsof -i :4505
00:45 tracphil yes, it should be open http://docs.saltstack.com/en/latest/topics/tutorials/firewall.html
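
(A quick hedged check along the lines of the firewall doc tracphil links; <master> is a placeholder:)

    # On the master: the publisher (4505) and request server (4506) should both be listening
    netstat -tlnp | grep -E ':(4505|4506)'
    # From a minion: both ports should be reachable
    nc -zv <master> 4505
    nc -zv <master> 4506
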
00:45 deathbypugs I have 31 salt-master processes
00:45 deathbypugs I don't have a firewall going
00:45 deathbypugs root@nafc01:~# iptables -L
                  Chain INPUT (policy ACCEPT)
                  target     prot opt source               destination
                  Chain FORWARD (policy ACCEPT)
                  target     prot opt source               destination
                  Chain OUTPUT (policy ACCEPT)
                  target     prot opt source               destination
00:45 tracphil do you have fail2ban installed by chance?
00:46 tracphil k
00:47 deathbypugs Ok. Something seems to be going out:
                  salt-mast 4863 root   15u  IPv4 191369084      0t0  TCP nafc01.foo:4505->10.201.98.27:55251 (ESTABLISHED)
                  salt-mast 4863 root   16u  IPv4 191369085      0t0  TCP nafc01.foo:4505->sh-mkt-db2-prod.lv.foo
00:47 hvn joined #salt
00:48 tracphil sh-mkt-db2-prod is a minion right?
00:48 deathbypugs yes, it is a minion
00:48 tracphil good
00:49 deathbypugs no fail2ban
00:49 deathbypugs something is bleeding out but the salt \* test.ping gets nothing
00:50 tracphil do you have to escape that * for some reason?
00:50 deathbypugs FWIW, our Windows admin is trying salt for windows and has installed quite a few windows minions
00:51 tracphil I have always just used '*'
00:51 tracphil with ' '
00:51 deathbypugs If you don't escape it from the shell it can be problematic
00:51 iggy yeah, either \* or '*' works the same way
00:51 deathbypugs '*' times out with nothing
00:52 tracphil no firewall between your master and minions?
00:52 deathbypugs lemme run tcpdump before I run test.ping
00:52 deathbypugs even the minions on the same subnet (no FW) are not replying
00:56 deathbypugs Things were working for awhile
00:56 deathbypugs when we had < 10 minions or so
00:57 deathbypugs I can't tell if its useless now due to the configs, or what
00:57 deathbypugs but from what I've read, I shouldn't need to tweak minion timeouts for 200 minions
00:59 iggy I've tweaked it for <50, but I think that has more to do with shit networking in gce than anything else
00:59 iggy but either way, you should see some returns
01:03 deathbypugs Is there some sort of automagical way to tweak the clients w/o having to manually `vim /etc/salt/minion` or edit the windows config and restart salt-minion?
01:04 iggy salt-formula... but that's not going to help you much until you get things running
01:06 deathbypugs Is there something I can call from a client (state.highstate or whatnot) to verify?
01:07 deathbypugs I can telnet from a client to master:4505
01:07 tracphil use salt-call on a client
01:08 tracphil /usr/bin/salt-call state.highstate
01:09 sbx joined #salt
01:09 JDiPierro joined #salt
01:11 deathbypugs jfilla@ubuntu12:~$ sudo /usr/bin/salt-call state.hightsate
                  [WARNING ] SaltReqTimeoutError: Waited 60 seconds
                  Minion failed to authenticate with the master, has the minion key been accepted?
01:11 deathbypugs and on the master:
01:11 deathbypugs root@nafc01:~# salt-key | grep ubun
01:11 stevednd iggy: so it looks like the archive error is occurring even on 14.04
01:12 deathbypugs ubuntu12.foo
01:12 deathbypugs salt key is accepted
01:13 tracphil I still think it is a key issue
01:13 tracphil see if you can remove the key and add it again
01:14 deathbypugs Ok. salt-key -d 'hostname' is that it?
01:14 tracphil I think the master is not communicating because it doesn't see the minion as something it wants to talk to... that's my .02 :)
01:15 tracphil pretty sure that is it
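
(A sketch of the reset cycle tracphil is suggesting, assuming the minion id is ubuntu12.foo and sysvinit-style service names; the paths are the Salt defaults:)

    # on the master: drop the stale key
    salt-key -d ubuntu12.foo -y
    # on the minion: stop the minion, clear its cached copy of the master key, restart
    service salt-minion stop
    rm -f /etc/salt/pki/minion/minion_master.pub
    service salt-minion start
    # back on the master: the new request should show up under "Unaccepted Keys"
    salt-key -L
    salt-key -a ubuntu12.foo -y
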
01:21 deathbypugs Hmm. I deleted the minion key and restarted the minion, but no new key requests are coming to the master
01:24 jerematic joined #salt
01:27 sbx joined #salt
01:30 deathbypugs root@ubuntu12:~# salt-minion
                  [WARNING ] SaltReqTimeoutError: Waited 60 seconds
01:30 deathbypugs this is really busted.
01:32 aqua^mac joined #salt
01:36 deathbypugs OK. I rebooted the master (to add some cpu) and now when I run salt-minion in the foreground, i see this:
01:36 deathbypugs [ERROR   ] The Salt Master has cached the public key for this node, this salt minion will wait for 10 seconds before attempting to re-authenticate
01:37 deathbypugs And now on the master, I see the key request and I've accepted it
01:37 deathbypugs And now the 'test.ping' works!
01:38 nitti joined #salt
01:38 murrdoc joined #salt
01:38 deathbypugs Can someone interpret this? I had to delete they key, re-add it, and it's ok
01:40 deathbypugs All commands work, as well as remote execution (cmd.run 'df -i')
01:41 Whissi joined #salt
01:44 Ryan_Lane joined #salt
01:44 tracphil http://superuser.com/questions/695917/how-to-make-salt-minion-generate-new-keys
01:45 tracphil for step three do service salt-minion restart
01:45 zzzirk joined #salt
01:45 murrdoc or http://salt.readthedocs.org/en/latest/ref/modules/all/salt.modules.saltutil.html
01:46 murrdoc you can run regen keys
01:46 murrdoc if needed
01:46 deathbypugs I deleted all keys. and now new key requests are dribbling in
01:47 murrdoc regenkeys in the saltutil module is the salt-ier way
01:47 murrdoc heh saltier
01:47 tracphil murrdoc: very nice!
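
(The "salt-ier" way murrdoc means, roughly; the target is illustrative:)

    # have the minion throw away its keypair and generate a new one
    salt 'ubuntu12.foo' saltutil.regen_keys
    # the old public key then has to be deleted and the new one accepted on the master
    salt-key -d ubuntu12.foo -y
    salt-key -a ubuntu12.foo -y
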
01:47 deathbypugs Is this a bug? Sysadmin (me) error?
01:47 murrdoc is what a bug
01:48 hvn joined #salt
01:48 hvn joined #salt
01:48 tracphil I have not had them not work
01:49 tracphil but then again, I use Debian and not Ubuntu ;)
01:49 deathbypugs If this is a bug, it's potentially an enormous hassle, having to manually fix keys across tons of nodes
01:49 murrdoc oh what version of salt do u run
01:50 deathbypugs Is the 'my salt master all of a sudden stopped working' issue a bug or a sysadmin error?
01:50 murrdoc i havent had it happen on either 2014.7 or 2014.1.10+
01:51 murrdoc but i have a salt scheduled job to test.ping the master every hour
01:51 deathbypugs Salt: 2014.7.0
01:52 deathbypugs I had to remove a key, restart the salt-master service and then, and only then, did the new key request come in.
01:52 murrdoc well there are few things you could look at, number of file handles you assign to salt on the master, number of threads, and also maybe running two masters
01:52 murrdoc are you using RAET or 0mq
01:53 deathbypugs ulimit -n is 1024. That's more than 2x what's needed according to the docs
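
(On file handles: a hedged way to check what the running master actually has, plus the matching master option; the 100000 is only an example:)

    # limit of the running master process, not just of the login shell
    cat /proc/$(pgrep -o salt-master)/limits | grep 'open files'
    # /etc/salt/master
    # max_open_files: 100000   # salt warns and lowers this if the hard ulimit is smaller
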
01:53 murrdoc is it ubuntu (so much better than debian, amirite)
01:53 deathbypugs 0MQ. RAET is not installed
01:53 murrdoc http://docs.saltstack.com/en/latest/ref/states/all/salt.states.schedule.html i have something similar to job3 setup
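
(Something like murrdoc's hourly test.ping, as a sketch of the schedule state he links; the state id is made up:)

    hourly_master_ping:
      schedule.present:
        - function: test.ping
        - seconds: 3600    # once an hour
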
01:53 murrdoc and i install the salt-minion with an authorized key when i provision
01:54 murrdoc sec ops wants me to consider regen'ing the keys once a month for security
01:55 deathbypugs murrdoc: I get the sec ops, but that's a diff point. Why did my keys 'go bad'?
01:55 tracphil murrdoc: thank you thank you thank you. I didn't even know about the scheduler.
01:57 murrdoc not quite sure deathbypugs
01:57 murrdoc it used to be a key expiration issue in 2014.1.* releases
01:57 murrdoc np tracphil
01:57 yomilk joined #salt
01:57 deathbypugs stevednd: what is the archive error? Is this what happened?
01:58 deathbypugs OK. Going forward, do you recommend regular 'key maintenance?' I don't have to do this in my puppet deployments
01:59 deathbypugs Or is it something perhaps w/Ubu14
02:00 murrdoc try the scheduled job with the test.ping
02:01 murrdoc if its still an issue, then its worth key maintenance
02:02 schristensen joined #salt
02:03 murrdoc also docs.saltstack.com/en/latest/ref/runners/all/salt.runners.manage.html
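
(The manage runner murrdoc links is handy for exactly this kind of check:)

    salt-run manage.status   # which minions are up and which are down
    salt-run manage.down     # just the unresponsive ones
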
02:04 deathbypugs Thanks all. As this is a POC and salt is still 'new', I'm hoping my Windows sysadmin likes it. These things don't instill confidence. They're not used to open source software on their boxes. But he seems to like it
02:04 deathbypugs Thanks again.
02:05 murrdoc np
02:07 murrdoc what are you loking for in salt
02:08 murrdoc in your poc
02:09 murrdoc deathbypugs:  if you have more questions holler
02:11 murrdoc http://ryandlane.com/blog/2014/08/04/moving-away-from-puppet-saltstack-or-ansible/ is a good read
02:11 bhosmer joined #salt
02:11 yetAnotherZero joined #salt
02:13 hvn joined #salt
02:15 d0xb joined #salt
02:16 aqua^lsn joined #salt
02:25 TheThing joined #salt
02:33 hvn joined #salt
02:33 hvn joined #salt
02:38 dwfreed [A
02:38 dwfreed whoops
02:45 otter768 joined #salt
02:47 hvn joined #salt
02:47 hvn joined #salt
02:55 bhosmer joined #salt
03:06 shaggy_surfer joined #salt
03:07 aparsons joined #salt
03:12 aparsons joined #salt
03:18 redzaku joined #salt
03:21 aqua^mac joined #salt
03:23 zzzirk joined #salt
03:24 brianfeister joined #salt
03:30 redzaku joined #salt
03:31 favadi joined #salt
03:33 hasues joined #salt
03:34 hasues left #salt
03:42 monkey66 joined #salt
03:47 markizano joined #salt
03:47 aparsons joined #salt
03:50 twellspring joined #salt
03:51 fxhp joined #salt
03:53 aparsons joined #salt
04:03 zzzirk joined #salt
04:04 mkropinack joined #salt
04:10 bhosmer joined #salt
04:14 jerematic joined #salt
04:14 elfixit1 joined #salt
04:14 otter768 joined #salt
04:15 yetAnotherZero joined #salt
04:22 mkropinack joined #salt
04:33 jtang joined #salt
05:10 aqua^mac joined #salt
05:25 ndrei joined #salt
05:26 arno joined #salt
05:37 JlRd joined #salt
06:03 jerematic joined #salt
06:12 catpigger joined #salt
06:23 hvn joined #salt
06:37 bantone_ left #salt
06:43 favadi joined #salt
06:48 arno joined #salt
06:53 forrest joined #salt
06:59 aqua^mac joined #salt
07:10 rogst joined #salt
07:15 Ryan_Lane joined #salt
07:18 Ryan_Lane joined #salt
07:25 bhosmer joined #salt
07:29 hvn joined #salt
07:29 hvn joined #salt
07:30 zadock joined #salt
07:42 auser joined #salt
07:46 forrest joined #salt
07:52 brianfeister joined #salt
08:04 auser joined #salt
08:40 Andre-B joined #salt
08:43 cberndt joined #salt
08:48 aqua^mac joined #salt
08:50 Raging_fenrir joined #salt
09:12 felskrone joined #salt
09:15 zzzirk joined #salt
09:17 zadock joined #salt
09:35 chiui joined #salt
09:40 jerematic joined #salt
09:49 Shenril joined #salt
09:55 JlRd joined #salt
09:58 hvn joined #salt
09:58 hvn joined #salt
10:04 CeBe joined #salt
10:14 sbx joined #salt
10:15 sbx joined #salt
10:25 d0xb joined #salt
10:31 zadock joined #salt
10:46 ilbot3 joined #salt
10:46 Topic for #salt is now Welcome to #salt | SaltConf 2015 Call for Speakers is open! http://saltconf.com/call-for-speakers/ | 2014.7.0 is the latest | Please be patient when asking questions as we are volunteers and may not have immediate answers | Channel logs are available at http://irclog.perlgeek.de/salt/
10:51 linjan joined #salt
11:00 favadi_ joined #salt
11:02 tomh- joined #salt
11:08 hvn joined #salt
11:08 hvn joined #salt
11:10 jtang joined #salt
11:14 Iota joined #salt
11:14 Iota left #salt
11:35 linjan joined #salt
11:45 yomilk joined #salt
11:51 d0xb joined #salt
11:52 markm_ joined #salt
11:57 ecdhe joined #salt
11:58 bhosmer joined #salt
12:02 booly-yam_ joined #salt
12:03 Micromus joined #salt
12:05 xt joined #salt
12:06 gebi joined #salt
12:06 gebi hi all
12:07 gebi any hints on what could be broken when salt-ssh hangs at 100% cpu for ages? (salt-ssh 'hostname' state.highstate test=True  522.42s user 43.42s system 98% cpu 9:37.13 total)
12:09 gebi that's only a tiny config with about 25 states
12:19 hvn joined #salt
12:19 hvn joined #salt
12:25 jtang joined #salt
12:26 aqua^mac joined #salt
12:35 JlRd joined #salt
12:35 mrjk joined #salt
12:38 arno joined #salt
12:40 nullptr joined #salt
12:40 redzaku joined #salt
12:46 jtang joined #salt
12:49 nmadhok joined #salt
12:50 xliiv joined #salt
13:00 jramnani` joined #salt
13:01 bhosmer_ joined #salt
13:05 ev8 joined #salt
13:06 ev8 left #salt
13:14 gebi the problem came from broken dns on the client side
13:15 gebi but why is salt-ssh burning CPU like hell? does salt-ssh busy waiting on something?
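
(A hedged way to see where a run like that is stuck, plus a made-up roster entry for sidestepping client-side DNS:)

    salt-ssh -l debug 'hostname' state.highstate test=True
    # /etc/salt/roster -- use an IP instead of a hostname
    # web1:
    #   host: 10.0.0.5
    #   user: root
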
13:20 hvn joined #salt
13:20 hvn joined #salt
13:28 amustafa joined #salt
13:34 redzaku joined #salt
13:34 linjan joined #salt
13:36 arno joined #salt
13:47 evelo joined #salt
13:54 evelo Question:  I am using GitFS in a masterless setup.  I have two repos... one is a formula and the other has a top.sls along with an sls file that uses the formula.
13:54 evelo Can I do this without any "file_roots"?
13:55 evelo Basically I want to pair repos with GitFS to get different outcomes.
14:00 evelo Sort of like this: http://pastebin.com/pHmB3k5X
14:00 evelo The goal would be to build virtual machines for use with things like Vagrant.
14:03 booly-yam_ joined #salt
14:10 jerematic joined #salt
14:11 linjan joined #salt
14:15 aqua^mac joined #salt
14:34 viq joined #salt
14:39 linjan joined #salt
14:41 evelo It seems like it should all work fine. I get this though... [CRITICAL] Pillar render error: Rendering Primary Top file failed, render error:
14:41 evelo virtualbox-iso: 'NoneType' object has no attribute 'ends
14:44 evelo I don't quite understand, because I am not attempting to use Pillar data; or does the formula require some?
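
(The cause isn't pinned down in the log; for comparison, a minimal masterless gitfs minion config might look roughly like this, with placeholder URLs and paths:)

    # /etc/salt/minion
    file_client: local
    fileserver_backend:
      - git
    gitfs_remotes:
      - https://example.com/some-formula.git
      - https://example.com/my-states.git
    # if no pillar data is used, pointing pillar_roots at an existing (even empty)
    # directory keeps the pillar render step trivial
    pillar_roots:
      base:
        - /srv/pillar
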
14:51 hvn joined #salt
14:52 pcdummy joined #salt
15:08 pdayton joined #salt
15:09 booly-yam-9806 joined #salt
15:27 booly-yam-5315 joined #salt
15:31 booly-yam-1412 joined #salt
15:39 badon joined #salt
15:41 booly-yam-4658 joined #salt
15:43 booly-yam-464 joined #salt
15:43 markm_ joined #salt
15:45 booly-yam-4421 joined #salt
15:52 booly-yam-3171_ joined #salt
16:03 aqua^mac joined #salt
16:03 zzzirk joined #salt
16:06 jramnani` joined #salt
16:08 jtang joined #salt
16:19 elco_ joined #salt
16:25 amustafa_ joined #salt
16:28 d0xb joined #salt
16:28 catpigger joined #salt
16:36 Mso150 joined #salt
16:39 hvn joined #salt
16:39 hvn joined #salt
16:42 Bilge joined #salt
16:42 Bilge When will a new version be released?
16:45 ndrei joined #salt
16:45 aquinas joined #salt
16:46 rm_jorge joined #salt
16:48 jsm joined #salt
16:49 toddnni_ joined #salt
16:51 vbabiy joined #salt
16:51 brianfeister joined #salt
16:52 toddnni- joined #salt
17:01 toddnni_ joined #salt
17:02 xliiv joined #salt
17:09 tracphil I need to create this directory structure /usr/local/admin/{bin,etc,backup} is there a way to do that with one file.managed statement?
17:09 whaity joined #salt
17:11 jonatas_oliveira joined #salt
17:13 nkuttler tracphil: you could use one file.directory, but you'd have to create the dirs locally afaik
17:13 twellspring joined #salt
17:14 tracphil nkuttler: Thanks. I am seeing that as well.
17:14 nkuttler or was it directory.managed..
17:14 yomilk joined #salt
17:14 tracphil I read between the lines :D
17:15 whaity joined #salt
17:16 nkuttler ah, no, i think i used file.copy + recursion to do that
17:16 tracphil k
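
(One hedged way to get close to a single statement: one state id with a names list, which Salt expands into one state per path; makedirs creates /usr/local/admin itself:)

    admin-dirs:
      file.directory:
        - names:
          - /usr/local/admin/bin
          - /usr/local/admin/etc
          - /usr/local/admin/backup
        - makedirs: True
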
17:21 jsm joined #salt
17:25 jonatas_oliveira joined #salt
17:25 moderation joined #salt
17:27 wiqd_ joined #salt
17:28 kwmiebach joined #salt
17:34 The_ joined #salt
17:35 The_ Hi everyone :) I'm just starting with Salt, and I've got a couple of questions, can I bother all you people with it?
17:36 whaity well, I can't speak for all, but if I can I will
17:36 The_ hehehe hi :)
17:36 whaity I have not long started myself
17:36 whaity hi
17:37 The_ Well, the first one regards the networking module; When I try to set an MTU on a F21 box, salt tries to fill ETHTOOL_OPTS with "mtu 9000"; 'mtu' is not a valid option for ethtool, and you should use MTU= in the config file for at least RHEL6+
17:37 The_ Thing is, I seem to be the only one with this problem ;)
17:38 gyre007_ joined #salt
17:38 arno joined #salt
17:39 The_ But I can't seem to figure out *WHERE* that decision is made; if I look in https://github.com/saltstack/salt/blob/2014.7/salt/modules/rh_ip.py, 'mtu' is indeed in the CONFIG_OPTS rather than the ETHTOOL_OPTS so I'm a bit confused
17:41 simonmcc joined #salt
17:43 whaity and to check you are using the same version as the src code you are looking at?
17:43 The_ yeah, vanilla 2014.7.0
17:43 manytrees joined #salt
17:43 whaity via pip / git or epel?
17:44 otter768 joined #salt
17:45 The_ deb repo
17:47 The_ (or Fedora's own in the case of said minion)
17:49 whaity mmmm
17:49 whaity good one
17:50 whaity I'm guessing not that many people change the MTU
17:50 whaity guessing you need jumbo frames
17:50 rlarkin joined #salt
17:50 codekobe joined #salt
17:52 whaity do you have the state file online?
17:52 The_ well, need, need ;o) want ;) anyway, I'll dig somewhat further
17:52 The_ nah, but it's really nothing special
17:52 The_ enp1s0:
             network.managed:
               - type: slave
               - master: bond0
               - mtu: 9000
           enp2s0:
             network.managed:
               - type: slave
               - master: bond0
               - mtu: 9000
           bond0:
             network.managed:
               - type: bond
               - bridge: br040-servers
               - use_in:
                 - network: enp1s0
                 - network: enp2s0
               - require:
                 - network: enp1s0
                 - network: enp2s0
               - mode: 802.3ad
               - miimon: 100
               - lacp_rate: fast
               - xmit_h
17:52 The_ etc
17:52 aqua^mac joined #salt
17:53 a7p joined #salt
17:54 whaity let me see if I can add an extra interface on my DO machine
17:54 whaity if I change the mtu on the main interface I may well be in trouble
17:54 whaity ;)
17:54 The_ hehehe be careful there ;)
17:57 vbabiy joined #salt
17:57 whaity my private network is being configured
17:57 whaity .....
18:00 EWDurbin joined #salt
18:05 toddnni joined #salt
18:06 whaity right
18:07 whaity I get the same thing I think
18:07 whaity ETHTOOL_OPTS="mtu 9000 "
18:07 whaity salt-minion 2014.7.0 (Helium)
18:07 whaity on that version
18:08 whaity the salt-master is salt-master 2014.7.0 (Helium)
18:08 whaity so it is not just you
18:09 The_ figured as much ;) I'll see about hacking up a fix then
18:13 whaity you could always sed it using file after as a work around for now
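
(A sketch of the sed-style workaround whaity means, using file.replace on the RH-style ifcfg file; the interface name is taken from The_'s state and the pattern from whaity's output above:)

    fix-enp1s0-mtu:
      file.replace:
        - name: /etc/sysconfig/network-scripts/ifcfg-enp1s0
        - pattern: 'ETHTOOL_OPTS="mtu 9000 "'
        - repl: 'MTU="9000"'
        - require:
          - network: enp1s0
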
18:15 yomilk joined #salt
18:16 nmadhok joined #salt
18:17 whaity The_: quick question, are you doing this for a number of machines or just one?
18:17 whaity The_: I am debating adding some of the servers at work with their network configured like this
18:17 whaity The_: but with the data from Pillar
18:18 whaity The_: and then using that to add to the DNS server
18:21 nullptr joined #salt
18:23 The_ I'm just starting up, so this is actually my first machine ;) Many more will follow if all will be well ;)
18:26 wnkz_ joined #salt
18:26 whaity I have started replacing Puppet at work with it. Part of a global move. It does have some good points
18:26 whaity Seems simpler
18:28 twellspring joined #salt
18:28 hvn joined #salt
18:28 hvn joined #salt
18:34 twellspring joined #salt
18:38 The_ we're investigating it for the exact same reason
18:46 The_ does a minion get updated modules etc. pushed over from the master or is that local code always?
18:49 ndrei joined #salt
18:51 booly-yam-3799 joined #salt
18:53 jerematic joined #salt
18:58 otter768 joined #salt
18:59 hasues joined #salt
18:59 hasues left #salt
19:03 yetAnotherZero joined #salt
19:14 cpowell joined #salt
19:35 lamasnik joined #salt
19:41 aqua^mac joined #salt
19:44 vbabiy joined #salt
19:49 rodrigc joined #salt
19:49 rodrigc Hi, I am trying to set up saltstack on FreeBSD with 2014.7.0.  I deployed it to 6 nodes, and so far it works fine, but
19:50 rodrigc I ran into this issue with disk.usage : https://github.com/saltstack/salt/issues/19794
19:50 rodrigc it looks like this might have been fixed in August for MacOS X, which also might fix FreeBSD
19:51 rodrigc what is the best way to run a newer version of saltstack than the 2014.7.0 release?
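
(One hedged option, not FreeBSD-specific: install straight from the 2014.7 branch, e.g. into a virtualenv, since that's where point-release fixes land:)

    pip install git+https://github.com/saltstack/salt.git@2014.7
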
19:52 pcdummy Just published "A collection of saltstack resources." - https://github.com/pcdummy/saltstack-resources
19:55 dnai23 joined #salt
20:03 kossy joined #salt
20:06 toddnni joined #salt
20:08 toddnni_ joined #salt
20:09 Mso150_c joined #salt
20:17 hvn joined #salt
20:17 hvn joined #salt
20:23 twellspring joined #salt
20:35 Mso150 joined #salt
20:43 toddnni joined #salt
20:45 xliiv joined #salt
21:14 toddnni_ joined #salt
21:17 redzaku joined #salt
21:26 bhosmer_ joined #salt
21:27 redzaku joined #salt
21:28 otter768 joined #salt
21:30 aqua^mac joined #salt
21:32 Mso150 joined #salt
21:35 yomilk joined #salt
21:36 bhosmer_ joined #salt
21:40 pogotech joined #salt
21:45 pogotech left #salt
21:50 toddnni joined #salt
21:52 TheThing joined #salt
21:54 FRANK_T joined #salt
21:54 Mso150 joined #salt
22:03 wnkz__ joined #salt
22:06 hvn joined #salt
22:06 hvn joined #salt
22:18 Guest72549 joined #salt
22:34 rm_jorge joined #salt
22:36 yomilk joined #salt
22:40 jerematic joined #salt
22:42 nullptr joined #salt
22:44 higgs001 joined #salt
22:47 cheus joined #salt
22:52 redzaku joined #salt
23:05 redzaku joined #salt
23:06 ckao joined #salt
23:19 aqua^mac joined #salt
23:22 sk_0 how do i stop salt-minion from starting at boot time on ubuntu 14.10? chmod -x /etc/init.d/salt-minion doesn't seem to be working
23:29 otter768 joined #salt
23:36 yomilk joined #salt
23:46 CeBe1 joined #salt
23:46 bhosmer_ joined #salt
23:48 evelo joined #salt
23:48 Ryan_Lane joined #salt
23:48 joehoyle joined #salt
23:49 hobakill joined #salt
23:50 joehoyle I'm having difficulty tracking down why my custom module isn't being loaded. Anyone had issues using s3fs with custom modules?
23:51 evelo Pillar failed to render with the following messages:
23:51 evelo ----------
23:51 evelo Rendering Primary Top file failed, render error:
23:51 evelo 'NoneType' object has no attribute 'endswith'
23:51 evelo Would anyone be able to point me in the right direction with this error?
23:51 evelo thanks so much!
23:52 evelo im using a masterless setup and no pillar data is set
23:53 sk_0 answered my own question. update-rc.d
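
(For the record, roughly what that looks like; the upstart override is only relevant if the package also ships an upstart job:)

    update-rc.d salt-minion disable
    # or, for an upstart-managed service:
    echo manual > /etc/init/salt-minion.override
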
23:56 hvn joined #salt
