IRC log for #salt, 2014-06-28


All times shown according to UTC.

Time Nick Message
00:05 maxskew joined #salt
00:18 ahammond how can I find out which formula would be applied if I ran state.highstate on a given box? Ideally, in the context of another formula. :)
00:18 forrest test=True
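forrest's one-liner refers to Salt's dry-run flag. A minimal sketch of both approaches, with the minion name minion1 as a placeholder (state.show_highstate lists the compiled highstate without executing anything):

    salt 'minion1' state.highstate test=True
    salt 'minion1' state.show_highstate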
00:19 ahammond TaiSHi what kind of files? If source code where you have small changes, git is awesome. If binaries, doing something like bittorrent works really well.
00:21 TaiSHi ahammond: it's mostly user content, images and such
00:22 ksalman any ideas why i am getting hit by this again? https://github.com/saltstack/salt/issues/11781
00:22 TaiSHi I thought of gluster, but it requires instances to act as nodes
00:22 TaiSHi I want the replication to happen between the webservers seamlessly
00:23 mateoconfeugo joined #salt
00:26 forrest ksalman, did you modify the timeout for auth?
00:27 forrest or did you use that fix that whiteinge suggested?
00:27 ksalman forrest: all the minions have the config i pasted in latest comment
00:27 ksalman yes, that
00:27 forrest and they were restarted?
00:27 ksalman yea
00:27 forrest well, then shiiiiiiiiiiiit
00:27 forrest turn off services, go home, deal with Monday :P
00:27 ksalman salt-master was running just fine until a couple of hours ago when i restarted it, and boom
00:27 ksalman QQ
00:27 forrest interesting
00:28 forrest was there any configuration change with the master?
00:28 ksalman i am about to go home, actually lol
00:28 forrest you didn't change the master version and then restart right?
00:28 ksalman no, no change to master
00:28 forrest lame
00:28 bhosmer joined #salt
00:28 ksalman the master and all minions are at 2014.1.1, with a handful of minions at 2014.1.0.
00:28 ksalman about 600 minions
00:28 ksalman in total
00:30 ahammond TaiSHi there are some decent general purpose distributed filesystems. I rather like ceph: http://ceph.com/ceph-storage/file-system/
00:33 ksalman well, I am out of the day i guess..
00:33 ahammond TaiSHi, you also have to understand that distributed filesystems _always_ come with caveats. There's also Swift and Riak-CS. You'll have to compare them and decide which one best meets your needs.
00:33 ksalman s/of/for
00:34 TaiSHi Yeah but with ceph I still have to 'serve' the files, the main issue is I don't want a fileserver
00:36 TaiSHi I want to fire up a VM, add it to a 'cluster' and get its files from the other nodes, like a big friendly raid 1
00:53 nhubbard joined #salt
00:53 nliadm joined #salt
00:53 modafinil_ joined #salt
00:53 munhitsu_ joined #salt
00:53 xsteadfastx joined #salt
00:54 goki joined #salt
00:54 akoumjian joined #salt
00:54 wiqd joined #salt
00:54 Cidan joined #salt
00:54 simonmcc joined #salt
00:56 zz_RedDeath joined #salt
00:56 codysoyland joined #salt
00:56 HACKING-TWITTER joined #salt
01:11 krow joined #salt
01:22 viq joined #salt
01:22 viq joined #salt
01:27 ipalreadytaken joined #salt
01:33 jhauser_ joined #salt
01:42 soasme joined #salt
01:46 ilbot3 joined #salt
01:46 Topic for #salt is now Welcome to #salt | 2014.1.4 is the latest | SaltStack trainings coming up in SLC/London: http://www.saltstack.com/training | Please be patient when asking questions as we are volunteers and may not have immediate answers | Channel logs are available at http://irclog.perlgeek.de/salt/
01:49 ajolo joined #salt
01:52 * TaiSHi is deploying 3 vms with salt-cloud to play with glusterfs
01:52 * TaiSHi pokes xt heavily
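A deployment like the one TaiSHi describes would look roughly like the following; the profile and VM names are hypothetical, and -P provisions the machines in parallel:

    salt-cloud -P -p centos_small gluster1 gluster2 gluster3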
01:52 ajolo_ joined #salt
01:55 zartoosh__ joined #salt
01:57 krow joined #salt
02:10 CeBe2 joined #salt
02:16 bhosmer joined #salt
02:26 CeBe joined #salt
02:30 acabrera joined #salt
02:36 krow joined #salt
02:46 rieux joined #salt
02:46 HACKING-TWITTER joined #salt
02:49 HACKING-TWITTER joined #salt
02:51 troyready joined #salt
02:52 HACKING-TWITTER joined #salt
02:55 HACKING-TWITTER joined #salt
02:59 HACKING-TWITTER joined #salt
03:04 borgstrom joined #salt
03:07 notpeter_ joined #salt
03:12 HACKING-TWITTER joined #salt
03:36 HACKING-TWITTER joined #salt
03:38 HACKING-TWITTER joined #salt
03:48 mateoconfeugo joined #salt
04:00 marco_en_voyage joined #salt
04:05 bhosmer joined #salt
04:05 krow joined #salt
04:05 krow1 joined #salt
04:11 krow joined #salt
04:13 jalbretsen joined #salt
04:14 krow1 joined #salt
04:16 ipalreadytaken joined #salt
04:23 troyready joined #salt
04:31 krow joined #salt
04:34 ipalreadytaken joined #salt
04:50 nyov joined #salt
04:50 ajolo__ joined #salt
05:08 jnials joined #salt
05:14 krow joined #salt
05:23 NV joined #salt
05:29 quickdry21 joined #salt
05:30 TyrfingMjolnir joined #salt
05:35 krow joined #salt
05:45 xmj did anyone mention ZFS
05:45 xmj joehh: any news on Salt 2014.1.6? :)
05:45 non7top joined #salt
05:45 xmj also, someone needs to fix this channel's topic
05:46 xmj terminalmage: do it!!!
05:49 nyov I just installed salt-master on a debian VM from the debian package. no minion, no configuration. worker_threads is 5, but after startup I'm seeing a lot more processes - http://paste.debian.net/107122/
05:51 Topic for #salt is now Welcome to #salt | 2014.1.5 is the latest | SaltStack trainings coming up in SLC/London: http://www.saltstack.com/training | Please be patient when asking questions as we are volunteers and may not have immediate answers | Channel logs are available at http://irclog.perlgeek.de/salt/
05:51 terminalmage xmj: happy?
05:51 terminalmage :P
05:52 goodwill joined #salt
05:52 TyrfingMjolnir joined #salt
05:53 bhosmer joined #salt
05:53 xmj terminalmage: much obliged
05:53 xmj I did some nagging before and no one listened :p
05:53 terminalmage well, early next week we'll be releasing .6
05:53 xmj terminalmage: did you guys already...ah
05:53 xmj terminalmage: how early?
05:53 terminalmage so, prepare to nag some more
05:54 xmj I wanna work with cedwards on getting salt into the freebsd ports tree the day the release is made public
05:54 terminalmage maybe monday. we still have a couple tests that aren't passing
05:54 xmj hm, could probably pull it off monday.
05:54 terminalmage He's aware, we have an internal mailing list for that
05:54 terminalmage he'll be notified
05:55 terminalmage what we usually do though is tag the release and hold off on announcing for a couple days
05:55 terminalmage because the first thing we are asked when the release is announced is "Where are the packages?"
06:01 terminalmage anyway, it's way past my bedtime
06:02 terminalmage good night
06:05 TyrfingMjolnir joined #salt
06:08 DaveQB joined #salt
06:15 xmj terminalmage: thanks for the info, good night!
06:18 Nexpro joined #salt
06:32 anuvrat joined #salt
06:32 ifur joined #salt
06:34 picker joined #salt
06:48 mastrolinux joined #salt
06:54 mastrolinux Hi, looking at http://irclog.perlgeek.de/salt/search/?nick=&q=git.latest, many people have this issue with git.latest giving an overwrite error, but no one got a reply. Can I help in some way to fully report the bug and help you discover the problem? I can tell that I saw no -f option in git pull when running the state in debug mode
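For context, a minimal sketch of the kind of git.latest state mastrolinux is describing; the repository URL and target are placeholders, and the force option (which in Salt of this era allowed overwriting existing checkout contents) is the usual workaround rather than a confirmed fix:

    deploy_repo:
      git.latest:
        - name: git://github.com/example/repo.git
        - rev: master
        - target: /srv/repo
        - force: True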
06:58 Hell_Fire_ joined #salt
07:01 felskrone joined #salt
07:09 scott_w joined #salt
07:11 dvogt joined #salt
07:20 vu joined #salt
07:21 non7top guys, how do I match the osmajorrelease grain? 5 == grains['osmajorrelease'] doesn't seem to work
07:23 anuvrat joined #salt
07:28 ramteid joined #salt
07:28 ipalreadytaken joined #salt
07:32 georgemarshall joined #salt
07:33 TheThing joined #salt
07:41 bhosmer joined #salt
07:50 non7top nvm, found it grains['osmajorrelease'][0] == '6'
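In an SLS or pillar template, non7top's check would look like the sketch below; the variable and package names are hypothetical. The [0] indexing suggests the grain came back as a list (or string) rather than an integer on that platform:

    {% if grains['osmajorrelease'][0] == '6' %}
    web_pkg: httpd
    {% else %}
    web_pkg: apache2
    {% endif %}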
07:53 felskrone1 joined #salt
08:01 scott_w joined #salt
08:02 ndrei joined #salt
08:04 viq terminalmage: xmj: when it's tagged you can poke me as well, either I'll prepare it myself or poke maintainer to have it updated on OpenBSD side
08:05 xmj yeh
08:11 krow joined #salt
08:14 krow1 joined #salt
08:16 vu_ joined #salt
08:24 scott_w joined #salt
08:28 bhosmer joined #salt
08:43 anuvrat joined #salt
08:45 mprymek joined #salt
08:59 ml_1 joined #salt
09:05 taterbase joined #salt
09:17 ninkotech__ joined #salt
09:24 scott_w joined #salt
09:26 faust joined #salt
09:29 bhosmer joined #salt
09:30 gmoro joined #salt
09:31 scott_w joined #salt
09:58 malinoff joined #salt
10:07 mprymek joined #salt
10:10 mprymek left #salt
10:23 TheThing joined #salt
10:28 sxar joined #salt
10:44 scott_w joined #salt
10:48 sxar joined #salt
10:49 scarcry joined #salt
10:50 TyrfingMjolnir joined #salt
11:21 vu joined #salt
11:22 marco_en_voyage joined #salt
11:32 chiui joined #salt
11:36 bhosmer joined #salt
11:42 marco_en_voyage joined #salt
11:44 joehh xmj: haven't heard anything yet
11:45 joehh nyov: I think it is worth filing an issue at github
11:46 mapu joined #salt
11:55 happytux joined #salt
12:03 happytux joined #salt
12:05 elfixit joined #salt
12:24 vu left #salt
12:43 ndrei joined #salt
12:46 to_json joined #salt
12:55 sxar joined #salt
12:58 nyov joehh: I don't know if it's an issue or expected behaviour though, as I haven't used saltstack before. It is reproducible on a second machine, and in the 2014.1.5 (debian) version as well.
13:05 bhosmer_ joined #salt
13:08 aquinas joined #salt
13:13 scott_w joined #salt
13:13 joehh nyov: having looked a little closer, what you are seeing is not completely unexpected behaviour - but is probably worth some documentation
13:14 joehh the number of processes you are seeing is pretty typical
13:15 joehh how the worker_threads config value relates to total number of processes is probably worthy of documentation though
13:18 joehh I'll add an issue
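The setting in question lives in the master config; a minimal sketch, assuming the default Debian path. The extra processes nyov saw are expected: besides the MWorker processes governed by worker_threads, the master also spawns its publisher, event publisher, and request-server processes:

    # /etc/salt/master
    worker_threads: 5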
13:18 picker joined #salt
13:19 nyov here is a debug run. even debug level doesn't tell me much, but is all this duplicate execution really expected behaviour? http://paste.debian.net/107162/
13:20 joehh https://github.com/saltstack/salt/issues/13819
13:21 joehh just looking at paste now
13:21 joehh not sure....
13:23 nyov okay. well so far things seem to work, if maybe a little slow :) so I'll just have to live with it until I find some time to dig into the codebase
13:24 joehh nyov: it's pretty quiet here now - I'm interested because I'm the debian maintainer - but I think you'll get better answers through the week US time
13:25 joehh (in my completely personal capacity) I suspect that dodgy DNS can cause things to run more slowly than expected - though I don't know if this applies to all versions
13:27 joehh I've got some systems that run very quickly and some that run very slowly - I haven't had a chance to dig deeply, but at this point I'm blaming it on DNS
13:28 nyov ah! awesome to meet the debian maintainer himself :) I can do US time, will see how things look after testing some more
13:30 nyov okay, I'll look if it could be DNS related here.
13:30 joehh depends how slow you are essin
13:30 joehh are seeing
13:31 joehh assuming you have a few minions, how long does a salt '*' test.ping take?
13:32 honestly not more than 30 seconds since that's the default timeout
13:32 joehh that is pretty slow
13:32 nyov well that really differs between systems and so far I haven't really looked if it's different between physical and virtual systems (might be some VMs are just slow)
13:33 honestly salt doesn't scale well and it has pretty long processing delays :/
13:33 joehh assuming all online, a variety of operating systems (including windows minions) I've normally seen everything return with a few seconds
13:33 honestly a test.ping has to go through many layers of glue code
13:33 joehh if some are offline then i may get most return, but command "time out"
13:34 nyov what I found so far is that sometimes a test.ping doesn't return from all systems the first time I run it.
13:34 joehh what os are your minions?
13:34 nyov all debian wheezy or jessie
13:35 joehh that seems surprising, how many minions do you have
13:35 joehh ?
13:35 nyov so far, eight
13:35 honestly i always run my commands with --async
13:36 joehh nothing surprising there - on the salt scale, that is very low; in my experience, I have a number of "systems" at roughly that scale
13:37 honestly your test.ping probably comes back, it just takes more than 30 seconds
13:37 joehh honestly: agreed
13:37 joehh the question is why?
13:37 nyov but what I think is bad about it, is not telling which systems did not respond (out of all the questioned minions)
13:37 honestly you can also set the timeout to an hour or something
13:37 joehh I believe you can query the particular job to get more info later
13:38 honestly you can, but if it just times out it doesn't print the job numer :/
13:38 honestly number*
13:38 joehh when we expect something to take awhile (slow windows installers/large downloads) we tend to run salt -t 6000
13:38 joehh honestly: agreed - that is a little painful
13:39 joehh would be nice to print jid
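The pattern honestly and joehh are describing, sketched with a placeholder JID: --async prints the JID immediately instead of waiting, and the jobs runner can then be queried for late returns:

    salt --async '*' test.ping
    salt-run jobs.list_jobs
    salt-run jobs.lookup_jid <jid>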
13:40 joehh nyov: given what you have said, the first place I would look is DNS weirdness, but I would expect 8 or so minions to respond very quickly (within a second or so)
13:41 joehh maybe run a minion in a shell with debug logging
13:41 nyov they communicate over zmq?
13:41 joehh ie salt-minion -l debug
13:41 joehh yes
13:42 nyov it seems the first call takes too long. once I run the ping a second time, all check in reasonably fast. so this seems to be a connection establishing thing
13:43 nyov will have to look at what the MQ is up to, I guess
13:43 joehh go for it :)
13:44 joehh I suspect the issue is elsewhere though
13:44 luminous joined #salt
13:44 joehh I think you will see a lot by running a minion in a shell
13:44 nyov okay
13:44 nyov another thing I just noticed, after logging to syslog: lots of whitespace in the formatting
13:45 joehh maybe importing modules is slow in your environment (io) or something else
13:45 joehh re syslog whitespace - haven't noticed that, but will look
13:45 nyov http://paste.debian.net/107163/
13:46 nyov seems to be formatting to a specific number of characters
13:46 nyov *aligning
13:48 joehh nyov: 2014.1.5?
13:49 nyov yes
13:49 nyov (current sid package)
13:51 joehh reading source now
13:51 diegows joined #salt
13:52 diegows joined #salt
13:52 felskrone joined #salt
13:53 diegows joined #salt
13:55 scott_w joined #salt
13:57 nyov all issues aside though, I'm really glad to have found saltstack (again, actually) and finally have time for running it.
13:58 joehh me too - solves many of our real problems
13:58 nyov now I can stop hating the guts out of ruby, shoot the chef and shoot the puppet master too!
13:58 joehh I'm also keen to get the niggles out too :)
13:58 nyov awesome
13:59 nyov finally a config management that does things right. mostly
14:00 joehh i agree - our civil and mech engineers can make the right decision with it
14:01 mateoconfeugo joined #salt
14:01 joehh no answer on your whitespace issue yet - i'll check out a few systems on monday and either let you know or figure it out
14:05 nyov not a problem. Just wanted to mention it, though I should be able to pick apart some python code myself.
14:06 nyov wanted to get this up and running first, though, before going bug hunting ;)
14:08 nyov interesting - the syslog tells me which minions did not return in time, even if the console doesn't
14:10 iMil joined #salt
14:11 happytux joined #salt
14:11 joehh nyov: can you paste.debian that?
14:12 sxar joined #salt
14:14 nyov no need, it's just one line:
14:14 nyov 2014-06-28T13:35:42.623041+00:00 system 2014-06-28 13:35:42,212 [salt.client      ][INFO    ] jid 20140628133538319635 minions set(['s1.example.com', 's2.example.com', 's3.example.com', 's4.example.com', 's5.example.com']) did not return in time
14:14 nyov (between all the other "Got return from xyz for job #" lines from salt.master)
14:15 joehh putting my user hat on, that would be really nice to return from the command line call
14:15 joehh assuming not too long
14:17 jcsp left #salt
14:28 joehh nyov: off to bed now (12:30 am here) so don't expect any quick answers from me :)
14:29 nyov joehh: alright :) good night then
14:40 mrchrisadams hi peeps
14:42 mrchrisadams I'm looking for some guidance on testing salt states, like the information here on ansible - http://docs.ansible.com/test_strategies.html
14:43 mrchrisadams could anyone point me to a similar example for Saltstack?
14:48 nyov not knowing ansible and reading the docs on salt just now, but maybe this is what you're looking for? https://salt.readthedocs.org/en/latest/topics/tutorials/starting_states.html#running-and-debugging-salt-states
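The approach in those docs boils down to a dry run with verbose logging, run directly on a minion; a minimal sketch:

    salt-call -l debug state.highstate test=True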
14:50 bhosmer joined #salt
14:53 bhosmer_ joined #salt
15:02 oz_akan_ joined #salt
15:06 acabrera joined #salt
15:07 gildegoma joined #salt
15:12 FL1SK joined #salt
15:18 yomilk joined #salt
15:27 bhosmer joined #salt
15:28 soasme joined #salt
15:31 soasme_ joined #salt
15:34 laubosslink joined #salt
15:44 to_json joined #salt
15:48 diegows joined #salt
15:54 ptyticker joined #salt
16:00 borgstrom joined #salt
16:02 nyov 42
16:03 ptyticker joined #salt
16:09 sxar joined #salt
16:09 sxar joined #salt
16:10 borgstrom joined #salt
16:10 zain_ joined #salt
16:19 sxar joined #salt
16:20 scott_w joined #salt
16:20 sxar joined #salt
16:21 Kraln joined #salt
16:23 to_json joined #salt
16:24 sxar joined #salt
16:25 pdayton joined #salt
16:27 sxar joined #salt
16:28 hhtpcd joined #salt
16:29 scott_w joined #salt
16:31 happytux joined #salt
16:33 sxar joined #salt
16:36 borgstrom joined #salt
16:42 bhosmer_ joined #salt
16:49 anuvrat joined #salt
16:50 kossy joined #salt
17:03 krow joined #salt
17:08 bhosmer joined #salt
17:13 elfixit joined #salt
17:14 mgw joined #salt
17:18 sxar joined #salt
17:19 krow joined #salt
17:22 ksalman shouldn't this give me a full output? I am only getting a terse output 'salt minion1 state.sls test --state-output=full'
17:22 sxar joined #salt
17:22 ksalman my master config is set to 'mixed' but i think --state-output will override that?
17:22 ksalman but it does not seem to have any effect
17:41 oz_akan_ joined #salt
17:45 krow joined #salt
17:46 sxar joined #salt
17:48 krow1 joined #salt
18:02 oz_akan_ joined #salt
18:03 anuvrat joined #salt
18:03 bhosmer joined #salt
18:08 ipalreadytaken joined #salt
18:12 Hell_Fire joined #salt
18:15 viq ksalman: how about changed order? salt --state-output=full minion1 state.sls test
18:20 ksalman viq: same result
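As a fallback, the same setting can be changed in the master config rather than on the command line (this typically requires restarting the master); a minimal sketch:

    # /etc/salt/master
    state_output: full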
18:20 ckao joined #salt
18:23 ajolo joined #salt
18:30 bhosmer joined #salt
18:31 happytux joined #salt
18:37 yetAnotherZero joined #salt
18:37 yetAnotherZero joined #salt
18:44 picker joined #salt
19:00 scott_w joined #salt
19:01 to_json joined #salt
19:03 oz_akan_ joined #salt
19:05 schimmy joined #salt
19:11 gildegoma joined #salt
19:16 Mathnerd626 joined #salt
19:16 schimmy joined #salt
19:17 happytux joined #salt
19:23 bhosmer joined #salt
19:30 marco_en_voyage joined #salt
19:30 kossy joined #salt
19:31 xzarth joined #salt
19:39 schimmy joined #salt
19:39 marco_en_voyage joined #salt
19:50 micah_chatt joined #salt
20:04 oz_akan_ joined #salt
20:09 Andrevan joined #salt
20:11 ipalreadytaken joined #salt
20:18 bhosmer_ joined #salt
20:29 DaveQB joined #salt
20:31 krow joined #salt
20:32 krow1 joined #salt
20:36 TommyG_ joined #salt
20:40 ajw0100 joined #salt
20:40 bhosmer joined #salt
20:43 to_json joined #salt
20:43 Guest0852 joined #salt
20:43 thehaven joined #salt
20:55 sxar joined #salt
20:59 chamunks joined #salt
21:04 oz_akan_ joined #salt
21:06 oz_akan__ joined #salt
21:09 schimmy joined #salt
21:15 sxar joined #salt
21:16 to_json joined #salt
21:23 krow1 joined #salt
21:30 goodwill joined #salt
21:31 baniir joined #salt
21:31 Luke_ joined #salt
21:36 Luke_ joined #salt
21:39 ajw0100 joined #salt
22:03 scott_w joined #salt
22:04 krow joined #salt
22:06 bhosmer joined #salt
22:07 oz_akan_ joined #salt
22:30 Luke_ joined #salt
22:33 yidhra joined #salt
22:37 krow joined #salt
22:47 ksalman is it possible to get two grains from minions? something like 'salt \* grains.get id, hostname' ?
22:47 ml_1 joined #salt
22:49 ksalman other than getting everything, i.e., grains.items
22:51 manfred [root@salt salt]# salt-call grains.item id host
22:51 manfred local:
22:51 manfred ----------
22:51 manfred host:
22:51 manfred salt
22:51 manfred id:
22:52 manfred salt.gtmanfred.com
22:53 acabrera joined #salt
22:53 ksalman manfred: thanks!
22:53 manfred np
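manfred's salt-call example runs locally on a minion; the same module call works from the master against many minions, which is what ksalman originally asked for:

    salt '*' grains.item id host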
23:01 msciciel1 joined #salt
23:02 pdayton1 joined #salt
23:08 oz_akan_ joined #salt
23:10 micah_chatt joined #salt
23:13 micah_chatt_ joined #salt
23:13 dvogt joined #salt
23:15 Luke_ joined #salt
23:15 pdayton joined #salt
23:19 dvogt joined #salt
23:21 ksalman in the reactor how would i target the minion that fired the event? 'tgt: grains['id']' ?
23:21 ksalman that doesn't seem to be working
23:22 manfred ksalman: you will need to use the data that was passed
23:22 manfred possibly data['id']
23:23 ksalman well, that requires the event having the data dict. The event should already have the minions id, no?
23:24 manfred not necessarily
23:24 manfred https://github.com/saltstack/salt/blob/develop/salt/utils/cloud.py#L1314
23:24 manfred https://github.com/saltstack/salt/blob/develop/salt/utils/cloud.py#L407
23:25 manfred ksalman: any information you use should be passed to the data dict when firing the event
23:25 manfred it is worth noting that there is an event state in Helium
23:26 manfred http://docs.saltstack.com/en/latest/ref/states/all/salt.states.event.html
23:26 ksalman look at this event dump for example https://gist.githubusercontent.com/anonymous/e6401a45f1a010b3a954/raw/ee640e77e76b7120d78b44ec5b539a2d93be3a58/gistfile1.txt
23:26 ksalman it has the id of the minion
23:26 manfred because that is fired inside the data dict
23:26 ksalman i should be able to target on that
23:26 manfred data['id']
23:27 pdayton joined #salt
23:27 manfred ksalman: what are you trying to do? put a minion into a highstate when it auths?
23:27 manfred http://docs.saltstack.com/en/latest/ref/states/startup.html
23:27 ksalman pretty much
23:28 manfred use startup_states ^^
23:28 manfred if you wanted to do it in the reactor, the reactor already has id in the data dict, so data['id']
23:29 ksalman ah i didn't know about startup_states, thanks. Though i still want to figure out how to get the reactor system working =)
23:29 ksalman i'll try data['id']
23:32 borgstrom joined #salt
23:33 ksalman manfred: thanks that worked.. tgt: {{ data['id'] }}
23:33 manfred np
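Putting the working answer together: a minimal reactor sketch, with hypothetical file paths, that runs a highstate on whichever minion fired the start event:

    # /etc/salt/master
    reactor:
      - 'salt/minion/*/start':
        - /srv/reactor/start.sls

    # /srv/reactor/start.sls
    highstate_on_start:
      cmd.state.highstate:
        - tgt: {{ data['id'] }}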
23:33 kalessin joined #salt
23:33 ksalman now to use startup_states =)
23:33 manfred indeed
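The startup_states alternative manfred pointed to is a one-line minion config option; a minimal sketch:

    # /etc/salt/minion
    startup_states: highstate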
23:36 kalessin Hello! Quick question: I've got a state watching a directory (like this: /foo/bar/*); the directory is itself defined as a previous state. When running salt I'm getting "The following requisites were not found: watch: file: /foo/bar/*". Any idea why?
23:42 krow1 joined #salt
23:44 taterbase joined #salt
23:49 HACKING-TWITTER joined #salt
23:49 krow joined #salt
23:54 bhosmer joined #salt
23:55 krow1 joined #salt
23:55 felskrone joined #salt
